Abstract: The rapid expansion of digital communication platforms has accelerated the spread of harmful and offensive content across text, images, audio, video, and online comments. Manual moderation is inefficient and does not scale to large volumes of multimodal data. This project presents a Multimodal Harmful Content Classifier, built with Streamlit, that automatically identifies and classifies harmful content. The system accepts multiple input formats and converts non-textual data into text using OCR and speech-to-text techniques, then applies machine learning and natural language processing methods to analyze the extracted content. The classifier labels each input as Safe, Offensive, or Harmful, together with a confidence score, and a Streamlit-based interface provides real-time analysis and result visualization. The proposed system improves moderation accuracy and supports safer digital communication.
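The abstract describes a pipeline that reduces every input to text and then assigns one of three labels with a confidence score. The sketch below illustrates only the classification interface (label plus confidence); the actual system uses trained ML/NLP models, so the keyword lists and scoring rule here are purely hypothetical placeholders, not taken from the paper.

```python
# Hypothetical sketch of the classification stage: the real system applies
# trained ML/NLP models to text extracted via OCR or speech-to-text. This
# placeholder uses illustrative keyword scoring only to show the
# (label, confidence) output contract described in the abstract.

OFFENSIVE_TERMS = {"idiot", "stupid"}   # illustrative word lists, not from the paper
HARMFUL_TERMS = {"kill", "attack"}

def classify_text(text: str) -> tuple[str, float]:
    """Return (label, confidence) for extracted text.

    Labels follow the abstract's taxonomy: Safe, Offensive, or Harmful.
    """
    words = text.lower().split()
    if not words:
        return "Safe", 1.0
    harmful = sum(w in HARMFUL_TERMS for w in words)
    offensive = sum(w in OFFENSIVE_TERMS for w in words)
    if harmful:
        # Confidence grows with the fraction of flagged tokens (placeholder rule).
        return "Harmful", min(1.0, 0.5 + harmful / len(words))
    if offensive:
        return "Offensive", min(1.0, 0.5 + offensive / len(words))
    return "Safe", 1.0

print(classify_text("have a nice day"))   # expected label: Safe
```

In the described system this function would sit behind a Streamlit front end, with OCR (for images/video frames) and speech-to-text (for audio) feeding the extracted text into it.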


DOI: 10.17148/IJARCCE.2026.151147

How to Cite:

[1] Harisha C J, Prof. Suma N R, "Multimodal Harmful Content Classifier with Streamlit," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2026.151147
