Abstract: The widespread use of social media has led to an alarming increase in cyberbullying incidents, causing significant psychological and emotional distress to victims. This project addresses this pressing issue by leveraging deep learning, specifically Long Short-Term Memory (LSTM) networks, to detect instances of cyberbullying in social media posts. While existing research focuses primarily on well-resourced languages, a notable gap remains for emerging languages; this project seeks to bridge that gap by developing a robust model that can detect cyberbullying across varied linguistic contexts. The project is structured into several key phases, beginning with data collection from a public dataset repository such as Kaggle. The collected data undergoes preprocessing to remove irrelevant information and to convert the text into a numerical format suitable for LSTM input. The LSTM model is then trained on the processed data and evaluated on a held-out test set using accuracy, precision, recall, and F1 score. In these experiments, the model achieves an accuracy of 95.6% on the test set, indicating its efficacy in detecting cyberbullying behaviour. Furthermore, the trained model can be saved and deployed to make predictions on new, unseen data, serving as a practical tool for combating cyberbullying and supporting those affected by it.
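Two steps of the pipeline described above — converting posts into the fixed-length integer sequences an LSTM layer expects, and computing the reported evaluation metrics — can be sketched in plain Python. This is a minimal, dependency-free illustration: the example posts, labels, and padding length are invented, and the actual LSTM training (typically done with a deep learning framework such as Keras) is omitted.

```python
# Sketch of two pipeline steps from the abstract: text-to-sequence
# preprocessing and binary classification metrics. All data here is
# illustrative, not from the paper's dataset.

def texts_to_padded_sequences(texts, max_len):
    """Map each word to an integer id and pad every sequence to max_len,
    the numerical format suitable for LSTM input."""
    vocab = {}
    sequences = []
    for text in texts:
        seq = []
        for word in text.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab) + 1  # id 0 is reserved for padding
            seq.append(vocab[word])
        seq = seq[:max_len]
        seq = seq + [0] * (max_len - len(seq))  # right-pad with zeros
        sequences.append(seq)
    return sequences, vocab

def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels
    (1 = cyberbullying, 0 = not)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1
```

In a full implementation, the padded sequences would feed an Embedding layer followed by an LSTM and a sigmoid output unit; the metric formulas above are the standard definitions behind the scores reported in the abstract.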

Keywords: Cyberbullying detection, Long Short-Term Memory (LSTM), Social Media, Deep learning, Text analysis.


DOI: 10.17148/IJARCCE.2024.134142
