Abstract: Recent years have seen a spike in digital bullying in online groups and on similar public platforms such as Twitter. These incidents have been shown to have a negative impact on victims' social, democratic, and economic well-being. Despite its well-documented adverse effects, leading online communities have done little to address the problem, citing the sheer volume and diversity of such comments and, consequently, the impractically large number of human moderators the task would require. We develop an automated approach to identifying digital bullying in online groups on Twitter from the standpoint of the suspects, focusing on two factors: accidents and prudes. Bullying tweets are classified into five types: offensive language, abusive language, ethnic, sarcasm, and neither; each tweet is labeled with one of these types or as non-shaming. Our objective is to automatically categorize tweets into the five types listed above. For each type, data cleaning and feature-extraction steps are applied to both the labeled training set and the evaluation set of tweets. Finally, based on this categorization and identification of bullying tweets, a web application for muting shamers who attack a victim on Twitter was designed and implemented.
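The data-cleaning and feature-extraction steps mentioned in the abstract can be sketched as follows. This is a minimal illustration using only Python's standard library; the function names, example tweet, and exact cleaning rules (URL, mention, and hashtag handling) are assumptions for demonstration, not the paper's actual implementation:

```python
import re
from collections import Counter

# The five categories named in the abstract.
LABELS = ["offensive language", "abusive language", "ethnic", "sarcasm", "neither"]

def clean_tweet(text: str) -> str:
    """Basic cleaning: lowercase, strip URLs and @mentions, drop '#' symbols."""
    text = text.lower()
    text = re.sub(r"https?://\S+", "", text)  # remove URLs
    text = re.sub(r"@\w+", "", text)          # remove user mentions
    text = text.replace("#", "")              # keep hashtag words, drop the '#'
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

def bag_of_words(text: str) -> Counter:
    """Token-count features for a cleaned tweet (a simple feature representation)."""
    return Counter(re.findall(r"[a-z']+", text))

# Hypothetical example tweet run through both steps.
tweet = "@user This is SO classy... #sarcasm http://t.co/xyz"
cleaned = clean_tweet(tweet)
features = bag_of_words(cleaned)
```

In a full pipeline, feature vectors like `features` would be fed to a supervised classifier trained on the labeled set to assign each tweet one of the categories in `LABELS`.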


DOI: 10.17148/IJARCCE.2024.131241
