Toxic Comment Classification
- Tech Stack: Python, NLP
- GitHub URL: Project Link
The stakeholder for this project is an online platform that hosts user-generated content, such as comments and discussions. Toxic comments in this content can foster a hostile environment, discourage participation, and drive users away. The stakeholder therefore wants a machine learning model that automatically detects and filters toxic comments, improving user experience and making the platform safer and more inclusive.
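As one possible baseline for this kind of task, a TF-IDF representation fed into a logistic regression classifier is a common starting point. The sketch below assumes scikit-learn; the tiny inline dataset and the example comments are purely illustrative, not the project's actual data.

```python
# Baseline sketch: TF-IDF features + logistic regression for toxicity detection.
# The inline training data here is a hypothetical toy example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "you are an idiot",         # toxic
    "I hate you, get lost",     # toxic
    "what a stupid comment",    # toxic
    "thanks for sharing this",  # non-toxic
    "great point, I agree",     # non-toxic
    "interesting discussion",   # non-toxic
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = toxic, 0 = non-toxic

# Pipeline: vectorize raw text into TF-IDF unigrams/bigrams, then classify.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(comments, labels)

# Score new comments; a real platform would flag anything whose predicted
# toxicity probability exceeds a chosen moderation threshold.
for text in ["you idiot", "thanks, great point"]:
    p_toxic = model.predict_proba([text])[0, 1]
    print(f"{text!r}: P(toxic) = {p_toxic:.2f}")
```

In practice a production system would train on a large labeled corpus (e.g. the Jigsaw toxic-comment dataset) and likely move to a transformer-based model, but a linear baseline like this is useful for establishing a reference score.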