Smart moderation
- The internet has given people unprecedented freedom of speech and expression online. Unfortunately, it has also enabled misinformation, impersonation, pornography, and other violent content, making content moderation an increasingly complex yet crucial task.
- The team behind 500-backed microblogging platform Koo is keeping a close eye on content with the help of artificial intelligence (AI) and machine learning.
- “With Koo, there is an intent behind introducing these features. We are a thoughts and opinions platform and we want people to come and engage with each other in a healthy way,” said Rajneesh Jaswal, Head of Legal & Policy at Koo.
- The team demonstrated how its new content moderation works:
- Pornography: If a user posts a video containing nudity or pornography, it will be taken down within about five seconds; works of art are exempt.
- Violence: When a user shares an image containing gore or graphic violence, Koo allows the post but adds a layer of caution: the image appears blurred with a warning message, and users can choose to view it, like it, or comment.
- Fake news: According to the team, the platform runs a detection cycle every half hour and takes down fake news as soon as it is flagged.
- Toxic comments and spam: Koo identifies these posts and hides them; they become visible only when users click the Hidden Comments button. This feature works almost instantly.
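The rules above amount to a small decision policy: remove, blur, hide, or allow, depending on what the classifiers detect. The sketch below illustrates that policy in Python; the labels, field names, and actions are illustrative assumptions, not Koo's actual implementation.

```python
# Hypothetical sketch of the moderation decision layer described above.
# All names and labels are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class Post:
    has_nudity: bool = False       # e.g. output of an image classifier
    is_artwork: bool = False       # artistic-nudity exemption
    has_gore: bool = False
    is_fake_news: bool = False     # set by the periodic detection cycle
    is_toxic_or_spam: bool = False


def moderate(post: Post) -> str:
    """Map detection labels to one of the actions in the article."""
    if post.has_nudity and not post.is_artwork:
        return "remove"             # pulled within seconds
    if post.is_fake_news:
        return "remove"             # taken down once the cycle flags it
    if post.has_gore:
        return "blur_with_warning"  # shown blurred; user may opt to view
    if post.is_toxic_or_spam:
        return "hide"               # moved behind Hidden Comments
    return "allow"
```

For example, `moderate(Post(has_gore=True))` returns `"blur_with_warning"`, while artwork containing nudity passes through as `"allow"`.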
- Along with these safety measures, Koo has also integrated ChatGPT for select users, letting them draft posts on any topic using prompts.