Naver uses smarter AI bot to eradicate malicious and insulting comments
SEOUL, AJU - An artificial intelligence-powered bot run by Naver, South Korea's top web service, has become smarter in its fight against cyberbullying through malicious and insulting comments, a practice that has driven celebrities to suicide. Naver vowed to keep upgrading the AI bot through big-data learning to foster a sound internet culture.
Naver said that from June 19, its upgraded AI bot will expand the criteria for judging malicious and insulting expressions from individual abusive words to the context of whole sentences. Until now, the bot has automatically detected and blinded comments containing abusive language and slang.
Trained on big data, the bot analyzes the characteristics of colloquial comments riddled with abbreviations and typos, and blinds those deemed offensive or rude based on sentence context, even when they contain no slang. Habitual offenders will be barred from comment services for a certain period of time.
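Naver has not disclosed how its filter works internally, but the mechanism described above, a keyword pass augmented by a context-level score over normalized text, plus penalties for repeat offenders, can be sketched roughly as follows. All names, vocabularies, and thresholds here are illustrative assumptions, not Naver's actual system, and the context scorer is a toy stand-in for a learned model:

```python
import string

# Illustrative stand-ins; a real system would use large learned models.
SLANG = {"idiot", "trash"}                      # stage-1 keyword blocklist
HOSTILE = {"hate", "ugly", "die", "worthless"}  # toy hostile vocabulary
BLIND_THRESHOLD = 0.7                           # hypothetical cut-off
BAN_STRIKES = 3                                 # hypothetical ban trigger


def normalize(comment: str) -> str:
    """Expand common abbreviations/typos before scoring (toy table)."""
    table = {"u": "you", "r": "are"}
    cleaned = comment.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(table.get(w, w) for w in cleaned.split())


def context_score(text: str) -> float:
    """Placeholder for a model scoring the whole sentence.
    Here: the fraction of words drawn from a hostile vocabulary."""
    words = text.split()
    return sum(w in HOSTILE for w in words) / max(len(words), 1)


def should_blind(comment: str) -> bool:
    text = normalize(comment)
    # Stage 1: the old behaviour, matching abusive words and slang.
    if any(w in SLANG for w in text.split()):
        return True
    # Stage 2: the upgrade, a context score that can blind comments
    # even when they contain no listed slang.
    return context_score(text) >= BLIND_THRESHOLD


strikes: dict[str, int] = {}


def report(user: str, comment: str) -> bool:
    """Record a blinded comment; return True once the user should be
    temporarily restricted from commenting."""
    if should_blind(comment):
        strikes[user] = strikes.get(user, 0) + 1
    return strikes.get(user, 0) >= BAN_STRIKES
```

For example, `should_blind("u r trash")` is caught by the keyword stage after abbreviation expansion, while a slang-free but uniformly hostile comment such as `"die die die"` is caught only by the context stage.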
South Korean web portals have adopted artificial intelligence technologies to filter out malicious comments and foster a healthy online communication environment following the tragic death of singer-actress Choi Jin-ri, better known by her stage name Sulli, who took her own life in October last year.
A month later, Goo Ha-ra, better known as Hara and a close friend with whom Sulli had shared her sadness and loneliness, also took her own life. Cyberbullies have used online news sections, social media and other open web boards to direct hate and anger at celebrities.
Kakao, the operator of Daum, South Korea's second-largest web portal, has blocked comment features on entertainment news. In February, Kakao allowed users of Daum and Kakao Talk, its popular messenger app, to report comments or online posts containing discriminatory or hateful content. Since then, Kakao has seen a steady decrease in abusive language and slang.