Online chess has a problem: AI flags Black vs White as hate speech

Last year, Agadmator, a popular YouTube chess channel with over a million subscribers, was blocked for not adhering to ‘Community Guidelines’. Now, an Indian scientist has discovered that the reason for the 24-hour shutdown could be the fallibility of the Artificial Intelligence (AI) systems tech giants use to monitor hate speech.

Ashique KhudaBukhsh, an avid chess player with a highly creditable peak blitz rating of 2,100 and a PhD in machine learning, says his six-week experiment showed that words like ‘black’, ‘white’ and ‘attack’ — common among those commenting on the battle on the 64 squares — can fool an AI system into flagging innocent chess conversations as racist.

The 38-year-old from Kalyani — an hour’s drive from Kolkata — who conducted his research at Pittsburgh’s Carnegie Mellon University, says the findings are an eye-opener about the pitfalls of social media companies depending solely on AI to identify and shut down sources of hate speech.

“If we try to monitor speech, just using AI, without any human moderation, these are some of the potential risks that might happen. This is what we tried to show through the chess example, which is easy to understand for everyone. Agadmator is very popular, so the channel getting blocked creates lots of news, but suppose it is a guy with say 10 subscribers, nobody will ever know what is happening,” Ashique told The Indian Express.

Host Antonio Radic was talking to Grandmaster Hikaru Nakamura when YouTube took down the Agadmator channel.

For their experiment, Ashique and a student, Rupak Sarkar, pored over 6.8 lakh comments by 1.7 lakh unique users from 8,818 videos on five popular chess channels, including Agadmator, MatoJelic and Chess.com. They then trained AI systems, using machine learning algorithms, on hate speech and non-hate speech data from the far-right website Stormfront and the microblogging platform Twitter.
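The setup described above is, in essence, a standard text-classification pipeline: train on labelled hate and non-hate examples, then run the trained filter over chess comments. The sketch below is purely illustrative and not the authors’ actual code; it assumes a scikit-learn bag-of-words classifier and placeholder training examples standing in for the real Stormfront and Twitter data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled training data (1 = hate speech, 0 = not);
# placeholders stand in for the Stormfront and Twitter corpora.
train_texts = [
    "placeholder hateful comment ...",
    "placeholder ordinary, harmless comment",
]
train_labels = [1, 0]

# Turn comments into bag-of-words features and fit a simple classifier.
vectorizer = TfidfVectorizer(lowercase=True, ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train_texts)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train_labels)

# Run the trained filter over chess comments scraped from YouTube channels.
chess_comments = [
    "White's attack on Black is brutal",
    "Black should be able to block White's advance",
]
flags = clf.predict(vectorizer.transform(chess_comments))
for comment, flag in zip(chess_comments, flags):
    print("FLAGGED" if flag else "ok", "-", comment)
```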

When these AI systems filtered the chess comments, about 1 per cent (approximately 6,800) were flagged as hate speech. Of these, 1,000 were manually checked and 82.4 per cent were ‘false positives’, confirming Ashique’s theory that chess conversations are being misread by AI designed to flag hate speech.
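The manual audit step amounts to sampling the flagged comments, labelling them by hand and reporting the share that turn out to be innocent. A minimal sketch, with made-up labels standing in for the human annotation:

```python
# Illustrative only: three hand-labelled examples stand in for the 1,000
# flagged comments the researchers actually reviewed.
flagged_sample = [
    ("White's attack on Black is brutal", "innocent"),
    ("Black should be able to block White's advance", "innocent"),
    ("a genuinely hateful comment ...", "hate"),
]

false_positives = sum(1 for _, label in flagged_sample if label == "innocent")
rate = false_positives / len(flagged_sample)
print(f"False-positive rate in sample: {rate:.1%}")  # the study reported 82.4%
```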

“Just innocent chess discussions like ‘White’s attack on Black is brutal’ or ‘Black should be able to block White’s advance’ were flagged as hate speech. When we manually checked comments flagged as hate speech, over 80 per cent of comments were innocent chess discussions. System is just noticing black, white, attack, kill, capture, and it triggers those hate speech filters,” he said.

After a Master’s in Computer Science in Vancouver, Ashique worked with Microsoft as a software developer in Seattle for a year, before obtaining a PhD at Carnegie Mellon University. His paper, titled ‘Are Chess Discussions Racist? An Adversarial Hate Speech Data Set’, was presented last month at the annual conference of the Association for the Advancement of Artificial Intelligence (AAAI). Since then there has been a great deal of interest in the experiment, especially from Russia, the long-time global chess centre.

Ashique says that if tech companies don’t use ‘diverse training data’ and human moderation, AI won’t be accurate because it won’t pick up the context in which certain words are used.

“Again, we don’t know what exactly happened inside YouTube. YouTube restored the channel in 24 hours. We just wanted to reconstruct the situation. We released a data set of 1000 chess comments, which the AI system by mistake flagged as hate speech. In the future, if someone wants to do research, they can try their system on the data set. If a lot of comments are flagged as hate speech, you know something is wrong with the system,” he said.
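The sanity check he describes can be pictured as follows: run a moderation system over the released comments (all of which are benign chess talk) and count how many it flags. This is a hedged sketch; the file name and the keyword-based placeholder classifier are assumptions for illustration, not part of the released material.

```python
def my_moderation_system(comment: str) -> bool:
    """Placeholder classifier: flags a comment if it contains trigger words.
    Stands in for whatever hate-speech filter is being audited."""
    triggers = {"black", "white", "attack", "kill", "capture"}
    return any(word in triggers for word in comment.lower().split())

# Assumed file layout: one chess comment per line (hypothetical file name).
with open("chess_comments.txt", encoding="utf-8") as f:
    comments = [line.strip() for line in f if line.strip()]

flagged = sum(my_moderation_system(c) for c in comments)
print(f"{flagged} of {len(comments)} benign chess comments flagged as hate speech")
# A high count suggests the system is keying on individual words rather than context.
```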
