Google’s hate speech-detecting AI appears to be racially biased




Technology

12 August 2019

By Donna Lu

AI moderators have been touted as a solution to online abuse (Image: Maskot/Getty Images)
Artificially intelligent hate speech detectors show racial biases. While such AIs automate the immense task of filtering abusive or offensive online content, they may inadvertently silence minorities.
Maarten Sap at the University of Washington in the US and his colleagues have found that AIs trained to recognise online hate speech were up to twice as likely to flag tweets as offensive when they were written in African-American English or by people who identify as African American.
This includes Perspective, a tool built by Google’s Counter Abuse Technology …

