Facebook as a platform often does more harm than good. In this case, an investigation by Global Witness reveals that the platform is openly accepting hate-speech adverts that incite violence against the Rohingya.
Facebook openly admitted back in 2018 that it had not done enough about the Rohingya crisis. The company has since made improvements, including better use of artificial intelligence to flag hate speech (including in Burmese) and the creation of a dedicated team for the country.
However, despite these changes and the time since their implementation, Facebook's ability to detect hate speech in Burmese remains poor.
Allowing hate speech or just not caring?
Global Witness submitted eight real examples of hate speech against the Rohingya, taken from the United Nations' Independent International Fact-Finding Mission on Myanmar report to the Human Rights Council.
These examples were submitted as adverts in Burmese. According to Facebook, all adverts are reviewed for compliance with its advertising policy. Photos, videos, text and any other targeting information are checked during this review, including the associated landing page. Although Facebook hasn't revealed much about its process, these reviews are primarily done using automated tools.
Facebook accepted all eight adverts for publication, even though the highly offensive hate speech examples fall squarely within the scope of Facebook's hate speech policy. They would also have breached international law if published.
Global Witness reported the findings to Facebook and asked for a response. Facebook, however, did not respond.