On Wednesday, Twitter announced a plan to study the fairness of its algorithms, with the aim of preventing racial or ideological bias in how the platform works.
The study is being led by Twitter’s ML Ethics, Transparency and Accountability (META) team, consisting of engineers, researchers and data scientists from across the company.
According to the announcement, the company will conduct “in-depth analysis and studies to assess the existence of potential harms” in the algorithms it uses.
What forced Twitter’s hand?
Twitter’s use of ML affects hundreds of millions of tweets per day. The company acknowledges that these systems might not work as intended; instead of helping, they might ‘behave differently’, exhibiting bias towards certain racial subgroups or political ideologies when suggesting content.
While there are several reasons why this study is being conducted, the most notable one is the amount of criticism the company has received over its image cropping (saliency) algorithm, which has been accused of being biased towards people with fairer skin.
Perhaps the best example of this came in October 2020, when the company had to work on a fix for the aforementioned algorithm. Several users had called out the algorithm for consistently picking out lighter-skinned people in preview crops, disregarding the photo’s actual framing. A number of users ran their own experiments with fictional characters and even dogs.
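At a high level, a saliency-based cropper scores each pixel for visual importance and centres the crop on the highest-scoring region, which is how a biased saliency model can translate directly into biased crops. The sketch below is a minimal, hypothetical illustration of that idea (not Twitter's actual implementation); in practice the saliency map would come from a trained neural network rather than being hand-built.

```python
import numpy as np

def crop_around_saliency(image, saliency, crop_h, crop_w):
    """Crop `image` so the window is centred on the saliency peak.

    `image` and `saliency` are 2-D arrays of the same shape; in a real
    system, `saliency` would be predicted by a trained model.
    """
    h, w = saliency.shape
    # Find the most salient pixel.
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the crop window so it stays inside the image bounds.
    top = min(max(cy - crop_h // 2, 0), h - crop_h)
    left = min(max(cx - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 10x10 "image" whose saliency peaks at row 2, column 7.
img = np.arange(100).reshape(10, 10)
sal = np.zeros((10, 10))
sal[2, 7] = 1.0
crop = crop_around_saliency(img, sal, 4, 4)
print(crop.shape)  # (4, 4), centred near the saliency peak
```

If the saliency model systematically assigns higher scores to lighter-skinned faces, the crop window follows those scores, which is exactly the behaviour users reported.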
Twitter’s content recommendation algorithm is also under scrutiny, both across racial subgroups and across political ideologies in seven countries, the names of which haven’t been disclosed. Once again, the goal is to determine whether the algorithm favours a particular racial subgroup or political ideology.
What’s Twitter trying to do with this study?
Twitter says it will share its learnings and best practices to improve the industry’s collective understanding of the topic, help it improve its approach, and hold itself accountable.
Flaws uncovered in an algorithm may lead Twitter to change its product, going as far as removing the algorithm entirely and giving people more control over the images they tweet. We could also see new standards for how Twitter designs and builds policies when those policies have an outsized impact on a particular community.
While the study and its findings might not necessarily lead to visible product changes, the company claims the work will raise awareness and spark important discussions around the way Twitter builds and applies ML.
Someone who writes/edits/shoots/hosts all things tech and when he’s not, streams himself racing virtual cars. You can reach out to Yadullah at [email protected], or follow him on Instagram or Twitter.