Facing the uphill task of curbing election-related interference on its platform as India prepares for next year's general election, Facebook on Saturday said it is establishing a task force of "hundreds of people" in the country to prevent bad actors from abusing its platform.
“With the 2019 elections coming, we are pulling together a group of specialists to work together with political parties,” Richard Allan, Vice President of Policy for Europe, the Middle East and Africa (EMEA) told the media here.
“The team will have security specialists and content specialists, among others, who will try to understand all the possible forms of election-related abuse in India,” added Allan during a workshop on Facebook’s “community standards” in the capital.
Allan explained that while disinformation linked to real-world violence is handled by the team mandated to enforce Facebook's community standards, other forms of disinformation are handled by a separate team of fact checkers.
“The challenge for the task force in India would be to distinguish between real political news and political propaganda,” Allan noted, adding that the team would be very much based in the country and would consist of both existing human resources working on these issues within the company and new recruits.
Facebook came under intense scrutiny from policymakers in the US after allegations surfaced that Russia-linked accounts had used the social networking platform to spread divisive messages during the 2016 presidential election.
Since then, it has stepped up efforts to curb abuse of its platform by bringing more transparency to its business practices, including its advertising policies.
Echoing Facebook CEO Mark Zuckerberg’s earlier comments on elections across the world, Allan said the social media platform “wants to help countries around the world, including India, to conduct free and fair elections”.
In April, Zuckerberg said Facebook would ensure that its platform is not misused to influence elections in India and elsewhere.
“Our goals are to understand Facebook’s impact on upcoming elections — like Brazil, India, Mexico and the US midterms — and to inform our future product and policy decisions,” he told US lawmakers during a hearing.
Facebook uses a combination of technology, including Machine Learning (ML) and Artificial Intelligence (AI), and reports from its community to identify violating content on the platform.
The reports are reviewed by members of its "Community Operations" team, who review content in over 50 languages, including 12 Indian languages.
“By the end of 2018, we will have 20,000 people working on these issues, double the number we had at the same time last year,” Allan said.
“We are also working to enhance the work we do to detect violating content proactively,” Allan said.
Speaking at the 16th Hindustan Times Leadership Summit here later in the day, Allan said Facebook was cooperating fully with the investigating agency (Central Bureau of Investigation) in India with regard to the Cambridge Analytica data leak scandal.
The British political consultancy was at the center of a major controversy for allegedly harvesting data inappropriately from about 87 million Facebook accounts.
Without detailing the scandal's impact on India, he said it was probably limited and that most users in India need not worry that their data had been stolen.
“Yes, we have a responsibility to keep data safe. We employ some of the best security engineers in the world. When we notice a breach, we let people know immediately,” Allan said at the summit.
“We are investing more and more in countries outside the US so that they can tell us how our services should be designed,” he said, adding that India was an important market for Facebook and that it was strengthening the team in India to understand the local forms of hate speech.
Featured image by Tim Bennett