Facebook’s head of Civic Engagement, Samidh Chakrabarti, has admitted the site was “far too slow to recognise how bad actors were abusing [the] platform”, concluding that “at its worst” it has the potential to “spread misinformation and corrode democracy”.
In a blog post titled “What Effect Does Social Media Have on Democracy?”, Chakrabarti also said that the company plans to hire 10,000 staff to work on safety and security.
The piece examines a number of topics relating to the ways Facebook has been accused of empowering “bad actors”, from foreign interference and echo chambers to political harassment. However, it starts by tackling “the elephant in the room”: Russian actors spreading fake news in order to influence the 2016 US election.
“Although we didn’t know it at the time, we discovered that these Russian actors created 80,000 posts that reached around 126 million people in the US over a two-year period. This kind of activity goes against everything we stand for”, it explains. “It’s abhorrent to us that a nation-state used our platform to wage a cyberwar intended to divide society. This was a new kind of threat that we couldn’t easily predict, but we should have done better.”
Because the 2016 US presidential election was influenced specifically by Russian entities controlling fake Facebook Pages, Chakrabarti goes on to explain that the social network is working to make this aspect of the site much more transparent.
“We’re making it possible to visit an advertiser’s Page and see the ads they’re currently running. We’ll soon also require organisations running election-related ads to confirm their identities so we can show viewers of their ads who exactly paid for them. Finally, we’ll archive electoral ads and make them searchable to enhance accountability.”
To stop the spread of fake news elsewhere, the site has also introduced new tools to make it easier to report content. You can now report any ad or post as “false news” by tapping the three-dot button next to it. Facebook then uses third-party fact-checkers to verify the post; if it’s deemed false, it is labelled accordingly and its impressions are reduced by 80%.
Facebook has also started displaying “trust indicators”, a new feature that enables publishers to display information including their “ethics policy, corrections policy, fact-checking policy, ownership structure, and masthead.”
What do we think?
It’s good that Facebook is taking responsibility for the fake news that appears on its site. However, if a post is conclusively determined to be false, I struggle to see why it shouldn’t be removed from the site altogether, rather than simply having its impressions reduced.
It’s a valid point, but I can’t see Facebook giving up a substantial chunk of revenue to publishers. With this approach, there’s also the issue that Facebook would no longer be considered politically impartial if it were to strike deals with some publishers but not others.