Facebook is launching a feature that will try to help users decide which news articles on its platform are legitimate, the company announced Thursday.
The tech conglomerate is attaching “Trust Indicators” to certain articles shared on the platform, which will provide information about the publishers. Such details will include the content creators’ policies on ethics, corrections, fact-checking, “ownership structure, and masthead.” Publishers have to upload the extra information through the Brand Asset Library, which is listed under Facebook’s Page Publishing Tools. To achieve its goal, Facebook is working with the Trust Project, “an international consortium of news and digital companies.”
“We believe that helping people access this important contextual information can help them evaluate if articles are from a publisher they trust, and if the story itself is credible,” Andrew Anker, product manager at Facebook, wrote in a blog post. “This step is part of our larger efforts to combat false news and misinformation on Facebook — providing people with more context to help them make more informed decisions, advance news literacy and education, and working to reinforce indicators of publisher integrity on our platform.”
Facebook announced in April that it was improving its “Related Articles” system, in which different content from various outlets on the same story is provided.
Facebook also announced in October that it was starting to experiment with offering “additional context.” That test is part of the more recent one announced Thursday, according to the company.
The social media company has been pressured to help determine which news stories are misleading or fraudulent, and subsequently to purge that content from the platform. But even the most outwardly scientific processes are susceptible to subjectivity, and it’s not clear how technical or empirical its fact-checking undertakings will be.
Zuckerberg said last year that the company would partner with Snopes as one of its third-party fact-checking organizations, even though it almost exclusively employs leftists and has dubious verification skills.
Facebook in general has been trying a variety of measures to combat disinformation on the platform. It recently concluded a test in which it promoted comments mentioning “fake news” on certain posts to the top, making them far more likely to be viewed. Many users complained about the experiment, which either seemed to go awry or wasn’t fully thought out in the first place.