Media entrepreneur Steven Brill thinks there’s something missing from all the efforts to separate fake news from the real kind: Some smart and discerning humans.
Faced with the waves of mis- and disinformation lapping at social media, Brill is proposing to apply some reader-beware labels to Internet news sources. His idea: ratings, determined by teams of independent journalists, that would enable readers to understand where their news — or “news” — is coming from.
Brill, a veteran journalist and founder of American Lawyer, Court TV and the late Brill’s Content magazine, has turned the idea into a fledgling company. NewsGuard is backed by about $6 million in venture funds from the likes of Publicis Groupe, a multinational ad agency, and the Knight Foundation, which has launched many journalism initiatives.
As Brill and his business partner, former Wall Street Journal publisher and columnist Gordon Crovitz, describe it, the New York-based company aims to assign a “reliability” rating — green for generally trustworthy, yellow for consistently biased or inaccurate, or red for deliberately deceptive — to some 7,500 sources of online news, based on an assessment by its teams of journalists. The rating would cover each site’s overall track record as a news purveyor. It wouldn’t apply to any specific article or journalist.
The ratings would be supplemented by what Brill and Crovitz call “nutrition labels” — a longer description of each site’s history, journalistic track record and ownership. The information would enable a reader to learn instantly that, say, a popular news site such as RT.com is a Kremlin-funded adjunct of the Russian government.
If “platform” giants such as Facebook and Google play ball — and so far NewsGuard has no commitment that they will — these assessments would be incorporated into search results, on YouTube videos and on the Facebook or Twitter postings that share the articles. Alternatively, individual users may someday be able to install a browser plug-in that would display ratings for each news site they accessed. The Good Housekeeping-type seals hold out the promise of appealing to marketers and ad agencies — hence, Publicis’s involvement — in that they could be used to form a “whitelist” of approved sites, keeping advertisers from linking their brands to toxic content.
“Our goal isn’t necessarily to stop [fake news] but to arm people with some basic information when they’re about to read or share stuff,” Brill said. “We’re not trying to block anything.”
Ideally, he said, a user encountering, say, the website Whatisfracking.com in a Google search would quickly learn that the site is funded by a vested interest, the American Petroleum Institute. It would also instantly flag as “fake news” a site such as the Denver Guardian, which posted a bogus story about Hillary Clinton that was viewed by about 1.6 million people during the late stages of the 2016 presidential campaign.
NewsGuard aims to roll out its system in time for the midterm elections this year, but Brill and Crovitz acknowledge they have their work cut out for them. The venture has assessed and rated only about 100 of the 7,500 sites it hopes to tackle.
The project also faces headwinds from the platforms that figure to be its largest potential customers — most of which have undertaken their own media-rating initiatives amid the public and government outcry over fake news. Google, for example, adjusted its search algorithms last summer to push down “low-quality” content, such as Holocaust-denial pages.
Facebook, Google, Bing and Twitter also have partnered with a nonprofit venture called the Trust Project, which adds standardized disclosures from news publishers about their ethics and standards. (The Washington Post is a participant in this initiative.) And Facebook and Google have fact-checking projects of their own.
Despite NewsGuard’s ambitious approach to mapping the news and fake-news universe — Brill estimates the 7,500 or so sites it has zeroed in on account for about 98 percent of the news content seen by Americans — no single assessment or score can cover everything a news organization produces. Quality and expertise within a news organization vary; what the news department produces might be very different from what the opinion department turns out. One rating might not fit all.
Still, Brill says technology can’t do what humans can — such as pointing out which interests are behind a popular website. “Whatever algorithms Google has, it’s not working” to defeat the fake-news scourge, he said.
NewsGuard’s initial team of journalists includes Brill and Crovitz; executive editor James Warren, former managing editor of the Chicago Tribune; and managing editor Eric Effron, formerly of Reuters, the Week and Legal Times. Brill said he expects to hire “three or four dozen” staffers and freelancers.