Defund the Misinformation Slot Machine
By Daniel J Rogers, New York University
New York, Feb 14 (360info) Misinformation is a profitable business, and one of the most effective ways to slow its spread is to cut off the advertising money that unknowingly funds it.
Misinformation is spread for a variety of reasons. The Russian government uses disinformation as part of its expansionist geopolitical goals. The Chinese Communist Party uses it to strengthen its political authority and increase its global influence.
Professional influence operators act as hired guns on behalf of powerful industries, from oil companies to tobacco companies. Trolls on 4chan often do it in the service of sheer nihilism.
But by far the most common and compelling motivation for spreading misinformation online is profit, and it has led the world into a veritable misinformation crisis. The biggest companies in the world are the ones providing the machinery to capture and monetize public attention at scale; today’s Internet runs on “clicks and eyeballs”.
Who provides the money for this machine? Sometimes audiences are monetized through the sale of merchandise or the solicitation of direct donations. Most often the money comes from advertising. Advertisers subsidize the web with over $400 billion a year in digital ad spend.
They pay into a complex ecosystem dominated by two oversized ad-tech platforms – Google and Facebook (each taking a hefty commission) – and their money eventually gets to content creators and publishers on the open web.
The income of these publishers and content creators is determined mainly by the size and purchasing power of the audience they capture. And so our modern information ecosystem has become a race for eyeballs: a race won by the most salacious, infuriating, divisive and, above all, engaging content. On the lucrative side of this transaction, everyone wins. Publishers that capture an audience’s attention make money, as do the platforms that take a commission on each ad placed. It is estimated that nearly a quarter of a billion US dollars a year goes to subsidizing online misinformation.
Those on the other side of this transaction lose. Advertisers who pour money into this system end up having their brands appear alongside inappropriate content, damaging their reputation and costing them money.
It also affects what people choose to buy: around 51 percent of 1,500 millennials and Gen Xers surveyed in 2020 said they were less likely to buy from a company whose ads appeared in ‘unsafe’ placements, and three times less likely to recommend that brand to others.
The Global Disinformation Index (GDI) is a non-profit organization that seeks to rebalance this equation. Advertisers have long lacked data on where misinformation lives on the web. With that data, they could exclude those sites from their automated advertising campaigns, protecting their brands and redirecting funds away from disinformation peddlers. What was needed was a transparent, independent and neutral index of “misinformation risk” across the open web.
The goal presented a technical challenge. Internet traffic is roughly distributed according to a power law, meaning a small number of top websites receive a large fraction of all traffic. But the distribution also has a “long tail”: a vast number of smaller websites that, taken together, also capture a large share of traffic, even though no single one of them does so alone.
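To make the long-tail point concrete, here is a minimal sketch assuming a hypothetical Zipf-like power law, in which the k-th most popular site receives traffic proportional to 1/k. The exponent of real web traffic differs, but the shape of the result is similar: the head is huge, yet the tail beyond it still adds up to a comparable share.

```python
# Hypothetical Zipf-like traffic model: the k-th ranked site
# gets traffic proportional to 1/k. Numbers here are illustrative.

N = 1_000_000  # assumed number of websites
traffic = [1.0 / k for k in range(1, N + 1)]
total = sum(traffic)

top_100 = sum(traffic[:100]) / total   # share held by the 100 biggest sites
tail = sum(traffic[1000:]) / total     # share held by everything past rank 1000

print(f"Top 100 sites:        {top_100:.0%} of traffic")  # roughly 36%
print(f"Sites beyond rank 1000: {tail:.0%} of traffic")   # roughly 48%
```

Under this toy model, the hundred biggest sites hold about a third of all traffic, yet the sites beyond rank 1,000 jointly hold even more — which is why human review of the head alone cannot cover the problem.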
For this reason, misinformation risk ratings had to be built as a hybrid of human review (to capture high-profile media with the required nuance and fidelity) and large-scale automation (to keep pace with the vast number of “long tail” sites).
The human part of the methodology aims to assess the journalistic integrity, and therefore the risk of misinformation, of publishers in more than 20 different media markets to date. This methodology is in line with the Journalism Trust Initiative, an international standardization effort promoted by Reporters Without Borders.
GDI assesses content and operational policies, looking for conflicts of interest, past evidence of misinformation risk, and breaches of journalistic standards as part of its assessments.
Meanwhile, GDI’s automated systems crawl hundreds of thousands of sites, evaluating millions of new articles each week and identifying those peddling the day’s misleading narratives. When a particular site crosses a minimum threshold, it is flagged for further human review.
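The triage described above — automated scoring feeding a human-review queue — can be sketched roughly as follows. This is not GDI’s actual system; the classifier, the threshold value and all names here are invented for illustration, with a trivial keyword check standing in for whatever trained model a real pipeline would use.

```python
# Hedged sketch of an automated triage pipeline: score articles,
# aggregate per site, and flag sites that cross a risk threshold
# for human review. All names and values are hypothetical.

FLAG_THRESHOLD = 0.75  # assumed per-site risk cutoff

def risk_score(article_text: str) -> float:
    """Placeholder for an automated classifier (a real system
    would use a trained model, not a keyword check)."""
    return 0.9 if "miracle cure" in article_text.lower() else 0.1

def triage(site_articles: dict[str, list[str]]) -> list[str]:
    """Return sites whose average article risk crosses the threshold."""
    flagged = []
    for site, articles in site_articles.items():
        avg = sum(risk_score(a) for a in articles) / len(articles)
        if avg >= FLAG_THRESHOLD:
            flagged.append(site)  # queued for human review
    return flagged

sites = {
    "example-news.test": ["Local election results announced."],
    "clickbait.test": ["This miracle cure shocked doctors!",
                       "Another miracle cure they don't want you to know."],
}
print(triage(sites))  # only the high-risk site is flagged
```

The key design point the article implies is the division of labour: cheap automated scoring covers the long tail at scale, while the expensive, nuanced human judgment is spent only on sites the automation surfaces.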
Ultimately, all of these processes feed into datasets that GDI then provides to advertisers and digital ad platforms to prevent them from inadvertently buying or selling ads on disinformation trafficking sites.
This not only helps protect advertisers’ brands, but also helps channel ad revenue away from misinformation and into higher quality information.
The GDI was founded on three principles: transparency, independence and neutrality. It is a global organization that works in partnership with NGOs, governments and business organizations around the world.
It operates in more than 10 languages and more than 20 countries, yet it is only one part of a larger ecosystem of organizations working on media literacy, technology policy reform, counter-messaging, and platform trust and safety, all contributing to the ultimate goal of disrupting the scourge of online misinformation.
By some estimates, GDI’s partnerships with more than a dozen major ad platforms have already cut disinformation providers’ ad revenue roughly in half. But there is still a long way to go to protect democracy and defund disinformation. (360info.org)