How Google Fights Disinformation
Introduction
The open Internet has enabled people to create, connect, and distribute information like never before. It has exposed us to perspectives and experiences that were previously out of reach. It has enabled increased access to knowledge for everyone.
Google continues to believe that the Internet is a boon to society – contributing to global education, healthcare, research, and economic development by enabling citizens to become more knowledgeable and involved through access to information at an unprecedented scale.
However, like other communication channels, the open Internet is vulnerable to the organized propagation of false or misleading information. Over the past several years, concerns that we have entered a “post-truth” era have become a controversial subject of political and academic debate.
These concerns directly affect Google and our mission – to organize the world’s information and make it universally accessible and useful. When our services are used to propagate deceptive or misleading information, our mission is undermined.
How companies like Google address these concerns has an impact on society and on the trust users place in our services. We take this responsibility very seriously and believe it begins with providing transparency into our policies, inviting feedback, enabling users, and collaborating with policymakers, civil society, and academics around the world.
This document outlines our perspective on disinformation and misinformation and how we address it throughout Google. It begins with the three strategies that comprise our response across products, and an overview of our efforts beyond the scope of our products. It continues with an in-depth look at how these strategies are applied, and expanded, to Google Search, Google News, YouTube, and our advertising products.
We welcome a dialogue about what works well, what does not, and how we can work with others in academia, civil society, newsrooms, and governments to meet the ever-evolving challenges of disinformation.
What is disinformation?
As we’ve all experienced over the past few years, the words “misinformation”, “disinformation”, and “fake news” mean different things to different people and can become politically charged when they are used to characterize the propagators of a specific ideology or to undermine political adversaries.
However, there is something objectively problematic and harmful to our users when malicious actors attempt to deceive them. It is one thing to be wrong about an issue. It is another to purposefully disseminate information one knows to be inaccurate with the hope that others believe it is true or to create discord in society.
We refer to these deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web as “disinformation”.
The entities that engage in disinformation have a diverse set of goals. Some are financially motivated, engaging in disinformation activities for the purpose of turning a profit. Others are politically motivated, engaging in disinformation to foster specific viewpoints among a population, to exert influence over political processes, or for the sole purpose of polarizing and fracturing societies. Others engage in disinformation for their own entertainment, often in the form of bullying; these actors are commonly referred to as “trolls”.
Levels of funding and sophistication vary across those entities, ranging from local mom-and-pop operations to well-funded and state-backed campaigns. In addition, propagators of disinformation sometimes end up working together, even unwittingly. For instance, politically motivated actors might emphasize a piece of disinformation that financially motivated groups might latch onto because it is getting enough attention to be a potential revenue source. Sometimes, a successful disinformation narrative is propagated by content creators who are acting in good faith and are unaware of the goals of its originators.
This complexity makes it difficult to gain a full picture of the efforts of actors who engage in disinformation or gauge how effective their efforts may be. Furthermore, because it can be difficult to determine whether a propagator of falsehoods online is acting in good faith, responses to disinformation run the risk of inadvertently harming legitimate expression.
Tackling disinformation in our products and services
We have an important responsibility to our users and to the societies in which we operate to curb the efforts of those who aim to propagate false information on our platforms. At the same time, we respect our users’ fundamental human rights (such as free expression) and we try to be clear and predictable in our efforts, letting users and content creators decide for themselves whether we are operating fairly. Of course, this is a delicate balance, as sharing too much of the granular details of how our algorithms and processes work would make it easier for bad actors to exploit them.
We face complex trade-offs and there is no ‘silver bullet’ that will resolve the issue of disinformation, because:
- It can be extremely difficult (or even impossible) for humans or technology to determine the veracity of, or intent behind, a given piece of content, especially when it relates to current events.
- Reasonable people can have different perspectives on the right balance between the risks of harm to good-faith free expression and the imperative to tackle disinformation.
- The solutions we build have to be applied in ways that are understandable and predictable for users and content creators, and compatible with the kind of automation that is required when operating services at the scale of the web. We cannot create standards that require deep deliberation for every individual decision.
- Disinformation manifests differently on different products and surfaces. Solutions that might be relevant in one context might be irrelevant or counter-productive in others. Our products cannot operate in the exact same way in that regard, and this is why they approach disinformation in their own specific ways.
Our approach to tackling disinformation in our products and services is based around a framework of three strategies: make quality count in our ranking systems, counteract malicious actors, and give users more context. We will outline them in this section, as well as the efforts we undertake beyond the scope of our products and services to team up with newsrooms and outside experts, and to get ahead of future risks. It is worth noting that these strategies are also used to address misinformation more broadly, which pertains to the overall trustworthiness of the information we provide users in our products.
In later sections of this paper, we will detail how these strategies are implemented and expanded for Google Search, Google News, YouTube, and our advertising platforms. We adopt slightly different approaches in how we apply these principles to different products, given that each service presents its own unique challenges.
Make Quality Count
Our products are designed to sort through immense amounts of material and deliver content that best meets our users’ needs. This means delivering quality information and trustworthy commercial messages, especially in contexts that are prone to rumors and the propagation of false information (such as breaking news events).
While each product and service implements this differently, they share important principles that ensure our algorithms treat websites and content creators fairly and evenly:
- Information is organized by “ranking algorithms”.
- These algorithms are geared toward ensuring the usefulness of our services, as measured by user testing, not fostering the ideological viewpoints of the individuals who build or audit them. When it comes to Google Search, you can find a detailed explanation of how those algorithms operate at google.com/search/howsearchworks (a simplified sketch of blending relevance and quality signals follows this list).
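To make the idea of blending relevance with quality signals more concrete, here is a deliberately simplified sketch in Python. It is purely illustrative: the Document fields, the quality_weight parameter, and the scoring formula are invented for this example and do not describe how Google's ranking algorithms actually work.

```python
# Illustrative only: a toy ranking function that orders documents by a blend of
# query relevance and a hypothetical source-quality signal. Field names and
# weights are invented for this sketch and do not reflect any real system.
from dataclasses import dataclass
from typing import List


@dataclass
class Document:
    url: str
    relevance: float  # hypothetical query-document relevance, 0..1
    quality: float    # hypothetical source-quality signal, 0..1


def rank(documents: List[Document], quality_weight: float = 0.4) -> List[Document]:
    """Order documents by a weighted blend of relevance and quality signals."""
    def score(doc: Document) -> float:
        return (1 - quality_weight) * doc.relevance + quality_weight * doc.quality
    return sorted(documents, key=score, reverse=True)


if __name__ == "__main__":
    docs = [
        Document("https://example.org/report", relevance=0.7, quality=0.9),
        Document("https://example.net/rumor", relevance=0.8, quality=0.2),
    ]
    for doc in rank(docs):
        print(doc.url)  # the higher-quality source ranks first despite lower relevance
```

The point of the sketch is only that quality can enter the ordering as an explicit signal alongside relevance, rather than ranking on relevance alone.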
Counteract Malicious Actors
Algorithms cannot determine whether a piece of content on current events is true or false, nor can they assess the intent of its creator just by reading what’s on a page. However, there are clear cases of intent to manipulate or deceive users. For instance, a news website that alleges it contains “Reporting from Bordeaux, France” but whose account activity indicates that it is operated out of New Jersey in the U.S. is likely not being transparent with users about its operations or what they can trust it to know firsthand.
That’s why our policies across Google Search, Google News, YouTube, and our advertising products clearly outline behaviors that are prohibited – such as misrepresentation of one’s ownership or primary purpose on Google News and our advertising products, or impersonation of other channels or individuals on YouTube.
Furthermore, since the early days of Google and YouTube, many content creators have tried to deceive our ranking systems to get more visibility – a set of practices we view as a form of ‘spam’ and that we’ve invested significant resources to address.
This is relevant to tackling disinformation, since many of those who engage in the creation or propagation of content intended to deceive deploy similar tactics in an effort to achieve more visibility. Over the course of the past two decades, we have invested in systems that can reduce ‘spammy’ behaviors at scale, and we complement those with human reviews.
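As a rough illustration of how automated signals and human review can complement each other, the sketch below scores hypothetical accounts on two invented signals (a mismatch between claimed and observed location, echoing the Bordeaux/New Jersey example above, and a duplicate-content ratio) and escalates only high-scoring accounts to reviewers. The signal names, weights, threshold, and triage helper are assumptions made for this example, not a description of Google's systems.

```python
# Illustrative only: automated triage of hypothetical deception signals, with
# human review reserved for the highest-scoring accounts. All signals and
# thresholds are invented for this sketch.
from dataclasses import dataclass
from typing import List


@dataclass
class Account:
    account_id: str
    claimed_location: str            # location the site claims to report from
    observed_location: str           # location suggested by account activity
    duplicate_content_ratio: float   # share of content copied verbatim elsewhere, 0..1


def suspicion_score(account: Account) -> float:
    """Combine a few hypothetical deception signals into a single score in [0, 1]."""
    score = 0.0
    if account.claimed_location != account.observed_location:
        score += 0.5                 # claimed and observed locations disagree
    score += 0.5 * account.duplicate_content_ratio
    return min(score, 1.0)


def triage(accounts: List[Account], review_threshold: float = 0.6) -> List[Account]:
    """Automated triage: only accounts above the threshold reach human reviewers."""
    return [a for a in accounts if suspicion_score(a) >= review_threshold]


if __name__ == "__main__":
    review_queue = triage([
        Account("acct-1", "Bordeaux, FR", "New Jersey, US", duplicate_content_ratio=0.8),
        Account("acct-2", "Lyon, FR", "Lyon, FR", duplicate_content_ratio=0.1),
    ])
    print([a.account_id for a in review_queue])  # only acct-1 is escalated for review
```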
Give Users More Context
Easy access to context and a diverse set of perspectives are key to providing users with the information they need to form their own views. Our products and services expose users to numerous links or videos in response to their searches, which maximizes the chances that users are exposed to diverse perspectives or viewpoints before deciding what to explore in depth.
Google Search, Google News, YouTube, and our advertising products have all developed additional mechanisms to provide more context and agency to users. Those include:
- “Knowledge” or “Information” Panels in Google Search and YouTube, providing high-level facts about a person or issue.
- Making it easier to discover the work of fact-checkers on Google Search or Google News, by using labels or snippets making it clear to users that a specific piece of content is a fact-checking article.
- A “Full Coverage” function in Google News enabling users to access a non-personalized, in-depth view of a news cycle at the tap of a finger.
- “Breaking News” and “Top News” shelves, and “Developing News” information panels on YouTube, making sure that users are exposed to news content from authoritative sources when looking for information about ongoing news events.
- Information panels providing “Topical Context” and “Publisher Context” on YouTube, providing users with contextual information from trusted sources to help them be more informed consumers of content on the platform. These panels provide authoritative information on well-established historical and scientific topics that have often been subject to misinformation online and on the sources of news content, respectively.
- “Why this ad” labels enabling users to understand why they’re presented with a specific ad and how to change their preferences so as to alter the personalization of the ads they are shown, or to opt out of personalized ads altogether.
- In-ad disclosures and transparency reports on election advertising, which are rolling out during elections in the US, Europe, and India as a starting point.
We also empower users to let us know when we’re getting it wrong by using feedback buttons across Search, YouTube, and our advertising products to flag content that might be violating our policies.
Teaming up with newsrooms and outside experts
Our work to address disinformation is not limited to the scope of our products and services. Indeed, other organizations play a fundamental role in addressing this societal challenge, such as newsrooms, fact-checkers, civil society organizations, or researchers. While we all address different aspects of this issue, it is only by coming together that we can succeed. That is why we dedicate significant resources to supporting quality journalism, and to weaving together partnerships with many other organizations in this space.
Supporting quality journalism
People come to Google looking for information they can trust, and that information often comes from the reporting of journalists and news organizations around the world.
A thriving news ecosystem matters deeply to Google and directly impacts our efforts to combat disinformation. When quality journalism struggles to reach wide audiences, malicious actors have more room to propagate false information.
Over the years, we’ve worked closely with the news industry to address these challenges and launched products and programs to help improve the business model of online journalism. These include the Accelerated Mobile Pages Project to improve the mobile web, YouTube Player for Publishers to simplify video distribution and reduce costs, and many more.
In March 2018, we launched the Google News Initiative (GNI) to help journalism thrive in the digital age. With a $300 million commitment over 3 years, the initiative aims to elevate and strengthen quality journalism, evolve business models to drive sustainable growth, and empower news organizations through technological innovation. $25M of this broader investment was earmarked as innovation grants for YouTube to support news organizations in building sustainable video operations.
One of the programs supported by the Google News Initiative is Subscribe with Google, a way for people to easily subscribe to various news outlets, helping publishers engage readers across Google and the web. Another is News Consumer Insights, a new dashboard built on top of Google Analytics, which will help news organizations of all sizes understand and segment their audiences with a subscriptions strategy in mind. More details on these projects and others can be found at g.co/newsinitiative.
Partnering with outside experts
Addressing disinformation is not something we can do on our own. The Google News Initiative also houses our products, partnerships, and programs dedicated to supporting news organizations in their efforts to create quality reporting that displaces disinformation. This includes:
- Helping to launch the First Draft Coalition (https://firstdraftnews.org/), a nonprofit that convenes news organizations and technology companies to tackle the challenges around combating disinformation online – especially in the run-up to elections.
- Participating in and providing financial support to the Trust Project (http://thetrustproject.org/), of which Google is a founding member and which explores how journalism can signal its trustworthiness online. The Trust Project has developed eight indicators of trust that publishers can use to better convey why their content should be seen as credible, with promising results for the publishers who have trialed them.
- Partnering with Poynter’s International Fact-Checking Network (IFCN), a nonpartisan organization gathering fact-checking organizations from the United States, Germany, Brazil, Argentina, South Africa, India, and more.
In addition, we support the work of researchers who explore the issues of disinformation and trust in journalism by funding research at organizations such as First Draft, Oxford University’s Reuters Institute for the Study of Journalism, Michigan State University’s Quello Center for Telecommunication Management and Law, and more.
Finally, in March 2018, Google.org (Google’s philanthropic arm) launched a $10 million global initiative to support media literacy around the world, following in the footsteps of programs we have already supported in the UK, Brazil, Canada, Indonesia, and more.
We will continue to explore more ways to partner with others on these issues, whether by building new products that might benefit the work of journalists and fact-checkers, supporting more independent initiatives that help curb disinformation, or developing self-regulatory practices to demonstrate our responsibility.
Getting ahead of future risks
Creators of disinformation will never stop trying to find new ways to deceive users. It is our responsibility to make sure we stay ahead of the game. Many of the product strategies and external partnerships mentioned earlier help us reach that goal. In addition, we dedicate specific focus to bolstering our defenses in the run-up to elections and invest in research and development efforts to stay ahead of new technologies or tactics that could be used by malicious actors, such as synthetic media (also known as ‘deep fakes’).
Protecting elections
Fair elections are critical to the health of democracy, and we take our work to protect elections very seriously. Our products can help make sure users have access to accurate information about elections. For example, we often partner with election commissions, or other official sources, to make sure key information, such as the location of polling booths or the dates of votes, is easily available to users.
We also work to protect elections from attacks and interference, including focusing on combating political influence operations, improving account and website security, and increasing transparency.
To counter political influence operations, we work with our partners at Jigsaw and maintain multiple internal teams that identify malicious actors wherever they originate, disable their accounts, and share threat information with other companies and law enforcement officials. We routinely provide public updates about these operations.
There is more we can do beyond protecting our own platforms. Over the past several years, we have taken steps to help protect accounts, campaigns, candidates, and officials against digital attacks. Our Protect Your Election project offers a suite of extra security protections that guard against malicious or insecure apps and phishing. To protect election and campaign websites, we also offer Project Shield, which can mitigate the risk of Distributed Denial of Service (DDoS) attacks.
In the run-up to elections, we provide free training to ensure that campaign professionals and political parties are up to speed on the means to protect themselves from attack. For instance, in 2018, we trained more than 1,000 campaign professionals and the eight major U.S. Republican and Democratic committees on email and campaign website security.
Furthermore, as a part of our security efforts, for the past eight years, we have displayed warnings to Gmail users who are at risk of phishing by potentially state-sponsored actors (even though, in most cases, the specific phishing attempt never reaches the user’s inbox).
Finally, to help users understand the context of the election-related ads they see online, we require additional verification for advertisers who wish to purchase political ads in the United States, provide transparency about the advertiser to the user, and have established an online transparency report and creative repository for US federal elections.
We look forward to expanding these tools, trainings, and strategies to more elections in 2019, starting with efforts focused on two of the world’s largest upcoming elections, in Europe and in India.
Expecting the unexpected
Creators of disinformation are constantly exploring new ways to bypass the defenses set by online services in an effort to spread their messages to a wider audience.
To stay ahead of the curve, we continuously invest resources to stay abreast of the next tools, tactics, or technologies that creators of disinformation may attempt to use. We convene with experts all around the world to understand what concerns them. We also invest in research, product, and policy developments to anticipate threat vectors that we might not be equipped to tackle at this point.
One example is the rise of new forms of AI-generated, photo-realistic, synthetic audio or video content known as “synthetic media” (often referred to as “deep fakes”). While this technology has useful applications (for instance, by opening new possibilities to those affected by speech or reading impairments, or new creative grounds for artists and movie studios around the world), it raises concerns when used in disinformation campaigns and for other malicious purposes.
The field of synthetic media is fast-moving and it is hard to predict what might happen in the near future. To help prepare for this issue, Google and YouTube are investing in research to understand how AI might help detect such synthetic content as it emerges, working with leading experts in this field from around the world.
Finally, because no detector can be perfect, we are engaging with civil society, academia, newsrooms, and governments to share our best understanding of this challenge and work together on what other steps societies can take to improve their preparedness. This includes exploring ways to help others come up with their own detection tools. One example may involve releasing datasets of synthesized content that others can use to train AI-based detectors.
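As a purely hypothetical illustration of how a released dataset of synthesized content could be used to train a detector, the sketch below fits a simple logistic-regression classifier on toy per-frame statistics using scikit-learn. The stand-in data, feature choices, and labels are invented for this example; real detectors are far more sophisticated, and no specific dataset or tooling is implied.

```python
# Illustrative only: a baseline "real vs. synthesized" classifier trained on a
# labeled set of frames. Random noise stands in for real data; the features are
# deliberately crude proxies for the artifacts that detectors often probe.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


def extract_features(frames: np.ndarray) -> np.ndarray:
    """Toy per-frame features: mean intensity, variance, and high-frequency energy."""
    means = frames.mean(axis=(1, 2))
    variances = frames.var(axis=(1, 2))
    high_freq = np.abs(np.diff(frames, axis=2)).mean(axis=(1, 2))
    return np.stack([means, variances, high_freq], axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: 200 "real" and 200 "synthesized" 64x64 grayscale frames.
    real = rng.normal(0.5, 0.10, size=(200, 64, 64))
    fake = rng.normal(0.5, 0.13, size=(200, 64, 64))
    X = extract_features(np.concatenate([real, fake]))
    y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = synthesized

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

The value of a shared dataset in this setup is simply that anyone can reproduce the labels and benchmark their own detector against them, rather than relying on a single organization's classifier.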