How Google is supporting the 2024 UK general election


In advance of the General Election on July 4th, we wanted to share more about our plans to support this process. In line with our commitment to helping organise the world’s information, making it universally accessible and useful, we are taking a number of steps to support election integrity in the UK by surfacing high quality information to voters, safeguarding our platforms from abuse and equipping campaigns with the best-in-class security tools and training. We’ll also do this work with an increased focus on the role artificial intelligence (AI) might play.

Informing voters by surfacing high quality information

In the run-up to elections, people need useful, relevant and timely information to help them navigate the electoral process. Here are some of the ways we make it easy for people to find what they need:

  • Search: When people search for topics like “how to vote,” they will find details about how they can vote — such as ID requirements, registration and voting deadlines — with links to authoritative sources including GOV.UK.
  • YouTube: For election news and information, our systems prominently surface content from authoritative sources on the YouTube homepage, in search results and the “Up Next” panel. For searches related to voting, an information panel may also direct viewers in the UK to official government voting resources. Regardless of whether we’re in an election season, YouTube also displays relevant information panels at the top of search results and under certain videos on topics prone to misinformation.
  • Ads: All advertisers who wish to run election ads in the UK on our platforms are required to go through a verification process and have an in-ad disclosure that clearly shows who paid for the ad. These ads are published in our Political Ads Transparency Report, where anyone can look up information such as how much was spent and where it was shown. We also limit how advertisers can target election ads.

Safeguarding our platforms and disrupting the spread of harmful misinformation

To better secure our products and prevent abuse, we continue to enhance our enforcement systems and to invest in Trust & Safety operations — including at our Google Safety Engineering Center (GSEC) for Content Responsibility, dedicated to online safety. We also continue to partner with the wider ecosystem to combat misinformation.

  • Enforcing our policies and using AI models to fight abuse at scale: We have long-standing policies that inform how we approach areas like manipulated media, hate and harassment, and incitement to violence — along with policies around demonstrably false claims that could undermine trust or participation in democratic processes, for example in YouTube’s Community Guidelines and our unreliable claims policy for advertisers. To help enforce our policies, our AI models are enhancing our abuse-fighting efforts. With recent advances in our Large Language Models (LLMs), we’re building faster and more adaptable enforcement systems that enable us to remain nimble and take action even more quickly when new threats emerge.
  • Working with the wider ecosystem on countering misinformation: Google News Initiative in conjunction with PA Media has launched Election Check 24, a new initiative aimed at combating mis- and disinformation around the UK’s next General Election.

Helping people navigate AI-generated content

We have introduced policies and tools to help audiences navigate AI-generated content:

  • Ads disclosures: We were the first tech company to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events. This includes ads created using AI. Our ads policies already prohibit the use of manipulated media to mislead people, such as deepfakes or doctored content.
  • YouTube content labels: YouTube’s misinformation policies prohibit technically manipulated content that misleads users and could pose a serious risk of egregious harm. YouTube also requires creators to disclose when they’ve created realistic altered or synthetic content, and will display a label indicating when the content people are watching is synthetic and realistic. In certain cases, YouTube may also add a label even when a creator hasn’t disclosed it, especially if the altered or synthetic content has the potential to confuse or mislead viewers.
  • A responsible approach to Generative AI products: In line with our principled and responsible approach to our generative AI products like Gemini, we’ve prioritised testing across safety risks ranging from cybersecurity vulnerabilities to misinformation and fairness. Out of an abundance of caution on such an important topic, we’re restricting the types of election-related queries for which Gemini will return responses.
  • Providing users with additional context: About this image in Search helps people assess the credibility and context of images found online. Our double-check feature in Gemini enables people to evaluate whether there’s content across the web to substantiate Gemini’s response.
  • Digital watermarking: SynthID, a tool from Google DeepMind, directly embeds a digital watermark into AI-generated text, images, audio and video.
  • Industry collaboration: We recently joined the C2PA coalition and standard, a cross-industry effort to help provide more transparency and context for people on AI-generated content. Alongside other leading tech companies, we have also pledged to help prevent deceptive AI-generated imagery, audio or video content from interfering with this year’s global elections. The ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.

Equipping high risk users with best-in-class security features and training

As elections come with increased cybersecurity risks, we are working hard to help high risk users, such as campaign and election officials, improve their security in light of existing and emerging threats, and to educate them on how to use our products and services.

  • Security tools for campaign and election teams: We offer free services like our Advanced Protection Program — our strongest set of cyber protections — and Project Shield, which provides unlimited protection against Distributed Denial of Service (DDoS) attacks. We are also providing security training and security tools, including Titan Security Keys, which defend against phishing attacks and prevent bad actors from accessing a user’s Google Account.
  • Tackling coordinated influence operations: Our Google Threat Intelligence team helps identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high risk entities. We report on actions taken in our quarterly TAG bulletin, and meet regularly with government officials and others in the industry to share threat information and suspected election interference. Mandiant also helps organisations build holistic election security programs and harden their defences with comprehensive solutions, services and tools. A recent publication from the team gives an overview of the global election cybersecurity landscape, designed to help election organisations tackle a range of potential threats.

This all builds on work we do around elections in other countries and regions including in the US, EU, and India. Supporting elections is a core part of our responsibility to our users and we’re committed to working with government, industry and civil society to protect the integrity of elections in the UK.
