Working together to address AI risks and opportunities at MSC

For 60 years, the Munich Security Conference has brought together world leaders, businesses, experts and civil society for frank discussions about strengthening and safeguarding democracies and the international order. Amid mounting geopolitical challenges, consequential elections around the world, and increasingly sophisticated cyber threats, these conversations are more urgent than ever. And AI’s emerging role in both offense and defense adds a dramatic new dimension.

Earlier this week, Google’s Threat Analysis Group (TAG), Mandiant and Trust & Safety teams released a new report showing that Iranian-backed groups are using information warfare to influence public perceptions of the Israel-Hamas war. The report also provides the latest updates to our prior reporting on the cyber dimensions of Russia’s war in Ukraine. TAG separately reported on the growth of commercial spyware that governments and bad actors are using to threaten journalists, human rights defenders, dissidents and opposition politicians. And we continue to see reports of threat actors exploiting vulnerabilities in legacy systems to compromise the security of governments and private businesses.

In the face of these growing threats, we have a historic opportunity to use AI to shore up the cyber defenses of the world’s democracies, providing defensive tools to businesses, governments and organizations at a scale previously available only to the largest players. At Munich this week we’ll be talking about how we can use new investments, commitments and partnerships to address AI’s risks and seize its opportunities. Democracies cannot thrive in a world where attackers use AI to innovate but defenders cannot.

Using AI to strengthen cyber defenses

For decades, cyber threats have challenged security professionals, governments, businesses and civil society. AI can tip the scales and give defenders a decisive advantage over attackers. But like any technology, AI can also be used by bad actors and become a vector for vulnerabilities if it’s not securely developed and deployed.

That’s why today we launched the AI Cyber Defense Initiative, which harnesses AI’s security potential through a proposed policy and technology agenda designed to help secure, empower and advance our collective digital future. The initiative builds on our Secure AI Framework (SAIF), which helps organizations build AI tools and products that are secure by default.

As part of the AI Cyber Defense Initiative, we’re launching a new “AI for Cybersecurity” startup cohort to help strengthen the transatlantic cybersecurity ecosystem, and expanding our $15 million commitment to cybersecurity skilling across Europe. We’re also committing $2 million to bolster cybersecurity research initiatives and open-sourcing Magika, Google’s AI-powered file type identification system. And we’re continuing to invest in our secure, AI-ready network of global data centers. By the end of 2024, we will have invested over $5 billion in data centers in Europe, helping support secure, reliable access to a range of digital services, including broad generative AI capabilities like our Vertex AI platform.
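For readers who want to try Magika, here is a minimal sketch using its Python package as initially open-sourced (installable via pip). The `identify_bytes` and `identify_path` calls and the `output.ct_label` and `output.score` result attributes reflect the first published API and may differ in later releases; the file path in the second example is a hypothetical placeholder.

```python
# Minimal sketch of Magika's Python API as initially open-sourced
# (pip install magika). Attribute names like output.ct_label and
# output.score may differ in later releases of the library.
from pathlib import Path

from magika import Magika

magika = Magika()  # loads the bundled deep-learning model once

# Identify file content held in memory, e.g. an uploaded attachment.
result = magika.identify_bytes(b"#!/bin/bash\necho 'hello world'\n")
print(result.output.ct_label)  # e.g. "shell"
print(result.output.score)     # model confidence, e.g. 0.99

# Identify a file on disk; this path is a hypothetical placeholder.
result = magika.identify_path(Path("suspicious_attachment.bin"))
print(result.output.ct_label)
```

Fast, content-based type detection like this is a common building block in malware-scanning and upload-filtering pipelines, where attackers routinely disguise executable content behind misleading file extensions.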

Safeguarding democratic elections

This year, elections will take place across Europe, the United States, India and dozens of other countries. We have a long history of supporting the integrity of democratic elections, most recently with the announcement of our EU prebunking campaign ahead of parliamentary elections. The campaign, which uses short video ads on social media to teach audiences how to spot common manipulation techniques before they encounter them, kicks off this spring in France, Germany, Italy, Belgium and Poland. And we’re fully committed to continuing our efforts to stop abuse on our platforms, surface high-quality information to voters, and give people information about AI-generated content so they can make more informed decisions.

There are understandable concerns about the potential misuse of AI to create deepfakes and mislead voters. But AI also presents a unique opportunity to prevent abuse at scale. Google’s Trust & Safety teams are tackling this challenge, using AI to enhance our abuse-fighting efforts, enforce our policies at scale, and adapt quickly to new situations or claims.

We continue to partner with our peers across the industry to share research and counter threats and abuse, including the risk of deceptive AI content. Just last week, we joined the Coalition for Content Provenance and Authenticity (C2PA), which is developing a content credential to provide transparency into how AI-generated content is made and edited over time. C2PA builds on our cross-industry collaborations around responsible AI with the Frontier Model Forum, the Partnership on AI, and other initiatives.

Working together to defend the rules-based international order

The Munich Security Conference has stood the test of time as a forum for confronting challenges to democracy. For 60 years, democracies have met those challenges collectively, including historic shifts like the one AI now presents. Now we have an opportunity to come together once again, as governments, businesses, academics and civil society, to forge new partnerships, harness AI’s potential for good, and strengthen the rules-based international order.
