AI is a transformational technology. Even in the wake of two decades of unprecedented innovation, AI stands apart as an inflection point for people everywhere. We’re increasingly seeing how it can help accelerate pharmaceutical drug development, improve energy efficiency, revolutionize cybersecurity and expand accessibility.
As we continue to expand use cases and make technical advancements, we know it’s more important than ever to make sure our work isn’t happening in a silo: industry, governments, researchers and civil society must be bold and responsible together. In doing so, we can expand and share knowledge, identify ways to mitigate emerging risks and prevent abuse, and further the development of tools to increase content transparency for people everywhere.
That’s been our approach since the beginning, and today we wanted to share some of the partnerships, commitments and codes of conduct we’re participating in to understand AI’s potential and shape it responsibly.
Industry coalitions, partnerships and frameworks
- Frontier Model Forum: Google, along with Anthropic, Microsoft and OpenAI, launched the Frontier Model Forum to further the safe and responsible development of frontier AI models. The Forum, together with philanthropic partners, also pledged over $10 million for a new AI Safety Fund to advance research into tools that help society effectively test and evaluate the most capable AI models.
- Partnership on AI (PAI): We helped found PAI and are part of its community of experts dedicated to fostering responsible practices in the development, creation and sharing of AI, including media created with generative AI.
- MLCommons: We are part of MLCommons, a collective that aims to accelerate machine learning innovation and increase its positive impact on society.
- Secure AI Framework (SAIF): We introduced SAIF, a framework for building secure AI systems that mitigates risks specific to AI, such as theft of model weights, poisoning of training data and injection of malicious inputs through prompts. Our goal is to work with industry partners to apply the framework over time.
- Coalition for Content Provenance and Authenticity (C2PA): We recently joined the C2PA as a steering committee member. The coalition is a cross-industry effort to provide people with more transparency and context about digital content. Google will help develop the C2PA technical standard and further the adoption of Content Credentials, tamper-resistant metadata that shows how content was made and edited over time.
Our work with governments and civil society
- Voluntary White House AI commitments: Alongside other companies at the White House, we jointly committed to advancing responsible practices in the development and use of artificial intelligence to ensure AI helps everyone. And we’ve made significant progress toward living up to our commitments.
- G7 Code of Conduct: We support the G7’s voluntary Code of Conduct, which aims to promote safe, trustworthy and secure AI worldwide.
- US AI Safety Institute Consortium: We’re participating in NIST’s AI Safety Institute Consortium, where we’ll share our expertise as we all work to advance safe and trustworthy AI globally.
- UK AI Safety Institute: The UK AI Safety Institute has access to some of our most capable models for research and safety purposes, so it can build expertise and capability for the long term. We’re actively working together to build more robust evaluations for AI models and to seek consensus on best practices as the sector advances.
- National AI Research Resource (NAIRR) pilot: We’re contributing our cutting-edge tools, compute and data resources to the National Science Foundation’s NAIRR pilot, which aims to democratize AI research across the U.S.
As we expand these efforts, we’ll update this list to reflect the latest work we’re doing to collaborate with the industry, governments and civil society, among others.