Google announces the Coalition for Secure AI

AI needs a security framework and applied standards that can keep pace with its rapid growth. That’s why last year we shared the Secure AI Framework (SAIF), knowing that it was just the first step. Of course, operationalizing any industry framework requires close collaboration with others — and above all a forum to make that happen.

Today at the Aspen Security Forum, alongside our industry peers, we’re introducing the Coalition for Secure AI (CoSAI). We’ve been working to pull this coalition together over the past year, in order to advance comprehensive security measures for addressing the unique risks that come with AI, for both issues that arise in real time and those over the horizon.

CoSAI includes founding members Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal and Wiz — and it will be housed under OASIS Open, the international standards and open source consortium.

Introducing CoSAI’s inaugural workstreams

As individuals, developers and companies continue their work to adopt common security standards and best practices, CoSAI will support this collective investment in AI security. Today, we’re also sharing the first three areas of focus the coalition will tackle in collaboration with industry and academia:

  1. Software Supply Chain Security for AI systems: Google has continued to work toward extending SLSA Provenance to AI models, helping to establish that AI software is secure by recording how it was created and handled throughout the software supply chain. Building on existing SSDF and SLSA security principles for AI and classical software, this workstream will provide guidance on evaluating model provenance, managing third-party model risks, and assessing the provenance of full AI applications.
  2. Preparing defenders for a changing cybersecurity landscape: When handling day-to-day AI governance, security practitioners don’t have a simple path to navigate the complexity of security concerns. This workstream will develop a defender’s framework to help defenders identify investments and mitigation techniques to address the security impact of AI use. The framework will scale mitigation strategies with the emergence of offensive cybersecurity advancements in AI models.
  3. AI security governance: Governance around AI security issues requires a new set of resources and an understanding of the unique aspects of AI security. To help, CoSAI will develop a taxonomy of risks and controls, a checklist, and a scorecard to guide practitioners in readiness assessments, management, monitoring and reporting of the security of their AI products.
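To make the provenance idea in workstream 1 concrete, here is a minimal sketch of the pattern SLSA-style provenance enables: record how a model artifact was produced, then verify the artifact's digest against that record before using it. The field names loosely follow the in-toto/SLSA v1.0 statement layout, but the builder IDs, URIs, and helper functions below are illustrative assumptions, not part of any official tooling.

```python
# Illustrative sketch only: a toy in-toto/SLSA-style provenance statement
# for a model blob, plus a digest check before use. Names and URIs are
# hypothetical examples, not real endpoints or official CoSAI tooling.
import hashlib


def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def make_provenance(model_bytes: bytes, builder_id: str, source_uri: str) -> dict:
    """Record who built the artifact and from what source, plus its digest."""
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [
            {"name": "model.bin", "digest": {"sha256": sha256_hex(model_bytes)}}
        ],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {"externalParameters": {"source": source_uri}},
            "runDetails": {"builder": {"id": builder_id}},
        },
    }


def verify_artifact(model_bytes: bytes, statement: dict) -> bool:
    """Accept the artifact only if it matches the digest in the statement."""
    recorded = statement["subject"][0]["digest"]["sha256"]
    return recorded == sha256_hex(model_bytes)


model = b"fake model weights"  # stand-in for a real model file
stmt = make_provenance(
    model,
    builder_id="https://example.com/builders/trainer",  # hypothetical
    source_uri="git+https://example.com/org/model-repo",  # hypothetical
)
print(verify_artifact(model, stmt))        # matching artifact passes
print(verify_artifact(b"tampered", stmt))  # modified artifact fails
```

In practice the statement would also be cryptographically signed by the builder, so a consumer can trust both the digest and the metadata; the sketch omits signing to keep the core verify-before-use flow visible.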

Additionally, CoSAI will collaborate with organizations such as the Frontier Model Forum, Partnership on AI, Open Source Security Foundation and MLCommons to advance responsible AI.

What’s next

As AI advances, we’re committed to ensuring effective risk management strategies evolve along with it. We’re encouraged by the industry support we’ve seen over the past year for making AI safe and secure. We’re even more encouraged by the action we’re seeing from developers, experts and companies big and small to help organizations securely implement, train and use AI.

AI developers need — and end users deserve — a framework for AI security that meets the moment and responsibly captures the opportunity in front of us. CoSAI is the next step in that journey and we can expect more updates in the coming months. To learn how you can support CoSAI, you can visit coalitionforsecureai.org. In the meantime, you can visit our Secure AI Framework page to learn more about Google’s AI security work.
