{"id":18867,"date":"2024-07-23T19:19:17","date_gmt":"2024-07-23T19:19:17","guid":{"rendered":"http:\/\/scannn.com\/google\/google-announces-the-coalition-for-secure-ai\/"},"modified":"2024-07-23T19:19:17","modified_gmt":"2024-07-23T19:19:17","slug":"google-announces-the-coalition-for-secure-ai","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/google-announces-the-coalition-for-secure-ai\/","title":{"rendered":"Google announces the Coalition for Secure AI"},"content":{"rendered":"<div>\n<p data-block-key=\"rmr7d\">AI needs a security framework and applied standards that can keep pace with its rapid growth. That\u2019s why last year we shared the Secure AI Framework (SAIF), knowing that it was just the first step. Of course, operationalizing any industry framework requires close collaboration with others \u2014 and above all a forum to make that happen.<\/p>\n<p data-block-key=\"b03et\">Today at the Aspen Security Forum, alongside our industry peers, we\u2019re introducing the Coalition for Secure AI (CoSAI). We\u2019ve been working to pull this coalition together over the past year to advance comprehensive security measures for addressing the unique risks that come with AI, both issues that arise in real time and those over the horizon.<\/p>\n<p data-block-key=\"2moj\">CoSAI includes founding members Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal and Wiz \u2014 and it will be housed under OASIS Open, the international standards and open source consortium.<\/p>\n<h2 data-block-key=\"c15o7\">Introducing CoSAI\u2019s inaugural workstreams<\/h2>\n<p data-block-key=\"cnhd0\">As individuals, developers and companies continue their work to adopt common security standards and best practices, CoSAI will support this collective investment in AI security. 
Today, we\u2019re also sharing the first three areas of focus the coalition will tackle in collaboration with industry and academia:<\/p>\n<ol>\n<li data-block-key=\"4ii66\"><b>Software Supply Chain Security for AI systems:<\/b> Google has continued working to extend SLSA Provenance to AI models, helping identify when AI software is secure by understanding how it was created and handled throughout the software supply chain. This workstream will aim to improve AI security by providing guidance on evaluating provenance, managing third-party model risks, and assessing full AI application provenance, expanding upon the existing SSDF and SLSA security principles for AI and classical software.<\/li>\n<li data-block-key=\"c1bsc\"><b>Preparing defenders for a changing cybersecurity landscape:<\/b> When handling day-to-day AI governance, security practitioners don\u2019t have a simple path to navigate the complexity of security concerns. This workstream will develop a defender\u2019s framework to help practitioners identify investments and mitigation techniques to address the security impact of AI use. The framework will scale mitigation strategies as offensive cybersecurity capabilities emerge in AI models.<\/li>\n<li data-block-key=\"dh800\"><b>AI security governance:<\/b> Governance around AI security issues requires a new set of resources and an understanding of the unique aspects of AI security. 
To help, CoSAI will develop a taxonomy of risks and controls, a checklist, and a scorecard to guide practitioners in readiness assessments, management, monitoring and reporting of the security of their AI products.<\/li>\n<\/ol>\n<p data-block-key=\"mrd0\">Additionally, CoSAI will collaborate with organizations such as Frontier Model Forum, Partnership on AI, Open Source Security Foundation and ML Commons to advance responsible AI.<\/p>\n<h2 data-block-key=\"4eseb\">What\u2019s next<\/h2>\n<p data-block-key=\"442gi\">As AI advances, we\u2019re committed to ensuring effective risk management strategies evolve along with it. We\u2019re encouraged by the industry support we\u2019ve seen over the past year for making AI safe and secure. We\u2019re even more encouraged by the action we\u2019re seeing from developers, experts and companies big and small to help organizations securely implement, train and use AI.<\/p>\n<p data-block-key=\"4qqb2\">AI developers need \u2014 and end users deserve \u2014 a framework for AI security that meets the moment and responsibly captures the opportunity in front of us. CoSAI is the next step in that journey, and you can expect more updates in the coming months. To learn how you can support CoSAI, visit coalitionforsecureai.org. In the meantime, visit our Secure AI Framework page to learn more about Google\u2019s AI security work.<\/p>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/technology\/safety-security\/google-coalition-for-secure-ai\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI needs a security framework and applied standards that can keep pace with its rapid growth. That\u2019s why last year we shared the Secure AI Framework (SAIF), knowing that it was just the first step. 
Of course, operationalizing any industry framework requires close collaboration with others \u2014 and above all a forum to make [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":18868,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[100],"tags":[],"class_list":["post-18867","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18867","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=18867"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18867\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/18868"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=18867"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=18867"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=18867"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}