<h1>How Google is expanding its commitment to secure AI</h1>
<div>
<p data-block-key="cfg5n">Cyberthreats evolve quickly, and some of the biggest vulnerabilities aren't discovered by companies or product manufacturers but by outside security researchers. That's why we have a long history of supporting collective security through our Vulnerability Rewards Program (VRP), Project Zero and our work in open source software security. It's also why we joined other leading AI companies at the White House earlier this year to commit to advancing the discovery of vulnerabilities in AI systems.</p>
<p data-block-key="e81o2">Today, we're expanding our VRP to reward attack scenarios specific to generative AI. We believe this will incentivize research into AI safety and security and bring potential issues to light, ultimately making AI safer for everyone. We're also expanding our open source security work to make information about AI supply chain security universally discoverable and verifiable.</p>
<p data-block-key="cao82"><b>New technology requires new vulnerability reporting guidelines</b></p>
<p data-block-key="a6rt5">As part of expanding the VRP for AI, we're taking a fresh look at how bugs should be categorized and reported. Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretation of data (hallucinations).
As we continue to integrate generative AI into more products and features, our Trust and Safety teams are leveraging decades of experience and taking a comprehensive approach to better anticipate and test for these potential risks. But we understand that outside security researchers can help us find and address novel vulnerabilities that will in turn make our generative AI products even safer and more secure. In August, we joined the White House and industry peers to enable thousands of third-party security researchers to find potential issues at DEF CON's largest-ever public Generative AI Red Team event. Now that we're expanding the bug bounty program and releasing additional guidelines for what we'd like security researchers to hunt, we're sharing those guidelines so that anyone can see what's "in scope." We expect this will spur security researchers to submit more bugs and accelerate the goal of a safer, more secure generative AI.</p>
<p data-block-key="3noss"><b>Two new ways to strengthen the AI supply chain</b></p>
<p data-block-key="fhq86">We introduced our Secure AI Framework (SAIF) to support the industry in creating trustworthy applications, and we have encouraged implementation through AI red teaming. The first principle of SAIF is to ensure that the AI ecosystem has strong security foundations, and that means securing the critical supply chain components that enable machine learning (ML) against threats like model tampering, data poisoning and the production of harmful content.</p>
<p data-block-key="8mcmg">Today, to further protect against machine learning supply chain attacks, we're expanding our open source security work and building upon our prior collaboration with the Open Source Security Foundation. The Google Open Source Security Team (GOSST) is leveraging SLSA and Sigstore to protect the overall integrity of AI supply chains.
SLSA is a set of standards and controls that improve resiliency in supply chains, while Sigstore helps verify that software in the supply chain is what it claims to be. To get started, today we announced the availability of the first prototypes for model signing with Sigstore and attestation verification with SLSA.</p>
<p data-block-key="7dlga">These are early steps toward ensuring the safe and secure development of generative AI, and we know the work is just getting started. Our hope is that by incentivizing more security research while applying supply chain security to AI, we'll spark even more collaboration with the open source security community and others in the industry, and ultimately help make AI safer for everyone.</p>
</div>
<p><a href="https://blog.google/technology/safety-security/google-ai-security-expansion/">Source link</a></p>