{"id":13635,"date":"2023-06-08T16:36:48","date_gmt":"2023-06-08T16:36:48","guid":{"rendered":"http:\/\/scannn.com\/introducing-googles-secure-ai-framework\/"},"modified":"2023-06-08T16:36:48","modified_gmt":"2023-06-08T16:36:48","slug":"introducing-googles-secure-ai-framework","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/introducing-googles-secure-ai-framework\/","title":{"rendered":"Introducing Google\u2019s Secure AI Framework"},"content":{"rendered":"<div>\n<p data-block-key=\"41dac\">The potential of AI, especially generative AI, is immense. However, in the pursuit of progress within these new frontiers of innovation, there need to be clear industry security standards for building and deploying this technology in a responsible manner. That\u2019s why today we are excited to introduce the Secure AI Framework (SAIF), a conceptual framework for secure AI systems.<\/p>\n<ul>\n<li data-block-key=\"1qcf6\">For a summary of SAIF, click here.<\/li>\n<li data-block-key=\"9amd2\">For examples of how practitioners can implement SAIF, click here.<\/li>\n<\/ul>\n<h2 data-block-key=\"2kn13\">Why we\u2019re introducing SAIF now<\/h2>\n<p data-block-key=\"4d8i3\">SAIF is inspired by the security best practices \u2014 like reviewing, testing and controlling the supply chain \u2014 that we\u2019ve applied to software development, while incorporating our understanding of security mega-trends and risks specific to AI systems.<\/p>\n<p data-block-key=\"ol6m\">A framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements, so that when AI models are implemented, they\u2019re secure-by-default. Today marks an important first step.<\/p>\n<p data-block-key=\"a14h5\">Over the years at Google, we\u2019ve embraced an open and collaborative approach to cybersecurity. 
This includes combining frontline intelligence, expertise, and innovation with a commitment to share threat information with others to help respond to \u2014 and prevent \u2014 cyber attacks. Building on that approach, SAIF is designed to help mitigate risks specific to AI systems, such as model theft, poisoning of training data, malicious inputs delivered through prompt injection, and extraction of confidential information from training data. As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical.<\/p>\n<p data-block-key=\"412b3\">And with that, let\u2019s take a look at SAIF and its six core elements:<\/p>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/technology\/safety-security\/introducing-googles-secure-ai-framework\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The potential of AI, especially generative AI, is immense. However, in the pursuit of progress within these new frontiers of innovation, there need to be clear industry security standards for building and deploying this technology in a responsible manner. 
That\u2019s why today we are excited to introduce the Secure AI Framework (SAIF), a conceptual framework [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":13636,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[100],"tags":[],"class_list":["post-13635","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/13635","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=13635"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/13635\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/13636"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=13635"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=13635"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=13635"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}