{"id":18571,"date":"2024-04-23T13:39:16","date_gmt":"2024-04-23T13:39:16","guid":{"rendered":"http:\/\/scannn.com\/meta-joins-thorn-and-industry-partners-in-new-generative-ai-principles\/"},"modified":"2024-04-23T13:39:16","modified_gmt":"2024-04-23T13:39:16","slug":"meta-joins-thorn-and-industry-partners-in-new-generative-ai-principles","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/meta-joins-thorn-and-industry-partners-in-new-generative-ai-principles\/","title":{"rendered":"Meta Joins Thorn and Industry Partners in New Generative AI Principles"},"content":{"rendered":"<div>\n<p><span style=\"font-weight: 400;\">At Meta, we\u2019ve spent over a decade working to keep people safe online. In that time, we\u2019ve developed numerous tools and features to help prevent and combat potential harm \u2013 and as predators have adapted to try to evade our protections, we\u2019ve continued to adapt too.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We\u2019re excited about the opportunities that generative AI technology can bring, but we also want to make sure that innovation and safety go hand in hand. That\u2019s why we take steps to build our generative AI features and models <\/span><a href=\"https:\/\/ai.meta.com\/blog\/meta-llama-3-meta-ai-responsibility\/\"><span style=\"font-weight: 400;\">responsibly<\/span><\/a><span style=\"font-weight: 400;\">. For example, we conduct extensive red teaming exercises with experts in areas like child exploitation, and address any vulnerabilities we find.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Now Meta is joining Thorn, All Tech is Human and other leading tech companies in an effort to prevent the misuse of gen AI tools to perpetrate child exploitation. Alongside our industry partners, Meta commits to the Safety by Design principles below from Thorn and All Tech is Human, to be applied as appropriate, and will provide updates on our progress. 
These principles <\/span><span style=\"font-weight: 400;\">will inform how we develop gen AI technology at Meta to help ensure we mitigate potential risks from the start.\u00a0<\/span><\/p>\n<p><b>DEVELOP: <\/b><b>Develop, build and train generative AI models that proactively address child safety risks.<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Responsibly source our training datasets, and safeguard them from child sexual abuse material (CSAM) and child sexual exploitation material (CSEM): <\/b><span style=\"font-weight: 400;\">This is essential to helping prevent generative models from producing AI-generated (AIG) CSAM and CSEM. The presence of CSAM and CSEM in training datasets for generative models is one avenue by which these models can reproduce this type of abusive content. For some models, their compositional generalization capabilities further allow them to combine concepts (e.g. adult sexual content and non-sexual depictions of children) to produce AIG-CSAM. We are committed to avoiding or mitigating training data with a known risk of containing CSAM and CSEM. We are committed to detecting and removing CSAM and CSEM from our training data, and reporting any confirmed CSAM to the relevant authorities. We are committed to addressing the risk of creating AIG-CSAM that is posed by having depictions of children alongside adult sexual content in our video, image and audio generation training datasets.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Incorporate feedback loops and iterative stress-testing strategies in our development process<\/b><span style=\"font-weight: 400;\">: Continuous learning and testing to understand a model\u2019s capabilities to produce abusive content is key to effectively combating the adversarial misuse of these models downstream. If we don\u2019t stress test our models for these capabilities, bad actors will do so regardless. 
We are committed to conducting structured, scalable and consistent stress testing of our models throughout the development process for their capability to produce AIG-CSAM and CSEM within the bounds of law, and integrating these findings back into model training and development to improve safety assurance for our generative AI products and systems.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Employ content provenance with adversarial misuse in mind<\/b><span style=\"font-weight: 400;\">: Bad actors use generative AI to create AIG-CSAM. This content is photorealistic and can be produced at scale. Victim identification is already a needle-in-a-haystack problem for law enforcement: sifting through huge amounts of content to find the child in active harm\u2019s way. The expanding prevalence of AIG-CSAM is growing that haystack even further. Content provenance solutions that can reliably discern whether content is AI-generated will be crucial to responding effectively to AIG-CSAM. We are committed to developing state-of-the-art media provenance or detection solutions for our tools that generate images and videos. We are committed to deploying solutions to address adversarial misuse, such as incorporating watermarking or other techniques that embed signals imperceptibly in the content as part of the image and video generation process, as technically feasible.<\/span><\/li>\n<\/ul>\n<p><b>DEPLOY: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Safeguard our generative AI products and services from abusive content and conduct: <\/b><span style=\"font-weight: 400;\">Our generative AI products and services empower our users to create and explore new horizons. These same users deserve to have that space of creation be free from fraud and abuse. 
We are committed to combating and responding to abusive content (CSAM, AIG-CSAM and CSEM) throughout our generative AI systems, and incorporating prevention efforts. Our users\u2019 voices are key, and we are committed to incorporating user reporting or feedback options to empower these users to build freely on our platforms.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Responsibly host models: <\/b><span style=\"font-weight: 400;\">As our models continue to achieve new capabilities and creative heights, a wide variety of deployment mechanisms presents both opportunity and risk. Safety by design must encompass not just how our models are trained, but how they are hosted. We are committed to responsible hosting of our first-party generative models, assessing them (e.g. via red teaming or phased deployment) for their potential to generate AIG-CSAM and CSEM, and implementing mitigations before hosting. We are also committed to responsibly hosting third-party models in a way that minimizes the hosting of models that generate AIG-CSAM. We will ensure we have clear rules and policies prohibiting models that generate child safety violative content.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Encourage developer ownership in safety by design<\/b><span style=\"font-weight: 400;\">: Developer creativity is the lifeblood of progress. This progress must come paired with a culture of ownership and responsibility. We encourage developer ownership in safety by design. We will endeavor to provide information about our models, including a child safety section detailing steps taken to avoid the downstream misuse of the model to further sexual harms against children. 
We are committed to supporting the developer ecosystem in their efforts to address child safety risks.<\/span><\/li>\n<\/ul>\n<p><b>MAINTAIN: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Prevent our services from scaling access to harmful tools:<\/b><span style=\"font-weight: 400;\"> Bad actors have built models specifically to produce AIG-CSAM, in some cases targeting specific children to produce AIG-CSAM depicting their likeness. They have also built services that are used to \u201cnudify\u201d content of children, creating new AIG-CSAM. This is a severe violation of children\u2019s rights. We are committed to removing these models and services from our platforms and search results. <\/span><i><span style=\"font-weight: 400;\">[This principle only applies to search engines and public-facing third-party model providers.]<\/span><\/i><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Invest in research and future technology solutions<\/b><span style=\"font-weight: 400;\">: Combating child sexual abuse online is an ever-evolving challenge, as bad actors adopt new technologies in their efforts. Effectively combating the misuse of generative AI to further child sexual abuse will require continued research to stay up to date with new harm vectors and threats. For example, new technology to protect user content from AI manipulation will be important to protecting children from online sexual abuse and exploitation. We are committed to investing in relevant research and technology development to address the use of generative AI for online child sexual abuse and exploitation. We will continuously seek to understand how our platforms, products and models are potentially being abused by bad actors. 
We are committed to maintaining the quality of our mitigations to meet and overcome the new avenues of misuse that may materialize.\u00a0<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Fight CSAM, AIG-CSAM and CSEM on our platforms<\/b><span style=\"font-weight: 400;\">: We are committed to fighting CSAM online and preventing our platforms from being used to create, store, solicit or distribute this material. As new threat vectors emerge, we are committed to meeting them. We are committed to detecting and removing child safety violative content on our platforms, to disallowing and combating CSAM, AIG-CSAM and CSEM on our platforms, and to combating fraudulent uses of generative AI to sexually harm children.<\/span><\/li>\n<\/ul><\/div>\n<p><a href=\"https:\/\/about.fb.com\/news\/2024\/04\/meta-joins-thorn-and-industry-partners-in-generative-ai-principles\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>At Meta, we\u2019ve spent over a decade working to keep people safe online. 
In that time, we\u2019ve developed numerous tools and features to help prevent and combat potential harm \u2013 and as predators have adapted to try to evade our protections, we\u2019ve continued to adapt too.\u00a0 We\u2019re excited about the opportunities that generative AI technology [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":18572,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[123],"tags":[],"class_list":["post-18571","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-facebook"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18571","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=18571"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18571\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/18572"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=18571"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=18571"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=18571"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}