{"id":14001,"date":"2023-09-27T21:55:09","date_gmt":"2023-09-27T21:55:09","guid":{"rendered":"http:\/\/scannn.com\/building-generative-ai-features-responsibly\/"},"modified":"2023-09-27T21:55:09","modified_gmt":"2023-09-27T21:55:09","slug":"building-generative-ai-features-responsibly","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/building-generative-ai-features-responsibly\/","title":{"rendered":"Building Generative AI Features Responsibly"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p><span style=\"font-weight: 400;\">Meta has been a pioneer in AI for more than a decade. We\u2019ve released more than 1,000 AI models, libraries, and data sets for researchers \u2013 including the latest version of our large language model, Llama 2, available in partnership with Microsoft.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">At Connect 2023, we announced several <\/span><a href=\"https:\/\/about.fb.com\/news\/2023\/09\/introducing-ai-powered-assistants-characters-and-creative-tools\/\"><span style=\"font-weight: 400;\">new generative AI features<\/span><\/a><span style=\"font-weight: 400;\"> that people can use to make the experiences they have on our platforms even more social and immersive. Our hope is that generative AI tools like these can help people in a variety of ways. Imagine a group of friends planning a trip together: In a group chat, they can ask an AI assistant for activity and restaurant suggestions. In another case, a teacher could use an AI to help create lesson plans that are customized for different learning styles of individual students.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Building this technology comes with the responsibility to develop best practices and policies. While there are many exciting and creative use cases for generative AI, it won\u2019t always be perfect. 
The underlying models, for example, have the potential to generate fictional responses or exacerbate stereotypes they may learn from their training data. We\u2019ve incorporated lessons we\u2019ve learned over the last decade into our new features \u2013 like notices so people understand the limits of generative AI, and integrity classifiers that help us catch and remove dangerous responses. This work follows the industry best practices outlined in the Llama 2 <\/span><a href=\"https:\/\/ai.meta.com\/llama\/responsible-use-guide\/\"><span style=\"font-weight: 400;\">Responsible Use Guide<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In keeping with our commitment to responsible AI, we also stress-test our products to improve safety performance and regularly collaborate with policymakers, experts in academia and civil society, and others in our industry to advance the responsible use of this technology. We\u2019ll be rolling these features out step by step, and launching the AIs in beta. We\u2019ll continue to iterate on and improve these features as the technologies evolve and we see how people use them in their daily lives.\u00a0<\/span><\/p>\n<p><b>How are we building responsibly and prioritizing people\u2019s safety?<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The custom AI models that power new text-based experiences like Meta AI, our large language model-powered assistant, are built on the foundation of <\/span><a href=\"https:\/\/about.fb.com\/news\/2023\/07\/llama-2\/\"><span style=\"font-weight: 400;\">Llama 2<\/span><\/a><span style=\"font-weight: 400;\"> and leverage its safety and responsibility training. 
We\u2019ve also been investing in specific measures for the features we announced today.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We\u2019re sharing a <\/span><a href=\"https:\/\/ai.meta.com\/static-resource\/building-generative-ai-responsibly\/\"><span style=\"font-weight: 400;\">resource<\/span><\/a><span style=\"font-weight: 400;\"> that explains in more detail the steps we\u2019re taking to identify potential vulnerabilities, reduce risks, enhance safety, and bolster reliability. For example:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>We\u2019re evaluating and improving our conversational AIs with external and internal experts through red teaming exercises.<\/b><span style=\"font-weight: 400;\"> Dedicated teams of experts have spent thousands of hours stress-testing these models, looking for unexpected ways they might be used and identifying and fixing vulnerabilities.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>We\u2019re fine-tuning the models.<\/b><span style=\"font-weight: 400;\"> This includes training the models to perform specific tasks, such as generating high-quality images, with instructions that increase the likelihood of helpful responses. We\u2019re also training them to provide expert-backed resources in response to safety issues. 
For example, the AIs will suggest local suicide prevention and eating disorder organizations in response to certain queries, while making it clear that they cannot provide medical advice.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>We\u2019re training our models on safety and responsibility guidelines.<\/b><span style=\"font-weight: 400;\"> Teaching the models these guidelines makes them less likely to share potentially harmful or age-inappropriate responses on our apps.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>We\u2019re taking steps to reduce bias.<\/b> <span style=\"font-weight: 400;\">Addressing potential bias in generative AI systems is a new area of research. As with other AI models, having more people use the features and share feedback can help us refine our approach.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>We\u2019ve developed new technology to catch and take action on content that violates our policies.<\/b><span style=\"font-weight: 400;\"> Our teams have built algorithms that scan and filter out harmful responses before they\u2019re shared back to people.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>We\u2019ve built feedback tools within these features.<\/b><span style=\"font-weight: 400;\"> No AI model is perfect. We\u2019ll use the feedback we receive to keep training the models to improve safety performance and automatic detection of policy violations. 
We\u2019re also making our new generative AI features available to security researchers through Meta\u2019s long-running bug bounty program.<\/span><\/li>\n<\/ul>\n<p><a href=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836\"><img fetchpriority=\"high\" decoding=\"async\" class=\"alignnone size-full wp-image-39425\" src=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836\" alt=\"Visual showcasing how Meta is building AI responsibly\" width=\"960\" height=\"836\" srcset=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836?w=3841 3841w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836?w=300 300w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836?w=768 768w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836?w=1024 1024w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836?w=1536 1536w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836?w=2048 2048w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836?w=1240 1240w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836?w=689 689w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836?w=1920 1920w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/01_AI-Responsibility.png?resize=960%2C836?w=2880 2880w\" sizes=\"(max-width: 960px) 100vw, 960px\" data-recalc-dims=\"1\"\/><\/a><\/p>\n<p><b>How are we protecting people\u2019s privacy?<\/b><\/p>\n<p><span style=\"font-weight: 400;\">We\u2019re held accountable for protecting people\u2019s privacy by regulators, policymakers, and experts. 
We work with them to ensure that what we build follows best practices and meets high standards for data protection.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We believe it\u2019s important that people understand the types of data we use to train the models that power our generative AI products. For example, we do not use your private messages with friends and family to train our AIs. We may use the data from your use of AI stickers, such as your searches for a sticker to use in a chat, to improve our AI sticker models. You can find out more about the types of data we use in our <\/span><a href=\"https:\/\/about.fb.com\/news\/2023\/09\/privacy-matters-metas-generative-ai-features\/\"><span style=\"font-weight: 400;\">Privacy Matters post on generative AI<\/span><\/a><span style=\"font-weight: 400;\">.\u00a0<\/span><\/p>\n<p><b>How are we making sure people know how to use the new features and understand their limitations?<\/b><\/p>\n<p><span style=\"font-weight: 400;\">We provide information within the features to help people understand when they\u2019re interacting with AI and how this new technology works. 
We also note within the product experience that these features might return inaccurate or inappropriate outputs.<\/span><\/p>\n<p><a href=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-39426\" src=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836\" alt=\"Phone screens showing Meta AI chats\" width=\"960\" height=\"836\" srcset=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836?w=11521 11521w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836?w=300 300w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836?w=768 768w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836?w=1024 1024w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836?w=1536 1536w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836?w=2048 2048w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836?w=1240 1240w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836?w=689 689w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836?w=1920 1920w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/02_AI-Responsibility.png?resize=960%2C836?w=2880 2880w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" data-recalc-dims=\"1\"\/><\/a><\/p>\n<p><span style=\"font-weight: 400;\">This past year, we published <a href=\"https:\/\/ai.meta.com\/tools\/system-cards\">22 \u2018System Cards\u2019<\/a> to give people understandable information about how our AI systems make decisions that affect them. 
Today, we\u2019re sharing new <\/span><span style=\"font-weight: 400;\">generative AI System Cards on Meta\u2019s AI website<\/span><span style=\"font-weight: 400;\"> \u2013 one for the AI systems that <\/span><a href=\"https:\/\/ai.meta.com\/tools\/system-cards\/ai-systems-that-generate-text\"><span style=\"font-weight: 400;\">generate text<\/span><\/a><span style=\"font-weight: 400;\">, which power Meta AI, and another for the AI systems that <\/span><a href=\"https:\/\/ai.meta.com\/tools\/system-cards\/ai-systems-that-generate-images\"><span style=\"font-weight: 400;\">generate images<\/span><\/a><span style=\"font-weight: 400;\"> for AI stickers, Meta AI, restyle, and backdrop. These include an interactive demo so people can see how refining their prompt affects the output from the models.<\/span><\/p>\n<p><b>How are we helping people to know when images are created with our AI features?<\/b><\/p>\n<p><span style=\"font-weight: 400;\">We\u2019re following industry best practices to make it harder for people to spread misinformation with our tools. Images created or edited by Meta AI, restyle, and backdrop will have visible markers so people know the content was created by AI. We\u2019re also developing additional techniques to embed information within image files created by Meta AI, and we intend to expand this to other experiences as the technology improves. We\u2019re not planning to add these markers to AI stickers, since stickers are not photorealistic and are therefore unlikely to mislead people into thinking they are real.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Currently, there aren\u2019t any common standards for identifying and labeling AI-generated content across the industry. 
We think there should be, so we are working with other companies through forums like the Partnership on AI in the hope of developing them.<\/span><\/p>\n<p><a href=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-39428\" src=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836\" alt=\"Image showing label on AI-generated content\" width=\"960\" height=\"836\" srcset=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836?w=3840 3840w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836?w=300 300w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836?w=768 768w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836?w=1024 1024w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836?w=1536 1536w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836?w=2048 2048w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836?w=1240 1240w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836?w=689 689w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836?w=1920 1920w, https:\/\/about.fb.com\/wp-content\/uploads\/2023\/09\/03_AI-Responsibility.png?resize=960%2C836?w=2880 2880w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" data-recalc-dims=\"1\"\/><\/a><\/p>\n<p><b>What steps are we taking to stop people from spreading misinformation using generative AI?<\/b><\/p>\n<p><span style=\"font-weight: 400;\">AI is a key part of how we tackle misinformation and other harmful content. 
For example, we <\/span><a href=\"https:\/\/ai.meta.com\/blog\/heres-how-were-using-ai-to-help-detect-misinformation\/\"><span style=\"font-weight: 400;\">developed AI technologies<\/span><\/a><span style=\"font-weight: 400;\"> to match near-duplicates of previously fact-checked content. We also have a tool called <\/span><a href=\"https:\/\/ai.meta.com\/blog\/harmful-content-can-evolve-quickly-our-new-ai-system-adapts-to-tackle-it\/\"><span style=\"font-weight: 400;\">Few-Shot Learner<\/span><\/a><span style=\"font-weight: 400;\"> that can adapt quickly to take action on new or evolving types of harmful content, working across more than 100 languages. Previously, we would have needed to gather thousands or sometimes even millions of examples to build a data set large enough to train an AI model, and then fine-tune it to work properly. Few-Shot Learner can train an AI model based on only a handful of examples.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Generative AI could help us take down harmful content faster and more accurately than existing AI tools. We\u2019ve started testing large language models (LLMs) by training them on our Community Standards to help determine whether a piece of content violates our policies. These initial tests suggest the LLMs can perform better than existing machine learning models, or at least enhance ones like Few-Shot Learner, and we\u2019re optimistic generative AI can help us enforce our policies in the future.<\/span><\/p>\n<\/div>\n<p><a href=\"https:\/\/about.fb.com\/news\/2023\/09\/building-generative-ai-features-responsibly\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Meta has been a pioneer in AI for more than a decade. 
We\u2019ve released more than 1,000 AI models, libraries, and data sets for researchers \u2013 including the latest version of our large language model, Llama 2, available in partnership with Microsoft. At Connect 2023, we announced several new generative AI features that people can [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":14002,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[123],"tags":[],"class_list":["post-14001","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-facebook"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/14001","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=14001"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/14001\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/14002"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=14001"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=14001"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=14001"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}