{"id":18321,"date":"2024-04-05T14:29:33","date_gmt":"2024-04-05T14:29:33","guid":{"rendered":"http:\/\/scannn.com\/our-approach-to-labeling-ai-generated-content-and-manipulated-media\/"},"modified":"2024-04-05T14:29:33","modified_gmt":"2024-04-05T14:29:33","slug":"our-approach-to-labeling-ai-generated-content-and-manipulated-media","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/our-approach-to-labeling-ai-generated-content-and-manipulated-media\/","title":{"rendered":"Our Approach to Labeling AI-Generated Content and Manipulated Media"},"content":{"rendered":"<div>\n<p><span style=\"font-weight: 400;\">We are making changes to the way we handle manipulated media on Facebook, Instagram and Threads based on <\/span><a href=\"https:\/\/www.oversightboard.com\/decision\/FB-GW8BY1Y3\"><span style=\"font-weight: 400;\">feedback<\/span><\/a><span style=\"font-weight: 400;\"> from the Oversight Board that we should update our approach to reflect a broader range of content that exists today and provide context about the content through labels. These changes are also informed by Meta\u2019s policy review process that included extensive public opinion surveys and consultations with academics, civil society organizations and others.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We agree with the Oversight Board\u2019s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn\u2019t say. Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos. In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving. 
As the Board noted, it\u2019s equally important to address manipulation that shows a person doing something they didn\u2019t do.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommended a \u201cless restrictive\u201d approach to manipulated media like labels with context. In February, we announced that we\u2019ve been working with industry partners on common technical standards for <\/span><a href=\"https:\/\/about.fb.com\/news\/2024\/02\/labeling-ai-generated-images-on-facebook-instagram-and-threads\/\"><span style=\"font-weight: 400;\">identifying AI content<\/span><\/a><span style=\"font-weight: 400;\">, including video and audio. Our \u201cMade with AI\u201d labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they\u2019re uploading AI-generated content. We already add \u201cImagined with AI\u201d to photorealistic images created using our Meta AI feature.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We agree that providing transparency and additional context is now the better way to address this content. The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling. If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context. 
This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We will keep this content on our platforms so we can add informational labels and context, unless the content otherwise violates our policies. For example, we will remove content, regardless of whether it is created by AI or a person, if it violates our policies against voter interference, bullying and harassment, violence and incitement, or any other policy in our Community Standards. We also have a <\/span><a href=\"https:\/\/www.facebook.com\/business\/help\/2593586717571940?id=673052479947730\"><span style=\"font-weight: 400;\">network of nearly 100 independent fact-checkers<\/span><\/a><span style=\"font-weight: 400;\"> who will continue to review false and misleading AI-generated content. When fact-checkers rate content as False or Altered, we show it lower in Feed so fewer people see it, and add an overlay label with additional information. In addition, we reject an ad if it contains debunked content, and since January, advertisers have to <\/span><a href=\"https:\/\/www.facebook.com\/government-nonprofits\/blog\/political-ads-ai-disclosure-policy\"><span style=\"font-weight: 400;\">disclose<\/span><\/a><span style=\"font-weight: 400;\"> when they digitally create or alter a political or social issue ad in certain cases.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We plan to start labeling AI-generated content in May 2024, and we\u2019ll stop removing content solely on the basis of our manipulated video policy in July. 
This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media.<\/span><\/p>\n<h2>Policy Process Informed By Global Experts and Public Surveys<\/h2>\n<p><span style=\"font-weight: 400;\">In Spring 2023, we began reevaluating our policies to see if we needed a new approach to keep pace with rapid advances in generative AI technologies and usage. We completed consultations with over 120 stakeholders in 34 countries in every major region of the world. Overall, we heard broad support for labeling AI-generated content and strong support for a more prominent label in high-risk scenarios. Many stakeholders were receptive to the concept of people self-disclosing content as AI-generated.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A majority of stakeholders agreed that removal should be limited to only the highest risk scenarios where content can be tied to harm, since generative AI is becoming a mainstream tool for creative expression. This aligns with the principles behind our Community Standards \u2013 that people should be free to express themselves while also remaining safe on our services.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We also conducted public opinion research with more than 23,000 respondents in 13 countries and asked people how social media companies, such as Meta, should approach AI-generated content on their platforms. 
A large majority (82%) favor warning labels for AI-generated content that depicts people saying things they did not say.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Additionally, the Oversight Board noted their recommendations were informed by consultations with civil-society organizations, academics, inter-governmental organizations and other experts.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Based on feedback from the Oversight Board, experts and the public, we\u2019re taking steps we think are appropriate for platforms like ours. We want to help people know when photorealistic images have been created or edited using AI, so we\u2019ll continue to collaborate with industry peers through forums like the Partnership on AI and remain in a dialogue with governments and civil society \u2013 and we\u2019ll continue to review our approach as technology progresses.<\/span><\/p>\n<\/div>\n<p><a href=\"https:\/\/about.fb.com\/news\/2024\/04\/metas-approach-to-labeling-ai-generated-content-and-manipulated-media\/\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>We are making changes to the way we handle manipulated media on Facebook, Instagram and Threads based on feedback from the Oversight Board that we should update our approach to reflect a broader range of content that exists today and provide context about the content through labels. 
These changes are also informed by Meta\u2019s policy [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":18322,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[123],"tags":[],"class_list":["post-18321","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-facebook"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18321","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=18321"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18321\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/18322"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=18321"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=18321"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=18321"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}