{"id":15831,"date":"2024-02-06T13:05:34","date_gmt":"2024-02-06T13:05:34","guid":{"rendered":"http:\/\/scannn.com\/labeling-ai-generated-images-on-facebook-instagram-and-threads\/"},"modified":"2024-02-06T13:05:34","modified_gmt":"2024-02-06T13:05:34","slug":"labeling-ai-generated-images-on-facebook-instagram-and-threads","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/labeling-ai-generated-images-on-facebook-instagram-and-threads\/","title":{"rendered":"Labeling AI-Generated Images on Facebook, Instagram and Threads"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p><span style=\"font-weight: 400\">As a company that\u2019s been at the cutting edge of AI development for more than a decade, it\u2019s been hugely encouraging to witness the explosion of creativity from people using our new generative AI tools, like our Meta AI image generator which helps people create pictures with simple text prompts.<\/span><\/p>\n<p><span style=\"font-weight: 400\">As the difference between human and synthetic content gets blurred, people want to know where the boundary lies. People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it\u2019s important that we help people know when photorealistic content they\u2019re seeing has been created using AI. We do that by applying \u201cImagined with AI\u201d labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies\u2019 tools too.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">That\u2019s why we\u2019ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads. 
We\u2019re building this capability now, and in the coming months we\u2019ll start applying labels in all languages supported by each app. We\u2019re taking this approach through the next year, during which a number of important elections are taking place around the world. During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What we learn will inform industry best practices and our own approach going forward.<\/span><\/p>\n<h2>A New Approach to Identifying and Labeling AI-Generated Content<\/h2>\n<p><span style=\"font-weight: 400\">When photorealistic images are created using our Meta AI feature, we do several things to make sure people know AI is involved, including putting <\/span><a href=\"https:\/\/about.fb.com\/news\/2023\/09\/building-generative-ai-features-responsibly\/\"><span style=\"font-weight: 400\">visible markers<\/span><\/a><span style=\"font-weight: 400\"> that you can see on the images, and both <\/span><a href=\"https:\/\/about.fb.com\/news\/2023\/12\/meta-ai-updates\/\"><span style=\"font-weight: 400\">invisible watermarks<\/span><\/a><span style=\"font-weight: 400\"> and metadata embedded within image files. Using both invisible watermarking and metadata in this way both improves the robustness of these invisible markers and helps other platforms identify them. 
This is an important part of<\/span> <a href=\"https:\/\/about.fb.com\/news\/2023\/09\/building-generative-ai-features-responsibly\/\"><span style=\"font-weight: 400\">the responsible approach we\u2019re taking to building generative AI features<\/span><\/a><span style=\"font-weight: 400\">.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Since AI-generated content appears across the internet, we\u2019ve been working with other companies in our industry to develop common standards for identifying it through forums like the<\/span> <a href=\"https:\/\/partnershiponai.org\/\"><span style=\"font-weight: 400\">Partnership on AI<\/span><\/a><span style=\"font-weight: 400\"> (PAI). The invisible markers we use for Meta AI images \u2013 IPTC metadata and invisible watermarks \u2013 are in line with PAI\u2019s<\/span> <a href=\"https:\/\/partnershiponai.org\/glossary-for-synthetic-media-transparency-methods-part-1-indirect-disclosure\/\"><span style=\"font-weight: 400\">best practices<\/span><\/a><span style=\"font-weight: 400\">.<\/span><\/p>\n<p><span style=\"font-weight: 400\">We\u2019re building industry-leading tools that can identify invisible markers at scale \u2013 specifically, the<\/span> <a href=\"https:\/\/c2pa.org\/specifications\/specifications\/1.3\/specs\/C2PA_Specification.html#_digital_signatures\"><span style=\"font-weight: 400\">\u201cAI generated\u201d information in the <\/span><span style=\"font-weight: 400\">C2PA<\/span><\/a><span style=\"font-weight: 400\"> and<\/span><a href=\"https:\/\/iptc.org\/standards\/photo-metadata\/iptc-standard\/\"> <span style=\"font-weight: 400\">IPTC<\/span><\/a><span style=\"font-weight: 400\"> technical standards \u2013 so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">While companies are starting to include signals in their image generators, they haven\u2019t started including them in AI tools that generate audio and video at the same scale, so we can\u2019t yet detect those signals and label this content from other companies. While the industry works towards this capability, we\u2019re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it. We\u2019ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context.<\/span><\/p>\n<p><span style=\"font-weight: 400\">This approach represents the cutting edge of what\u2019s technically possible right now. But it\u2019s not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers. So we\u2019re pursuing a range of options. We\u2019re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we\u2019re looking for ways to make it more difficult to remove or alter invisible watermarks. For example, Meta\u2019s AI Research lab FAIR recently shared research on an invisible watermarking technology we\u2019re developing called<\/span> <a href=\"https:\/\/ai.meta.com\/blog\/stable-signature-watermarking-generative-ai\/\"><span style=\"font-weight: 400\">Stable Signature<\/span><\/a><span style=\"font-weight: 400\">. 
This integrates the watermarking mechanism directly into the image generation process for some types of image generators, which could be valuable for open source models so the watermarking can\u2019t be disabled.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">This work is especially important as this is likely to become an increasingly adversarial space in the years ahead. People and organizations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it. Across our industry and society more generally, we\u2019ll need to keep looking for ways to stay one step ahead.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">In the meantime, it\u2019s important that people consider several things when determining if content has been created by AI, like checking whether the account sharing the content is trustworthy or looking for details that might look or sound unnatural.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">These are early days for the spread of AI-generated content. As it becomes more common in the years ahead, there will be debates across society about what should and shouldn\u2019t be done to identify both synthetic and non-synthetic content. Industry and regulators may move towards ways of authenticating content that hasn\u2019t been created using AI as well as content that has. What we\u2019re setting out today are the steps we think are appropriate for content shared on our platforms right now. But we\u2019ll continue to watch and learn, and we\u2019ll keep our approach under review as we do. We\u2019ll keep collaborating with our industry peers. And we\u2019ll remain in a dialogue with governments and civil society.\u00a0<\/span><\/p>\n<h2>AI Is Both a Sword and a Shield<\/h2>\n<p><span style=\"font-weight: 400\">Our Community Standards apply to all content posted on our platforms regardless of how it is created. 
When it comes to harmful content, the most important thing is that we are able to catch it and take action regardless of whether or not it has been generated using AI. And the use of AI in our integrity systems is a big part of what makes it possible for us to catch it.\u00a0\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">We\u2019ve used AI systems to help protect our users for a number of years. For example, we use AI to help us detect and address hate speech and other content that violates our policies. This is a big part of the reason why we\u2019ve been able to cut the prevalence of hate speech on Facebook to just 0.01-0.02% (as of Q3 2023). In other words, for every 10,000 content views, we estimate just one or two will contain hate speech.<\/span><\/p>\n<p><span style=\"font-weight: 400\">While we use AI technology to help enforce our policies, our use of generative AI tools for this purpose has been limited. But we\u2019re optimistic that generative AI could help us take down harmful content faster and more accurately. It could also be useful in enforcing our policies during moments of heightened risk, <\/span><span style=\"font-weight: 400\">like elections. We\u2019ve started testing Large Language Models (LLMs) by training them on our Community Standards to help determine whether a piece of content violates our policies. These initial tests suggest the LLMs can perform better than existing machine learning models. We\u2019re also using LLMs to remove content from review queues in certain circumstances when we\u2019re highly confident it doesn\u2019t violate our policies. 
This frees up capacity for our reviewers to focus on content that\u2019s more likely to break our rules.<\/span><\/p>\n<p><span style=\"font-weight: 400\">AI-generated content is also eligible to be fact-checked by our independent fact-checking partners and we label debunked content so people have accurate information when they encounter similar content across the internet.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Meta has been a pioneer in AI development for more than a decade. We know that progress and responsibility can and must go hand in hand. Generative AI tools offer huge opportunities, and we believe that it is both possible and necessary for these technologies to be developed in a transparent and accountable way. That\u2019s why we want to help people know when photorealistic images have been created using AI, and why we are being open about the limits of what\u2019s possible too. We\u2019ll continue to learn from how people use our tools in order to improve them. And we\u2019ll continue to work collaboratively with others through forums like PAI to develop common standards and guardrails.\u00a0<\/span><\/p>\n<\/div>\n<p><a href=\"https:\/\/about.fb.com\/news\/2024\/02\/labeling-ai-generated-images-on-facebook-instagram-and-threads\/\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>As a company that\u2019s been at the cutting edge of AI development for more than a decade, it\u2019s been hugely encouraging to witness the explosion of creativity from people using our new generative AI tools, like our Meta AI image generator which helps people create pictures with simple text prompts. 
As the difference between human [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":15832,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[123],"tags":[],"class_list":["post-15831","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-facebook"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/15831","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=15831"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/15831\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/15832"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=15831"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=15831"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=15831"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}