{"id":18727,"date":"2024-06-07T09:26:42","date_gmt":"2024-06-07T09:26:42","guid":{"rendered":"http:\/\/scannn.com\/how-meta-is-preparing-for-the-2024-uk-general-election\/"},"modified":"2024-06-07T09:26:42","modified_gmt":"2024-06-07T09:26:42","slug":"how-meta-is-preparing-for-the-2024-uk-general-election","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/how-meta-is-preparing-for-the-2024-uk-general-election\/","title":{"rendered":"How Meta Is Preparing for the 2024 UK General Election"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p><span style=\"font-weight: 400\">In 2024, a number of countries and regions across the world are taking to the polls to elect their leaders. At Meta, we have been preparing for this for a long time \u2013 including for the UK General Election, which is taking place on the 4th July. In advance of this, last year, we activated a dedicated team to develop a tailored approach to help preserve the integrity of these elections on our platforms.<\/span><\/p>\n<p><span style=\"font-weight: 400\">While each election is unique, this work draws on key lessons we have learned from more than 200 elections around the world since 2016, as well as the regulatory framework set out under the UK\u2019s Online Safety Act. These lessons help us focus our teams, technologies, and investments so they will have the greatest impact.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Since 2016, we\u2019ve invested more than $20 billion into safety and security and quadrupled the size of our global team working in this area to around 40,000 people. 
This includes 15,000 content reviewers who review content across Facebook, Instagram and Threads.

Over the last eight years, we've rolled out industry-leading transparency tools for ads about social issues, elections or politics; developed comprehensive policies to prevent election interference and voter fraud; and built the largest third-party fact-checking programme of any social media platform to help combat the spread of misinformation. More recently, we have committed to taking a responsible approach to new technologies like generative AI. We'll be drawing on all of these resources in the run-up to the election, and we are launching a new ad campaign on Facebook and Instagram to raise awareness of these tools and features.

We'll also activate a UK-specific Elections Operations Centre, bringing together experts from our intelligence, data science, engineering, research, operations, content policy and legal teams to identify potential threats and put specific mitigations in place across our apps and technologies in real time.

Here are four key areas our teams will be focusing on.

## Combating misinformation

We remove the most serious kinds of [misinformation](https://transparency.meta.com/en-gb/policies/community-standards/misinformation/) from Facebook, Instagram and Threads, such as content that could contribute to imminent violence or physical harm, or that attempts to interfere with voting.

For content that doesn't violate these policies, we work with independent fact-checking organisations, all certified by the International Fact-Checking Network (IFCN) or the European Fact-Checking Standards Network (EFCSN), who review and rate content. In the UK this includes Full Fact, Reuters, Logically Facts and FactCheckNI. When content is debunked by these fact-checkers, we attach warning labels to it and reduce its distribution in Feed and Explore so people are less likely to see it.

Ahead of the election period, we will make it easier for all our fact-checking partners to find and rate election-related content, because we recognise that speed is especially important in these moments. We have increased their capacity to rate and review content, and we'll use keyword detection to group related content in one place, making it easier for fact-checkers to find. Our fact-checking partners have also been onboarded to our new research tool, [Meta Content Library](https://about.fb.com/news/2023/11/new-tools-to-support-independent-research/amp/), whose powerful search capability supports them in their work.

We don't allow ads that contain debunked content. We also [don't allow](https://www.facebook.com/business/help/253606115684173) ads targeting the UK that discourage people from voting in the election, call into question the legitimacy of the election and its outcome, or contain premature claims of election victory.
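The keyword detection mentioned above, used to group related content in one place for fact-checkers, can be illustrated with a toy sketch. Everything here is hypothetical: the keyword list, function name and simple substring matching stand in for what would, in a real system, be far more sophisticated multilingual detection.

```python
from collections import defaultdict

# Hypothetical keyword list; a production system would maintain a much
# larger, localised vocabulary.
ELECTION_KEYWORDS = {"polling station", "ballot", "postal vote", "election result"}

def group_by_keyword(posts):
    """Map each matched keyword to the list of posts mentioning it,
    so reviewers can look at related claims together."""
    groups = defaultdict(list)
    for post in posts:
        text = post.lower()
        for keyword in ELECTION_KEYWORDS:
            if keyword in text:
                groups[keyword].append(post)
    return dict(groups)
```

With this grouping, every post mentioning, say, "postal vote" lands in a single bucket, which is the kind of clustering that lets a fact-checker rate a cluster of related claims quickly instead of hunting for each instance.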
Our ads review process has several layers of analysis and detection, both before and after an ad goes live, which you can read more about [here](https://www.facebook.com/business/help/204798856225114?id=649869995454285).

## Tackling influence operations

We define influence operations as coordinated efforts to manipulate or corrupt public debate for a strategic goal (what some may refer to as disinformation), which may or may not include misinformation as a tactic. They can range from covert campaigns that rely on fake identities (what we call coordinated inauthentic behaviour) to overt efforts by state-controlled media entities.

To counter covert influence operations, we've built specialised global teams to stop coordinated inauthentic behaviour, and we have investigated and taken down over 200 of these adversarial networks since 2017, something we share publicly in our [Quarterly Threat Reports](https://transparency.fb.com/metasecurity/threat-reporting/). This is a highly adversarial space: deceptive campaigns we take down continually try to come back and evade detection by us and other platforms, which is why we keep taking action as we find further violating activity.

For more overt efforts, we label state-controlled media on Facebook, Instagram and Threads so that people know when content comes from a publication that may be under the editorial control of a government. We have also applied new and stronger enforcement to Russian state-controlled media, including blocking them in the EU and the UK and demoting their posts globally.
Recent [research](http://graphika.com/faltering-facebook-ignored-instagram) by Graphika shows this led to posting volumes on their pages falling by 55% and engagement falling by 94% compared with pre-war levels, while "more than half of all Russian state media assets had stopped posting altogether."

## Countering the risks related to the abuse of GenAI technologies

Our [Community Standards](https://transparency.fb.com/policies/community-standards/) and [Ad Standards](https://transparency.fb.com/policies/ad-standards) apply to all content, including content generated by AI, and we will take action against this type of content when it violates these policies. AI-generated content is also eligible to be reviewed and rated by our independent fact-checking partners. One of the [rating options](https://www.facebook.com/business/help/341102040382165?id=673052479947730) is *Altered*, which covers "faked, manipulated or transformed audio, video, or photos." When content is rated as such, we label it and down-rank it in Feed so fewer people see it. We also don't allow an ad to run if it contains content that has already been debunked.

For content that doesn't violate our policies, we still believe it's important for people to know when photorealistic content they're seeing has been created using AI.
We label photorealistic images created using Meta AI, and we [label](https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/) organic AI-generated images that users post to Facebook, Instagram and Threads from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock as those companies implement their plans.

We have also added a feature for people to disclose when they share AI-generated images, video or audio so that we can add a label. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label, giving people more information and context.

Advertisers who run ads related to [social issues, elections or politics](https://www.facebook.com/government-nonprofits/blog/political-ads-ai-disclosure-policy) with Meta must also, in certain cases, disclose if they use a photorealistic image or video, or realistic-sounding audio, that has been created or altered digitally, including with AI.
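Labeling organic AI-generated images as described above relies on provenance signals that generator companies embed in their output, such as the IPTC "digital source type" metadata marker for media created by a trained algorithm. The sketch below is a simplified illustration of that idea, not Meta's detection pipeline: it scans raw file bytes for the IPTC trained-algorithmic-media URI, whereas a production system would parse XMP metadata properly and also check invisible watermarks. The function names and label string are hypothetical.

```python
# IPTC NewsCodes URI that generators can embed in XMP metadata to mark
# content produced by a trained algorithm (AI-generated media).
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the trained-algorithmic-media URI appears anywhere
    in the file bytes. A naive stand-in for real XMP parsing."""
    return AI_SOURCE_TYPE in image_bytes

def label_for(image_bytes: bytes) -> str:
    """Pick a hypothetical user-facing label based on the provenance marker."""
    return "ai-generated" if looks_ai_generated(image_bytes) else ""
```

The design point the sketch captures is that detection is cooperative: it only works when the generating tool embedded the marker, which is why the article notes that labeling expands "as those companies implement their plans."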
That is in addition to our industry-leading ad transparency, which includes a verification process to confirm an advertiser is who they say they are and that they live in the UK; a "Paid for by" disclaimer showing who is behind each ad; and our [Ad Library](https://www.facebook.com/ads/library/?active_status=all&ad_type=political_and_issue_ads&country=DE), where everyone can see which ads are running, view information about targeting and find out how much was spent on them.

Since AI-generated content appears across the internet, we've also been working with other companies in our industry on common standards and guidelines. We're a member of the [Partnership on AI](https://partnershiponai.org/), for example, and we recently signed [the tech accord](https://www.aielectionsaccord.com/) designed to combat the spread of deceptive AI content in the 2024 elections. This work is bigger than any one company and will require a huge effort across industry, government and civil society.

## Candidate safety

We want our platforms to give Members of Parliament and candidates the power to engage their constituents on the issues that matter to them in a safe and positive way.
While our rules distinguish between public figures and private individuals to allow for open discussion, we employ a range of measures to protect public figures from malicious posts and behaviour.

Our [Community Standards](https://transparency.meta.com/en-gb/policies/community-standards/) outline what is and isn't allowed on Facebook, Instagram and Threads. Through feedback from our community and experts around the world, we have created policies specifically to protect public figures:

- **Hate speech:** we remove attacks on someone based on their race, religion, nationality, sex, gender identity, sexual orientation or other protected characteristics.
- **Violence and incitement:** we remove language that incites or facilitates serious violence, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical harm or a direct threat to public safety.
- **Bullying and harassment:** we protect MPs and candidates from repeated unwanted contact and sexual harassment, and we remove abusive posts where a public figure is directly tagged in a post or comment or reports it directly.

We also work closely with the political parties to encourage candidates to report anything that raises concern, to assist with best practice around social media security, and to monitor open social media pages for abuse and threats.
We're holding a series of training sessions for candidates on safety, outlining the help available to address harassment on our platforms. All of this information will also be available in our [UK Election Center for Candidates](https://www.facebook.com/government-nonprofits/2024-united-kingdom-general-election).

For more information about how Meta approaches elections, visit our [Preparing for Elections page](https://about.facebook.com/actions/preparing-for-elections-on-facebook/) and, for WhatsApp, our elections [website](https://faq.whatsapp.com/518562649771533).

[Source](https://about.fb.com/news/2024/06/how-meta-is-preparing-for-the-2024-uk-general-election/)