Protecting Public Debate During the War in Ukraine and Protests in Iran

At times of war and social unrest, apps like Facebook, Instagram, WhatsApp and Messenger give people a way to connect within and across borders to share information and make their voices heard. In a number of regions around the world, we’ve seen attempts by governments to silence citizens, control the flow of information, and manipulate public debate. Nowhere has this been more apparent over the last year than in the Russian invasion of Ukraine and during the mass protests in Iran.

We go to great lengths to protect people’s ability to use our apps when they’re needed the most. Of course, there are times when we have to take action against content that doesn’t otherwise violate our policies in order to comply with local laws. But our starting point is always to defend people’s ability to make their voices heard and to resist attempts to clamp down on the use of our services — especially during times of war and social unrest.  

As part of our commitment to transparency, we also publish quarterly reports that give insights into our work in this area, including the adversarial threats we’ve identified and taken action against, and our enforcement of Meta’s policies. Alongside today’s publication of these and other transparency reports covering Q4 2022, we’re also providing updates on our actions relating to the Russia-Ukraine war and the protests in Iran.


The Russia-Ukraine War

It is nearly a year since Russia’s full-scale invasion of Ukraine. Within days, Russia attempted to block or restrict access to Facebook and Instagram as part of a wider attempt to cut Russian citizens off from the open internet, silence people and independent media, and manipulate public opinion. Since the invasion began, we’ve provided updates on our response, including the measures we’ve taken to help keep Ukrainians and Russians safe, our approach to misinformation and state-controlled media, and our efforts to ensure reliable access to trusted information.

Throughout the war, our teams have been on high alert to identify emerging threats and respond as quickly as we can. As it has progressed, we have observed changes in both overt and covert Russian influence operations on our platform and on the internet more broadly.

State-controlled media:

After we applied new and stronger enforcement measures to Russian state-controlled media, including demoting their posts and labeling them so users know the source of the information, recent research found a substantial drop both in the activity of these Pages around the world and in people’s engagement with the content they posted.

In total, we labeled more than 450 Pages and Instagram accounts. In response, we saw state-controlled media shifting to other platforms and using new domains to try to escape the additional transparency on (and demotions against) links to their websites. We observed this behavior around the world, not only in places where Russian state media faced government restrictions. 

Covert influence campaigns:

While overt activity by Russian state-controlled media on our platforms has decreased, attempts at covert activity have increased sharply. Russian covert networks have historically targeted Ukraine more than any other country. Last year, we took down two Russian covert influence campaigns that focused on the war. Rather than trying to build up convincing fake personas, these campaigns resembled “smash-and-grab” operations that used thousands of fake accounts across social media — not Meta platforms alone — in an attempt to overwhelm the conversation with their content. In both cases, our automated systems detected and disabled the majority of these accounts soon after they were created.

Since taking these two networks down, we’ve seen thousands of recidivist attempts to create fake accounts. This covert activity is aggressive and persistent, constantly probing for weak spots across the internet, including by setting up hundreds of new spoofed news-organization domains.

Humanitarian support:

The work of hundreds of humanitarian organizations and volunteer groups is disrupted when information is cut off or limited. Through our Data for Good program, established in 2017, Meta has been part of an unprecedented collaboration between technology companies, the public sector, universities, nonprofits and others to aid disaster relief, support economic recovery, and inform policy and decision making. Data for Good datasets informed the work of humanitarian partners, including Direct Relief, which provided 650 tons of medical aid and $14.7 million in direct financial assistance to support Ukrainian refugees. They also inform the work of large UN agencies, like the International Organization for Migration and the United Nations High Commissioner for Refugees, as they support medical relief, housing and resettlement efforts.

Displacement estimates from Data for Good have helped fill gaps in official statistics on where displaced populations ultimately resettle after leaving Ukraine. For example, many people leave Ukraine and immediately enter Poland but then move on to Germany or the Czech Republic, which official statistics may not capture. Information from Data for Good has uncovered demographic insights on these populations, including age and gender breakdowns, which have helped inform the types of medical services and employment support needed. These insights also informed the World Bank’s support to governments: as a result, it shifted some resource allocations from food and housing toward registration and employment services.

We’ve also seen a huge outpouring of generosity and solidarity for the people of Ukraine from Facebook and Instagram users all over the world. So far, people have raised nearly $70 million on Facebook and Instagram for nonprofits supporting humanitarian efforts in Ukraine.


Protests in Iran

The widespread protests that began in the wake of the awful killing of Mahsa Amini led the Iranian authorities to clamp down aggressively on speech and freedom of assembly, and to limit the use of the internet and apps like Instagram, which are widely used by Iranians. We’ve followed events closely and taken a number of steps to try to keep people connected and safe.

Helping people connect:

Despite attempts to block Instagram, tens of millions of people are still finding ways to access it via VPNs and other means, including with a more stable connection through Instagram Lite, which we launched in Iran last year to help people access Instagram when bandwidth is reduced. In January, we also introduced a blocking-circumvention tool in WhatsApp that allows people to access WhatsApp through proxy servers set up by volunteers around the world.

Helping protestors shine a light on events on the ground:

Instagram has been widely used by Iranians to shed light on the protests and the brutal response of the regime. Since Mahsa Amini’s death, hashtags related to the protests in Iran have been used on Instagram more than 160 million times. #MahsaAmini was the fifth top hashtag globally during the first three months of protests, demonstrating the power of social media to help create awareness in these critical moments. Protestors have shared their Instagram footage of the protests with international media outlets, many of which can’t report from Iran.

When the protests began, we moved quickly to set up a dedicated team focused on addressing any issues stemming from them. This team is made up of Farsi speakers, as well as experts on our policies, products, operations, and human rights, who are working around the clock to make sure we apply our policies correctly and address any mistakes as quickly as possible.

We responded to over-enforcement at the start of the protests, when accounts suddenly posting an unusually high volume of videos triggered our spam detection and were temporarily restricted from posting. We can’t turn off the spam detection that protects our global community, so our teams have been working on other ways to prevent over-enforcement.

Social media accounts of activists and journalists are often targeted during protests. We’ve designed policies to give these groups more protections from threats of violence, including removing content that “outs” people as activists in situations that could put them in danger. And when we become aware of human rights defenders who have been arrested or detained as a result of their work, we take steps to thwart unauthorized access to their accounts. To help people condemn and raise awareness of human rights abuses, we allow graphic content — often with a warning label — while still removing particularly violent content. 

Quarterly Transparency Report

Today we are also publishing our regular quarterly transparency reports covering Q4 2022. This includes our Adversarial Threat Report, with details of three Coordinated Inauthentic Behavior networks we took down in the last quarter. The networks targeted domestic audiences in Cuba, Serbia and Bolivia and were all linked to these countries’ governments or ruling parties. They used a range of tactics to influence public debate, including running deceptive campaigns, attempting to intimidate the opposition and activists into silence, and falsely reporting people with opposing views in an attempt to get Meta to remove their accounts.

As these reports demonstrate, these aren’t issues any one company can tackle alone. These campaigns — and the tactics used in Russia, Iran and elsewhere — target a range of platforms and services across the internet. Tactics will adapt and change, and new threats will emerge. As they do, we will continue investigating and removing covert influence operations like these and sharing our findings with industry peers, researchers and the public to help inform our collective defenses.

You can find the reports here:
