We know that safety and security are top of mind for people using our apps, including businesses and advertisers. Today, as part of our quarterly integrity reporting, we’re sharing updates on our work to combat a range of threats, including covert influence operations, cyber espionage and malware campaigns.
In my first year as Meta’s chief information security officer, my focus has been on bringing together the teams working on integrity, security, support and operations so that we can operate as effectively as possible. Each of these efforts has been ongoing for many years, and a key focus for us has been sharing progress, bringing in outside experts and working with other companies to tackle industry-wide threats. It’s been more than 10 years since our bug bounty program began working with the security research community, 10 years since we first published transparency reports on government data requests, more than five years since we started sharing takedowns of covert influence operations and five years since we published our first community standards enforcement report.
We’ve learned a lot through this work, including the importance of sharing both qualitative and quantitative insights into our integrity work. And it’s been encouraging to see our peers join us in expanding their trust and safety reporting. We’re committed to continuing these efforts, and today’s updates are good examples of this work.
Countering Malware Campaigns Across the Internet
My teams track and take action against hundreds of threat actors around the world, including malware campaigns. Here are a few things that stood out from our latest malware work.
First, our threat research has shown time and again that malware operators, just like spammers, are very attuned to what’s trendy at any given moment. They latch onto hot-button issues and popular topics to get people’s attention. The latest wave of malware campaigns has taken notice of generative AI, a technology that has captured people’s imagination and excitement.
Since March alone, our security analysts have found around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet. For example, we’ve seen threat actors create malicious browser extensions, available in official web stores, that claim to offer ChatGPT-related tools. Some of these malicious extensions did include working ChatGPT functionality alongside the malware, likely to avoid suspicion from the stores and from users. We’ve detected and blocked more than 1,000 of these unique malicious URLs from being shared on our apps, and we’ve reported them to our industry peers at the file-sharing services where the malware was hosted so they, too, can take appropriate action.
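To make that kind of URL enforcement concrete, here is a minimal illustrative sketch of how shared links might be normalized and checked against a set of known-bad URLs. This is not Meta’s actual enforcement pipeline; the blocklist entry, domains and function names are hypothetical, and a production system would feed indicators in from threat research rather than hard-coding them.

from urllib.parse import urlsplit, urlunsplit

# Hypothetical set of known-bad URLs, e.g. fake "ChatGPT tool" lures.
BLOCKED_URLS = {
    "https://example-bad-site.test/chatgpt-extension",
}

def normalize_url(url: str) -> str:
    # Lowercase the scheme and host and drop the fragment so trivial
    # variations of the same link still match the blocklist.
    parts = urlsplit(url.strip())
    return urlunsplit(
        (parts.scheme.lower(), parts.netloc.lower(), parts.path, parts.query, "")
    )

def is_blocked(url: str) -> bool:
    return normalize_url(url) in BLOCKED_URLS

for url in (
    "HTTPS://EXAMPLE-BAD-SITE.test/chatgpt-extension#promo",
    "https://example.org/legit",
):
    print(url, "->", "blocked" if is_blocked(url) else "allowed")

In practice, matching also has to contend with redirectors and link shorteners that hide a link’s final destination, which is exactly the evasion tactic described below.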
This is not unique to the generative AI space. As an industry, we’ve seen similar waves around other topics that were popular in their time, such as crypto scams fueled by the interest in digital currency. The generative AI space is rapidly evolving and bad actors know it, so we should all be vigilant.
Second, we’ve seen that our efforts and those of our industry peers are forcing threat actors to rapidly evolve their tactics in an attempt to evade detection and persist. One way they do this is by spreading across as many platforms as they can to protect against enforcement by any one service. For example, we’ve seen malware families leveraging services like ours and LinkedIn, browsers like Chrome, Edge, Brave and Firefox, link shorteners, file-hosting services like Dropbox and Mega, and more. When they get caught, they mix in more services, including smaller ones, that help them disguise the ultimate destination of their links. Another example: in response to detection, some malware families masquerading as ChatGPT apps switched their lures to other popular themes, like Google’s Bard or TikTok marketing support.
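A common defensive step against that kind of link laundering is simply unwrapping shortened links by following the redirect chain to see where a link actually lands. Below is a minimal sketch, assuming the third-party requests library; the short link in the usage comment is hypothetical.

import requests

def resolve_redirect_chain(url: str, timeout: float = 10.0) -> list[str]:
    # Follow HTTP redirects and return the full chain of URLs,
    # ending with the final destination the link points to.
    # HEAD keeps this cheap; some hosts reject HEAD, so fall back to GET.
    resp = requests.head(url, allow_redirects=True, timeout=timeout)
    if resp.status_code == 405:
        resp = requests.get(url, allow_redirects=True, timeout=timeout, stream=True)
    return [r.url for r in resp.history] + [resp.url]

# Example with a hypothetical shortener:
# resolve_redirect_chain("https://short.example/abc")
# -> ["https://short.example/abc", "https://cdn.example/stage2", ...]

Threat actors counter this by chaining several shorteners and redirection services together, which is why visibility from more than one platform matters.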
These changes are likely an attempt by threat actors to ensure that any one service has only limited visibility into the entire operation. When bad actors count on us to work in silos while they target people far and wide across the internet, we need to work together as an industry to protect people. That’s why we designed our threat research to help us scale our security work in two ways: it disrupts malicious operations on our platform, and it helps inform our industry’s defenses against threats that rarely target only one platform. The insights we gain from this research help drive our continuous product development to protect people and businesses.
In the months and years ahead, we’ll continue to highlight how these malicious campaigns operate, share threat indicators with our industry peers and roll out new protections to address new tactics. For instance, we’re launching a new support flow for businesses impacted by malware. Read more about our work to help businesses stay safe on our apps.
Disrupting Cyber Espionage and Covert Influence Operations
In today’s Q1 Adversarial Threat Report, we shared findings about nine adversarial networks we took action against for various security violations.
Six of these networks engaged in coordinated inauthentic behavior (CIB) that originated in the US, Venezuela, Iran, China, Georgia, Burkina Faso and Togo, and primarily targeted people outside of their countries. We removed the majority of these networks before they were able to build authentic audiences.
Nearly all of them ran fictitious entities — news media organizations, hacktivist groups and NGOs — across the internet, including on Facebook, Twitter, Telegram, YouTube, Medium, TikTok, Blogspot, Reddit, WordPress, Freelancer[.]com, hacking forums and their own websites. Half of these operations were linked to private entities including an IT company in China, a US marketing firm and a political marketing consultancy in the Central African Republic.
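A note on notation: “Freelancer[.]com” above is written with brackets on purpose. Defanging indicators this way is a standard threat-intelligence convention that keeps links in a report from being clickable. The helper below is a purely illustrative sketch, with hypothetical example indicators, of how defanged indicators are turned back into a matchable form.

import re

def refang(indicator: str) -> str:
    # Convert a defanged indicator, e.g. "Freelancer[.]com" or
    # "hxxps://bad[.]example", back into its live form for matching.
    out = indicator.replace("[.]", ".").replace("(.)", ".")
    return re.sub(r"^hxxp", "http", out, flags=re.IGNORECASE)

print(refang("Freelancer[.]com"))       # Freelancer.com
print(refang("hxxps://bad[.]example"))  # https://bad.example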
We also disrupted three cyber espionage operations in South Asia, including an advanced persistent threat (APT) group we attributed to state-linked actors in Pakistan, a threat actor in India known in the security industry as Patchwork APT, and the threat group known as Bahamut APT in South Asia.
Each of these APTs relied heavily on social engineering to trick people into clicking on malicious links, downloading malware or sharing personal information across the internet. This investment in social engineering meant that these threat actors did not have to invest as much in the malware itself. In fact, for at least two of these operations, we saw a reduction in the malicious capabilities of their apps, likely to ensure they could be published in official app stores. As the security community has continued to disrupt these malicious efforts, we’ve seen these APTs forced to set up new infrastructure, change tactics and invest more in hiding and diversifying their operations, all of which has likely degraded their effectiveness. Read more about this threat research in our Q1 Adversarial Threat Report.