Groundbreaking Studies Could Help Answer the Thorniest Questions About Social Media and Democracy

As social media platforms have empowered people to connect with others, express themselves freely, and build and be part of borderless communities, online political debate has erupted. Platforms like Facebook and Instagram have become a stage for democratic debate – among citizens, between candidates and voters, and among campaigners and advocacy groups. Much of that debate is positive – if raucous, messy, and often intense.

But with these technologies being so young, and their emergence taking place during a period of profound economic, political and wider technological disruption, our understanding of social media’s role in society is still imperfect and incomplete. We are faced with a number of important questions to which there are still no definitive answers based on clear empirical evidence. Does social media make us more polarized as a society or merely reflect divisions that already exist? Does it help people become better informed about politics, or less? How does it affect people’s attitudes toward government and democracy?

Today we have new evidence to help answer some of these questions. Ahead of the US Presidential election in 2020, there was an opportunity to create an evidence base for researchers to dive into these and other questions, as well as strong demand from academics to do the research. That was the impetus behind an unprecedented research partnership between Meta and external academics to better understand the impact of Facebook and Instagram on key political attitudes and behaviors during that election cycle. The result is a series of studies, the first four of which are published today. The intention is for more papers – 16 in total – to be published in the future. 

The papers published today include studies of the effects of algorithmic ranking and virality, the prevalence and effects of like-minded information exposure on Facebook, and ideological segregation in exposure to news. Although questions about social media’s impact on key political attitudes, beliefs, and behaviors are not fully settled, the experimental findings add to a growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization – animosity toward people on the other side of the political divide – or have meaningful effects on these outcomes. They also challenge the now commonplace assertion that the ability to reshare content on social media drives polarization.

For example, Nature’s summary of one paper states that the findings “challenge views about the extent to which the ‘echo chambers’ of social media drive political polarization.” And the co-chairs of the study have stated: “Removing reshared content on Facebook produced a decrease in news knowledge among the study participants, and did not significantly affect political polarization or other individual-level political attitudes.”

The studies also shed new light on the claim that the way content is surfaced on social media – and by Meta’s algorithms specifically – keeps people divided. One of the papers shows there is considerable ideological segregation in consumption of political news, reflecting a complex interaction between algorithmic and social factors. Yet, when participants in the experiments saw less content from sources that reinforced their views, they were actually more likely to engage with the like-minded content they did see. Even then, the reduced exposure had no detectable impact on polarization, political attitudes, or beliefs.

Michael Wagner, a professor at the University of Wisconsin, followed the project throughout as an independent observer, documenting and publicly commenting on how the research was conducted. As Dr. Wagner writes in his introduction to the papers, changes to what participants saw in their feeds, or whether they encountered reshared content, “did not reduce polarization or improve political knowledge during the 2020 US election. Indeed, removing reshared content reduced political knowledge.”

We will continue to review these papers in full, as well as those we expect to be published in the coming months.  

This is the first time Meta, or any technology company, has entered into such a partnership to transparently study its impact on elections. The external researchers have had full operational and intellectual independence throughout. They were not paid by Meta, and they do not answer to us. The contracts we signed with them are the same ones we use with other independent researchers who work with our data, and they are publicly posted on Social Science One’s website.

The US 2020 project is a partnership between Meta researchers and external academics, the latter led by Professor Talia Jomini Stroud, founder and current Director of the Center for Media Engagement at the University of Texas at Austin, and Professor Joshua A. Tucker, co-director of the Center for Social Media and Politics at New York University. Professors Stroud and Tucker selected 15 additional researchers to collaborate on this effort, based on their expertise. Meta’s internal researchers designed the studies together with these external partners. Importantly, neither Meta’s researchers nor the company as a whole has the authority to restrict or suppress the external academics’ findings – regardless of whether those findings are favorable to the company.

As four of the lead researchers explained in a blog post when the project was announced, the partnership aimed to overcome two major hurdles that have stood in the way of previous research into these issues. First, that “increasing public concern and legal obligations related to data privacy led social media companies to restrict access to data previously used by external researchers.” And second, that “it is hard to conduct a rigorous scientific study on social media’s impact after the fact.”

To overcome these issues, data was collected during the election cycle – not after – and we asked for explicit, informed consent from those who opted into the research that analyzes individual-level data. This means participants both consented to the use of their data and were given information about how and why it would be used. Additionally, as part of the studies, the researchers analyzed aggregated user data on Facebook and Instagram to help understand broader patterns. The studies – and the consent language – were also reviewed and approved by an Institutional Review Board (IRB) to ensure they adhered to high ethical standards.
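To make the distinction concrete, here is a minimal sketch of the consent-gated, aggregated pattern described above. It is purely illustrative: the field names, records, and metric are hypothetical, not Meta’s actual data or pipeline.

```python
# Illustrative sketch only: hypothetical field names and records, not Meta's data or pipeline.
from collections import defaultdict

# Each record represents one study participant. Individual-level analysis is
# performed only for participants who gave explicit, informed consent.
records = [
    {"user_id": "u1", "consented": True,  "platform": "facebook",  "minutes_on_feed": 34.0},
    {"user_id": "u2", "consented": False, "platform": "facebook",  "minutes_on_feed": 12.0},
    {"user_id": "u3", "consented": True,  "platform": "instagram", "minutes_on_feed": 51.0},
]

# Individual-level data: restricted to consenting participants.
individual_level = [r for r in records if r["consented"]]

# Aggregated view: per-platform averages that describe broad patterns without
# singling out any one person's behavior.
totals, counts = defaultdict(float), defaultdict(int)
for r in records:
    totals[r["platform"]] += r["minutes_on_feed"]
    counts[r["platform"]] += 1
platform_averages = {p: totals[p] / counts[p] for p in totals}

print(len(individual_level), platform_averages)
```

The point of the pattern is simply that individual-level records are examined only for consenting participants, while platform-wide patterns are described in aggregate.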

The findings were submitted to academic journals in open-access format, which means they will be freely available to the public. Meta and the researchers also documented study plans and hypotheses in advance through a pre-registration process, and those initial commitments are being released upon publication of the studies. This means that people will be able to check that we did what we said we would – and didn’t hide any of the results. In addition, so that others can run their own analyses and further check our homework, we plan to deliver de-identified data (i.e. data that cannot reasonably be linked to an individual) from the studies we run. This foundational data will be archived by the Inter-university Consortium for Political and Social Research (ICPSR), part of the University of Michigan Institute for Social Research, where other researchers can access it to replicate the analyses and check the validity of the findings.
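For readers unfamiliar with the term, the sketch below shows common de-identification steps – dropping direct identifiers, hashing a persistent ID with a separately held salt, and coarsening quasi-identifiers. The field names, salt handling, and coarsening rules are assumptions for illustration, not the procedure applied to the archived data.

```python
# Illustrative sketch of common de-identification steps; the field names, the salt,
# and the coarsening rules are assumptions, not the procedure used in these studies.
import hashlib

SALT = "project-specific-secret"  # hypothetical salt, kept separate from the released data

def deidentify(record: dict) -> dict:
    out = dict(record)
    # Drop direct identifiers entirely.
    for field in ("name", "email", "phone"):
        out.pop(field, None)
    # Replace the persistent user ID with a salted one-way hash.
    out["participant_key"] = hashlib.sha256((SALT + record["user_id"]).encode()).hexdigest()
    del out["user_id"]
    # Coarsen quasi-identifiers that could be combined to re-identify someone.
    out["age_band"] = f"{(record['age'] // 10) * 10}s"
    del out["age"]
    return out

print(deidentify({"user_id": "u1", "name": "Ada", "email": "a@example.com",
                  "phone": "555-0100", "age": 34, "minutes_on_feed": 34.0}))
```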

Last year, Meta’s transparency products were used to further academic research on elections, polarization, digital violence, misinformation, and other pressing social issues in dozens of publications. We recently announced that researchers can apply for access to beta versions of a new suite of research tools: Meta Content Library and API. The Library includes data from public posts, pages, groups, and events on Facebook. For Instagram, it will include public posts and data from creator and business accounts. Data from the Library can be searched, explored, and filtered on a graphical user interface or through a programmatic API.
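Purely as an illustration of the search-and-filter workflow described above, a programmatic query might look like the sketch below. The base URL, path, parameter names, and response shape are placeholders invented for the example – they are not the real Content Library API surface, which is specified in Meta’s researcher documentation.

```python
# Hypothetical illustration of a programmatic search against a content API.
# The base URL, path, parameters, and response shape are placeholders;
# consult Meta's researcher documentation for the real Content Library API.
import requests

BASE_URL = "https://example.invalid/content-library"  # placeholder, not a real endpoint
ACCESS_TOKEN = "YOUR_RESEARCHER_TOKEN"                # issued after an approved application

params = {
    "q": "election",                # keyword search over public posts
    "surface": "facebook_pages",    # hypothetical filter: public Facebook pages
    "since": "2020-09-01",
    "until": "2020-11-30",
    "limit": 25,
}

resp = requests.get(
    f"{BASE_URL}/search",
    params=params,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for post in resp.json().get("data", []):
    print(post.get("id"), post.get("creation_time"))
```

In practice, access requires an approved researcher application, as noted above, and queries cover only the public content the Library includes.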

The research published in these papers won’t settle every debate about social media and democracy, but we hope and expect it will advance society’s understanding of these issues. Its findings will be hugely valuable to us, and we hope they will also help policymakers as they shape the rules of the road for the internet – for the benefit of our democracy, and society as a whole.


