Two years of progress on content responsibility

It’s been two years since we opened our Google Safety Engineering Center (GSEC) for Content Responsibility in Dublin, the second in our network of centers dedicated to online safety in Europe.

Located in our European headquarters, GSEC Dublin is a hub for our regional Trust & Safety team, a cross-disciplinary group that includes policy experts, anti-abuse engineers, threat researchers, machine learning and artificial intelligence specialists, technical analysts and compliance professionals. They are part of our global team committed to providing users with information and content they can trust, by enforcing our policies and moderating content at scale.

The center was designed to provide additional transparency into this work and make it easier for regulators, policymakers and civil society to gain a hands-on understanding of how we develop and enforce policies, how our anti-abuse technologies and early threat detection systems work, how we work with Priority Flaggers, and how our enforcement processes and content moderation practices work.

To make progress on topics that are so nuanced and complex, companies like ours must work together with civil society, researchers, academics and policymakers. Over the last two years, GSEC Dublin has held nearly 100 public and private engagements to share our experience of managing content risk, and hear from experts across a wide range of topics, including misinformation, ads safety, election integrity, the use of AI in content moderation, and fighting child sexual abuse and exploitation online.

These events have offered valuable opportunities for us to learn from each other and collaborate, as we continue to develop and improve the tools, processes, and teams that help us provide access to credible information and moderate content.

We’ve also ventured further afield to hold events in more than a dozen European countries, including a special series in the Baltics and Eastern Europe, to discuss our work moderating content and fighting mis- and disinformation following Russia’s invasion of Ukraine.

And we’ve acted on our commitment to protect users from illegal and harmful content across Google and YouTube services. We’ve continued to invest in our Trust & Safety operations, and we’ve partnered with others across the industry to tackle tough issues like child sexual abuse online — including by sharing our content moderation technology with other companies, free of charge.

We look forward to continuing to listen, learn and engage on these topics, and to provide transparency into these efforts.

You can learn more about how we are building a safer, more trusted internet at safety.google.
