A shared agenda for responsible AI progress


When it comes to AI, we need both good individual practices and shared industry standards. But society needs something more: sound government policies that promote progress while reducing the risks of abuse. And developing good policy requires deep discussion across governments, the private sector, academia and civil society.

As we’ve said for years, AI is too important not to regulate — and too important not to regulate well. The challenge is to do it in a way that mitigates risks and promotes trustworthy applications that live up to AI’s promise of societal benefit.

Here are some core principles that can help guide this work:

  1. Build on existing regulation, recognizing that many regulations that apply to privacy, safety or other public purposes already apply fully to AI applications.
  2. Adopt a proportionate, risk-based framework focused on applications, recognizing that AI is a multi-purpose technology that calls for customized approaches and differentiated accountability among developers, deployers and users.
  3. Promote an interoperable approach to AI standards and governance, recognizing the need for international alignment.
  4. Ensure parity in expectations between non-AI and AI systems, recognizing that even imperfect AI systems can improve on existing processes.
  5. Promote transparency that facilitates accountability, empowering users and building trust.

Importantly, in developing new frameworks for AI, policymakers will need to reconcile competing policy objectives like competition, content moderation, privacy and security. They will also need to include mechanisms that allow rules to evolve as technology progresses. AI remains a dynamic, fast-moving field, and we will all learn from new experiences.

With many collaborative, multi-stakeholder efforts already underway around the world, there’s no need to start from scratch when developing AI frameworks and responsible practices.

The U.S. National Institute of Standards and Technology’s AI Risk Management Framework and the OECD’s AI Principles and AI Policy Observatory are two strong examples. Developed through open and collaborative processes, they provide clear guidelines that can adapt to new AI applications, risks and developments. And we continue to provide feedback on proposals like the European Union’s pending AI Act.

Regulators should look first at how to apply existing authorities — like rules ensuring product safety and prohibiting unlawful discrimination — and pursue new rules only where they’re needed to manage truly novel challenges.


