Designing for privacy in an AI world

Artificial intelligence can help take on tasks that range from the everyday to the extraordinary, whether it’s crunching numbers or curing diseases. But the only way to harness AI’s potential in the long run is to build it responsibly.

That’s why the conversation about generative AI and privacy is so important, and why we want to support this dialogue with insights from the frontlines of innovation and from our extensive engagement with regulators and other experts.

In our new “Generative AI and Privacy” policy working paper, we argue that AI products should have embedded protections that promote user safety and privacy from the start. And we recommend policy approaches that address privacy concerns while unlocking AI’s benefits.

Privacy-by-design in AI

AI promises benefits to people and society, but it also has the potential to exacerbate existing societal challenges and pose new ones, as our own research and that of others have highlighted.

The same is true for privacy. It’s important to build in protections that provide transparency and control, and that address risks like the inadvertent leakage of personal information.

That requires a robust framework from development to deployment, grounded in well-established principles. Any organization building AI tools should be clear about its privacy approach.

Ours is guided by longstanding data protection practices, Privacy & Security Principles, Responsible AI practices and our AI Principles. This means we implement strong privacy safeguards and data minimization techniques, provide transparency about data practices, and offer controls that empower users to make informed choices and manage their information.

Focus on AI applications to effectively reduce risks

There are legitimate issues to explore as we apply some well-established privacy principles to generative AI.

What does data minimization mean in practice when training models on large volumes of data? How can we provide meaningful transparency into complex models in ways that address individuals’ concerns? And how do we provide age-appropriate experiences that benefit teens in a world of AI tools?

Our paper offers some initial thoughts for these conversations, considering two distinct phases for models:

  • Training and development
  • User-facing applications

During training and development, personal data such as names or biographical information makes up a small but important element of training data. Models use such data to learn how language embeds abstract concepts about relationships between people and our world.

These models are not “databases,” nor is their purpose to identify individuals. In fact, including personal data in training can help reduce bias in models (for example, by teaching them to recognize names from different cultures around the world) and improve accuracy and performance.

It is at the application level that we see both the greater potential for privacy harms, such as personal data leakage, and the greater opportunity to create effective safeguards. This is where features like output filters and auto-delete play important roles.

Prioritizing such safeguards at the application level is not only the most feasible approach, but also, we believe, the most effective one.
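To make this concrete, here is a minimal, illustrative sketch of one such application-level safeguard: an output filter that redacts likely personal data from a model response before it is shown to a user. The patterns and function names are our own hypothetical choices, not a description of any particular product, and a production system would rely on vetted PII-detection tooling rather than regexes alone.

```python
import re

# Hypothetical patterns for two common kinds of personal data. A production
# filter would use vetted PII-detection tooling, not regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def filter_model_output(text: str) -> str:
    """Redact likely personal data from a model response before display."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text

# The filter sits between the model and the user, so it applies regardless
# of how the underlying model was trained.
raw = "You can reach the applicant at jane.doe@example.com or 555-123-4567."
print(filter_model_output(raw))
# -> You can reach the applicant at [email redacted] or [phone redacted].
```

The key design point is that the safeguard operates on outputs at the application layer, which is why it can be tuned, audited and updated far more readily than the model itself.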

Achieving privacy through innovation

Most of today’s AI privacy conversations focus on mitigating risks, and rightly so, given the necessary work of building trust in AI. Yet generative AI also offers great potential to improve user privacy, and we should take advantage of these important opportunities.

Generative AI is already helping organizations understand privacy feedback from large numbers of users and identify privacy compliance issues. AI is enabling a new generation of cyber defenses. And privacy-enhancing technologies like synthetic data and differential privacy are illuminating ways we can deliver greater benefits to society without revealing private information. Public policies and industry standards should promote, and not unintentionally restrict, such positive uses.
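As one illustration of how a privacy-enhancing technology works, the sketch below applies the classic Laplace mechanism for differential privacy to a simple counting query. The function names, parameters and data are illustrative assumptions, not any organization’s implementation.

```python
import numpy as np

def dp_count(flags, epsilon, rng):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1: adding or removing one person changes
    the true count by at most 1, so Laplace noise with scale 1/epsilon is
    enough to mask any single individual's contribution.
    """
    true_count = int(np.sum(flags))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: report how many of 10,000 simulated users enabled a
# feature, without the released number revealing any one user's choice.
rng = np.random.default_rng(seed=0)
flags = rng.random(10_000) < 0.3   # simulated per-user yes/no flags
print(dp_count(flags, epsilon=0.5, rng=rng))
```

The aggregate statistic stays useful for decision-making even though no individual’s data can be inferred from it, which is precisely the kind of positive use that policy frameworks should leave room for.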

The need to work together

Privacy laws are meant to be adaptive, proportional and technology-neutral — over the years, this is what has made them resilient and durable.

The same holds true in the age of AI, as stakeholders work to balance strong privacy protections with other fundamental rights and social goals.

The work ahead will require collaboration across the privacy community, and Google is committed to working with others to ensure that generative AI responsibly benefits society.

Read our Policy Working Paper on Generative AI and Privacy here.
