Our commitment to advancing bold and responsible AI, together

We’re proud to join with other leading AI companies to jointly commit to advancing responsible practices in the development of artificial intelligence. Today is a milestone in bringing the industry together to ensure that AI helps everyone. These commitments will support efforts by the G7, the OECD, and national governments to maximize AI’s benefits and minimize its risks.

We’ve long believed that being bold on AI means being responsible from the start. Here’s more on how we’re doing just that:

Applying AI to solve society’s biggest challenges

We’ve been working on AI for more than a dozen years, and in 2017 reoriented to be an “AI-first company.” Today AI powers Google Search, Translate, Maps, and other services you use every day. And we’re using AI to solve societal issues — forecasting floods, cutting carbon emissions by reducing stop-and-go traffic, improving health care by answering clinical questions with more than 90% accuracy, and helping to treat and screen for diseases like breast cancer.

Promoting safe and secure AI systems

But it’s not enough for AI to power better services — we also want to make those services safe and secure. We design our products to be secure by default, and our approach to AI is no different. We recently introduced our Secure AI Framework (SAIF) to help organizations secure AI systems, and we expanded our Bug Hunters program (including our Vulnerability Rewards Program) to incentivize research on AI safety and security. We put our models through adversarial testing to mitigate risks, and our Google DeepMind team is advancing the state of the art on topics like helping AI communicate in safer ways, preventing advanced models from being misused, and designing systems to be more ethical and fair.

We’re committed to continuing this work and to participating in additional red-teaming exercises, including one at DEF CON next month.

Building trust in AI systems

We recognize that powerful new AI tools can amplify current societal challenges like misinformation and unfair bias. That’s why in 2018 we published a set of AI Principles to guide our work, and established a governance team to put them into action by conducting ethical reviews of new systems, working to avoid unfair bias, and building in privacy, security, and safety. Our Responsible AI Toolkit helps developers pursue AI responsibly as well. We will keep working to build trust in AI systems, including by sharing regular progress reports on our work.

When it comes to content, we’re taking steps to promote trustworthy information. We’ll soon be integrating watermarking, metadata, and other innovative techniques into our latest generative models, and bringing an About this image tool to Google Search to give you context about where an image first appeared online. Addressing AI-generated content will require industry-wide solutions, and we look forward to working with others, including the Partnership on AI’s synthetic media working group.

Building responsible AI, together

None of us can get AI right on our own. We’re pleased to be joining other leading AI companies in endorsing these commitments, and we pledge to continue working together by sharing information and best practices. Groups like the Partnership on AI and MLCommons are already leading important initiatives, and we look forward to additional efforts to promote the responsible development of new generative AI tools.
