Google’s 3 emerging best practices

Google’s ongoing work in AI powers tools that billions of people use every day — including Google Search, Translate, Maps and more. Some of the work we’re most excited about involves using AI to solve major societal issues — from forecasting floods and cutting carbon to improving healthcare. We’ve learned that AI has the potential to have a far-reaching impact on the global crises facing everyone, while at the same time expanding the benefits of existing innovations to people around the world.

This is why AI must be developed responsibly, in ways that address identifiable concerns like fairness, privacy and safety, with collaboration across the AI ecosystem. And it’s why — after announcing in 2017 that we were an “AI-first” company — we shared our AI Principles and have since built an extensive AI Principles governance structure and a scalable, repeatable ethics review process. To help others develop AI responsibly, we’ve also built a growing Responsible AI toolkit.

Each year, we share a detailed report on our processes for risk assessments, ethics reviews and technical improvements in a publicly available annual update — 2019, 2020, 2021, 2022 — supplemented by a brief, midyear look at our own progress that covers what we’re seeing across the industry.

This year, generative AI is receiving more public focus, conversation and collaborative interest than any emerging technology in our lifetime. That’s a good thing. This collaborative spirit can only benefit the goal of AI’s responsible development on the road to unlocking its benefits, from helping small businesses create more compelling ad campaigns to enabling more people to prototype new AI applications, even without writing any code.

For our part, we’ve applied the AI Principles and an ethics review process to our own development of AI in our products — generative AI is no exception. What we’ve found in the past six months is that there are clear ways to apply safer, socially beneficial practices to generative AI concerns like unfair bias and factuality. We proactively integrate ethical considerations early in the design and development process, and we have significantly expanded our reviews of early-stage AI efforts, with a focus on guidance for generative AI projects.

For our midyear update, we’d like to share three of our best practices based on this guidance and what we’ve done in our pre-launch design, reviews and development of generative AI: design for responsibility, conduct adversarial testing and communicate simple, helpful explanations.
