For more than two decades, Google has worked with machine learning and AI to make our products more helpful. AI has helped our users in everyday ways, from Smart Compose in Gmail to finding faster routes home in Maps. AI is also allowing us to contribute to addressing major issues facing everyone, whether that means advancing medicine or finding more effective ways to combat climate change. As we continue to incorporate AI, and more recently generative AI, into more Google experiences, we know it’s imperative to be bold and responsible together.
Building protections into our products from the outset
An important part of introducing this technology responsibly is anticipating and testing for a wide range of safety and security risks, including those presented by images generated by AI. We’re taking steps to embed protections into our generative AI features by default, guided by our AI Principles:
- Protecting against unfair bias: We’ve developed tools and datasets to help identify and mitigate unfair bias in our machine learning models. This is an active area of research for our teams, and over the past few years we’ve published several key papers on the topic. We also regularly seek third-party input to help account for societal context and to assess training datasets for potential sources of unfair bias.
- Red-teaming: We enlist in-house and external experts to participate in red-teaming programs that test for a wide spectrum of vulnerabilities and potential areas of abuse, ranging from cybersecurity vulnerabilities to more complex societal risks such as unfair outcomes. These dedicated adversarial testing efforts, including our participation at the DEF CON AI Village Red Team event this past August, help identify current and emergent risks, behaviors and policy violations, enabling our teams to proactively mitigate them.
- Implementing policies: Leveraging our deep experience in policy development and technical enforcement, we’ve created generative AI prohibited use policies outlining the harmful, inappropriate, misleading or illegal content we do not allow. Our extensive system of classifiers is then used to detect, prevent and remove content that violates these policies. For example, if we identify a violative prompt or output, our products won’t provide a response and may also direct the user to additional resources for help on sensitive topics such as those related to dangerous acts or self-harm (a simplified sketch of this gating pattern follows this list). And we are continuously fine-tuning our models to provide safer responses.
- Safeguarding teens: As we slowly expand access to generative AI experiences like Search Generative Experience (SGE) to teens, we’ve developed additional safeguards around areas that can pose risk for younger users based on their developmental needs. This includes limiting outputs related to topics like bullying and age-gated or illegal substances.
- Indemnifying customers for copyright: We’ve put strong indemnification protections on both training data used for generative AI models and the generated output for users of key Google Workspace and Google Cloud services. Put simply: if customers are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.
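To make the classifier-based gating described in the "Implementing policies" item more concrete, here is a minimal, purely illustrative sketch of the pattern: classify the incoming prompt, classify the model's output, refuse when a policy is violated, and surface help resources for sensitive topics. Every name here (SafetyClassifier, SafetyVerdict, HELP_RESOURCES, respond) is a hypothetical stand-in, and the keyword matching is only a placeholder for trained policy classifiers.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical resource messages for sensitive policy areas.
HELP_RESOURCES = {
    "self_harm": "If you are struggling, support is available from local crisis helplines.",
}

@dataclass
class SafetyVerdict:
    allowed: bool
    policy: Optional[str] = None  # which policy was violated, if any

class SafetyClassifier:
    """Keyword-based stand-in for a trained policy classifier."""

    BLOCKED_TERMS = {
        "self_harm": ["hurt myself"],
        "dangerous_acts": ["build a weapon"],
    }

    def classify(self, text: str) -> SafetyVerdict:
        lowered = text.lower()
        for policy, terms in self.BLOCKED_TERMS.items():
            if any(term in lowered for term in terms):
                return SafetyVerdict(allowed=False, policy=policy)
        return SafetyVerdict(allowed=True)

def respond(prompt: str, generate: Callable[[str], str]) -> str:
    """Gate both the prompt and the model output before returning anything."""
    clf = SafetyClassifier()

    # Check the incoming prompt first.
    verdict = clf.classify(prompt)
    if not verdict.allowed:
        return HELP_RESOURCES.get(verdict.policy, "Sorry, I can't help with that request.")

    # Generate a response, then check the output as well before showing it.
    output = generate(prompt)
    verdict = clf.classify(output)
    if not verdict.allowed:
        return "Sorry, I can't help with that request."
    return output
```

In a production system the classification step would be a trained model with per-policy thresholds rather than a term list; the point of the sketch is the structure of checking both the input and the output before anything reaches the user.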
Providing additional context for generative AI outputs
Building on our long track record of providing context about the information people find online, we’re adding new tools to help people evaluate information produced by our models. For example, we’ve added About this result to generative AI in Search to help people evaluate the information they find in the experience. We also introduced new ways to help people double-check the responses they see in Bard.
Context is especially important with images, and we’re committed to finding ways to make sure every image generated through our products has metadata labeling and embedded watermarking with SynthID. Similarly, we recently updated our election advertising policies to require advertisers to disclose when their election ads include material that’s been digitally altered or generated. This will help provide additional context to people seeing election advertising on our platforms.
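As a rough illustration of the metadata-labeling half of that commitment, the sketch below attaches simple provenance fields to a generated image using standard PNG text chunks via Pillow. SynthID's imperceptible watermark is a separate, proprietary technique and is not shown here; the function names and metadata keys (label_generated_image, generator, ai_generated) are hypothetical examples, not Google's actual schema.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_generated_image(image: Image.Image, model_name: str, out_path: str) -> None:
    """Embed simple 'AI-generated' provenance fields into PNG text metadata."""
    meta = PngInfo()
    meta.add_text("generator", model_name)
    meta.add_text("ai_generated", "true")
    image.save(out_path, format="PNG", pnginfo=meta)

def read_labels(path: str) -> dict:
    """Read the text metadata back, e.g. to show context to a viewer."""
    with Image.open(path) as img:
        return dict(img.text)

if __name__ == "__main__":
    # A placeholder image standing in for model output.
    fake_output = Image.new("RGB", (256, 256), color="gray")
    label_generated_image(fake_output, model_name="example-image-model", out_path="out.png")
    print(read_labels("out.png"))  # {'generator': 'example-image-model', 'ai_generated': 'true'}
```

Plain metadata like this is easy to strip, which is why a real pipeline would pair it with a robust watermark designed to survive cropping and re-encoding, the role SynthID plays in Google's products.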
We launched Bard and SGE as experiments because we recognize that as emerging tech, large language model (LLM)-based experiences can get things wrong, especially regarding breaking news. We’re always working to make sure our products update as more information becomes available, and our teams continue to quickly implement improvements as needed.
How we protect your information
New technologies naturally raise questions around user privacy and personal data. We’re building AI products and experiences that are private by design. Many of the privacy protections we’ve had in place for years apply to our generative AI tools too. And just like with other types of activity data in your Google Account, we make it easy to pause, save or delete that activity at any time, including for Bard and Search.
We never sell your personal information to anyone, including for ads purposes — this is a longstanding Google policy. Additionally, we’ve implemented privacy safeguards tailored to our generative AI products. For example, if you choose to use the Workspace extensions in Bard, your content from Gmail, Docs and Drive is not seen by human reviewers, used by Bard to show you ads, or used to train the Bard model.
Collaborating with stakeholders to shape the future
AI raises complex questions that neither Google, nor any other single company, can answer alone. To get AI right, we need collaboration across companies, academic researchers, civil society, governments, and other stakeholders. We are already in conversation with groups like the Partnership on AI and MLCommons, and launched the Frontier Model Forum with other leading AI labs to promote the responsible development of frontier AI models. And we’ve also published dozens of research papers to share our expertise with researchers and the industry.
We are also transparent about our progress on the commitments we’ve made, including those we voluntarily made alongside other tech companies at a White House summit earlier this year. We will continue to work across the industry and with governments, researchers and others to embrace the opportunities and address the risks AI presents.