{"id":19369,"date":"2024-10-04T08:52:29","date_gmt":"2024-10-04T08:52:29","guid":{"rendered":"http:\/\/scannn.com\/google\/our-ongoing-work-to-build-and-deploy-responsible-ai\/"},"modified":"2024-10-04T08:52:29","modified_gmt":"2024-10-04T08:52:29","slug":"our-ongoing-work-to-build-and-deploy-responsible-ai","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/our-ongoing-work-to-build-and-deploy-responsible-ai\/","title":{"rendered":"Our ongoing work to build and deploy responsible AI"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<h3 data-block-key=\"4x6bz\">Detecting abuse at scale<\/h3>\n<p data-block-key=\"aueie\">Our teams across Trust &amp; Safety are also using AI to improve the way we protect our users online. AI is showing tremendous promise for speed and scale in nuanced abuse detection. Building on our established automated processes, we have developed prototypes that leverage recent advances, to assist our teams in identifying abusive content at scale.<\/p>\n<p data-block-key=\"4h489\">Using LLMs, our aim is to be able to rapidly build and train a model in a matter of days \u2014 instead of weeks or months \u2014 to find specific kinds of abuse on our products. This is especially valuable for new and emerging abuse areas, such as Russian disinformation narratives following the invasion of Ukraine, or for nuanced scaled challenges, like detecting counterfeit goods online. We can quickly prototype a model and automatically route it to our teams for enforcement.<\/p>\n<p data-block-key=\"d6is0\">LLMs are also transforming training. Using new techniques, we can now expand coverage of abuse types, context and languages in ways we never could have before \u2014 including doubling the number of languages covered with our on-device safety classifiers in the last quarter alone. 
Starting with an insight from one of our abuse analysts, we can use LLMs to generate thousands of variations of an event and then use those variations to train our classifiers.<\/p>\n<p data-block-key=\"4ism1\">We&#8217;re still testing these new techniques to meet rigorous accuracy standards, but prototypes have demonstrated impressive results so far. The potential is huge, and I believe we are on the cusp of a dramatic transformation in this space.<\/p>\n<h3 data-block-key=\"2b0al\">Boosting collaboration and transparency<\/h3>\n<p data-block-key=\"1e7nu\">Addressing AI-generated content will require collaboration and solutions across the industry and the wider ecosystem; no one company or institution can do this work alone. Earlier this week at the summit, we brought together researchers and students to engage with our safety experts and discuss risks and opportunities in the age of AI. In support of an ecosystem that generates impactful research with real-world applications, we doubled the number of Google Academic Research Awards recipients this year to grow our investment in Trust &amp; Safety research solutions.<\/p>\n<p data-block-key=\"1k6mo\">Finally, information quality has always been core to Google\u2019s mission, and part of that is making sure that users have context to assess the trustworthiness of content they find online. As we continue to bring AI to more products and services, we are focused on helping people better understand how a particular piece of content was created and modified over time.<\/p>\n<p data-block-key=\"bd505\">Earlier this year, we joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member. We are partnering with others to develop interoperable provenance standards and technology to help explain whether a photo was taken with a camera, edited by software or produced by generative AI. 
This kind of information helps our users make more informed decisions about the content they\u2019re engaging with \u2014 including photos, videos and audio \u2014 and builds media literacy and trust.<\/p>\n<p data-block-key=\"2fcmn\">Our work with the C2PA directly complements our own broader approach to transparency and the responsible development of AI. For example, we\u2019re continuing to bring our SynthID watermarking tools to additional gen AI tools and more forms of media, including text, audio, images and video.<\/p>\n<p data-block-key=\"6as67\">We&#8217;re committed to deploying AI responsibly \u2014 from using AI to strengthen our platforms against abuse to developing tools that enhance media literacy and trust \u2014 all while remaining focused on collaborating, sharing insights and building AI together.<\/p>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/technology\/safety-security\/google-paris-summit-responsible-ai\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Detecting abuse at scale Our teams across Trust &amp; Safety are also using AI to improve the way we protect our users online. AI is showing tremendous promise for speed and scale in nuanced abuse detection. 
Building on our established automated processes, we have developed prototypes that leverage recent advances to assist our teams in [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":19370,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[100],"tags":[],"class_list":["post-19369","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/19369","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=19369"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/19369\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/19370"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=19369"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=19369"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=19369"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}