{"id":18625,"date":"2024-05-08T17:47:23","date_gmt":"2024-05-08T17:47:23","guid":{"rendered":"http:\/\/scannn.com\/how-google-built-generative-ai-tools-for-the-chrome-browser\/"},"modified":"2024-05-08T17:47:23","modified_gmt":"2024-05-08T17:47:23","slug":"how-google-built-generative-ai-tools-for-the-chrome-browser","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/how-google-built-generative-ai-tools-for-the-chrome-browser\/","title":{"rendered":"How Google built generative AI tools for the Chrome browser"},"content":{"rendered":"<div>\n<p data-block-key=\"a8ee4\">From an engineering perspective, integrating LLM technology into Chrome was a challenge. \u201cIt\u2019s a new skill set,\u201d Adriana says. \u201cWe had to learn not only how this technology works but also how to turn it into a product people can use. Traditional browser features work the same way every time you run them. If a feature has the same input, it will give the same output.&#8221; When Adriana and her team write code for a new Chrome feature, they also write tests to check that it works as expected. &#8220;If it passes the tests, you have confidence it works,&#8221; she says.<\/p>\n<p data-block-key=\"20fn9\">With features that use generative AI, it&#8217;s not so simple. Large language models recognize and generate text or images, and they need to be able to adapt to many kinds of user input. &#8220;We take the foundation model and we teach it what we want it to do for our example use cases, and then we evaluate how it works against many different types of user scenarios,&#8221; Adriana says. Determining whether it&#8217;s working requires deeper human evaluation. &#8220;It&#8217;s not a simple binary of &#8216;it runs&#8217; or &#8216;it doesn&#8217;t run,&#8217;&#8221; Adriana says. &#8220;We&#8217;re looking at it and thinking, &#8216;Is the tone right? Is this length OK? 
Is this the level of specificity we&#8217;re looking for?&#8217; It&#8217;s a very different process.&#8221;<\/p>\n<p data-block-key=\"5br0s\">One training scenario Adriana found particularly interesting was how the AI tab organizer uses emoji. \u201cI really love how people use emoji to label tab groups,\u201d she says. \u201cSeeing the emoji makes it easier to know the topic of that tab group when you\u2019re scanning your tabs.\u201d The Chrome team wanted the new auto-tab organizer to offer users an emoji option, but they also didn\u2019t want it to pick inappropriate ones. For example, if you\u2019re planning a celebration of life, Adriana explains, they don\u2019t want Chrome to show you a skull and crossbones. So, with help from Google\u2019s emoji team, they decided to map out which kinds of tab group categories were safe for broad use. \u201cTravel, animals, places, nature \u2014 these kinds of things are great use cases for emoji, so we know the auto-tab organizer has a good chance of getting it right by drawing only from those categories,\u201d she says.<\/p>\n<p data-block-key=\"3d7u6\">The Chrome team also wanted to make sure that people could use the new AI features without needing to understand how the underlying technology works. So they designed Help me write to gather context from the webpage you\u2019re on and take it from there. \u201cIt can see you want to write a restaurant review and adjust for that versus helping you fill out a form or sell something,\u201d Adriana says. Similarly, when creating the AI themes tool, they originally thought users could write their own prompts to populate the visual themes. \u201cWe realized it was actually kind of difficult to come up with a prompt for this,\u201d Adriana says. Instead, they went with a drop-down approach where you choose a subject \u2014 like the aurora borealis or rainbows \u2014 and can then use other drop-downs to add styling details and select a color scheme. 
\u201cWe want people to be able to customize it but also give narrower options that get good results,\u201d Adriana says.<\/p>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/products\/chrome\/google-chrome-generative-ai-development-\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>From an engineering perspective, integrating LLM technology into Chrome was a challenge. \u201cIt\u2019s a new skill set,\u201d Adriana says. \u201cWe had to learn not only how this technology works but also how to turn it into a product people can use. Traditional browser features work the same way every time you run them. If a [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":18626,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[100],"tags":[],"class_list":["post-18625","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18625","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=18625"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18625\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/18626"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=18625"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=18625"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=18625"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}