{"id":13505,"date":"2023-05-11T14:05:26","date_gmt":"2023-05-11T14:05:26","guid":{"rendered":"http:\/\/scannn.com\/100-things-google-announced-at-i-0-2023\/"},"modified":"2023-05-11T14:05:26","modified_gmt":"2023-05-11T14:05:26","slug":"100-things-google-announced-at-i-0-2023","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/100-things-google-announced-at-i-0-2023\/","title":{"rendered":"100 things Google announced at I\/O 2023"},"content":{"rendered":"<div>\n<p data-block-key=\"e9dd5\"><b>49<\/b>. We introduced PaLM 2, our next generation language model. It\u2019s faster and more efficient than previous models \u2014 and it comes in a variety of sizes, which makes it easy to deploy for a wide range of use cases.<\/p>\n<p data-block-key=\"1u0ck\"><b>50.<\/b> PaLM 2 powers more than 25 products announced at I\/O, and dozens of product teams across Google are using it.<\/p>\n<p data-block-key=\"fkbi3\"><b>51<\/b>. And PaLM 2 powers our new PaLM API.<\/p>\n<p data-block-key=\"590b5\"><b>52<\/b>. Our health research teams used PaLM 2 to create Med-PaLM 2, which is fine-tuned for medical knowledge to help answer questions and summarize insights from a variety of dense medical texts. We\u2019re now exploring multimodal capabilities, so it can synthesize patient information from images, like a chest x-ray or mammogram, to help improve patient care.<\/p>\n<p data-block-key=\"361vp\"><b>53.<\/b> We\u2019re opening Med-PaLM 2 up to a small group of Cloud customers for feedback later this summer to identify safe, helpful use cases.<\/p>\n<p data-block-key=\"8guft\"><b>54.<\/b> We\u2019re already at work on Gemini, our first model created from the ground up to be multimodal, highly capable at different sizes and efficient at integrating with other tools and APIs. Gemini is still in training, but it\u2019s already exhibiting multimodal capabilities not seen in prior models.<\/p>\n<p data-block-key=\"amg12\"><b>55<\/b>. 
We announced a handful of improvements and updates to Bard, our experiment that lets you collaborate with generative AI \u2014 for instance, a Dark theme!<\/p>\n<p data-block-key=\"9n2dh\"><b>56.<\/b> You\u2019ll soon be able to use images in your Bard prompts, allowing you to boost your creativity in completely new ways.<\/p>\n<p data-block-key=\"fj9ft\"><b>57<\/b>. Access to Bard in English is also expanding to over 180 countries. And starting today, you can use Bard in Japanese and Korean.<\/p>\n<p data-block-key=\"fv73p\"><b>58.<\/b> We\u2019re on track to make Bard available in the 40 most spoken languages by the end of the year, so more people can collaborate with it in their native languages.<\/p>\n<p data-block-key=\"3ns0o\"><b>59.<\/b> We also removed the waitlist so more people can interact directly with Bard.<\/p>\n<p data-block-key=\"7tgs5\"><b>60.<\/b> Starting next week, we\u2019re making code citations even more precise. If Bard brings in a block of code, just click the annotation and Bard will underline the block and link to the source.<\/p>\n<p data-block-key=\"3oujk\"><b>61<\/b>. Coming soon, Bard will become more visual by including images in its responses, giving you a much better sense of what you\u2019re exploring.<\/p>\n<p data-block-key=\"5ms76\"><b>62<\/b>. In the future, you&#8217;ll see Bard integrated not only with Google services, but also with popular apps you use \u2014 like Adobe, Instacart and Khan Academy.<\/p>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/technology\/developers\/google-io-2023-100-announcements\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>49. We introduced PaLM 2, our next generation language model. It\u2019s faster and more efficient than previous models \u2014 and it comes in a variety of sizes, which makes it easy to deploy for a wide range of use cases. 50. 
PaLM 2 powers more than 25 products announced at I\/O and dozens of product [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":13506,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[100],"tags":[],"class_list":["post-13505","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/13505","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=13505"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/13505\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/13506"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=13505"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=13505"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=13505"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}