{"id":18717,"date":"2024-06-05T20:01:39","date_gmt":"2024-06-05T20:01:39","guid":{"rendered":"http:\/\/scannn.com\/i-tried-8-of-googles-newest-ai-products-and-updates-at-i-o-2024\/"},"modified":"2024-06-05T20:01:39","modified_gmt":"2024-06-05T20:01:39","slug":"i-tried-8-of-googles-newest-ai-products-and-updates-at-i-o-2024","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/i-tried-8-of-googles-newest-ai-products-and-updates-at-i-o-2024\/","title":{"rendered":"I tried 8 of Google's newest AI products and updates at I\/O 2024"},"content":{"rendered":"<div>\n<p data-block-key=\"5g86a\">The improved long context window can even pull information from multiple documents when responding to a single prompt. In the side panel in Docs, I asked for help writing a sample letter to a potential job candidate \u2014 in the prompt I linked to the job description document and the applicant\u2019s PDF portfolio, both of which were in my Drive \u2014 and instantly received an email draft that factored in relevant details from both documents.<\/p>\n<p data-block-key=\"a4eia\">Gemini 1.5 Pro isn\u2019t our only shiny new model, though: I also got to try the freshly announced Imagen 3, our highest-quality text-to-image model yet. One new capability I was excited about was its ability to generate decorative text and letters, so I put it through its paces. I started by asking for a stylized alphabet \u2014 like letters spelled out in jam on toast, or with silver balloons floating in the sky. Imagen 3 generated a full alphabet of letters, which I could then use to type out my own (delicious) menus.<\/p>\n<p data-block-key=\"1f5rs\">After my Imagen 3 interlude, I continued with more Gemini demos. In one of them, I could pull up Gemini\u2019s overlay on an Android phone and ask questions about anything on the screen. 
This really showed how we\u2019re not only expanding what you can ask Gemini, but also making Gemini context aware, so it can anticipate your needs and provide helpful suggestions.<\/p>\n<p data-block-key=\"5c1ch\">The use case here was a lengthy oven manual. Whether it&#8217;s a demo or real life, that&#8217;s not something I&#8217;d be excited about reading. Instead of skimming through the document, I pulled up Gemini and immediately got an &#8220;Ask this PDF&#8221; suggestion. I tested questions like &#8220;how do I update the clock&#8221; and quickly got accurate answers. It worked just as well with YouTube videos. Instead of watching a 20-minute workout video, I asked a quick question about how to modify planks, got an answer, and was on my way to the next demo, where I tested a new conversation mode called Gemini Live that lets you talk with Gemini in the app, no typing required.<\/p>\n<p data-block-key=\"4539a\">Speaking with Gemini was a different experience from the traditional chatbot interface: Gemini\u2019s answers are a lot more conversational than the paragraphs of text and bullet-pointed lists you might usually get. In my demo, I learned you could even cut off Gemini in the middle of an answer. After asking for a list of kids\u2019 activities for a summer vacation, I was able to interrupt a list of suggestions to dive deeper into what materials I\u2019d need for tie-dyeing a shirt.<\/p>\n<p data-block-key=\"c3dfb\">The Project Astra \u2014 or \u201cadvanced seeing and talking responsive agent\u201d \u2014 demo took things a step further to show the cutting edge of where our conversational AI projects are heading.<\/p>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/technology\/ai\/new-google-ai-product-demos-io\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The improved long context window can even pull information from multiple documents when responding to a single prompt. 
In the side panel in Docs, I asked for help writing a sample letter to a potential job candidate \u2014 in the prompt I linked to the job description document and the applicant\u2019s PDF portfolio, both of [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":18718,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[100],"tags":[],"class_list":["post-18717","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18717","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=18717"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18717\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/18718"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=18717"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=18717"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=18717"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}