<h1>Flash 1.5, Gemma 2 and Project Astra</h1>
<p><em>Published May 14, 2024</em></p>
<div>
<p data-block-key="jgqlw">1.5 Flash excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more. This is because it’s been trained by 1.5 Pro through a process called “distillation,” in which the most essential knowledge and skills from a larger model are transferred to a smaller, more efficient model.</p>
<p data-block-key="fq62v">Read more about 1.5 Flash on the Gemini technology page, and learn about 1.5 Flash’s availability and pricing. We’ll share more details in an updated Gemini 1.5 technical report soon.</p>
<h3 data-block-key="d28tq">Significantly improving 1.5 Pro</h3>
<p data-block-key="27mnm">Over the last few months, we’ve significantly improved 1.5 Pro, our best model for general performance across a wide range of tasks.</p>
<p data-block-key="513l2">Beyond extending its context window to 2 million tokens, we’ve enhanced its code generation, logical reasoning and planning, multi-turn conversation, and audio and image understanding through data and algorithmic advances. We see strong improvements on public and internal benchmarks for each of these tasks.</p>
<p data-block-key="47fh0">1.5 Pro can now follow increasingly complex and nuanced instructions, including ones that specify product-level behavior involving role, format and style. We’ve improved control over the model’s responses for specific use cases, like crafting the persona and response style of a chat agent or automating workflows through multiple function calls. And we’ve enabled users to steer model behavior by setting system instructions.</p>
<p data-block-key="bnbdh">We added audio understanding in the Gemini API and Google AI Studio, so 1.5 Pro can now reason across image and audio for videos uploaded in Google AI Studio. And we’re now integrating 1.5 Pro into Google products, including Gemini Advanced and the Workspace apps.</p>
<p>Read more about 1.5 Pro on the Gemini technology page. More details are coming soon in our updated Gemini 1.5 technical report.</p>
<h3 data-block-key="tfb9">Gemini Nano understands multimodal inputs</h3>
<p data-block-key="83uss">Gemini Nano is expanding beyond text-only inputs to include images as well. Starting with Pixel, applications using Gemini Nano with Multimodality will be able to understand the world the way people do: not just through text, but also through sight, sound and spoken language.</p>
<p data-block-key="erroq">Read more about Gemini 1.0 Nano on Android.</p>
</div>
<p><a href="https://blog.google/technology/ai/google-gemini-update-flash-ai-assistant-io-2024/">Source link</a></p>
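<p>The “distillation” the post describes is, in its classic form, a temperature-scaled soft-target objective: the student is trained to match the teacher’s softened output distribution rather than only hard labels. The sketch below is a minimal illustration of that general technique, assuming nothing about Google’s actual training recipe; the function names are ours.</p>

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    A higher temperature softens both distributions, exposing the teacher's
    relative preferences among non-top classes, which is the signal the
    student learns from.
    """
    p = softmax(teacher_logits, temperature)  # teacher (target)
    q = softmax(student_logits, temperature)  # student
    # KL(p || q), scaled by T^2 so gradients stay comparable across temperatures
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0
    )
```

<p>When the student’s logits exactly match the teacher’s, the loss is zero; it grows as the two distributions diverge, so minimizing it pulls the smaller model toward the larger one’s behavior.</p>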
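<p>“Automating workflows through multiple function calls,” as mentioned above, generally means the model emits structured tool-call requests and the application executes the matching local function. This is a hypothetical application-side sketch only; the <code>TOOLS</code> table and <code>dispatch</code> helper are illustrative names, not the Gemini API surface.</p>

```python
# Application-side tool registry: maps tool names the model may request
# to local Python callables.
TOOLS = {
    "get_length": lambda text: len(text),
    "to_upper": lambda text: text.upper(),
}

def dispatch(calls):
    """Execute a sequence of model-proposed tool calls in order.

    Each call is a dict like {"name": "to_upper", "args": {"text": "hi"}}.
    Unknown tool names raise KeyError rather than failing silently, so a
    hallucinated tool name surfaces immediately.
    """
    results = []
    for call in calls:
        fn = TOOLS[call["name"]]
        results.append(fn(**call["args"]))
    return results
```

<p>In a real integration, each result would be sent back to the model as a function response so it can decide the next step of the workflow.</p>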