{"id":21565,"date":"2026-03-05T18:08:04","date_gmt":"2026-03-05T18:08:04","guid":{"rendered":"https:\/\/scannn.com\/google-expert-explains-ai-mode-in-searchs-query-fan-out-method\/"},"modified":"2026-03-05T18:08:04","modified_gmt":"2026-03-05T18:08:04","slug":"google-expert-explains-ai-mode-in-searchs-query-fan-out-method","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/google-expert-explains-ai-mode-in-searchs-query-fan-out-method\/","title":{"rendered":"Google expert explains AI Mode in Search\u2019s query fan-out method"},"content":{"rendered":"<div>\n<p data-block-key=\"mc7o5\"><b>What powers these types of visual search responses?<\/b><\/p>\n<p data-block-key=\"76ken\">Our advanced Gemini models make AI Mode possible, and their multimodal capabilities benefit from the visual expertise we&#8217;ve built into Lens over the years. When you search with an image, Gemini analyzes the image alongside your question to decide which tools to use. Let&#8217;s say you&#8217;re scrolling on your phone and see an outfit on social media that you love. When you search it, the model knows to use Lens to retrieve image results for the hat, shoes and jacket of the outfit simultaneously. It then weaves those individual results into one easy-to-read response.<\/p>\n<p data-block-key=\"emk0f\">Think of it this way: The AI model acts as the &#8220;brain&#8221; that can &#8220;see&#8221; the image, while the visual search backend acts as the &#8220;library&#8221; containing billions of web results. The AI performs multi-object reasoning to understand what you\u2019re looking at. 
Then it uses a &#8220;fan-out&#8221; technique that triggers multiple searches at once, reads through the results and presents a single, cohesive response with helpful links \u2014 all in seconds.<\/p>\n<p data-block-key=\"6bbo9\"><b>Can you explain the fan-out technique?<\/b><\/p>\n<p data-block-key=\"dqhst\">AI Mode is basically doing a dozen searches for you in the time it takes to do one. If you upload a photo of a garden you admire, you might have several questions: Will these plants survive in the shade? Are they right for my climate? How much maintenance do they need?<\/p>\n<p data-block-key=\"94ue0\">Before, you\u2019d ask those one by one. Now, AI Mode identifies all of those necessary &#8220;fan-out&#8221; searches. This way, it gathers care requirements for every plant in the photo using helpful web results, breaks down the info and even suggests next steps you might want to take. Since AI Mode is uncovering more visual results from a single search, it&#8217;s easier than ever to find just what you&#8217;re looking for and stumble upon something new that sparks your interest.<\/p>\n<p data-block-key=\"lcf7\"><b>Do you have to start with an image to get this kind of help in AI Mode?<\/b><\/p>\n<p data-block-key=\"57f88\">Not at all! You can start with a simple text search in AI Mode, like &#8220;visual inspo for work outfits.&#8221; When you see a result you like, you can just say, &#8220;Show me more options like the second skirt.&#8221; The system immediately takes that specific image and begins the fan-out process from there.<\/p>\n<p data-block-key=\"9qb7u\"><b>It definitely seems great for shopping \u2014 what else could you use it for?<\/b><\/p>\n<p data-block-key=\"db442\">You could take a photo of a wall at a museum and ask for explanations of each painting. Or take a photo of a bakery window and ask what all the different pastries are. 
It\u2019s about moving from &#8220;What is this one thing?&#8221; to &#8220;Explain this entire scene to me.&#8221;<\/p>\n<p data-block-key=\"a86ia\"><b>Sounds like I\u2019ve got some photos to take and a lot more to discover. I&#8217;m off to put these tools to the test!<\/b><\/p>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/company-news\/inside-google\/googlers\/how-google-ai-visual-search-works\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>What powers these types of visual search responses? Our advanced Gemini models make AI Mode possible, and their multimodal capabilities benefit from the visual expertise we&#8217;ve built into Lens over the years. When you search with an image, Gemini analyzes the image alongside your question to decide which tools to use. Let&#8217;s say you&#8217;re scrolling [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":21566,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[100],"tags":[],"class_list":["post-21565","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/21565","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=21565"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/21565\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/21566"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=21565"}],"wp:term":[{"taxonomy":"
category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=21565"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=21565"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}