<h1>More accurate responses, export to Google Sheets</h1>
<p>Let’s dig deeper into this new capability and how it’s helping Bard improve its responses.</p>
<h2>Improved logic and reasoning skills</h2>
<p>Large language models (LLMs) are like prediction engines: when given a prompt, they generate a response by predicting what words are likely to come next. As a result, they’ve been extremely capable at language and creative tasks, but weaker in areas like reasoning and math. Solving more complex problems that demand advanced reasoning and logic takes more than LLM output alone.</p>
<p>Our new method allows Bard to generate and execute code to boost its reasoning and math abilities. This approach takes inspiration from a well-studied dichotomy in human intelligence, notably covered in Daniel Kahneman’s book “Thinking, Fast and Slow”: the separation of “System 1” and “System 2” thinking.</p>
<ul>
<li>System 1 thinking is fast, intuitive and effortless. When a jazz musician improvises on the spot, or a touch typist thinks of a word and watches it appear on the screen, they’re using System 1 thinking.</li>
<li>System 2 thinking, by contrast, is slow, deliberate and effortful. When you’re carrying out long division or learning how to play an instrument, you’re using System 2.</li>
</ul>
<p>In this analogy, LLMs can be thought of as operating purely under System 1: producing text quickly but without deep thought. This leads to some incredible capabilities, but it can fall short in surprising ways. (Imagine trying to solve a math problem using System 1 alone: you can’t stop and do the arithmetic; you just have to blurt out the first answer that comes to mind.) Traditional computation closely aligns with System 2 thinking: it’s formulaic and inflexible, but the right sequence of steps can produce impressive results, such as the solution to a long-division problem.</p>
<p>With this latest update, we’ve combined the capabilities of both LLMs (System 1) and traditional code (System 2) to improve the accuracy of Bard’s responses. Through implicit code execution, Bard identifies prompts that might benefit from logical code, writes that code “under the hood,” executes it, and uses the result to generate a more accurate response. So far, we’ve seen this method improve the accuracy of Bard’s responses to computation-based word and math problems in our internal challenge datasets by approximately 30%.</p>
<p>Even with these improvements, Bard won’t always get it right: it might not generate code to help answer the prompt, the code it generates might be wrong, or it may not include the executed code in its response. All that said, this improved ability to respond with structured, logic-driven capabilities is an important step toward making Bard even more helpful. Stay tuned for more.</p>
<p><a href="https://blog.google/technology/ai/bard-improved-reasoning-google-sheets-export/">Source link</a></p>
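<p>The implicit code execution loop described in the post (detect a computational prompt, write code behind the scenes, execute it, fold the result into the reply) can be sketched in miniature. Everything below is an illustrative assumption rather than Bard’s actual implementation: the detection heuristic, the function names, and the toy expression extraction are all made up for the sketch, and a real system would have the model itself generate the code.</p>

```python
import ast
import operator as op

# "System 2": a small deterministic arithmetic evaluator. Only these
# operators are allowed, so arbitrary code can never run.
_OPS = {
    ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
    ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg,
}

def safe_eval(expr: str):
    """Evaluate an arithmetic expression exactly, via the Python AST."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def needs_computation(prompt: str) -> bool:
    # Crude stand-in for the model's judgment that a prompt would benefit
    # from generated code: digits plus an arithmetic operator.
    return any(ch.isdigit() for ch in prompt) and any(s in prompt for s in "+-*/")

def answer(prompt: str, llm_draft: str) -> str:
    """Route: System 1 draft alone, or System 1 draft + System 2 result."""
    if needs_computation(prompt):
        expr = prompt.rstrip("?= ").split("is")[-1].strip()  # toy extraction
        try:
            return f"{llm_draft} The computed answer is {safe_eval(expr)}."
        except (ValueError, SyntaxError):
            pass  # fall back to the draft, as the post notes can happen
    return llm_draft

print(answer("What is 3 * (17 + 5)?", "Let me work that out."))
# The arithmetic comes from executed code, not token prediction: 66.
```

<p>The key design point mirrors the post’s System 1 / System 2 framing: the fluent text still comes from the model, but the number in the reply comes from deterministic execution, and any failure in the code path quietly falls back to the model’s draft.</p>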