{"id":18063,"date":"2024-03-20T14:44:08","date_gmt":"2024-03-20T14:44:08","guid":{"rendered":"http:\/\/scannn.com\/google-shares-4-updates-on-generative-ai-in-healthcare\/"},"modified":"2024-03-20T14:44:08","modified_gmt":"2024-03-20T14:44:08","slug":"google-shares-4-updates-on-generative-ai-in-healthcare","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/google-shares-4-updates-on-generative-ai-in-healthcare\/","title":{"rendered":"Google shares 4 updates on generative AI in healthcare"},"content":{"rendered":"<div>\n<p data-block-key=\"fprru\">Last year at Google Health\u2019s Check Up event, we introduced Med-PaLM 2, our large language model (LLM) fine-tuned for healthcare. Since introducing that research, the model has become available to a set of global customer and partner organizations that are building solutions for a range of uses \u2014 including streamlining nurse handoffs and supporting clinicians\u2019 documentation. At the end of last year, we introduced MedLM, a family of foundation models for healthcare built on Med-PaLM 2, and made it more broadly available through Google Cloud\u2019s Vertex AI platform.<\/p>\n<p data-block-key=\"blsjq\">Since then, our work on generative AI for healthcare has progressed \u2014 from the new ways we\u2019re training our health AI models to our latest research on applying AI to the healthcare industry.<\/p>\n<h3 data-block-key=\"43lq5\">New modalities in models for healthcare<\/h3>\n<p data-block-key=\"7a20b\">Medicine is a multimodal discipline; it\u2019s made up of different types of information stored across formats \u2014 like radiology images, lab results, genomics data, environmental context and more. 
To get a fuller understanding of a person\u2019s health, we need to build technology that understands all of this information.<\/p>\n<p data-block-key=\"8fd4b\">We\u2019re bringing new capabilities to our models with the hope of making generative AI more helpful to healthcare organizations and people\u2019s health. We just introduced MedLM for Chest X-ray, which has the potential to help transform radiology workflows by assisting with the classification of chest X-rays for a variety of use cases. We\u2019re starting with chest X-rays because they are critical for detecting lung and heart conditions. MedLM for Chest X-ray is now available to trusted testers in an experimental preview on Google Cloud.<\/p>\n<h3 data-block-key=\"4gl9v\">Research on fine-tuning our models for the medical domain<\/h3>\n<p data-block-key=\"34qgh\">Approximately 30% of the world\u2019s data volume is generated by the healthcare industry, and that volume is growing at 36% annually. It includes large quantities of text, images, audio and video. Further, important information about patients\u2019 histories is often buried deep in the medical record, making relevant details difficult to find quickly.<\/p>\n<p data-block-key=\"2p7di\">For these reasons, we\u2019re researching how a version of the Gemini model, fine-tuned for the medical domain, can unlock new capabilities for advanced reasoning, understanding a high volume of context, and processing multiple modalities. Our latest research achieved state-of-the-art performance of 91.1% on a benchmark of U.S. Medical Licensing Exam (USMLE)-style questions, as well as on a video dataset called MedVidQA.<\/p>\n<p data-block-key=\"6setg\">Because our Gemini models are multimodal, we were able to apply this fine-tuned model to other clinical benchmarks \u2014 including answering questions about chest X-ray images and genomics information. 
We\u2019re also seeing promising results from our fine-tuned models on complex tasks such as report generation for 2D images like X-rays, as well as 3D images like brain CT scans, representing a step-change in our medical AI capabilities. While this work is still in the research phase, there\u2019s potential for generative AI in radiology to bring assistive capabilities to health organizations.<\/p>\n<h3 data-block-key=\"bq279\">A Personal Health LLM for personalized coaching and recommendations<\/h3>\n<p data-block-key=\"a8qh\">Fitbit and Google Research are working together to build a Personal Health Large Language Model that can power personalized health and wellness features in the Fitbit mobile app, helping people get even more insights and recommendations from the data collected by their Fitbit and Pixel devices. This model is being fine-tuned to deliver personalized coaching capabilities, like actionable messages and guidance, that can be individualized based on personal health and fitness goals. For example, this model may be able to analyze variations in your sleep patterns and sleep quality, and then recommend how you might adjust the intensity of your workout based on those insights.<\/p>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/technology\/health\/google-generative-ai-healthcare\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Last year at Google Health\u2019s Check Up event, we introduced Med-PaLM 2, our large language model (LLM) fine-tuned for healthcare. 
Since introducing that research, the model has become available to a set of global customer and partner organizations that are building solutions for a range of uses \u2014 including streamlining nurse handoffs and supporting clinicians\u2019 [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":18064,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[100],"tags":[],"class_list":["post-18063","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18063","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=18063"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/18063\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/18064"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=18063"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=18063"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=18063"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}