<h1>3 ways Google is integrating health equity into AI</h1>

<p>The goal of health equity is to ensure everyone has a fair and just opportunity to attain their highest level of health. The reality is the opposite for too many people, including people of color, women, people in rural communities and members of other historically marginalized populations. As Google's Chief Health Equity Officer, I lead a team committed to making sure we build AI-powered health tools responsibly and equitably.</p>

<p>At our annual health event, The Check Up, we unveiled three ways we are helping deliver a more equitable future.</p>

<h3>Our recent research on identifying and mitigating biases</h3>

<p>As medical AI rapidly evolves, it is critical that we develop tools and resources to identify and mitigate biases that could negatively affect health outcomes. Our new research paper, "A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models," is a step in this direction. The paper provides a framework for assessing whether medical large language models (LLMs) may perpetuate historical biases, along with a collection of seven adversarial testing datasets, called EquityMedQA, to serve as a guidepost.</p>

<p>These tools are grounded in the literature on health inequities, in actual model failures and in participatory input from equity experts.</p>
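Adversarial testing of this kind can be pictured as a small evaluation harness that replays probing questions against a model and flags answers for human review. The EquityMedQA datasets are real, but the harness below, including the `flag_response` rubric and the stubbed model, is an illustrative assumption and not the paper's actual interface:

```python
# Minimal sketch of an adversarial bias-testing harness for a medical LLM.
# The rubric phrases and the stubbed model are invented for illustration.

def flag_response(answer: str) -> list[str]:
    """Toy rubric: surface phrases a human rater would review for bias."""
    patterns = ["race-based adjustment", "not suited for", "typical for their group"]
    return [p for p in patterns if p in answer.lower()]

def run_adversarial_eval(model, questions: list[str]) -> list[dict]:
    """Send each adversarial question to the model and collect flags."""
    results = []
    for q in questions:
        answer = model(q)
        results.append({"question": q, "answer": answer,
                        "flags": flag_response(answer)})
    return results

# Usage with a stub standing in for a real medical LLM.
stub_model = lambda q: "This condition is not suited for home care."
report = run_adversarial_eval(stub_model, ["Adversarial question 1"])
print(report[0]["flags"])  # ['not suited for']
```

In practice the flagging step is done by human raters rather than string matching; the point of the sketch is the shape of the loop, not the rubric.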
<p>We used these tools to evaluate our own large language models, and they are now available to the research community and beyond.</p>

<h3>A new framework to measure health equity within AI models</h3>

<p>A group of health equity researchers, social scientists, clinicians, bioethicists, statisticians and AI researchers came together across Google to develop a framework for building AI that avoids creating or reinforcing unfair bias.</p>

<p>This framework, called HEAL (Health Equity Assessment of Machine Learning performance), is designed to assess how likely an AI technology is to perform equitably, and to prevent the deployment of AI models that might make disparities worse, especially for groups that experience poorer health outcomes on average. The four-step process includes:</p>

<ol>
<li>Determining factors associated with health inequities and defining AI performance metrics.</li>
<li>Identifying and quantifying pre-existing health outcome disparities.</li>
<li>Measuring the performance of the AI tool for each subpopulation.</li>
<li>Assessing the likelihood that the AI tool prioritizes performance with respect to health disparities.</li>
</ol>

<p>We have already used this framework to test a dermatology AI model. The results showed that while the model performed equitably across race, ethnicity and sex subgroups, there were improvements we could make for older age groups.</p>
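Steps 2-4 of the HEAL process above can be sketched as a toy check that compares per-subgroup model performance against pre-existing outcome disparities. The subgroup labels, the numbers and the single comparison rule below are invented for illustration; they are not the published HEAL methodology:

```python
# Illustrative HEAL-style check (steps 2-4 above): compare a model's
# per-subgroup accuracy against pre-existing health outcome disparities.
# Subgroups, numbers and the comparison rule are invented for illustration.

def heal_style_check(model_accuracy: dict, outcome_disparity: dict) -> bool:
    """Return True if the model performs best for the subgroup that already
    has the worst health outcomes, i.e. it may help narrow the gap."""
    worst_outcome_group = max(outcome_disparity, key=outcome_disparity.get)
    best_served_group = max(model_accuracy, key=model_accuracy.get)
    return worst_outcome_group == best_served_group

# Step 3: hypothetical model accuracy measured per subpopulation.
accuracy = {"18-39": 0.91, "40-69": 0.89, "70+": 0.81}
# Step 2: hypothetical baseline disparity scores (higher = worse outcomes).
disparity = {"18-39": 0.10, "40-69": 0.25, "70+": 0.40}

print(heal_style_check(accuracy, disparity))  # False: the 70+ group has the
# worst baseline outcomes but the lowest accuracy, so disparities may widen
```

A real assessment would use clinically grounded disparity measures and statistical tests rather than a single max-versus-max comparison, but the sketch shows how per-subgroup performance and baseline disparities feed into one judgment.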
<p>The framework found that for cancerous conditions, like melanoma, the model performed equitably across age groups, but for non-cancer conditions, like eczema, it did not perform as well for the 70-and-older age group.</p>

<p>We'll continue to apply the framework to healthcare AI models, and we'll evolve and refine it in the process.</p>

<h3>A more representative dataset to advance dermatology</h3>

<p>Today, many dermatology datasets are not representative of the population, which limits developers' ability to build equitable AI models. Current dataset images are often captured in clinical settings and may not reflect different parts of the body, varying levels of condition severity, or diverse skin tones, ages and genders. They also focus primarily on severe issues, like skin cancer, rather than on more common allergic, inflammatory or infectious conditions.</p>

<p>To create a more representative image dataset, we partnered with Stanford Medicine on the Skin Condition Image Network (SCIN). Thousands of people contributed more than 10,000 real-world dermatology images to this open-access dataset. Dermatologists and research teams then helped identify the diagnosis in each image and labeled the images using two skin-tone scales, ensuring the dataset covers an expansive range of conditions and skin types.</p>

<p>Scientists and doctors can now use the SCIN dataset to develop tools that identify dermatological concerns, conduct dermatology-related research, and expose health professional students to more examples of skin conditions and how they manifest across different skin types.</p>

<p>We are early in this journey, but we're committed to making a difference.
We believe that working with partners and sharing what we learn can help build a healthier future for everyone, regardless of background or location.</p>

<p><a href="https://blog.google/technology/health/google-ai-health-equity/">Source link</a></p>