5 products and features that make the digital world more accessible
May 18, 2023

Alt text is a description that content creators can add to visuals so that people who are blind or have low vision can get a description of what is in a digital image, whether it's a photo on a website or a social media image shared with friends. The problem is that many images have low-quality captions and alt text, or none at all, making visual information inaccessible to many people. In fact, a 2019 Carnegie Mellon study found that of 1.09 million tweets, only 0.01% contained alt text added by content creators, meaning that over 99% of those images were not easily accessible to people who are blind. Now, AI is helping make images more accessible.

Lookout, which launched in 2019 and was designed with the blind and low-vision community, uses AI to help people accomplish everyday tasks, like sorting mail and putting away groceries. Today, a new feature within Lookout called "image question and answer" is launching for a select group of people from the blind and low-vision community. Even when an image has no caption or alt text, Lookout can process it and provide a description; people can then use their voice or type to ask questions and get a more detailed understanding of what's in the image. This feature is powered by an advanced visual language model developed by Google DeepMind.

"This collaboration shows how our multimodal model can directly benefit people's lives," says Colin Murdoch, Google DeepMind chief business officer. "It opens up new avenues for many more applications, especially when it comes to using AI to make the world around us more accessible."

Following months of internal testing with people who are blind or have low vision, we're working with the Royal National Institute of Blind People (RNIB) to invite a limited number of people to test this feature, with plans to make it available to even more people soon.

Wheelchair-accessible places for everyone on Google Maps

Since 2020, people have been able to opt in to the Accessible Places feature in Maps to more easily identify when a place has a wheelchair-accessible entrance, indicated by the wheelchair icon. Now we're making the icon visible to everyone on Maps, so you can "know before you go" whether there's a step-free entrance, which is helpful whether you're using a wheelchair, pushing a stroller, or lugging a suitcase. If a business is known not to have an accessible entrance, you'll see the same icon with a strikethrough, and you can find more information, like wheelchair-accessible seating, parking, or restrooms, in the "About" tab so you can plan visits with confidence.

Thanks to contributions from business owners, Local Guides, and the Maps community, we're able to provide wheelchair accessibility information for more than 40 million businesses around the world. If you notice that a place you've visited is missing accessibility information, you can contribute it by scrolling to the "About" tab and selecting "Edit features" on Android or "Update this place" on iOS.

Source: https://blog.google/outreach-initiatives/accessibility/global-accessibility-awareness-day-google-product-update/
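The alt-text gap described at the top of the article is straightforward to audit programmatically. As a minimal sketch (the class name, sample snippet, and image file names here are illustrative, not from any real tool or study), the Python standard library's `html.parser` can flag `<img>` tags that lack a non-empty `alt` attribute:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collects <img> tags, split by whether they carry non-empty alt text."""

    def __init__(self):
        super().__init__()
        self.with_alt = []     # images that have a usable description
        self.missing_alt = []  # images with no alt attribute, or an empty one

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attr_map = dict(attrs)
        src = attr_map.get("src", "(no src)")
        # Treat a missing alt attribute and alt="" the same way: no description.
        if attr_map.get("alt", "").strip():
            self.with_alt.append(src)
        else:
            self.missing_alt.append(src)

# Hypothetical page fragment: one described image, two undescribed ones.
snippet = """
<img src="dog.jpg" alt="A golden retriever catching a frisbee in a park">
<img src="chart.png" alt="">
<img src="photo.jpg">
"""

auditor = AltTextAuditor()
auditor.feed(snippet)
print("described:", auditor.with_alt)       # ['dog.jpg']
print("undescribed:", auditor.missing_alt)  # ['chart.png', 'photo.jpg']
```

Note that an empty `alt=""` is counted as undescribed here for simplicity; in real accessibility practice an intentionally empty `alt` is the correct markup for purely decorative images, so a production audit would need to distinguish the two cases.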