{"id":19025,"date":"2024-09-06T21:02:02","date_gmt":"2024-09-06T21:02:02","guid":{"rendered":"http:\/\/scannn.com\/google\/google-shopping-adds-dresses-to-virtual-try-on-tool\/"},"modified":"2024-09-06T21:02:02","modified_gmt":"2024-09-06T21:02:02","slug":"google-shopping-adds-dresses-to-virtual-try-on-tool","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/google-shopping-adds-dresses-to-virtual-try-on-tool\/","title":{"rendered":"Google Shopping adds dresses to virtual try-on tool"},"content":{"rendered":"<div>\n<h2 data-block-key=\"xljt8\">How we built it<\/h2>\n<p data-block-key=\"1p5hv\">This feature is made possible thanks to a generative AI technology we created specifically for virtual try-on (VTO), which uses a technique based on diffusion. Diffusion lets us generate every pixel from scratch to produce high-quality, realistic images of tops and blouses on models. As we tested our diffusion technique for dresses, though, we learned there are two unique challenges: First, dresses are usually more nuanced garments, and second, dresses tend to cover more of the human body.<\/p>\n<p data-block-key=\"5tuvv\">Let\u2019s start with the first problem: Dresses are often more detailed than a simple top in their draping, silhouette, length or shape, and include everything from midi-length halters to mini shifts to maxi drop waists, plus everything in between. Imagine you\u2019re trying to paint a detailed dress on a tiny canvas: it\u2019d be hard to squeeze details like a floral print or a ruffled collar onto that small space. Enlarging the image won\u2019t make the details clearer, either, because they were never visible in the first place. 
You can think of our VTO challenge in the same way: Our existing VTO AI model diffused successfully using low-resolution images, but in our testing with dresses, this approach often resulted in the loss of a dress\u2019s critical details, and simply switching to high-resolution images didn\u2019t help. So our research team came up with what\u2019s called a \u201cprogressive training strategy\u201d for VTO, where training begins with lower-resolution images and gradually moves to higher resolutions. With this approach, the finer details are preserved, so every pleat and print comes through crystal clear.<\/p>\n<p data-block-key=\"foesp\">Next, since dresses cover more of a person&#8217;s body than tops, we found that \u201cerasing\u201d and \u201creplacing\u201d the dress on a person would smudge the person&#8217;s features or obscure important details of their body, much like it would if you were painting a portrait of someone and later tried to erase and repaint their dress. To prevent this \u201cidentity loss\u201d from happening, we came up with a new technique called the VTO-UNet Diffusion Transformer (VTO-UDiT for short), which isolates and preserves a person\u2019s important features. So while training still risks \u201cidentity loss,\u201d VTO-UDiT gives us a virtual \u201cstencil,\u201d allowing us to re-train the model on only the person and preserve their face and body. This gives us a much more accurate portrayal of not only the dress but, just as important, the person wearing it.<\/p>\n<\/div>\n<p><a href=\"https:\/\/blog.google\/products\/shopping\/virtual-try-on-dresses\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>How we built it This feature is made possible thanks to a generative AI technology we created specifically for virtual try-on (VTO), which uses a technique based on diffusion. 
Diffusion lets us generate every pixel from scratch to produce high-quality, realistic images of tops and blouses on models. As we tested our diffusion technique for [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":19026,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[100],"tags":[],"class_list":["post-19025","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/19025","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=19025"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/19025\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/19026"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=19025"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=19025"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=19025"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}