{"id":20027,"date":"2025-02-07T10:21:46","date_gmt":"2025-02-07T10:21:46","guid":{"rendered":"https:\/\/scannn.com\/announcing-the-language-technology-partner-program\/"},"modified":"2025-02-07T10:21:46","modified_gmt":"2025-02-07T10:21:46","slug":"announcing-the-language-technology-partner-program","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/announcing-the-language-technology-partner-program\/","title":{"rendered":"Announcing the Language Technology Partner Program"},"content":{"rendered":"<div>\n<p><span style=\"font-weight: 400\">Meta\u2019s Fundamental AI Research (FAIR) team is focused on achieving advanced machine intelligence (AMI) \u2013 AI that can use human reasoning to perform cognitively demanding tasks, such as translation \u2013 and using it to power products and innovations that benefit everyone.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Our work with UNESCO to expand the support of underserved languages in AI models is an essential part of this effort. Developing models that are able to work on multilingual problems and in underserved languages not only promotes linguistic diversity and inclusivity in the digital world, but also helps us create intelligent systems that can adapt to new situations and learn from experience.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Today, we\u2019re excited to share some of our most recent programs, research and models that support that goal, and to offer opportunities for collaborators to contribute to AI translation technologies that incorporate a vast array of global languages and dialects.\u00a0<\/span><\/p>\n<h2>Language Technology Partner Program<\/h2>\n<p><span style=\"font-weight: 400\">We\u2019re seeking partners to collaborate with us on advancing and broadening Meta\u2019s open source language technologies, including AI translation technologies. 
Our efforts are especially focused on underserved languages, in support of UNESCO\u2019s work as part of the International Decade of Indigenous Languages.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">We are looking for partners who can contribute 10+ hours of speech recordings with transcriptions, large amounts of written text (200+ sentences), and sets of translated sentences in diverse languages. Partners will work with our teams to help integrate these languages into AI-driven speech recognition and machine translation models, which, when released, will be open sourced and made freely available to the community.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">As a partner, you will also gain access to technical workshops led by our research teams, where you\u2019ll learn how to leverage our open source models to build language technologies. We are pleased that the Government of Nunavut, Canada, has agreed to work with us on this exciting initiative, collaborating to share data in the Inuit languages Inuktitut and Inuinnaqtun.<\/span><\/p>\n<p><span style=\"font-weight: 400\">To join our <\/span><span style=\"font-weight: 400\">Language Technology Partner Program<\/span><span style=\"font-weight: 400\">, please fill out <a href=\"https:\/\/docs.google.com\/forms\/d\/e\/1FAIpQLSdzcRdtkQCuTrXw727DgJgWbOPKDj5v0bArgGfQUTT6sEopFw\/viewform\">this<\/a> interest form.<\/span><\/p>\n<h2>Open Source Translation Benchmark<\/h2>\n<p><span style=\"font-weight: 400\">In addition to our Language Partner Program, <\/span><span style=\"font-weight: 400\">we\u2019re launching an open source machine translation benchmark, a standard test that will help evaluate the performance of AI models that conduct translation. 
Composed of sentences carefully crafted by linguistic experts, the benchmark is intended to showcase the diversity of human language.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">We invite you to access the benchmark, which is available in seven languages, and contribute translations that will be made open source and available to others. We aim to build an unprecedented multilingual machine translation benchmark.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">You can access the benchmark <\/span><a href=\"https:\/\/huggingface.co\/spaces\/facebook\/bouquet\"><span style=\"font-weight: 400\">here<\/span><\/a><span style=\"font-weight: 400\">.\u00a0<\/span><\/p>\n<h2>Our Commitment to Linguistic Diversity<\/h2>\n<p><span style=\"font-weight: 400\">Today\u2019s announcements are part of our long-term commitment to supporting underserved languages. In 2022, we released the No Language Left Behind (NLLB) project, a groundbreaking open source machine translation engine that was the first neural machine translation model for many languages and laid the foundation for future research and development.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">We collaborated with UNESCO and Hugging Face to build a <\/span><a href=\"https:\/\/huggingface.co\/spaces\/UNESCO\/nllb\"><span style=\"font-weight: 400\">language translator based on NLLB<\/span><\/a><span style=\"font-weight: 400\">, which we <\/span><a href=\"https:\/\/about.fb.com\/news\/2024\/09\/meta-at-unga-2024\/\"><span style=\"font-weight: 400\">announced<\/span><\/a><span style=\"font-weight: 400\"> during United Nations General Assembly week last September.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Most recently, to support digital empowerment, which is a key thematic area of the Global Action Plan of the International Decade of Indigenous Languages<\/span><span style=\"font-weight: 400\">, we introduced t<\/span><span style=\"font-weight: 400\">he Meta 
<\/span><a href=\"https:\/\/huggingface.co\/spaces\/UNESCO\/MMS\"><span style=\"font-weight: 400\">Massively Multilingual Speech<\/span><\/a><span style=\"font-weight: 400\"> (MMS) project, which scales audio transcription to over 1,100 languages. Since then, we\u2019ve continued to improve and expand its capabilities, including the addition of zero-shot speech recognition in 2024, which enables it to transcribe audio in languages it has never seen before without prior training.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Ultimately, our goal is to create intelligent systems that can understand and respond to complex human needs, regardless of language or cultural background. As we continue in this direction, we\u2019re excited to collaboratively enhance and expand machine translation and other language technologies.<\/span><\/p>\n<\/div>\n<p><a href=\"https:\/\/about.fb.com\/news\/2025\/02\/announcing-language-technology-partner-program\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Meta\u2019s Fundamental AI Research (FAIR) team is focused on achieving advanced machine intelligence (AMI) \u2013 AI that can use human reasoning to perform cognitively demanding tasks, such as translation \u2013 and using it to power products and innovations that benefit everyone.\u00a0 Our work with UNESCO to expand the support of underserved languages in AI models 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":20028,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[123],"tags":[],"class_list":["post-20027","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-facebook"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/20027","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=20027"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/20027\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/20028"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=20027"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=20027"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=20027"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}