{"id":13649,"date":"2023-06-13T07:41:47","date_gmt":"2023-06-13T07:41:47","guid":{"rendered":"http:\/\/scannn.com\/a-spotlight-on-the-four-emea-tech-hubs-pioneering-metas-ai-research-around-the-world\/"},"modified":"2023-06-13T07:41:47","modified_gmt":"2023-06-13T07:41:47","slug":"a-spotlight-on-the-four-emea-tech-hubs-pioneering-metas-ai-research-around-the-world","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/a-spotlight-on-the-four-emea-tech-hubs-pioneering-metas-ai-research-around-the-world\/","title":{"rendered":"A Spotlight on the Four EMEA Tech Hubs Pioneering Meta\u2019s AI Research Around the World"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<h2>A Spotlight on Paris, London, Tel Aviv and Zurich<\/h2>\n<p><span style=\"font-weight: 400\">In the eight years since we established our FAIR hub in Paris, Meta has become one of the leading research organizations in the world, with pioneering work stemming from our tech hubs in Paris, London, Tel Aviv, and Zurich.<\/span><\/p>\n<p><span style=\"font-weight: 400\">One of the most important decisions we made when we set up FAIR was to put exploratory research and open science at the center. We regularly collaborate with external researchers, because we have a strong hypothesis that this is the fastest and most responsible way to make progress.<\/span><\/p>\n<blockquote>\n<p><span style=\"font-weight: 400\">\u201cWe have worked with institutions to develop generations of AI researchers, especially via our PhD programs,\u201d said Naila Murray, head of FAIR EMEA. 
\u201cMany of our PhD students have made important contributions to the field.\u201d<\/span><\/p>\n<\/blockquote>\n<p><span style=\"font-weight: 400\">Today, our teams in Paris, London, Tel Aviv, and Zurich are focused on a variety of interests, including self-supervised learning, reinforcement learning, speech and audio, computer vision, natural language modeling, responsible AI, machine learning theory, model efficiency, AR\/VR, and more.<\/span><\/p>\n<blockquote>\n<p><span style=\"font-weight: 400\">\u201cOur research is driven by a unique mix of ambition and collegiality, and our team works tightly together across boundaries of expertise, seniority, location, and job role to make rapid research progress,\u201d Murray said. \u201cIn this current era in AI research, seemingly each day brings a potential new research breakthrough, including from our EMEA team.\u201d<\/span><\/p>\n<\/blockquote>\n<h2>Groundbreaking Large Language Model Research<\/h2>\n<p><span style=\"font-weight: 400\">Earlier this year, our researchers in Paris formed the team that built and deployed<\/span> <a href=\"https:\/\/ai.facebook.com\/blog\/large-language-model-llama-meta-ai\/\"><span style=\"font-weight: 400\">LLaMA<\/span><\/a><span style=\"font-weight: 400\"> (Large Language Model Meta AI) \u2013 a state-of-the-art foundational <\/span><a href=\"https:\/\/ai.facebook.com\/blog\/democratizing-access-to-large-scale-language-models-with-opt-175b\/\"><span style=\"font-weight: 400\">large language model<\/span><\/a><span style=\"font-weight: 400\"> designed to help researchers advance their work in this subfield of AI.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">LLaMA works by taking a sequence of words as input and predicting the next word, recursively generating text. To train our model, we chose text from the 20 languages with the most speakers, focusing on those with Latin and Cyrillic alphabets. 
<\/span><span style=\"font-weight: 400\">With capabilities to generate creative text, <\/span><a href=\"https:\/\/ai.facebook.com\/blog\/ai-math-theorem-proving\/\"><span style=\"font-weight: 400\">solve mathematical theorems<\/span><\/a><span style=\"font-weight: 400\">, <\/span><a href=\"https:\/\/ai.facebook.com\/blog\/protein-folding-esmfold-metagenomics\/\"><span style=\"font-weight: 400\">predict protein structures<\/span><\/a><span style=\"font-weight: 400\">, answer reading comprehension questions, and more, large language models are one of the clearest cases of the substantial potential benefits AI can offer at scale to billions of people.<\/span><\/p>\n<h2>Self-supervised Computer Vision Research<\/h2>\n<p><span style=\"font-weight: 400\">Also based in Paris, our teams introduced two breakthroughs in computer vision research. In April, we unveiled <\/span><a href=\"https:\/\/ai.facebook.com\/blog\/dino-v2-computer-vision-self-supervised-learning\/\"><span style=\"font-weight: 400\">DINOv2<\/span><\/a><span style=\"font-weight: 400\"> \u2013 the first method for training computer vision models that uses self-supervised learning to achieve results that match or surpass the standard approach used in the field.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">DINOv2 can discover and segment objects in an image or a video with absolutely no supervision and without being given a targeted objective. For example, DINOv2 can understand that an image contains a representation of a dog without ever being taught what a dog is in the first place. As part of this announcement, we shared a <\/span><a href=\"https:\/\/dinov2.metademolab.com\/\"><span style=\"font-weight: 400\">public demo <\/span><\/a><span style=\"font-weight: 400\">that anyone can use to explore some of the capabilities of DINOv2.<\/span><\/p>\n<p><span style=\"font-weight: 400\">We\u2019re already using DINOv2 to learn more about the physical world. 
Meta recently collaborated with the <\/span><a href=\"https:\/\/www.wri.org\/\"><span style=\"font-weight: 400\">World Resources Institute<\/span><\/a><span style=\"font-weight: 400\"> to <\/span><a href=\"https:\/\/research.facebook.com\/blog\/2023\/4\/every-tree-counts-large-scale-mapping-of-canopy-height-at-the-resolution-of-individual-trees\/\"><span style=\"font-weight: 400\">use AI to map forests<\/span><\/a><span style=\"font-weight: 400\"> \u2013 tree by tree \u2013 across areas the size of continents.<\/span><span style=\"font-weight: 400\"> While our self-supervised model was trained on data from forests in <\/span><span style=\"font-weight: 400\">North America, evaluations confirm that it generalizes well and delivers accurate maps in other locations around the world.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Our Paris team, in collaboration with colleagues in North America, also pioneered new research using <\/span><a href=\"https:\/\/ai.facebook.com\/blog\/seer-10b-better-fairer-computer-vision-through-self-supervised-learning-training-on-diverse-datasets\/\"><span style=\"font-weight: 400\">SEER (SElf-SupERvised), Meta AI Research\u2019s groundbreaking self-supervised computer vision model<\/span><\/a><span style=\"font-weight: 400\">. SEER learns directly from any random collection of images \u2014 without the need for careful data curation and labeling that goes into conventional computer vision training \u2014 and then outputs an image embedding.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">For our latest breakthrough, SEER10B, we use diverse datasets to enable better and fairer computer vision. <\/span><span style=\"font-weight: 400\">Traditional computer vision systems are trained primarily on examples from the U.S. and wealthy countries in Europe, so they often don\u2019t work well for images from other places with different socioeconomic characteristics. 
SEER delivers strong results for images from all around the globe \u2013 including non-U.S. and non-Europe regions with a wide range of income levels. SEER10B drastically improved performance on fairness benchmarks across gender, apparent skin tone, and age groups. Beyond those benchmarks, the model understands images from across the world well enough to localize them with unprecedented precision. We hope SEER will be an important building block as the AI community works to build systems that work well for everyone.<\/span><\/p>\n<h2>Advancements in 3D Modeling<\/h2>\n<p><span style=\"font-weight: 400\">In August 2022, researchers in London and Paris open sourced the code for<\/span><a href=\"https:\/\/ai.facebook.com\/blog\/implicitron-a-new-modular-extensible-framework-for-neural-implicit-representations-in-pytorch3d\/\"><span style=\"font-weight: 400\"> Implicitron<\/span><\/a><span style=\"font-weight: 400\">, <\/span><span style=\"font-weight: 400\">a modular framework within our open source PyTorch3D library. Implicitron uses neural implicit representation, a computer vision technique that can seamlessly combine real and virtual objects in augmented reality \u2014 without requiring large amounts of data to learn from and without being limited to just a few points of view.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Implicitron learns a representation of a 3D object or scene using a sparse set of images of that object or scene captured from arbitrary viewpoints. 
Unlike traditional 3D representations such as meshes or point clouds, this newer approach represents objects as a continuous function, which allows for more accurate reconstruction of shapes with complex geometries as well as higher color reconstruction accuracy.<\/span><\/p>\n<h2>Generative AI for Images and Video<\/h2>\n<p><span style=\"font-weight: 400\">Our team in Tel Aviv is focused on generative AI and has been at the forefront of some of Meta\u2019s most recent advancements. In July 2022, our Tel Aviv researchers and collaborators around the world <\/span><span style=\"font-weight: 400\">created a <\/span><a href=\"https:\/\/ai.facebook.com\/blog\/greater-creative-control-for-ai-image-generation\/\"><span style=\"font-weight: 400\">generative AI research model called Make-A-Scene<\/span><\/a><span style=\"font-weight: 400\">. This multimodal generative AI method puts creative control in the hands of people who use it by allowing them to describe and illustrate their vision through both text descriptions and freeform sketches, resulting in surreal art, such as a hot dog flying through the sky and skyscrapers in the desert.<\/span><\/p>\n<p><span style=\"font-weight: 400\">We followed up this work with <\/span><a href=\"https:\/\/ai.facebook.com\/blog\/generative-ai-text-to-video\/\"><span style=\"font-weight: 400\">Make-A-Video<\/span><\/a><span style=\"font-weight: 400\">, an AI system that enables people to turn text prompts into brief, high-quality, one-of-a-kind video clips. <\/span><span style=\"font-weight: 400\">The system can also create videos from images or take existing videos and create new ones that are similar.\u00a0<\/span><\/p>\n<h2>The Metaverse and Beyond<\/h2>\n<p><span style=\"font-weight: 400\">We believe augmented and virtual reality, coupled with AI-powered interfaces, will constitute the next paradigm shift in human-oriented computing. 
While our other EMEA hubs are predominantly focused on the AI research that will help us get there, our team in Zurich is working directly to advance AR and VR.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Together, we are working on contextualized AI interfaces that could allow our devices to understand our context, our preferences, our history, and our goals. This supports our future vision where devices will act as partners rather than tools, surrounding us with technology that adapts to us and helps us work the way we want.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Historically, different areas of AI research have been relatively isolated from one another, Murray said. However, the collaborative foundation FAIR was built upon has been an important catalyst for bringing different teams together and advancing research.\u00a0<\/span><\/p>\n<blockquote>\n<p><span style=\"font-weight: 400\">As head of the FAIR EMEA team, Murray said one of the best parts of her job is \u201csparking collaborations across researchers by pointing out connections between related research interests.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u201cIn recent months, there\u2019s been an exciting confluence of multimodal perception, language understanding and generation, reinforcement learning, and human-machine interaction,\u201d Murray said. 
\u201cThis confluence is getting us closer to the field\u2019s long-held dream of building truly advanced intelligent systems, which is immensely exciting.\u201d<\/span><\/p>\n<\/blockquote><\/div>\n<p><a href=\"https:\/\/about.fb.com\/news\/2023\/06\/a-spotlight-on-the-four-emea-tech-hubs-pioneering-metas-ai-research-around-the-world\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A Spotlight on Paris, London, Tel Aviv and Zurich In the eight years since we established our FAIR hub in Paris, Meta has become one of the leading research organizations in the world, with pioneering work stemming from our tech hubs in Paris, London, Tel Aviv, and Zurich. One of the most important decisions we [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":13650,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[123],"tags":[],"class_list":["post-13649","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-facebook"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/13649","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=13649"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/13649\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/13650"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/medi
a?parent=13649"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=13649"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=13649"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}