{"id":15423,"date":"2023-12-18T17:25:35","date_gmt":"2023-12-18T17:25:35","guid":{"rendered":"http:\/\/scannn.com\/living-in-the-future-meta\/"},"modified":"2023-12-18T17:25:35","modified_gmt":"2023-12-18T17:25:35","slug":"living-in-the-future-meta","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/living-in-the-future-meta\/","title":{"rendered":"Living in the Future | Meta"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p><span style=\"font-weight: 400;\">Bill Gates once said people \u201coverestimate what they can do in one year and underestimate what they can do in 10 years.\u201d <\/span><span style=\"font-weight: 400;\">Individual breakthroughs tend to accumulate in a non-linear way until suddenly, the future comes into focus. As the old saying goes, it happens two ways: <\/span><a href=\"https:\/\/www.oreilly.com\/radar\/gradually-then-suddenly\/\"><span style=\"font-weight: 400;\">gradually and then suddenly<\/span><\/a><span style=\"font-weight: 400;\">. And as we close out a wild 12 months of technological progress, I think it\u2019s fair to say that 2023 has been a \u201csuddenly\u201d kind of year.\u00a0\u00a0\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Last month we celebrated the 10th anniversary of the founding of FAIR, our Fundamental AI Research lab. When we launched it in 2013 there was tremendous excitement across the industry about the role AI would play in the future, and early machine learning applications were already playing a central role across Facebook. Few could have imagined back then just how impressive the progress would be. In fact, even just two years ago many might have questioned it.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As we look ahead to 2024, another big milestone is coming up: it will be 10 years since Meta began working on the computing platform of the future at Reality Labs. 
These two emerging technologies \u2014 AI and the metaverse\u2014represent Meta\u2019s biggest long-term bets on the future. And in 2023 we began to see these two technological pathways intersect in the form of products accessible to huge numbers of people.<\/span><\/p>\n<h2>AI<\/h2>\n<p><span style=\"font-weight: 400;\">One of the highlights of the year was seeing the way <\/span><a href=\"https:\/\/ai.meta.com\/llama\/\"><span style=\"font-weight: 400;\">Llama and Llama 2<\/span><\/a><span style=\"font-weight: 400;\"> were embraced by the developer community, with more than 100 million downloads and constant improvements coming from organizations across the world as they iterate. In India, Jio <\/span><a href=\"https:\/\/blog.gofynd.com\/fine-tuning-metas-llama-2-to-power-jio-copilot-part-1-afa527744d36\"><span style=\"font-weight: 400;\">quickly fine-tuned it<\/span><\/a><span style=\"font-weight: 400;\"> to build a new tool for serving their more than half a billion customers. And HuggingFace\u2019s <\/span><a href=\"https:\/\/huggingface.co\/spaces\/HuggingFaceH4\/open_llm_leaderboard\"><span style=\"font-weight: 400;\">Open LLM Leaderboard<\/span><\/a><span style=\"font-weight: 400;\"> has filled up with impressive projects built on Llama 2 that are leading the way. These are just a handful of the more than 13,000 Llama variants hosted there.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Like any new technology, AI will have the most impact when it\u2019s available to everyone. It wasn\u2019t that long ago that being able to generate beautiful images using text prompts was effectively inaccessible to most people. 
But today we\u2019re <\/span><a href=\"https:\/\/about.fb.com\/news\/2023\/12\/meta-ai-updates\/\"><span style=\"font-weight: 400;\">adding tools like<\/span><\/a><span style=\"font-weight: 400;\"> collaborative image generation, conversational assistants, writing helpers, and smart image editors into products already used by billions of people around the world.\u00a0<\/span><span style=\"font-weight: 400;\"><br \/><\/span><\/p>\n<p><a href=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/12\/01_NEW-IG-BACKGROUND-EDITOR.gif?resize=800%2C800\"><img fetchpriority=\"high\" decoding=\"async\" class=\"alignnone size-full wp-image-40159\" src=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/12\/01_NEW-IG-BACKGROUND-EDITOR.gif?resize=800%2C800\" alt=\"Animation showing Instagram image background editor\" width=\"800\" height=\"800\" data-recalc-dims=\"1\"\/><\/a><\/p>\n<h2>A Platform Shift<\/h2>\n<p><span style=\"font-weight: 400;\">The shift we have seen over the last year suggests there is a path to AI becoming a primary way that people interact with machines. The stage is set for new kinds of devices that can perceive, understand, and interact with the world around us in ways that have never been possible before.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Our AI-powered Ray-Ban Meta glasses show one such path. Our new Meta AI assistant combines vision and language understanding to see the world from your perspective and work with you to make sense of it. And we\u2019re testing new multimodal AI capabilities on the glasses. With these capabilities enabled, they can translate a foreign language you\u2019re trying to read, or come up with a funny caption for a photo you\u2019ve taken. 
And they can do it all hands-free, without you needing to pull out a phone or operate an app.\u00a0\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We believe one of the most powerful manifestations of cutting-edge AI will be assistants that can understand the world around you and help you throughout your day, eventually without needing to be prompted. Glasses are the ideal form factor for this \u2014 they can see and hear the world from your point of view, they\u2019re already socially acceptable, they\u2019re wearable all day, and they let you stay fully present in the moment.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">At Reality Labs, we\u2019ve invested in years of research into the technologies needed to advance this \u2014 things like ultra-low-power, always-on sensors and machine perception systems capable of understanding your context. We\u2019re not just pioneering a new kind of device here \u2014 we\u2019ll be pushing it forward for years to come.\u00a0<\/span><\/p>\n<p><a href=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/12\/03_BOZ-MR-IMAGE.gif?resize=800%2C800\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-40165\" src=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/12\/03_BOZ-MR-IMAGE.gif?resize=800%2C800\" alt=\"Animation showing mixed reality art\" width=\"800\" height=\"800\" data-recalc-dims=\"1\"\/><\/a><\/p>\n<p><span style=\"font-weight: 400;\">Mixed reality and spatial computing represent another path forward. These aren\u2019t simply incremental improvements on the personal computing paradigm that has dominated for the last 50 years. 
They represent a fundamental shift that\u2019s just beginning to come into focus.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Making these new technologies available to as many people as possible has been a top priority for Reality Labs for many years now, so <\/span><a href=\"https:\/\/about.fb.com\/news\/2023\/09\/meet-meta-quest-3-mixed-reality-headset\/\"><span style=\"font-weight: 400;\">releasing the first mass-market mixed reality headset<\/span><\/a><span style=\"font-weight: 400;\"> this September was another 2023 highlight for us.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Within months of the Meta Quest 3 launch, seven of the top 20 apps are mixed reality apps. We\u2019re seeing strong signals that people really value these experiences. There are already hundreds of mixed reality apps in our store, and a majority of users have tried mixed reality features. Seeing what happens when lots of people get their hands on a new technology like this has been delightful:<\/span><\/p>\n<p><a href=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/12\/04_INSERT-VISUAL_VIDEO-OF-FUN-MR-USE-CASE.gif?resize=800%2C800\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-40163\" src=\"https:\/\/about.fb.com\/wp-content\/uploads\/2023\/12\/04_INSERT-VISUAL_VIDEO-OF-FUN-MR-USE-CASE.gif?resize=800%2C800\" alt=\"Animation showing mixed reality examples\" width=\"800\" height=\"800\" data-recalc-dims=\"1\"\/><\/a><\/p>\n<p><span style=\"font-weight: 400;\">We\u2019ll see this progress accelerate in 2024 as more people access mixed reality and developers learn to harness its power. 
Whether it\u2019s immersive NBA viewing on Xtadium or a totally new approach to learning music on Pianovision, we\u2019re already seeing MR deliver experiences that would be impossible on any other kind of device.<\/span><\/p>\n<h2>The Long View<\/h2>\n<p><span style=\"font-weight: 400;\">Making long-term bets on emerging technologies isn\u2019t easy. It\u2019s not guaranteed to work, and it\u2019s certainly not cheap. It\u2019s also one of the most valuable things a technology company can do \u2014 and the only way to remain relevant over the long run. Seeing Meta\u2019s two biggest long-term technological bets both mature and intersect this year has been an extremely powerful reminder of the importance of maintaining a healthy investment in future technologies. And it has given us an even clearer view of the innovation we need to deliver over the coming decade.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In AI, this means full steam ahead on what\u2019s next: what comes after today\u2019s generation of LLMs and generative AI? Most researchers agree that there\u2019s still plenty of opportunity to build bigger and better language, image, and video models with the technologies we have today. But there are still fundamental breakthroughs and entirely new architectures to be discovered, and our AI research teams at Meta are on track to discover them.\u00a0\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This means ongoing research into areas like embodied AI, which aims to build models that experience the world the way humans do. 
The path toward human-level AI, our researchers believe, will require systems that have a <\/span><a href=\"https:\/\/ai.meta.com\/blog\/yann-lecun-advances-in-ai-research\/\"><span style=\"font-weight: 400;\">deeper understanding of how the world works<\/span><\/a><span style=\"font-weight: 400;\">, and our teams are <\/span><a href=\"https:\/\/ai.meta.com\/blog\/yann-lecun-ai-model-i-jepa\/\"><span style=\"font-weight: 400;\">already making progress<\/span><\/a><span style=\"font-weight: 400;\"> on this, with years of work still to come.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">And at Reality Labs, our <\/span><span style=\"font-weight: 400;\">researchers are pushing ahead on some of the most promising technologies that will make the next computing platform possible. Over the years this research has led to breakthroughs like the pancake lenses on Quest Pro and Quest 3 and the amazing Codec Avatars prototype that <\/span><a href=\"https:\/\/www.youtube.com\/watch?v=MVYrJJNdrEg\"><span style=\"font-weight: 400;\">Mark Zuckerberg and Lex Fridman tried out this year<\/span><\/a><span style=\"font-weight: 400;\">. That\u2019s just the tip of the iceberg, and Reality Labs\u2019 research breakthroughs will enable us to release a string of industry-first products over the coming years.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">But of all the things I\u2019ve mentioned here, the most valuable technologies are the ones that are in people\u2019s hands today. The progress made in 2023 means generative AI is making its way into the heart of the world\u2019s most popular apps, mixed reality is now at the core of a mass-market headset, and smart glasses will let AI see the world from our perspective for the first time. This is an extremely exciting time to be building the future. 
More importantly, it\u2019s a great time to be living in it.\u00a0\u00a0<\/span><\/p>\n<\/div>\n<p><a href=\"https:\/\/about.fb.com\/news\/2023\/12\/metas-2023-progress-in-ai-and-mixed-reality\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Bill Gates once said people \u201coverestimate what they can do in one year and underestimate what they can do in 10 years.\u201d Individual breakthroughs tend to accumulate in a non-linear way until suddenly, the future comes into focus. As the old saying goes, it happens two ways: gradually and then suddenly. And as we close [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":15424,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[123],"tags":[],"class_list":["post-15423","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-facebook"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/15423","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=15423"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/15423\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/15424"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=15423"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/ca
tegories?post=15423"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=15423"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}