{"id":15566,"date":"2024-01-19T18:24:58","date_gmt":"2024-01-19T18:24:58","guid":{"rendered":"http:\/\/scannn.com\/on-ai-progress-and-vigilance-can-go-hand-in-hand\/"},"modified":"2024-01-19T18:24:58","modified_gmt":"2024-01-19T18:24:58","slug":"on-ai-progress-and-vigilance-can-go-hand-in-hand","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/on-ai-progress-and-vigilance-can-go-hand-in-hand\/","title":{"rendered":"On AI, Progress and Vigilance Can Go Hand in Hand"},"content":{"rendered":"<div>\n<p><span style=\"font-weight: 400\">AI dominated the discussion as political, business and civil society leaders gathered in Davos for the World Economic Forum this week \u2013 from the opportunities and risks AI creates, to what governments and tech companies can do to ensure it is developed responsibly and deployed in a way that benefits the most people.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">I attended the conference alongside world-leading AI scientist Yann LeCun and other Meta colleagues, where we had the opportunity to set out some of the company\u2019s thinking on these issues.<\/span><\/p>\n<p><span style=\"font-weight: 400\">As a company that has been at the forefront of AI development for more than a decade, we believe that progress and vigilance can go hand in hand. We\u2019re confident that AI technologies have the potential to bring huge benefits to societies \u2013 from boosting productivity to accelerating scientific research. 
And we believe that it is both possible and necessary for these technologies to be developed in a responsible, transparent and accountable way, with safeguards built into AI products to mitigate many of the potential risks, and collaboration between government and industry to establish standards and guardrails.<\/span><\/p>\n<p><span style=\"font-weight: 400\">We\u2019ve seen some of this progress firsthand as researchers have used AI tools that we\u2019ve developed and made available to them. For example, Yale and EPFL\u2019s Lab for Intelligent Global Health Technologies used our latest Large Language Model, Llama 2, to build <\/span><a href=\"https:\/\/www.linkedin.com\/feed\/update\/urn:li:activity:7135408165017243648\/\"><span style=\"font-weight: 400\">Meditron<\/span><\/a><span style=\"font-weight: 400\">, the world\u2019s best performing open source LLM tailored to the medical field to help guide clinical decision-making. Meta also partnered with New York University on <\/span><a href=\"https:\/\/about.fb.com\/news\/2020\/08\/how-ai-is-accelerating-mri-scans\/\"><span style=\"font-weight: 400\">AI research<\/span><\/a><span style=\"font-weight: 400\"> to develop faster MRI scans. And we are partnering with Carnegie Mellon University on a <\/span><a href=\"https:\/\/ai.meta.com\/research\/impact\/open-catalyst\/\"><span style=\"font-weight: 400\">project<\/span><\/a><span style=\"font-weight: 400\"> that is using AI to develop forms of renewable energy storage.<\/span><\/p>\n<h2>An Open Approach to AI Innovation<\/h2>\n<p><span style=\"font-weight: 400\">Among policymakers, one of the big debates around the development of AI in the past year has been whether it is better for companies to keep their AI models in-house or to make them available more openly. 
As strong advocates of tech companies taking a broadly open approach, we were encouraged to sense a clear shift in favor of openness among delegates in Davos this year.<\/span><\/p>\n<p><a href=\"https:\/\/www.instagram.com\/reel\/C2QARHJR1sZ\/\"><span style=\"font-weight: 400\">As Mark Zuckerberg set out this week<\/span><\/a><span style=\"font-weight: 400\">, our long-term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit. <\/span><span style=\"font-weight: 400\">Meta has a long history of sharing AI technologies openly. <a href=\"https:\/\/about.fb.com\/news\/2023\/07\/llama-2\/\">Llama 2<\/a> <\/span><span style=\"font-weight: 400\">is available for free to most people on our website, as well as through partnerships with Microsoft, Google Cloud, AWS and more. We\u2019ve released technologies like<\/span> <a href=\"https:\/\/ai.meta.com\/blog\/pytorch-foundation\/\">PyTorch<\/a>, the leading machine learning framework, our <a href=\"https:\/\/ai.meta.com\/research\/no-language-left-behind\/\">No Language Left Behind<\/a> models that can translate up to 200 languages, and our <a href=\"https:\/\/ai.meta.com\/blog\/seamless-communication\/\">Seamless<\/a> suite of AI speech-to-speech translation models, which can translate your voice into 36 languages with around two seconds of latency.<\/p>\n<p><span style=\"font-weight: 400\">While we recognize there are times when it\u2019s appropriate for some proprietary models not to be released openly, broadly speaking, we believe openness is the best way to spread the benefits of these technologies. Giving businesses, startups and researchers access to state-of-the-art AI tools creates opportunities for everyone, not just a small handful of big tech companies. Of course, Meta believes it\u2019s in our own interests too. 
It leads to better products, faster innovation and a flourishing market, which benefits us as it does many others.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Open innovation isn\u2019t something to be feared. The infrastructure of the internet runs on open source code, as do web browsers and many of the apps that billions use every day. The cybersecurity industry has been built on open source technology. An open approach creates safer products by ensuring models are continuously scrutinized and stress-tested for vulnerabilities by thousands of developers and researchers, who can identify and solve problems far faster than teams holed up inside company silos could on their own. And by seeing how others use these tools, in-house teams can learn from those uses and address the issues they surface.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Ultimately, openness is the best antidote to the fears surrounding AI. It allows for collaboration, scrutiny and iteration in a way that is especially suited to nascent technologies. It provides accountability by enabling academics, researchers and journalists to evaluate AI models and challenge claims made by <\/span><span style=\"font-weight: 400\">big companies, instead of having to take their word for it that they are doing the right thing.<\/span><\/p>\n<h2>Generative AI and Elections<\/h2>\n<p><span style=\"font-weight: 400\">One concern that we take extremely seriously at Meta is the potential for generative AI tools to be misused during the elections taking place across the world this year. We\u2019ve been talking with experts about what advances in AI will mean as we approach this year\u2019s elections, and we have policies in place that we enforce regardless of whether content is generated by AI or people. 
<\/span>See <a href=\"https:\/\/about.fb.com\/news\/2023\/11\/how-meta-is-planning-for-elections-in-2024\/\">our approach to this year\u2019s elections<\/a> in more detail.<\/p>\n<p><span style=\"font-weight: 400\">While we aren\u2019t waiting for formal industry standards to be established before taking steps of our own \u2013 in areas like helping people understand when images are created with our AI features \u2013 we\u2019re working with other companies through forums like the Partnership on AI to develop those standards.<\/span><\/p>\n<p><span style=\"font-weight: 400\">I had the opportunity to talk about how AI is helping Meta tackle hate speech online during a panel discussion at the World Economic Forum on Thursday.<\/span><\/p>\n<h2>Developing AI Responsibly<\/h2>\n<p><span style=\"font-weight: 400\">While today\u2019s AI tools are capable of remarkable things, they don\u2019t come close to the levels of superintelligence imagined by science fiction. They are pattern recognition systems: vast databases with a gigantic autocomplete capacity that can create responses by stringing together sentences or creating images or audio. It\u2019s important to consider and prepare for the potential risks these technologies could pose in the future, but we shouldn\u2019t let that distract from the challenges that need addressing today.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Meta\u2019s long-term experience developing AI models and tools helps us build safeguards into AI products from the beginning. We train and fine-tune our models to fit our safety and responsibility guidelines. And crucially, we ensure they are thoroughly stress-tested by conducting what is known as \u201cred-teaming\u201d with external experts and internal teams to identify vulnerabilities at the foundation layer and help mitigate them in a transparent way. 
For example, we submitted Llama 2 to the DEFCON conference, where it could be stress-tested by more than 2,500 hackers.<\/span><\/p>\n<p><span style=\"font-weight: 400\">We also think it\u2019s important to be transparent about the models and tools we release. That\u2019s why, for example, we publish <\/span><a href=\"https:\/\/ai.meta.com\/blog\/how-ai-powers-experiences-facebook-instagram-system-cards\/\"><span style=\"font-weight: 400\">system<\/span><\/a><span style=\"font-weight: 400\"> and <\/span><a href=\"https:\/\/github.com\/facebookresearch\/llama\/blob\/main\/MODEL_CARD.md\"><span style=\"font-weight: 400\">model<\/span><\/a><span style=\"font-weight: 400\"> cards giving details about how our systems work in a way that is accessible without deep technical knowledge, and why we shared a <\/span><a href=\"https:\/\/ai.meta.com\/research\/publications\/llama-2-open-foundation-and-fine-tuned-chat-models\/\"><span style=\"font-weight: 400\">research paper<\/span><\/a><span style=\"font-weight: 400\"> alongside Llama 2 that outlines our approach to safety and privacy, red teaming efforts, and model evaluations against industry safety benchmarks. We\u2019ve also released a <\/span><a href=\"https:\/\/ai.meta.com\/llama\/responsible-use-guide\/\"><span style=\"font-weight: 400\">Responsible Use Guide<\/span><\/a><span style=\"font-weight: 400\"> to help others innovate responsibly. And we recently announced <\/span><a href=\"https:\/\/about.fb.com\/news\/2023\/12\/purple-llama-safe-responsible-ai-development\/\"><span style=\"font-weight: 400\">Purple Llama<\/span><\/a><span style=\"font-weight: 400\">, a new project designed to help developers and researchers build responsibly with generative AI models using open trust and safety tools and evaluations.<\/span><\/p>\n<p><span style=\"font-weight: 400\">We also believe it\u2019s vital to work collaboratively across industry, government, academia and civil society. 
For example, Meta is a founding member of <\/span><a href=\"https:\/\/partnershiponai.org\/\"><span style=\"font-weight: 400\">Partnership on AI<\/span><\/a><span style=\"font-weight: 400\">, and is participating in its Framework for Collective Action on Synthetic Media, an important step in ensuring guardrails are established around AI-generated content.<\/span><\/p>\n<p><span style=\"font-weight: 400\">There is a big role for governments to play too. I\u2019ve spent the last several months meeting with regulators and policymakers from the UK, EU, US, India, Japan and elsewhere. It\u2019s encouraging that so many countries are considering their own frameworks for ensuring AI is developed and deployed responsibly \u2013 for example, the White House\u2019s voluntary commitments that we signed up to last year \u2013 but it is vital that governments, especially democracies, work together to set common AI standards and governance models.<\/span><\/p>\n<p><span style=\"font-weight: 400\">There are big opportunities ahead, and considerable challenges to be overcome, but what was most encouraging in Davos is that leaders from across government, business and civil society are actively engaged in these issues. 
The debates around AI are significantly more advanced and sophisticated than they were even just a few months ago \u2013 and that\u2019s a good thing for everyone.<\/span><\/p>\n<\/div>\n<p><a href=\"https:\/\/about.fb.com\/news\/2024\/01\/davos-ai-discussions\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI dominated the discussion as political, business and civil society leaders gathered in Davos for the World Economic Forum this week \u2013 from the opportunities and risks AI creates, to what governments and tech companies can do to ensure it is developed responsibly and deployed in a way that benefits the most people.\u00a0 I attended [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":15567,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[123],"tags":[],"class_list":["post-15566","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-facebook"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/15566","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=15566"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/15566\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/15567"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=15566"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=15566"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=15566"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}