Mark Zuckerberg’s Remarks at AI Forum

Opening remarks by Meta Founder and CEO Mark Zuckerberg at AI Insight Forum, hosted by Senators Schumer, Rounds, Heinrich, and Young on September 13, 2023.

As prepared for delivery

Thanks, Senator Schumer, for pulling this group together. A lot of the progress in AI recently is being driven by the organizations in this room. It’s not surprising that so much leading work is being done in the US, from foundational research through to consumer products. Talented people want to build new things here and I think that helps our global competitiveness. And while AI will bring progress everywhere, I expect our leadership will create some durable benefit to the US over time.

So I agree that Congress should engage with AI to support innovation and safeguards. This is an emerging technology, there are important equities to balance here, and the government is ultimately responsible for that.

While the conversation is mostly focused on generative AI right now, we shouldn’t lose sight of the broader progress across computer vision, natural language processing, robotics, and more, which will also impact society. So we welcome thoughtful engagement here to help secure the best possible outcomes for people.

Looking ahead, I see two defining issues for AI.

The first is safety. New technology often brings new challenges, and it’s on companies to make sure we build and deploy products responsibly. At Meta, we’re building safeguards into our generative AI models and products from the beginning, and collaborating with others to establish guardrails. And we’re going to be deliberate about how we roll out these products.

That means partnering with academics and publishing research openly, sharing Model Cards and setting policies that prohibit certain uses – and in the case of Llama 2, red-teaming our models and releasing responsible use guides for developers. It also means working with experts across society – for example, through the Partnership on AI’s effort to work out how to identify and watermark AI content.

We think policymakers, academics, civil society and industry should all work together to minimize the potential risks of this new technology, but also to maximize the potential benefits. If you believe this generation of AI tools is a meaningful step forward, then it’s important not to undervalue the potential upside.

The other major issue is access. Having access to state-of-the-art AI is going to be an increasingly important driver of opportunity in the future, and I think that’s going to be true for individual people, for companies and for economies as a whole.

At Meta, we have a long history of open-sourcing our infrastructure and AI work, including Llama 2, which we released in close partnership with a number of other companies here today.

But I want to stress that we’re not zealots about this. We don’t open source everything. We think closed models are good too, but we also think a more open approach creates more value in many cases. Based on where the technology is now, we think this is the responsible approach, but here are some of the things we’re thinking about as we try to balance this.

First, I think it’s important that America continue to lead in this area and define the technical standard that the world uses. The next leading open source model is out of Abu Dhabi, and other countries are working on this too. I believe it’s better that the standard is set by American companies that can work with our government to shape these models on important issues.

Second, we’re able to build safeguards into these systems to make them safer – including selecting the data to train with, extensively red-teaming internally and externally to identify and fix issues, fine-tuning the models for alignment, and partnering with safety-minded cloud providers to add additional filters to the systems we release and most people will actually use.

Third, it’s generally accepted that open source software is safer and more secure, because more people can scrutinize it to identify issues and then share and propagate solutions that can then be used to harden systems.

And fourth, open source democratizes access to these tools, and that helps level the playing field and foster innovation for people and businesses, which I think is valuable for our economy overall.

Now, if at some point in the future these systems get close to the level of superintelligence, then these equities will shift and we’ll reconsider this approach. But in the meantime, there’s a huge amount of value to unlock for people with the current generation of technology, and we’re ready to partner with government on this too.
