Everything We Know About Generative AI Regulation in 2024


Artificial intelligence (AI) has played a significant role in digital advertising for years. Initially used for basic data analytics and targeting, the technology has evolved considerably since its first applications in advertising, and its use has grown correspondingly more advanced and widespread. Today, digital advertisers rely on AI for campaign automation, data-driven decision-making, creative optimization and personalization, audience insights, and more.

Over the last several months, a particular type of AI has been making big waves: the kind that can write and perform songs, turn images into poetry, and clone individuals’ voices with an alarming level of accuracy. Since generative AI (GenAI) made its public debut in late 2022, leaders have begun testing its new features within their campaigns, particularly those related to content creation, design, and creative optimization and personalization. And given GenAI’s pattern recognition and data processing abilities, the technology also has the potential to significantly affect analysis, media buying, and even strategic decision-making. All in all, it’s not hard to imagine a world where the efficiency, speed, and ease of launch that GenAI offers shape nearly every aspect of the digital marketing process.

But with the opportunities it offers come warnings and concerns from a variety of experts, as well as questions around its appropriate usage and regulation. With GenAI regulation in its beginning stages, leaders must understand what aspects of GenAI use will likely become regulated and stay abreast of legislative developments in order to make the most of the technology while maintaining compliance and fostering consumer trust.

AI Regulation in the EU

In June 2023, the European Parliament adopted its position on the world’s first comprehensive AI regulation: the EU Artificial Intelligence Act. Parliament approved the final law in March 2024, and the EU has since established an AI Office tasked with implementing the regulation.

The EU AI Act approaches AI regulation by classifying different AI technologies and outlining specific obligations for providers of those technologies according to their level of risk. Beyond outright banning AI systems deemed to pose unacceptable risks, it also imposes requirements on high-risk systems and on general-purpose and generative AI. For instance, the act requires that GenAI providers comply with existing copyright laws and disclose summaries of the content used to train their models. It also requires that companies disclose when content has been generated or manipulated by AI.

Though agency leaders and brands not operating in the EU aren’t legally required to comply with this legislation, they can benefit from understanding, and perhaps even embracing aspects of, the AI Act. For example, some teams may want to disclose when their content has been AI-generated or modified, not just because the AI Act requires companies working in the EU to do so, but because 75% of consumers feel it’s important. Whether or not businesses operating outside the EU choose to comply with parts of the AI Act, understanding its requirements for advertisers is worthwhile, as they may reflect consumer preferences around AI and may eventually be echoed in US legislation.

AI Regulation in the US

The US, on the other hand, has yet to implement any nationwide, comprehensive AI regulation. But that doesn’t mean it hasn’t been a topic of significant discussion and focus.

Over the last few years, Congress has held committee hearings on oversight of AI, and in September 2023, Senate Majority Leader Chuck Schumer convened a closed-door AI Insight Forum where tech leaders, two-thirds of the Senate, and labor and civil rights leaders gathered to discuss major AI issues and implications.

Since then, many bills have been introduced aimed at regulating AI. Additionally, House leaders recently announced a new, bipartisan AI task force that will explore how Congress can balance innovation and regulation as AI technology continues to evolve—with a focus on its intersection with safety and security, civil rights issues, transparency, elections, and more.

Beyond these developments on Capitol Hill, President Joe Biden signed an executive order in late October 2023 on the “safe, secure, and trustworthy development and use of artificial intelligence.” The order outlines clear action steps for the oversight and regulation of AI, including implementing standardized evaluations of AI systems, addressing security-related risks, tackling questions related to novel intellectual property, and more. For now, however, these directives primarily guide federal agencies, and broader, enforceable rules would require congressional action.

This order also tasks the Department of Commerce with developing a report that outlines potential solutions to combat deepfakes and to clearly label artificial content. Though the results of this report are forthcoming, brand and agency leaders should be aware that its outcomes could have an impact on how they label marketing collateral that is AI-generated. The executive order specifically cites watermarking as a potential way to label such content, and it’s possible that marketing teams could be responsible for watermarking all AI-generated content in their campaigns in the future.
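For teams looking to get ahead of potential labeling requirements, disclosure can start with something as simple as stamping a visible label on AI-generated imagery before it ships. The sketch below, written in Python with the Pillow library, shows one possible approach under the assumption that a plain text overlay is sufficient; the function name, file names, and label text are hypothetical, and this is not a mandated or standardized watermarking technique.

```python
from PIL import Image, ImageDraw

def watermark_ai_content(input_path: str, output_path: str,
                         label: str = "AI-generated") -> None:
    """Stamp a simple disclosure label across the bottom of an image (illustrative only)."""
    image = Image.open(input_path).convert("RGBA")

    # Build a transparent overlay the same size as the source image.
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Draw a semi-transparent band along the bottom edge, then the label text.
    band_height = 30
    draw.rectangle(
        [(0, image.height - band_height), (image.width, image.height)],
        fill=(0, 0, 0, 160),
    )
    draw.text((10, image.height - band_height + 8), label, fill=(255, 255, 255, 255))

    # Merge the overlay onto the image and save a flattened copy.
    Image.alpha_composite(image, overlay).convert("RGB").save(output_path)

# Hypothetical usage:
# watermark_ai_content("campaign_banner.png", "campaign_banner_labeled.png")
```

More robust approaches, such as embedding provenance metadata alongside a visible mark, may be worth exploring as labeling standards emerge from the Department of Commerce’s work.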

Additionally, the Federal Trade Commission (FTC) has made it clear that AI oversight and regulation is one of its current areas of focus. The agency has proposed new AI-related protections, and, at the IAB’s recent Public Policy & Legal Summit, it emphasized how critical it is for advertising leaders to be aware of the bias, privacy, and security risks posed by GenAI, and to regularly conduct AI-focused risk assessments to help mitigate them.

In terms of US copyright-related regulations, purely AI-generated content currently cannot be copyrighted. However, the Copyright Office recognizes that “public guidance is needed,” especially for works that combine human-generated and AI-generated content. As such, it has launched an agency-wide initiative to further explore these issues.

At the state level, nearly all US legislatures in session are considering AI-related bills. Many of these focus on algorithmic discrimination, in which an AI-powered tool treats individuals or groups of people differently based on protected characteristics. Like the EU’s AI Act, several of these bills approach AI regulation by distinguishing between high-risk AI systems and more general-purpose AI models, with different regulatory requirements depending on a tool’s classification.

Though AI-related regulation in the US remains primarily in the realm of guidance for now, advertising leaders can proactively utilize this guidance to plan for the impacts of forthcoming regulations. By building out systems to safeguard consumer safety and trust against the risks posed by AI now, advertising leaders can foster an environment of ethical AI usage, and set their teams up to adapt effectively as regulation becomes more concrete.

Implications for Advertising Leaders

In many ways, what we’ve seen so far is just the beginning of AI regulation, and advertisers can expect to see a lot of movement in this space in the months and years ahead. Those brands and agencies that seek to understand current guidance to develop ethical AI practices will be well-positioned to adapt as these new regulations and recommendations arise.

At present, advertising and marketing leaders can benefit from expanding their knowledge and understanding of new GenAI tools, as well as their potential risks. Digital advertising leaders should be aware of the top threats GenAI poses to advertisers, including its ability to:

  • Create and/or spread mis- and disinformation
  • Perpetuate biases
  • Proliferate made-for-advertising sites and an abundance of low-quality ad inventory
  • Contribute to data privacy concerns, since many AI tools rely on personal data to fuel their algorithms
  • Raise copyright- and ownership-related legal questions about AI-generated content

To navigate these risks, it can be helpful for teams to conduct AI-focused risk assessments and to request that their partners and vendors do the same, so they can identify and proactively address any challenges specific to the tools they are using. And when it comes to using AI-generated content, simply ensuring that all materials are reviewed and edited by a human can help prevent biased content from ever leaving the chat box or image generator, and can halt the spread of mis- and disinformation. By implementing these processes now, brands and agencies will have a leg up as more concrete AI regulation develops.

Generative AI and the Future of Marketing

As generative AI continues to evolve, so too will the regulations that govern it. Marketing and advertising leaders will be well-served to approach this technology in a balanced way that allows them to both harness its power and navigate its risks. By putting systems in place to evaluate and assess AI tools and to address their potential risks head-on, leaders will not only ensure they’re using this technology in safe and productive ways but will also prepare their teams for complying with the types of legislation we’re likely to see coming down the line.

__

Want insights on how marketers and advertisers are using generative AI and how they think it will change the industry moving forward? We surveyed over 200 marketing and advertising professionals from top agencies, B2B and B2C companies, non-profits, and publishers to understand how industry professionals feel about GenAI’s impact on the advertising industry—and how it could shape the future of marketing.


