The Global Debate on the Intricacies of AI Regulation

By John Bailey

AEIdeas

May 26, 2023

The month of May has witnessed a whirlwind of discussions surrounding the regulation of artificial intelligence (AI) as policymakers grapple with the challenge of maximizing its benefits while mitigating its inherent risks. Divergent viewpoints and approaches have emerged around the most effective strategies for addressing the potential dangers posed by AI, highlighting the multifaceted nature of the issue.

On May 11, the EU’s Artificial Intelligence Act moved closer to passage as it received approval from a pivotal European Parliament committee. The prospective law takes a risk-based approach, prescribing obligations for AI systems proportionate to their risk level: unacceptable, high, limited, or minimal/no risk. Applications posing unacceptable risk, such as those using manipulative techniques, inferring emotions in education, or infringing on individual privacy, are banned outright. The act also sets detailed requirements for developers of “foundation models,” mandating safety checks, data governance, risk mitigations, and compliance with copyright law before public release.

The EU’s approach is primarily prescriptive, so much so that OpenAI has suggested it may stop serving the EU. Additionally, the International Economy journal recently asked experts from Europe and the US where the EU currently stood in global tech competition. The overarching sentiment among respondents was not just that Europe is “lagging behind in the global tech race,” but also that its chances of emerging as a global epicenter of innovation are bleak. One analyst’s conclusion encapsulated the mood of the symposium: “The future will not be invented in Europe.”

The G7 also called for the adoption of international technical standards for AI, including “developing evidence-informed risk and human rights impact assessment frameworks” and protecting children.

Back in the United States, OpenAI CEO Sam Altman, IBM executive Christina Montgomery, and NYU professor emeritus Gary Marcus discussed the profound impacts AI might have on the economy and democratic institutions during a May 16 Senate hearing. Altman suggested creating a government agency to regulate AI systems beyond a specific capability threshold, likening it to an “IAEA for superintelligence efforts” that would provide system inspections, audits, safety compliance checks, and potential deployment restrictions. There was remarkable bipartisan openness to the idea, with Senators Durbin, Coons, Graham, and Welch all embracing the notion of regulating this technology much as nuclear technology is regulated.

And finally, the White House announced an updated National AI R&D Strategic Plan and a report on AI from the US Department of Education’s Office of Educational Technology. Perhaps the most significant announcement was a new Request for Information (RFI) process launched by the White House Office of Science and Technology Policy (OSTP) seeking input on national priorities for mitigating AI risks. It is notable because it comes as the comment period for the National Telecommunications and Information Administration (NTIA) RFI on AI draws to a close. And while the NTIA lacks the authority to promulgate rules around AI, OSTP’s efforts could, in contrast, shape broader administration actions.

Such diverse approaches underline the complexity of the AI regulation challenge. Excessive regulation could stifle American innovation and give an advantage to developments in China and other countries. Centralized oversight also faces challenges due to the vast complexity of different AI systems. For example, regulating AI in autonomous vehicles is different from regulating AI in drug discovery or intelligent tutoring systems. Overly broad, one-size-fits-all frameworks and mandates will not work.

Several principles should guide policymaker efforts moving forward:

  1. Policy should present an affirmative view of the outcomes we want for society, not just the harms we want to avoid. It is striking how quickly policy conversations have narrowed to managing risks rather than maximizing benefits.
  2. Policy should encourage AI providers, policymakers, civil society organizations, and national security entities to implement new institutions and coordination mechanisms to effectively address the diverse threats and challenges posed by AI.
  3. AI regulations must be carefully balanced, taking into account potential trade-offs. This is to ensure that well-intentioned rules do not inadvertently become overly burdensome or restrictive, thereby stifling American innovation or curtailing societal benefits.
  4. Policy and regulatory systems should utilize a risk-based approach. NIST’s AI Risk Management Framework provides a strong starting point.
  5. Policy must provide agility within regulatory systems to keep pace with rapid technological advancements. One promising approach is to create “regulatory sandboxes” for experimenting with new technologies, giving policymakers a better understanding of the technology and entrepreneurs clearer regulatory guidance.

While excessive regulation may hinder innovation and global competitiveness, an absence of effective regulation could leave risks unmitigated. Because AI systems, and the environments in which they are deployed, are complex, there are no simple answers. It is essential that policymakers and industry leaders continue to wrestle with the competing tensions presented by these rapidly evolving technologies.

