
Becoming Ungovernable: Is AI Regulation Even Feasible?

By Bronwyn Howell

AEIdeas

June 02, 2023

The heat is on policymakers, as yet another plea is made by AI developers to regulate the technology. This week, Sam Altman, Geoffrey Hinton, and a cast of thousands issued an open letter, interpreted in the mainstream media as warning that AI poses a “risk of extinction” to humanity on the scale of nuclear war or pandemics. Mitigating that risk should be a “global priority,” in their view.

This view supposes that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes “superintelligent”, then it could become difficult or impossible for humans to control. Just as the fate of many endangered species depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.


Yet is it feasible for regulations to be created and adopted to mitigate these risks? And what might their consequences be? Regulation is itself far from a risk-free process. After all, it is not clear that the regulations adopted to mitigate the risks of the COVID-19 pandemic were any less harmful to society overall than the risks they sought to address. While lockdowns may have delayed the point at which individuals contracted COVID-19 and slowed the spread of infection, the economic consequences of high inflation, the educational consequences of lost schooling, and the social (even neural) consequences of disrupted face-to-face human engagement pose “existential threats” to a significant subset of humanity and impose lifelong limitations on many, many more.

The problem is that the human beings who will design and impose AI regulations are boundedly rational. This plays out in two ways. First, the regulations must address a wide array of possible outcomes that cannot currently be known by anyone, simply because we cannot anticipate (even with the best available knowledge or computing power) what will actually occur in the future. Second, even where we do have some idea of what may occur, mitigating every anticipated risk is prohibitively expensive, so choices must be made about how those risks are prioritized. Asking (or expecting) political agents facing a wide array of conflicting objectives to set these priorities invites the risk that they will favor interventions protecting their own ongoing existence and that of the stakeholders (voters and funders) who benefit from their continued control over decision-making.

The latter point speaks to the question raised in my last blog as to why AI developers are only now calling for regulatory intervention when the technology’s possible impacts have long been debated among academic and industry participants. The former raises the question of whether effective AI regulation is even feasible in the first place, and who should take the lead.

Acknowledging my own possible biases, I posed the questions “Can AI regulation be effective?” and “Who should regulate AI?” to ChatGPT, to see what the cumulative body of human knowledge embedded in that tool could offer. The output, unsurprisingly, reflected the current boundedly rational human position on the subject.

The first prompt produced a laundry list of “reasons why AI regulation can be effective” so generic that they could apply to practically any regulatory question (addressing societal concerns; promoting trust and transparency; mitigating risk and harm; encouraging ethical practices; balancing innovation and protection; providing legal clarity; harmonizing international efforts; evolving with technology). It concluded: “While AI regulation can be effective, it is important to strike the right balance to avoid overly burdensome or stifling regulations that may hinder innovation and growth. Ongoing collaboration between policymakers, experts, industry stakeholders, and civil society is crucial to developing and implementing regulations that are well-informed, practical, and responsive to the evolving nature of AI.”

The second prompt yielded: “regulating AI is a complex task that requires the involvement of various stakeholders, including governments, industry experts, researchers, ethicists, legal professionals, and civil society representatives.” Also: “collaboration and coordination among these stakeholders are crucial to develop comprehensive, effective, and well-balanced AI regulations that protect individuals’ rights, promote innovation, and address societal concerns.”

So is the most recent call to regulate AI not really a call for restraints on the technology itself, but rather a plea to protect a specific set of human positions (or privileges) that we observe today? If so, then Christina Montgomery’s proposal “to govern the deployment of AI in specific use-cases, not regulating the technology itself” appears to be both a pragmatic and an honest way forward in our messy human reality. ChatGPT apparently concurs.

Note: As part of her work at AEI, Bronwyn Howell will commence a study on the practical implications of regulating Artificial Intelligence in 2024.

