White House AI Commitments: A First Step to Industry Self-Governance?

By Bronwyn Howell

AEIdeas

August 01, 2023

In July, the Biden-Harris administration secured voluntary commitments from seven leading artificial intelligence (AI) companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to “behave responsibly and ensure their products are safe.” The agreement is the administration’s “first step in developing and enforcing binding obligations to ensure safety, security, and trust” as it seeks to “safeguard our society, our economy, and our national security against potential risks.”

The agreement comes just two months after the Senate Judiciary Committee’s high-profile hearing on “Oversight of AI: Rules for Artificial Intelligence,” at which OpenAI CEO Sam Altman, among others, called on United States lawmakers to regulate AI technologies.

Commentators’ responses, however, have been mixed. As my colleague John Bailey noted, the agreement is an undeniably positive move for the AI sector, but the terms’ phrasing is vague and appears merely to reinforce what the seven firms are already doing. Moreover, the agreement is voluntary: it neither clearly assigns responsibility for ensuring the signatories abide by its terms nor describes the consequences of noncompliance. Others, such as AI Now Institute executive director Amba Kak, believe “a closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough.” These critics fear the agreement is “a boon for deep-pocketed first-movers led by OpenAI, Google, and Microsoft as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.”

Some critics prefer the European Union’s more prescriptive proposed risk-based regulatory framework for artificial intelligence. Yet such an approach invites an overcautious application of the precautionary principle, eschewing otherwise-beneficial applications simply because developers cannot know in advance exactly what their applications’ outcomes will be. It also raises the question of whether it is even possible to classify the potential risks of an AI application in its infancy.

It’s almost certain that neither Mark Zuckerberg nor the Winklevoss twins had any idea how an embryonic website facilitating hook-ups between college students would evolve into today’s social media platforms, with all their benefits and consequences. Regulating based on the limited set of sectors in which an application will be deployed, as the EU proposes, would not have required an embryonic Facebook to be classed as a “high risk” application at the outset. Yet by the time its consequences became known, it was too late to put the genie back in the bottle. It is almost certain that some harmful AI applications will similarly escape the EU’s regulatory net, while fear of falling within it will dissuade developers from pursuing otherwise-beneficial applications in the first place.

In this context, the White House’s approach offers some potential for a development path different from the EU’s. It bears remembering that one of today’s most extensively regulated industries, the financial sector, began with industry self-governance. When specialist knowledge is required to understand a new technology’s risks and potential, and when unethical or illegal activity is most likely to be spotted first by industry rivals, self-governance is more likely than third-party governance to constrain the undesired behaviors.

The original stock exchanges that emerged in London’s Lombard Street in the seventeenth century began as collectives of traders meeting at rival coffee houses. The collectives sought to outdo each other in the effectiveness of the ethics rules they imposed on their members, in order to persuade the public to trade with them rather than with their rivals. They wrote the rules, so they knew what breaking them looked like. Any member acting unethically diminished the entire collective’s reputation, so all members had an incentive to monitor one another and to expel a malfeasant member as soon as undesirable behavior was identified. The collectives competed on the content of their rules, and all eventually adopted those that worked best (i.e., best protected the public) as the standard.

Only then was it feasible for monitoring and enforcement of the rules to be transferred to a third party (i.e., the government or an external regulator). Even then, however, the third party relied on collective members’ reports both to identify infractions and to propose amendments to the rules as new ways of acting unethically emerged.

One can see the germs of self-governance in the White House agreement. The onus is on the firms collectively to establish their own rules and then to monitor and enforce them going forward. Competing groups (and codes) are to be welcomed: that way, the best set of rules will surface for the benefit of all.

