
How regulation is the key to AI

“As AI, including GenAI, is more widely adopted, the role of Risk will be critical to innovating while maintaining trust. In the absence of formal legislation or regulation (and even when such may come), companies must proactively set appropriate risk and compliance guardrails and “speedbumps”. Keep in mind, regulators have made clear that existing authorities and regulations apply to “automated systems”, including algorithms, AI, and innovative technologies.”

Amy Matsuo, Principal and National Leader, Compliance Transformation (CT) & Regulatory Insights, KPMG LLP. From “Where Will AI/Gen Regulations Go?”, November 2023.


To some, it is counterintuitive that regulation could boost development at all. To others, it is the linchpin. Polarized thinking would have you believe that burdensome regulation is the only way forward, so long as it protects every relatively weaker interest in every circumstance in an asymmetric world. At the other pole, any imposed restriction is antithetical to the all-correcting market economy: the devil incarnate, poised to choke the lifeblood from any prospect of growth. Only unbridled engines of growth, on this view, could secure a prosperous future.



Of course, neither of these extreme positions can be correct; if only the world were so simple. They are ideological misfirings, not borne out by history. Yet the two opposing positions have circulated within the liberal tradition for centuries. Reality demands both nuance and balance.



And yet the reminders are everywhere, hidden in plain view. Let's pick a couple. What is the law of contract but a mechanism for balancing competing interests? No reasonable mind would dispute that it has been a fundamental enabler of economic prosperity. Yet it is a regulating mechanism in a world of complex interactions; nothing more, nothing less. And who would ride in an elevator not subject to mandated inspection? Not me. Nor would I favor regulations so onerous that a building could not operate. Life is inherently risky; the question is what level of risk is acceptable.



And so we see, all around us, that effective regulation is always a question of degree, approach and perspective. It is not an either/or proposition at all, but one of how and to what extent. The extreme views at the edges of the spectrum do no more than frame the space within which the real work gets done.


Perhaps the first point to seek agreement on, then, is that false dichotomies (only this or only that, with no room in between) are pointless. They have no flexibility ingrained in them with which to navigate a complex world. Oh, I know, a lobbyist dressed up as a think tank might have you believe in them, as might a dyed-in-the-wool naysayer, but that doesn't make them any more true or viable.



So, what does this mean for the development of Generative AI, given its impressive yet potentially damaging capacity, the geopolitical demands surrounding its use, the neoliberal impulses of market titans, its social impacts and, not least, the practical, on-the-ground risks of deploying it within organizations? Let's get into it.



When an AI model provider or developer claims that no legal regulation is necessary and that the industry itself has it all covered, it induces, in me at least, a wry smile. This position doesn't make sense, for multiple reasons. Firstly, the framing is wrong. It is not, and should not be, a choice between external and self-regulation at the model level. With an agent as potent as AI, both are required. Leaving this technology wholly unregulated (even in the unlikely event that the entire industry behaved wholly responsibly) leaves us all vulnerable to its worst excesses. To be clear, we all operate from and within society, not outside it, however much some may pretend otherwise. Any other approach comes across as boneheaded.



Secondly, no country is a metaphorical island. Our world is more than interconnected; on many levels it operates as an integrated unit. The EU is introducing its own legislation on AI. Various U.S. states have already done so, and more are jumping on board in increasing numbers. It is not tenable for either the U.S. or the U.K. to forgo regulation, and ideally their regimes should align with each other as far as possible. Yes, there will be differences in approach along the risk/reward axis, even in the context of geopolitical imperatives, but there should be broad consensus on strategic objectives around security, transparency and fairness.



Thirdly, regulation appropriately struck is a check and balance from which we can all benefit. Failing to regulate AI model developers leaves both the market players themselves and the wider community exposed to non-compliers who shortcut self-regulation (i.e. they cheat). History is replete with examples of catastrophic consequences that others have borne for the actions of irresponsible participants in a system. Given the power of this technology, what happens if things go so awry that a knee-jerk, disproportionate reaction is suddenly needed to stem the bleeding? Who benefits then? “Be careful what you wish for” comes to mind.



Then there are the deployers of these systems (i.e. the corporate users) who see professional and business advantage in a technology that is sweeping the globe. Once again, the specter of false dichotomy rears its ugly head in the idea that regulation (legal or otherwise) need only occur at the developer level, and that risk management around deployment is unnecessary. On the observable facts alone, this is clearly wrong. This technology is agentic; it carries risks that arise directly within, and flow outward from, the organizations that deploy it. Calibrated regulation at this level is also required.



There is already specific AI legislation in the works at the EU level imposing legal obligations on both developers and deployers, covering documented human monitoring of AI output, impact assessments, breach notification requirements, tracking and explainability. Once again, the world is simply too small to ignore the need for alignment. Moreover, the consequences of failure at the deployer level will likely be disproportionately harsh for any affected organization, perhaps threatening its very existence. Self-preservation alone should be motivation enough. What sensibly run business would leave itself open to the whims of the AI industry without protecting itself?
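
To make these deployer-level duties concrete, here is a minimal sketch of what documented human monitoring and tracking might look like in code. It is a sketch under assumptions, not any statute's actual requirement: every name in it (review_and_log, AUDIT_LOG_PATH, the record fields) is hypothetical and purely illustrative.

```python
import json
import time
import uuid

# Hypothetical deployer-side guardrail: every AI output is appended to an
# audit log together with the reviewing human's decision and rationale,
# illustrating "documented human monitoring", tracking and explainability.
# All names are illustrative; nothing here mirrors a real compliance API.

AUDIT_LOG_PATH = "ai_audit_log.jsonl"

def review_and_log(prompt, model_output, reviewer, approved, rationale):
    """Record one human review of one AI output as an append-only audit record."""
    record = {
        "record_id": str(uuid.uuid4()),  # stable identifier for traceability
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt": prompt,                # input retained for explainability
        "model_output": model_output,
        "reviewer": reviewer,            # who exercised human oversight
        "approved": approved,            # the non-automated decision itself
        "rationale": rationale,          # why the reviewer approved or rejected
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: a reviewer blocks an output before it reaches a customer.
review_and_log(
    prompt="Summarize the claimant's medical history.",
    model_output="...model text...",
    reviewer="j.doe@example.com",
    approved=False,
    rationale="Includes an unverified diagnosis; escalate to a specialist.",
)
```

The point of the sketch is not the code itself but the discipline it encodes: a written, queryable trail showing that a human stood between the model and the decision.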



And so, in the end, logic dictates that there is no substitute for multi-level governance. Multiple jurisdictions, institutions and international organizations have already put forward and adopted risk management frameworks for use at each stage of the AI lifecycle. These frameworks emanate from the White House and government departments at the U.S. national level, the U.K. authorities, the OECD, international standards organizations and many more, and they echo the same core concepts: accountability, non-automated decision-making, data privacy, explainability, fairness, security and transparency. How effectively these principles distill into practice on the ground will make all the difference.
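
As a thought experiment on that last point, here is one hypothetical way an organization might map those principles onto checkable controls. The RISK_REGISTER structure and the control names are assumptions made for illustration; they are not drawn from the White House, OECD or any other framework mentioned above.

```python
# Hypothetical mapping from framework principles (the article's list) to
# concrete controls an organization could actually verify. Both the controls
# and this structure are illustrative assumptions, not an official checklist.
RISK_REGISTER = {
    "accountability": ["named model owner", "sign-off before each release"],
    "non-automated decision-making": ["human review gate for adverse outcomes"],
    "data privacy": ["PII scrubbing in prompts", "retention limits on logs"],
    "explainability": ["prompt/output pairs stored", "reviewer rationale captured"],
    "fairness": ["periodic bias evaluation on representative cohorts"],
    "security": ["access controls on model endpoints"],
    "transparency": ["user-facing notice that AI is in use"],
}

def unmet_principles(implemented_controls):
    """Return the principles for which no listed control is in place yet."""
    return [
        principle
        for principle, controls in RISK_REGISTER.items()
        if not any(c in implemented_controls for c in controls)
    ]

# Example: a deployment with only two controls in place so far.
print(unmet_principles({"named model owner", "PII scrubbing in prompts"}))
# -> the five principles still lacking any implemented control
```

A register like this is where "distilling into practice" either happens or doesn't: each abstract principle either resolves to something auditable, or it remains a slogan.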



Lastly, it is worth reflecting that there is a deeper symbiosis at work in all of this. As AI progresses and continues to penetrate all aspects of business operations, its output will become increasingly indistinguishable from human output, reorganizing the way business is done from the inside out. That is a level of synthesis and agentic “stickiness” not yet seen. Given our increasing dependence on this technology, self-regulation and feedback reporting at the deployer level are vital to help model developers safeguard against its potential excesses, to the benefit not just of discrete entities but of society at large.



In summary, societal and enterprise-wide understanding of these issues is imperative if we are to operationalize AI competitively while mitigating and containing its risks. Surely that is something we can all get behind, now that we have dispatched false dichotomies to the dustbin, which is where they belong.


An AI-generated podcast from NotebookLM discussing the issues raised in this article:


https://notebooklm.google.com/notebook/a6ba41e7-0036-4ceb-b04e-f60bb9131ae9/audio





About R. Scott Jones

I am a Partner in Generative Consulting, an attorney, and the CEO of Veritai. I write frequently on Generative AI and its successful deployment, both from the user's perspective and that of the wider community.

DISCLAIMER

The content here is for informational purposes only and does not constitute tax, business, legal or investment advice. Protect your interests and consult your own advisors as necessary.