
Generative AI - A Thought Experiment


Let's conduct a thought experiment...


You have decided to take the plunge into the world of Generative AI (Gen AI) and now have access to a Large Language Model (LLM). You're not quite sure how you will use it yet (there's plenty of practical work to do on that front, which I will review in an article being released later this month), but you're keen to bring it on board and start the process. After all, everyone else seems to be doing it. Well, they say they are!


Let's imagine the LLM that you'll be working with is an intern joining your wider team. You're determined to keep up with your competition (who you suspect are thinking along similar lines) and you want to find out what it's capable of. You've seen that it is incredibly fast and extremely adept at synthesizing quantitative data and qualitative recommendations. You may even want to "hire" many of these "interns" in time, but let's start with just one for now.


To be clear, treating your new LLM as another employee does not mean attributing consciousness or any other human-like quality to it. That's not the issue at hand and, in truth, that debate is more of a distraction. Problem-solving and adding value to your operation do not necessarily require such awareness. You fully expect your new LLM to be a powerful agent of productivity in your team, enhancing its overall impact.


However, unlike the hiring of your conscious, responsible and carefully selected human employees, there is a major catch. Your LLM is an agent of change who - while extremely capable and phenomenally quick - is untrained in your workflow systems, is unaware of your corporate culture, is blind to your departmental structure, sometimes gets things wrong, may have certain prejudices combined with a complete lack of tact and, left unchecked, can be shameless in its plagiarism and impersonation of humans. On occasion, your new LLM may even be manipulated by outside actors adverse to your interests, could brazenly make things up and/or be downright toxic. Any information provided to the LLM is effectively irretrievable and undeletable, and the LLM cannot be sworn to secrecy. When asked about any transgressions, the LLM is unable to explain to you or anyone else how it arrived at its conclusions and its thought process cannot be traced - it's unaccountable. But you are accountable. It's "doing its best" to improve, but the buck always stops with you.



Now, ask yourself these questions:


Does it make sense to ensure that the LLM is closely supervised both in what information is provided to it by your staff and in the output that it generates?


Is it sensible to have measures in place beyond self-policing (i.e. beyond LLMs or other AIs supervising each other)?


Don't you need a mechanism to track who exactly is interacting with the LLM in case something goes wrong?



To any reasonable mind, it seems to me, the answer to all these questions should be a resounding "Yes!". Which brings me to a central idea: not only are the benefits of Gen AI and its associated risks not mutually exclusive, they are, in fact, entirely inseparable. Moreover, such risk management is no longer confined to the traditional role of Cybersecurity and Information Technology Departments. It is now an organization-wide initiative that must be shored up by policy architecture, employee training, resource restructuring and dedicated expertise. We are now well and truly underway in what I call the Age of Synthesis.


Now, hold on a minute, I hear you cry. Synthesis? That sounds downright creepy. But I don't mean synthesis of human and machine. Yes, that is creepy. The monkey is not and should not be the organ grinder. What I mean is that the Age of Synthesis represents a new stage of development in the fusion of risk management and reward extraction in the world of Gen AI, with transformative de-siloing implications for traditional business team structures. The impact of this technology will be truly unprecedented.


But what makes this time so different? They always say that. Well, the answer is that market imperatives are conspiring to invite a not fully understood, agentive and rapidly evolving technology inside organizations at a scale and pace not previously seen. That is very different.


It is well documented by various consultants and institutions that there remains considerable under-preparation around Gen AI in the market. This is due in large part to many businesses deliberately hitting the pause button in 2023 and to the breakneck speed of technological developments since the rollout of ChatGPT in November 2022. But it is also due to the dichotomous and, indeed, frustrating nature of the debate: carefree support for unregulated Gen AI on one side of the equation, and "the sky is falling" prophecies of doom on the other. Organizations feel caught between these polar extremes. But neither assessment is the correct, nor the responsible, response. As business leaders, I believe the task at hand is to roll up our sleeves, approach Gen AI with the necessary due diligence, and consider sensible safeguards.


Let's take an overview tour through the landscape of potential risks, examine how they interrelate, explore how to evaluate them, and thereby be better prepared for the potential emergence of known and unknown unknowns inside our organizations.


⚠️ Overarching Risks


First, there are the overarching, systemic risks associated with Gen AI. It would be hubristic and reckless to handwave them away. They are of such significance that they speak collectively to the sheer power and agency of Gen AI. This technology is functionally and geographically global. But more than that, it is also highly dynamic, changing as a technology at a rate unknown in human history. That is quite a thought! The cumulative result is akin to the idea in the 2022 movie, "Everything Everywhere All at Once." It's a lot to grapple with. As such, being dismissive is surely absurd. Back to our thought experiment: how would you handle an employee who changed beyond all recognition so quickly?


The stark reality is that Gen AI is here to stay. The genie is out of the bottle and will never return, and a competitive arms race is underway worldwide. Therefore, it is incumbent on us all to take its implications seriously and to move with purpose. The possible risks without careful application by humans include ethical misalignment, widespread unemployment from automation, social upheaval from misinformation, lack of trustworthiness, democratic dislocation, ecological waste, military vulnerability, degradation of cultural memory, erosion of creativity, dilution of property rights and, by no means least, a potential threat to the existence of the human species itself. I don't know about you, but that is not a (non-exhaustive) list I am prepared to gloss over. It deserves a deep dive in the spirit of self-preservation, if nothing else!


However, for the purposes of this article, let's confine ourselves to the impact within businesses and organizations, since the purpose here is to frame how these risks relate to the operation of Gen AI in that context. The wider risks certainly exist and do not warrant being disregarded as too abstract, but they should inform rather than dominate our attention in the work of effective operational deployment. Otherwise, they too risk becoming distractions. Working to contain intrinsic Gen AI risks at the operational level, and disseminating the anonymized findings, may also yield benefits at the collective level of the commons by helping to mitigate or even eliminate them. Our approach is key - with all of us having a role to play.


💬 Output-Intrinsic Risks


This brings me to the narrower focus of our second category of risks - those intrinsic to Gen AI output as outlined in our thought experiment, and their ramifications for successful deployment. We might think of this category as the direct manifestation at the digital coalface of what is causing all the fuss in business: the headaches that need to be managed. Gen AI is intrinsically risky in its unpredictable output, since how it works is not completely understood by anyone - not even its creators.


Let's quickly recap. The risks include, but are not limited to, inaccuracy, confident fabrications (hallucinations), bias and discrimination from flawed training data, intellectual property violations, increased exposure to data breaches and privacy violations, deep fakes and impersonation in multimedia, and defamation, among others. There have already been many incidents reflecting these concerns. Regulatory authorities, global think tanks, insurers and the courts are all actively working through the implications, their consequences, and the necessary safeguards.


One can only imagine that, at the very least, these risks will persist and, very likely, the stakes will intensify with increasing integration. Even with risk mitigation strategies and offered indemnities, potential risks will continue to proliferate as Gen AI and its use evolve. One may ask, therefore, whether it makes sense to abandon all thought of tapping into Gen AI's capability because of these barriers. But is that realistic? I don't think so. It is an alternate reality, for sure, but not one grounded in the dynamics of a competitive business environment. At the collective level, market conditions alone will compel this journey. The question, then, becomes not whether to deploy Gen AI but how to do so responsibly, to protect both your own business and society at large.


⚙️ Deployment-Deficit Risks


Given the backdrop of what we have discussed, it's clear that the mechanics of deployment within an organization are where the rubber meets the road. As a rule, I like to approach any problem through what I call the prism of "SCOPE" - my abridged and asymmetric acronym for three key ideas about how to conceptualize a given issue. It stands for Scale, COntext and PErspective. Applied to the risk management architecture for Generative AI, this rubric says that deployment within an organization requires practical management, with policy, training and governance systems that promote traceability and accountability. Such an approach is tailored to the organization's level (its Scale), recognizes Generative AI's fundamentally risk-charged nature (its COntext), and encourages application throughout the organization with widespread use (its PErspective).


Of course, exactly how an organization chooses to deploy Generative AI, for which use cases (be they internal or external), and how it chooses to reorganize and/or hire human resources will vary greatly. However, many principles remain the same. Beyond policy, risk management architecture in your tech stack and vigilant cybersecurity, two further key steps are also required, among others.


First, as we established, the work output of the intern in our thought experiment must be monitored and reviewed through a systematic process. To implement this properly, you need a governing process to manage your Gen AI workflow, with appropriate data sanitization and anonymization procedures. Once that is established, not only can you be clear with your employees that unauthorized use of Gen AI (which is very widespread) is unacceptable, but you will also have a process through which to support them with different LLM model options, prompt blueprints, multi-step workflows and proven methodologies.
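To make the sanitization step concrete, here is a minimal sketch of the kind of redaction such a governing process might apply before a prompt ever leaves your organization. It is illustrative only: the patterns, names and placeholder tokens are my own assumptions, and a production deployment would rely on dedicated PII-detection tooling rather than hand-rolled regular expressions.

```python
import re

# Illustrative patterns only (an assumption for this sketch); real
# deployments would use dedicated PII-detection tooling, not regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize_prompt(text: str) -> str:
    """Replace recognizable personal data with placeholder tokens
    before the prompt ever leaves the organization's boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# The sanitized version is what actually reaches the LLM.
prompt = "Draft a reply to jane.doe@example.com about invoice 4471."
print(sanitize_prompt(prompt))
# -> Draft a reply to [EMAIL REDACTED] about invoice 4471.
```

Remember our thought experiment: once information reaches the model it is effectively irretrievable, so the redaction has to happen before the call, not after.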


Secondly, since the river rarely, if ever, runs smooth, you will also need a rubric for identifying problems as they occur (beyond those you have already prevented) and for communicating them effectively within your organization for the purposes of risk mitigation. A risk-identification system lets you monitor your usage, extinguish problems before they amplify, share alerts, and create Key Performance Indicator (KPI) reports covering both risk and reward.
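As a sketch of what such a rubric might look like in software - with the caveat that the risk categories and trigger phrases below are hypothetical, and a real system would combine human-defined rules with trained classifiers and human review:

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical categories and trigger phrases, purely for illustration.
RISK_RULES = {
    "possible_fabrication": ["i cannot verify", "as of my knowledge cutoff"],
    "possible_pii_leak": ["social security", "passport number"],
}

@dataclass
class RiskMonitor:
    kpi: Counter = field(default_factory=Counter)

    def review(self, output: str) -> list[str]:
        """Flag an LLM output against each rule and tally KPI counts."""
        lowered = output.lower()
        flags = [risk for risk, phrases in RISK_RULES.items()
                 if any(phrase in lowered for phrase in phrases)]
        self.kpi["outputs_reviewed"] += 1
        for flag in flags:
            self.kpi[flag] += 1  # feeds the risk side of KPI reports
        return flags  # any non-empty result should trigger an alert

monitor = RiskMonitor()
monitor.review("I cannot verify that figure, but it is likely correct.")
print(dict(monitor.kpi))
# -> {'outputs_reviewed': 1, 'possible_fabrication': 1}
```

The point of the sketch is the shape, not the rules: every output passes through the same checkpoint, every flag is counted, and the counts roll up into the alerts and KPI reports described above.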


For a moment, consider the alternative: the intrinsic risks of Gen AI running rampant and unchecked within your organization. The intern has gone rogue. No good can come of that. Internal problems will soon spill outside your organization into the community, with the ethical, legal, reputational, market and financial risks that flow as a result. Indeed, regulatory authorities are increasingly demanding solutions that permit problem resolution with adequate explainability and traceability. How could containment of these risks possibly occur without a governance solution?
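One building block of such a solution is an append-only audit trail that ties every interaction to a named user - answering the "who is interacting with the LLM?" question from our thought experiment. A minimal sketch, with field names that are my own illustrative choices:

```python
import datetime
import hashlib
import json

def audit_record(user_id: str, prompt: str, response: str, model: str) -> str:
    """Build one append-only audit entry so that any problematic output
    can later be traced to a specific user, model and moment in time."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "model": model,
        # Hashes prove which texts were exchanged without copying
        # sensitive content into yet another store.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    })

# Append a record for every call made through the governed gateway.
with open("genai_audit.log", "a") as log:
    log.write(audit_record("a.smith", "Summarize the Q3 figures",
                           "Q3 revenue rose...", "model-x") + "\n")
```

Storing hashes rather than raw text is one design choice among several; the essential property is that the trail cannot be quietly edited after the fact.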


And while we're on the subject of risk detection, let's pause to consider Gen AI monitoring other Gen AI models - checking for what may be only a specified set of risks, through an unverifiable lens. How can you reach an adequate level of assurance that the job is getting done? I consider that while such strategies may be a useful layer in some instances, they are not sufficient in themselves. Aside from suffering from the problem of self-policing, on a standalone basis this approach is unlikely to be considered adequate from a regulatory or legal perspective. Instrumentality is not governance. Any sufficient and effective risk detection mechanism, therefore, should also include human review. I will examine these issues in more detail in a future article.
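The principle can be captured in a few lines: an automated first pass that can only block, never approve, with a human decision as the final gate. The ai_screen check below is a hypothetical stand-in for whatever automated layer you deploy.

```python
def ai_screen(output: str) -> bool:
    """Hypothetical automated first pass. Returns True when none of
    its specified risks are detected; it cannot approve on its own."""
    return "cannot verify" not in output.lower()

def release(output: str) -> bool:
    """An output ships only when the automated layer raises no flag
    AND a named human reviewer signs off. Self-policing alone fails."""
    if not ai_screen(output):
        return False  # blocked before a human ever sees it
    decision = input(f"Reviewer, approve this output? (y/n)\n{output}\n> ")
    return decision.strip().lower() == "y"
```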


However, there is a separate but related - and rather disturbing - issue at work here too. Recent data suggests that employers are increasingly using AI detection systems to surveil employees' use of technology in the workplace. This may appear to be a quick fix to an identified problem, but quite aside from its unethical and, in my opinion, dystopian flavor, it is already being shown to damage employee morale. It simply is not a sustainable way forward. A governance platform that empowers and enables, yes. A not-so-grand inquisition, no. It is covered by the adage that just because you can doesn't mean you should. Instead, for those not wishing to miss the bullet train that is Gen AI, the focus should be on responsibly governing what you can before it is too late.


In the end, now that it is here, the true risk with Gen AI is our own inaction.

R. Scott Jones

About R. Scott Jones

I am a Partner in Generative Consulting, an attorney and CEO of Veritai. I am a frequent writer on matters relating to Generative AI and its successful deployment, both from a user perspective and that of the wider community.

DISCLAIMER

The content here is for informational purposes only and does not constitute tax, business, legal or investment advice. Protect your interests and consult your own advisors as necessary.