
Binary thinking in Generative AI: Beware the binary frogs!


There’s an often-used expression, particularly in business, called “eat the frog,” which is intended to extol the virtue of completing difficult or tedious tasks at the beginning of the workday and thereby having them over and done with. The term was popularized by motivational speaker Brian Tracy. This approach to strategic prioritization is something I have some time for. After all, we all know the feeling of having the Sword of Damocles hanging over us or, sometimes worse, that gnawing unease that attends persistent procrastination. It’s a good message.


However, besides the fact that I personally don’t enjoy the visual image of consuming an amphibian—I rather like frogs—there is also the danger of oversimplification. That problem you’re contemplating and wish to solve, while not an unsolvable one, is one that will need some unpacking. It may not be something that you can so easily dispatch. No one benefits from analysis paralysis, but, on the other hand, as the saying goes, fools rush in where angels fear to tread. As the astute comedian Steven Wright observes, “A conclusion is the place where you get tired of thinking.” Just make sure you know your ground. Only then, in the spirit of the expression, should you tear the turkey’s head off, pull the trigger, or whatever other metaphorical impetus takes your fancy, first thing in the morning.


This leads me to a general tendency, widely manifest in public discussions surrounding AI: relentlessly reductionist thinking. It is, essentially, oversimplified nonsense that leaves you stuck. Let’s face it, it’s everywhere. Many are either huge fans of generative AI or withering in their criticism. Neither approach is helpful or realistic. I put it down to the human tendency towards binary thinking, which, in turn, explains a lot of the polarization and ideological grandstanding that consumes us today. The lure of the binary is its lazy appeal.


The wider cultural dimension of abdication from thought and its implications are a subject for a different day, other than to say that the fundamental problem with binary thinking is that most things in the universe don’t work that way. Generally, most things are explained by an interweaving of opposites. A fusion of sorts. Generative AI is no exception. I have previously described its recent emergence as the nascent stage of what I call the Age of Synthesis. What I mean by that is that its risk and reward are inseparable, and, given its widespread agency, so is our interaction with it. It is we humans who must adapt and decide what our future will look like while interacting with generative AI. Reversing the arrow of time, however, is not an option.


Curiously, this leads me to another frog. In this case, it’s the slowly boiling one. This is the notion that, whether by our inaction or ill-considered action, we (and that may be at the scale of an organization, society, nation, or species) are at risk of being insidiously overcome by the dangers of AI. Through complacency or an ineffectual approach, we end up being slowly boiled! Clearly, that’s a real danger when an excess of Kool-Aid is on offer and being dutifully consumed. Binary thinking strikes again. A character in Ernest Hemingway’s novel “The Sun Also Rises,” asked how he went bankrupt, answers, “Two ways. Gradually and then suddenly.”


Okay, enough of the frogs for now. They don’t deserve this treatment, and, as far as I’m concerned, we can do better than that, both for them and for us. Generative AI presents a myriad of risks, such as elevated exposure to data breaches, hallucinations (making stuff up), bias and discrimination, intellectual property infractions, deepfakes and impersonation, not to mention ethics violations, market risks, generalized distrust, and degradation of the quality of human life. Oh, and there’s plenty more. But I think that’s enough to get on with, don’t you? Clearly, serious attention is needed.


At the same time, however, market drivers are creating imperatives that cannot be ignored. Employees are often using AI without employer consent, models are multiplying at accelerating velocity, and, to top it off, customers, insurers, and other stakeholders expect organizations to use the technology responsibly and ethically.


So, as an organization, how do you square this circle?


Back to the avoidance of binary. Only with effective engagement and consideration of your objectives, organizational culture and values, existing architecture and resources, and plans for development can you possibly hope to extract rewards from generative AI while containing and mitigating its potential risks. Let’s look at some threshold questions to consider before diving in, and then review in a little more detail some of the not-so-helpful thought traps lying in wait. Remember, it doesn’t have to be a minefield if you have the right map of thought.


For example, it pays to ask yourself: What state is your organization’s current data in? How will you marshal your proprietary and sensitive information? How will you generate results by synthesizing proprietary data with current and validated research? How will you promote organic flexibility in your organization’s approach, with guardrails, governance, and quality control, on an ongoing basis and in a rapidly changing environment? How will you develop effective use of AI within your organization? What are the risks associated with multiple decentralized “agents”? Of course, there are many other questions, but this is not the time or place for definitive answers. The key to insight always starts with proper attention, effective engagement, and probing inquiry.


There are a few questions swirling around, however, that lead us down a road to nowhere. First up, artificial general intelligence and consciousness. This technology is not conscious—at least not yet. The whole area is fraught with definitional problems, and the subject is for another day, but suffice it to say that there is no evidence at this stage that it demonstrates awareness. Its super-smart capability is modeled upon the operation of neural networks, yes, but it produces fundamentally probabilistic output when forming language. It literally seeks to predict the next word. In fact, it has been called a stochastic parrot, albeit one trained on a volume of data that no one human could possibly ingest. That may underplay it, but no one knows what level it will reach. All that said, given that few dispute the already remarkable intelligence and capability of generative AI, isn’t that enough to make it important to govern it appropriately?
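To make the “predict the next word” point concrete, here is a toy sketch in Python. The probability table is entirely invented for illustration; in a real model, those numbers come from billions of learned parameters, not a hand-written dictionary.

```python
import random

# Hypothetical probabilities a model might assign to the next word after
# the context "The frog sat on the". All numbers are invented for this sketch.
next_token_probs = {
    "lily": 0.55,
    "log": 0.25,
    "mat": 0.12,
    "throne": 0.08,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token according to its probability (simple weighted sampling)."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Run it twice: the same context can yield different continuations,
# which is exactly why the output is probabilistic, not deterministic.
print(sample_next_token(next_token_probs))
print(sample_next_token(next_token_probs))
```

The whole trick, scaled up enormously, is that the distribution itself is learned from training data; the sampling step is why two identical requests need not produce identical answers.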


On the other end of the spectrum of opinion, generative AI is also not useless, as is claimed in some quarters. It is, admittedly, not always the best at mathematical problem-solving and logic. Nevertheless, it synthesizes information at lightning-fast speed. It presents itself with such authority that it can be very dangerous if relied upon too heavily, given that it makes mistakes without any hint of reservation. However, suggesting that it is useless due to impending data bottlenecks, denying its existence as anything of business value, or wishing to put its genie back in any bottle we can find are all errors of logic that are doomed to failure.


Whatever one’s view of generative AI, it is here to stay. These polarizing ideas are all distractions from purposeful action in dealing with the realities of generative AI on the ground. Once again, the thinking is too binary. Vigilance and criticism are important, yes, but so is realism. We are called upon to engage.


Then there is the idea that somehow generative AI can be molded into a deterministic algorithm. Like an output from a super-duper pivot table or some such. That is also flawed logic. The whole point is that this technology’s output is indeterminate. It also deals in the world of language, which is notoriously slippery. You cannot simply force the indeterminate onto a Procrustean bed. A generative AI large language model interacting with a vector database will produce synthesized qualitative and quantitative output, but there is literally no way of being certain of the precise output in advance, nor even of how the probabilistic “black box” arrived at it.
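For the curious, here is a minimal sketch of the pattern described above, usually called retrieval-augmented generation. The documents and embedding vectors are invented placeholders; real systems use learned embedding models and an actual LLM call for the final step.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy "vector database": documents paired with made-up embedding vectors.
documents = [
    ("Q3 revenue grew 12% year over year.", [0.9, 0.1, 0.0]),
    ("The office plant-watering rota is on the wiki.", [0.0, 0.2, 0.9]),
]

def retrieve(query_embedding: list[float], k: int = 1) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(documents,
                    key=lambda d: cosine_similarity(query_embedding, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# The retrieval step is deterministic; the generation step is not. The model
# samples from a probability distribution, so the same assembled prompt can
# yield different answers on different runs.
query_embedding = [0.8, 0.2, 0.1]  # pretend this came from an embedding model
context = "\n".join(retrieve(query_embedding))
prompt = f"Using only this context, answer the question:\n{context}"
print(prompt)  # a real system would now send this prompt to the LLM
```

Notice where the determinism ends: everything up to prompt assembly is ordinary, repeatable computation, and it is only the model’s sampling step that makes the final output irreducibly probabilistic.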


The confusion on this issue in part explains some of the commentary regarding prompt requests for generative AI, with the idea that fixed approaches and auto-prompting are in some way a magic bullet for defined outcomes. How could that ever possibly be the case with a probabilistic system that is rapidly evolving? Answer: It just can’t. Not only is it not possible to pin down the precise nature of prompts, but the requirements also differ depending upon the model you are using. And that’s just today. Binary again. Principles are useful; fixities are not. The questions that we ask—and their adaptability to different circumstances over time—make a difference.
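As a small illustration of “principles, not fixities,” a team might hold the principle constant (a clear task, a clear output format) while adapting the surface wording per model. The model names and templates below are assumptions for the sketch, not vendor guidance.

```python
# Different model families respond to different instruction formats, so the
# same underlying request is dressed differently for each. These templates
# are illustrative placeholders, not real vendor prompt specifications.
PROMPT_TEMPLATES = {
    "model_a": "You are a careful analyst. Task: {task}\nAnswer step by step.",
    "model_b": "### Instruction\n{task}\n### Response",
}

def build_prompt(model: str, task: str) -> str:
    """Apply one principle (clear task, clear format) through whichever
    surface form a given model responds to best."""
    return PROMPT_TEMPLATES[model].format(task=task)

task = "Summarize the attached contract's termination clauses."
for model in PROMPT_TEMPLATES:
    print(f"--- {model} ---\n{build_prompt(model, task)}\n")
```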


Which leads me to the issues of the transparency and explainability of generative AI. Another canard flying around in the ether is that somehow it is exclusively AI systems that can and should police other AI systems. Certainly, AI systems can be useful for cross-checking and validation. Trust, but always verify. However, it is only human-in-the-loop validation, intuition, alignment, and workflow that can break the recursive cycle of AI self-policing. Quite apart from increasing legal requirements for the avoidance of automated decision-making in critical areas and the need for demonstrable audit trails, exclusive AI self-regulation runs straight into the logic problem of Gödel’s incompleteness theorems, which, roughly summarized, show that a sufficiently powerful formal system cannot prove its own consistency from within itself.
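Here is a skeletal sketch of that human-in-the-loop pattern, with an eye to the audit-trail requirement. Every function and field in it is a hypothetical placeholder, not a real review API: an AI cross-check can flag problems, but a person grants the approval and the record is what you keep.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    """One auditable decision: what was reviewed, what was flagged, who approved."""
    draft: str
    ai_flags: list[str]
    human_approved: bool
    reviewer: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def ai_cross_check(draft: str) -> list[str]:
    # Placeholder: a second model or a rules engine flags possible issues.
    return ["unverified statistic"] if "%" in draft else []

def human_review(draft: str, flags: list[str], reviewer: str) -> ReviewRecord:
    # The decisive step: a person, not another model, grants approval.
    approved = input(f"Flags: {flags}\nApprove? [y/N] ").strip().lower() == "y"
    return ReviewRecord(draft, flags, approved, reviewer)

draft = "Sales rose 40% last quarter."
record = human_review(draft, ai_cross_check(draft), reviewer="s.jones")
print(record)  # persisting records like this yields a demonstrable audit trail
```

The design point is simply that the loop terminates with a human decision and a stored record, which is what breaks the recursion of models checking models.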


The inclination to succumb to this fallacy also butts up against two related ideas. The first is the notion that just because you can does not mean that you should. Enough said on that. Its meaning is an intuitive one. The second is what I regard as the politics of inevitability and regulatory capture. In this context, it is the idea that because AI is so pervasive and we are in a geopolitically competitive space, there is no room for, nor any value in, legal regulation of its worst excesses. But how can it reasonably be argued that failure to act against abuses of biometrics, facial recognition, surveillance, unfettered deepfakes, and other violations and intrusions is in any way justifiable? Plainly speaking, in my view, it’s a lack of care and ideological contamination. Beyond that, we should also make sure that we are each minding our own business and regulating the use of generative AI within our own environments. That’s just good housekeeping.


Last but not least, let’s touch upon the pantheon of possible values and how AI is prone to distort collective thinking. I mentioned the speed of AI. The unfathomable speed! Such utility and efficiency are unprecedented in human history. It’s a pinch-me moment just to make that statement. However, utility and efficiency are not the only values, nor even the primary ones. Fundamentally more important are ethics in a complicated world, fairness, inclusion, the dignity of work, democratic principles, and, not least, social cohesion. There has been a lot of discussion about the anthropomorphizing of AI. Arguably, a bigger danger is the atomization and mechanization of humans. The world has changed so radically and so quickly that we are trying to deal with cultural, ethical, legal, and market imperatives all at once. At breakneck acceleration. To reflect and engage, we must. We are not machines. Not even close. Nor are the poor frogs, of course.


Beware the binary!

R. Scott Jones

About R. Scott Jones

I am a Partner in Generative Consulting, an attorney and CEO of Veritai. I am a frequent writer on matters relating to Generative AI and its successful deployment, both from a user perspective and that of the wider community.

DISCLAIMER

The content here is for informational purposes only and does not constitute tax, business, legal, or investment advice. Protect your interests and consult your own advisors as necessary.