Since the advent of AI, but particularly since the emergence of Generative AI, there’s been much discussion about the geopolitical importance of maintaining preeminence in the development of AI systems. Given the computational power, increasing sophistication, and versatility of these systems, who could doubt the validity of this concern? It cannot be hand-waved away.
The West could quickly find itself outmaneuvered in what is an extremely high-stakes environment. Talk of financial bubbles is also a distraction from the key point: AI itself is not going anywhere, even if some players disappear (as is normal in markets). The idea of walking away from AI as a defective project is a complete nonstarter. Such suggestions are, at best, irresponsible in the competitive space of this technology and are often hopelessly one-sided in their analysis.
In a world of sharply divergent ideologies weaponized with AI-generated agency, the difference between maintaining leadership and ceding it could be decisive, as Sam Altman pointed out in his opinion piece in the Wall Street Journal on July 25, 2024.
As Altman suggests, public-private partnerships to build out broadband infrastructure, investment in training and research, and strategic review of resources and export controls are all key issues. So, too, are new models and international governing organizations (in addition to standards institutes and business coalitions), with particular emphasis on AI safety. Whether all developers have shown the necessary commitment to AI safety thus far is a question for another day. It does not change the fact that our future could come down to whether we now do the work necessary to ensure that a democratic rather than an authoritarian vision for AI prevails. The stakes could not be greater.
In my view, we also need to dig deeper than this - down to where the action is on the ground of Generative AI deployment. That issue is inextricably linked to these wider concerns. I have previously described Gen AI's arrival as heralding the inception of the Age of Synthesis, particularly in business. There are multiple aspects to this, including implications for team restructuring and for cross-discipline and cross-entity collaboration, but at its heart is what AI itself comes to mean as its use evolves: the fusion of risk and reward. At Generative Consulting, we have written extensively on both. Note for now that it makes no difference what each of us individually may think about AI - it is here, it is powerful in the wrong or even untrained hands, and it needs to be managed properly.
As such, managing the risk equation requires more than Sam Altman suggests in his admittedly short overview article. There are larger, longer-term risks with AI, such as democratic slippage and power imbalance, yes, but there are also short-term risks of bias, fabrication, and impersonation that can be equally devastating for a business. You may ask, why focus on business? Because this is where the real work on AI will get done. For AI to develop in the way envisioned, i.e., both safely and with more sophistication, we need to lock in the understanding that there is a feedback loop between model development and operational deployment, and that it requires technology solutions to support confident and productive use. This is a systems theory concept that is, sadly, so often overlooked. Think fossil fuels, social media, and austerity - but that's for another day also.
The reality is that AI development on the ground will not keep pace if we do not build the structures to support its reliable use. This issue also sits apart from debates over which development approaches are likely to prove most beneficial in the future, be they deep learning or neuro-symbolic AI. The deployment market doesn't care about those considerations in its own field of view - just give us products we can use.
Why do I say this is such a big deal? Well, we are all reading stories about the flattening of the AI curve - the idea that adoption rates in business are stalling and the hot air is draining from the balloon. Those with an ax to grind against AI will be inclined to use this data to add fuel to their I-told-you-so fire. But this is simplistic reasoning. Stalled adoption does not solve the unanswered problem of improving productivity or otherwise getting ahead with insights from AI - and your competition is already doing so. That problem is always a key driver at both industry and geopolitical levels.
So, let's look for explanations for the lack of traction. The principal reason is that businesses are not yet confident, because integrated deployment and governance systems are not yet widely available. Benedict Evans is correct in his assessment that large language models are not an adoption technology in themselves but the systems that enable the platforms, applications, and interfaces that meet business needs. This is a vital distinction. The market simply has not yet caught up with itself.
Goldman Sachs also threw cold water on the AI fire with its recent study, which raised valid questions about the profitability of AI's business model, its ecological cost, and what improved efficiency from AI actually looks like. One of the contributors, Daron Acemoglu of MIT, opined, "Many of the tasks that humans currently perform...are multifaceted and require real-world interaction, which AI won't be able to materially improve anytime soon."
In addition, why would already burnt-out employees want to work harder to implement systems that they believe will eliminate their positions? If all the business community can reach for is a few easy hits to cut costs with limited exposure, it's no wonder there isn't much pep in the step.
But the rush to judgment on the premature death of AI also carries risks, as we have previously noted. This is no ordinary technology. Much of the analysis on offer is based on the current state of play, but the whole point of AI is its interactive evolution - its capacity for learning by doing. That requires safe operationalization and experimentation. Many companies have already reported significant productivity and insight gains with this technology, though often using bespoke deployment and governance systems. And on the key concern of energy efficiency, smaller large language models are advancing rapidly in capability and adaptability, thereby bringing down costs.
So, if this project is to be a success, what it really comes down to is the need for more widely available platforms for integrated, human-first AI deployment, along with creative vision, risk management policy, and extensive organizational training. Whatever criticisms may legitimately be leveled at AI at this stage will inevitably act as cues for further development. And there will always be a potential adversary in the wings, waiting and willing to take the plunge. The idea that the world will call a halt to this phenomenon is a hopeless illusion. No matter the economics, this technology touches everything from virtual partners to national security - and that has big consequences.
In order to achieve both a widespread productivity lift and employee satisfaction with this technology, more robust and universal engagement solutions are required. To play devil's advocate for a moment, let's assume I am wrong. What if you decide to deploy in some knee-jerk reaction with an ill-considered approach because you suddenly realize you're being left behind? Answer: it will probably end in trouble, since your risks are doubtless not being fully managed. The market does at least seem to understand this prospect - hence the fear driving low adoption rates for now. Yet the market is also playing catch-up on developing new tools for implementation. Deployers need to do the groundwork of use case identification and policy development in anticipation of their arrival. As the saying goes, hustle while you wait.
Indeed, my argument is that if we solve this nested problem of deployment, the wider strategic objectives become more achievable at the collective level. I agree with Sam Altman that it's time to go to work. I also agree that wider institutional initiatives are required. I simply believe that we need to think not just horizontally but also vertically.
There is an ancient African proverb: if you want to go fast, go alone; if you want to go far, go together. With issues of this magnitude, however, literally everyone is already involved. We need to engage with both the wider geopolitical enterprise and the nitty-gritty of execution if we wish to take seriously the promotion of the values we cherish. It is a collective call to action - whatever our views on AI may be. Meanwhile, the market needs to get to work on solid solutions.
About R. Scott Jones
I am a Partner at Generative Consulting, an attorney, and CEO of Veritai. I write frequently on matters relating to Generative AI and its successful deployment, both from the user's perspective and that of the wider community.