Thumbnail

AI and Cybersecurity Breach - The Slippery SLOPE

· 53 min read

Frank Herbert, the famed writer of “Dune,” once warned, “Technology is both a tool for helping humans and for destroying them. This is the paradox of our times, which we're compelled to face.”


Never have the global technological systems on which we collectively rely been so interconnected - and hence so potentially vulnerable to far-reaching cyberattacks. Not only is there a patchwork quilt of suppliers in physical hardware, on-premises software, and multi-cloud architecture, with increasing internationalization, but there is also a glaring lack of consistency in encryption protocols, interface methodologies, and governance procedures. This is further complicated by a lack of conformity in authentication techniques, outdated legacy systems, and a landscape of outsourcing agreements and vendor services. This often results in compromised consistency and a lack of responsiveness to constant digital flux.


We are all aware of the now seemingly constant hum of cyber failure stories in the news.

The consequences of this smorgasbord approach to globalizing technology, with clear areas of high concentration dependence, are seen in a significant uptick in sensitive data breaches, ransomware, and denial-of-service attacks.


The recent notorious Crowdstrike error, with severe disruption around the world, ironically stemmed from a routine cybersecurity update to prevent these very attacks, known as “endpoint security.” It was as if the whole world was being attacked by its own immune system. The Crowdstrike case highlights our interdependence. Many of us have also experienced the consequences of intentional security breaches directly with our own data being exposed. Malicious intent, whether from inside an organization’s digital walls or from without, can do a lot of damage very quickly. Let’s be honest; it’s far from a reassuring picture, especially given the backdrop of elevated cybercrime, now weaponized with more sophisticated AI.


The purpose of this article is to examine the origin, nature, and implications of cybersecurity breakdowns, focusing on specific categories of systemic risk, and to consider precisely how the increasing proliferation of AI, particularly with the advent of Generative AI systems since 2022, significantly raises the stakes.


We will also review the opportunities for risk mitigation that AI presents, and I will go on to explain how we need to reevaluate and tighten up our act, including solutions specifically for AI use internally if we are to avoid the prospect of AI exposing how unfit for purpose the current environment really is. As it stands, the overriding sense is that we are writing checks, and we cannot cash. No one wishes to be unduly alarmist, but we had better make sure that we are not setting fire to a lake of gasoline.


Since I personally find them useful, I have come up with a mnemonic - SLOPE - to frame these issues and to categorize the major areas of weakness where we see cybersecurity breaches appearing. SLOPE represents Stack, Laxity, Openness, Permission, and Error and is essentially a prism through which to examine this problem from different perspectives. In each category, we’ll explore the specific technical areas of vulnerability that are already waiting for us and consider how “uninvited” and “invited” AI change the risk equation. We’ll then analyze an approach to what we might call the “engagement layer” with Generative AI to uncover why Gen AI’s capabilities demand a tailored - and urgent - solution to governance than is currently in widespread use.


But before we get into the categories of weakness and fragility of the current landscape, we need to understand the types of cybersecurity breaches and failures that take place in order to develop a fuller picture of the current state of play. A warning before we start - this feels like going down the rabbit hole in this multi-directional arena. Bear with me as we travel through some shocking terrain. It may make the journey longer than you would wish it to be, and frankly, I share that view, but you have my assurance that it’s necessary to make sense of all of this. There’s a lot going on.


What is a cybersecurity breach?


Simply put, a cybersecurity breach is unauthorized access, disclosure, or manipulation of confidential information within a system, network, or organization. It can take various forms including email, text or social media phishing attacks, malware infiltrations, unauthorized access by insiders, or the exploitation of software vulnerabilities.


Data breaches include only those security breaches where someone gains unauthorized access to data. While we are focused on direct systems penetration, the physical theft of hard drives and USB flash drives is also a data breach, and their integrity may also be compromised by cyberattacks. A specific form of a data breach is known as “ransomware.” A ransomware attack may lock up a company's customer data and threaten to leak it unless the company pays a ransom.


Not all cybersecurity breaches or “cyberattacks” involve data. Malicious actors may have other purposes in mind. There may be errors, of course, but cybersecurity breaches can include, for example, cyber criminals engaging in Distributed Denial of Service (DDoS) attacks, in which the intention is to overwhelm an organization’s operating system to instigate a shutdown. A DDoS attack can immobilize a website, web application, cloud service, or any other online resource by overwhelming it with a bombardment of connection requests, fake data packets, or other nefarious messaging.


In summary, cybersecurity breaches can take various forms, and different techniques can be used to target specific areas of weakness. Cybercriminals are known to use IT failures, such as temporary system outages and known dislocations, to sneak into sensitive databases, compromise systems, target individuals, and plant pernicious software. The imagination seemingly has no limits - including when misdirected.


According to IBM’s Cost of a Data Breach 2024 report, the global average cost of a data breach to an organization is USD 4.88 million. Ransomware attacks are generally even more expensive on average. While organizations of every size and kind are vulnerable to breaches, and the costs to remediate can vary significantly, the reputational damage and legal ramifications that can arise from these attacks can be massive. For many smaller organizations, such attacks put them out of business entirely.


It feels like we would be here for millennia, analyzing the litany of cybersecurity breach cases even in the last couple of years, given their accelerating frequency. But we can dive into a few more to build up the picture. Another notorious case from a few years ago is a good example of the convergence of a number of issues: the aptly named WannaCry case. WannaCry is an example of crypto-ransomware, a type of malicious software used by cybercriminals to extort money. This ransomware attack of 2017 spread through computers operating Microsoft Windows. Use files were held hostage while a Bitcoin ransom was demanded for their return.


Had it not been for the continued use of outdated computer systems, poor procedures, and lack of education about the importance of software updates, the damage caused by this attack could have been avoided. Ultimately, computer systems in 150 countries were affected by this attack at an estimated aggregate cost of $4 billion! It is understood that the attackers used an “exploit” hack that was allegedly developed by the United States National Security Agency. Known as EternalBlue, this hack was made public by a group of hackers called the “Shadow Brokers” before the WannaCry attack was launched.


When it first occurred, many assumed that the WannaCry ransomware attack had spread through a phishing campaign (where spam emails with infected links or attachments lure users to download malware). Subsequently it was determined to have been caused by the EternalBlue hack using this exploit, allowing WannaCry to propagate and spread by installing a ‘backdoor’ on the compromised computers in order to execute the ransomware.


Much more recently, telecommunications giant AT&T disclosed just last month that the call and text records of 109 million cellular customers had been unlawfully downloaded from a third-party provider’s Snowflake’s cloud, highlighting how vendor relationships can amplify risks and suggesting the importance of thorough impact assessment reviews. The company allegedly paid hackers $370,000 to delete the records. Some have pondered the implications of this latest attack and sounded the alarm at the dangers of increasing centralization.


This trend seems almost impossible to keep up with. Only this week, another story appeared on my screen about the personal data of an eye-popping 2.9billion people, including full names, former and complete addresses going back 30 years, Social Security Numbers, and more, being stolen from a company misleadingly called National Public Data (NPD), by a cybercriminal group called USDoD. Surely, there must be some duplication of individuals here. This eye-watering new data breach was revealed as part of a class action lawsuit that was filed at the beginning of July 2024. The complaint sets out that the hackers attempted to sell the huge collection of personal data stolen from NPD on the dark web at the price of $3.5 million.


It has been suggested that NPD acquired much of this personal data by “scraping”, which is a legal yet unethical technique used by companies to collect data from websites and other sources online. But NPD is accused of scraping personally identifiable information of billions of people from non-public sources! I will come back to the practice of scraping and re-sale later in this piece.


Show me the stats!


I don’t know about you, but I regard all of this as some pretty scary stuff from the perspective of a sitting duck! But headlines are headlines. So, how do the actual numbers roll up? Although focused on the large enterprise market, 93% of almost 3000 Information Technology respondents confirmed in the 2024 Thales Global Breach Report that they believe security threats are increasing significantly in volume and severity. This is almost double the number of respondents from just last year. It’s a frightening trajectory, by any measure, and is suggestive of poor procedures and training in addition to the increasing weaponization of bad actors.


The report says that there was a 27% increase in companies that fell victim to a ransomware attack last year, with 8% paying the ransom. 43% of enterprises failed a compliance audit, with those companies ten times more likely to suffer a data breach. Human error has also been identified as the top cause of data breaches for the second year in a row.


Within the United States alone, the number of data breach victims has surpassed 1 billion for the first half of 2024 (with many falling victim to more than one breach), according to the Identity Theft Resource Center. Given the NPD case, that number may well be about to be left in the dust. But even without NPD, that is an increase of over 400% (!!) from the same time period last year: 1.07 billion victims compared to 182.65 million in the first half of 2023. The grim irony is that 2023 was the 20th anniversary year of the first breach notification law in the U.S. heralding from California in 2003, in which year the record was set for the greatest number of separate attacks.


To say that there is increasing evidence of more widespread data breach activity is like saying it’s windy with a tornado in our midst! There is also increasing evidence that AI bots and other techniques are being operationalized to wreak havoc. "Cybercriminals are focusing more on supply chains where attacks against a single, less well-defended vendor can give an attacker access to the data of many companies," James E. Lee, Chief Operating Officer of the Identity Theft Resource Center, said in 2023. ".....Generative artificial intelligence is making phishing attacks and social engineering schemes more effective and more successful."


When we connect the dots on these attacks, the precise statistics almost fade into irrelevance - it is the magnitude, trajectory, acceleration and AI being added into the cocktail that really put the frighteners on.


What's the law doing about it?


While there is still no comprehensive Federal data breach notification (nor privacy and cybersecurity) law in the U.S., all 50 U.S. states have their own data breach notification laws with onerous reporting requirements, penalties, and, in some cases, rights of private action for affected consumers. Clearly, however, the deterrence effect does not work for the bad actors. Nor is it incentivizing organizations to implement additional security and/or clean-up to prevent these attacks. The problem with the laws themselves is that requirements are post-event and also divergent. Despite the fact that a company facing a breach affecting customers and/or employees in multiple states must meet the terms of ALL the laws, there may even be a variation in the definition of what constitutes a breach in order to activate respective laws and regulatory interventions.


The Identity Theft Resource Center calls for a uniform definition of personally identifiable information that would trigger a mandatory data breach notice and also demands that state and federal agencies be notified within 24 hours and that potentially impacted individuals be notified within 72 hours of an incident. "The two-decade-old legislative and regulatory framework designed to alert consumers to breaches is broken," says Eva Velasquez, CEO of the Resource Center.


Two-thirds of states and some cities also have specific cybersecurity laws. As of writing, legislation specifically around AI is patchy, with no federal legislation and only 12 states addressing the regulation of AI risks and automated decision-making specifically (though data privacy provisions in all states also cover AI systems, at least in principle and in part).


The reality is that irrespective of this legislative backdrop, the prevention imperative of legislation is not making the grade. Cybercriminals are undeterred, while current practices and technological infrastructure are lagging seriously behind. With the globalization of AI use, this problem is only destined to become disastrously worse. More reliable work must be done at the operational level to avoid cybersecurity failures from occurring in the first instance, both through more effective regulation and, more importantly, tighter operational control.


Let’s get into the categories of weakness in SLOPE - Stack, Laxity, Openness, Permissions and Error. In my view, they demand immediate focus for us to have any shot at getting this problem under control and improving the status quo in the “new normal” of universal AI.


STACK


While all the areas of weakness are interrelated to some degree, the fragmented, duplicative, and outdated nature of a typical organization’s tech stack creates exposure. Older systems may lack modern security features and be harder to patch. This present inherent vulnerability is amplified by the burgeoning of nefarious AI deployments in the form of malicious bots and, now, even manipulated autonomous generative AI agents seeking attack opportunities.


There is some evidence of the recent rationalization of tech stacks, which requires conducting an inventory audit to determine business needs, identifying redundancy, insecurity, and incompatibility, and mapping to technology integrations. These are smart moves. Nevertheless, the statistics show that the absolute stack, on average, continues to enlarge. The average small business with 500 or fewer employees has 172 apps, including subscription SaaS products. Mid-market companies between 501 and 2,500 employees have 255 apps on average. And, as you might expect, it’s more than twice that for large enterprises, an average of 664 apps. If you consider that it may only require one corrupt - or corruptible - piece of technology to do harm, it makes sense to monitor and “prune the stack” where possible.


There is an inherent tension between consolidation and security vulnerability, especially with loose protocols. Some of the lessons to be drawn from previous attacks might suggest decentralization is the right method to limit exposure if something goes wrong. There are, however, clear inefficiencies offsetting this approach. Ultimately, it’s a trade-off. It does seem clear, however, that aggregating techniques such as the cloud or other hub-and-spoke applications demand more rigorous procedures to avoid the amplified negative consequences of failure. Given that such procedures are necessary for any event on the theory that it “only takes one tile to fall,” perhaps these should be the focus with ongoing training and evaluation while always keeping an eye on efficiency versus over-dependence.


The reality is that “uninvited” AI is becoming more sophisticated. This significantly lowers the cost of entry for cybercriminals. What may have at one time been disregarded as an acceptable risk entertained by a business is now no longer a tenable position. A further question that should be at the top of your mind is, even if you streamline the stack (perhaps using AI technology itself), what would the implications be if malicious AI is able to penetrate your firewall? Zero-day exploits have now become a recognized attack vector whereby flaws in software can be exploited before patches are developed. AI will only increase the “speed to market” of these attacks. In addition, is your deployed technology (including that of partners in your supply chain) being properly vetted? This is certainly complicated by the lack of transparency of Large Language Models, specifically if using Generative AI. A black box is bad enough, but a black box that is not rigorously tested both pre- and post-deployment is just plain madness.


LAXITY


This brings us more specifically to Laxity. While the totality of the stack creates more room for exposure with agentive AI being weaponized by bad actors, the intrinsic cyber insecurity of many applications also creates opportunity for manipulation. This fundamental laxity in technology should be distinguished from loose procedures of end-user operation - we’ll focus on those later. Laxity, as used here, refers to the innate vulnerability embedded within and at the seams between applications.


Now, before we go any further, it’s fair to say that if all this was easy, then it would already have been dispatched expeditiously by most, long before the prospect of AI bots and Generative AI models came along. It is also an organic and rapidly changing landscape. That does not extinguish the persistent problem, however. Like it or not, AI presents novel and serious challenges that need to be met by technology and cybersecurity departments to avoid serious consequences.


The first thing that must be addressed is the issue of data. Plain old information collected and presented. You cannot separate it from technology. Without it, there is nothing to talk about, and every organization has more of it than it would like in more places than it would like. The idea of data may sound simple, yet it is quite the opposite. After all, there is a whole industry and science built around it. Data must be found, collected, interpreted, redacted, sieved, combed for duplication and anomaly, categorized, formatted, and pooled. It’s a big job - but without this work, an organization’s own lifeblood can become its enemy if left poorly structured. All this is done before even thinking about using the data with new AI applications. Here, we’re just focused on cleaning up our existing act.


As indicated, with the proliferation of cloud architecture, data is stored in many locations and is increasing in volume. Indeed, studies suggest that storage costs are escalating at a vicious speed, with IT departments eyeing these budgets for a reduction in order to fund their AI projects. Some estimates are that storage costs are now consuming as much as 30-50% of IT budgets, with data volume more than doubling per annum across cloud and on-premises storage! Once again, it’s the direction of travel that matters – and it’s not good.


The diffusion of information across architectures also means that vendor management procedures must be included in data standards, or your efforts may be for naught. Data hygiene has become a must-have. Threat actors don't need to hit their targets directly. In supply chain attacks, hackers need only exploit vulnerabilities in the networks of your service providers to compromise your data integrity.


Just this past week, the National Institute of Standards and Technology released its voluntary code of conduct, the Generative Artificial Intelligence Profile. It identifies twelve key risks for developers and deployers, not all of which we need to cover now, but which include the following:


“Information Security: Lowered barriers for offensive cyber capabilities, including via automated discovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber operations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may compromise a system’s availability or the confidentiality or integrity of training data, code, or model weights.”, and


“Value Chain and Component Integration: Non-transparent or untraceable integration of upstream third-party components, including data that has been improperly obtained or not processed and cleaned due to increased automation from GAI; improper supplier vetting across the AI lifecycle; or other issues that diminish transparency or accountability for downstream users.”


As such, it is essential that your vendors undergo data breach readiness assessments in the context of the likelihood and sensitivity of data breach, and that they attest to such as a pre-condition for contractual arrangements. Vendors should also be examined for compliance certifications such as SOC-2.


The same third-party management issue is also true for another weak spot in Laxity - Application Programming Interfaces, otherwise known as APIs. Poorly secured APIs can provide attackers with entry points into your organization. An API is a bridge that allows a computer program to interact with another program in a way the programmer of the first program might expect. Vulnerabilities in APIs are a type of security flaw that can allow attackers to gain access to sensitive data and/or execute other malicious actions. They can occur when an API is badly designed, poorly implemented, or not adequately secured.


Good API development should include testing for bugs and clearing metrics for scalability and performance, such as how many client-server calls per second the API can handle, as well as other relevant factors. APIs with inconsistent or unreliable performance will generally be abandoned by developers. APIs used by third-party providers should at least be available for testing as you seek to control your environment. The real question for a business is how will you know you have a problem.


While it is not feasible to eliminate all potential leaks when using APIs, you can mitigate your exposure by using one of the many available AI anomaly detection tools to automatically detect sensitive data that may end up where it shouldn’t. The nature of the data to detect is your decision depending upon need, including personal info, addresses, facial images, keywords, and phrases associated with intellectual property, for example. In addition, AI algorithmic and behavioral tools are available to ingest employee and vendor communication patterns of behavior to detect suspicious activity. While nothing is guaranteed in the cyber arms race, by doing so, you may be able to thwart attacks before they get off the ground. The IBM Cost of Data Breach Report 2024, referred to earlier, found that security AI and automation also reduce the cost of an average breach by USD 1.76 million, or 40%.


Another method of directly breaching target systems is SQL injection, which takes advantage of weaknesses in the Structured Query Language (SQL) databases of unsecured websites. Hackers enter malicious code into user-facing fields, such as search bars and login windows. This code causes the database to divulge private data like credit card numbers or customers' personal details. You can be sure that cybercriminals will be using AI to seek access to these databases since this is the help they have been waiting for. Threat detection tools will help prevent this activity.


Other areas of Laxity include unpatched software with poor updating protocols, misconfigured cloud services, and Internet of Things devices such as home computers and drives with weak security. Laxity can also be seen in ill-considered over-dependence on one provider for multiple services. Arguably, this is what we see at work in the recent CDK Global car dealership ransomware attack. Once again, all these risks are turbo-charged and warrant re-examination in an AI-fueled world.


OPENNESS


With Openness, I am referring to a description of the technology landscape, including the new wave of Generative AI systems and the implications for the future.


Before we get into AI models, let’s talk about data scraping. As we know, the training data sets for Generative AI Foundation Models were populated with data scraped from the internet by web crawlers. This involved scraping data from all areas of the web, some of which are considered off-limits, and this has caused legal intellectual property disputes. These are serious issues but not the focus of our attention here.


I wish to focus my attention, more specifically, on the process of data scraping by third-party brokers, enabled by AI and supported by the permissibility of third-party cookies, with the express purpose of the re-sale of data for financial gain. The lack of regulation in this area - the Openness - is creating perverse incentives that militate directly against any sensible efforts to protect personal information. Data scrapers are now advertising products that not only capture publicly available information on individuals but also seek to scrape data at scale with AI bots from private domains. They even advertise their ability to “get past captcha blockers” and to circumvent IP blocking as badges of honor.


In my view, this unacceptable business practice needs to be prohibited. Public information is one thing - private is quite another. Remember NPD? There is also the irony that all data available to AI systems in training has now empowered, and is being used by, a few to steal the private data of many - and all for their profit. I don’t know about you, but it does not pass my smell test.


With that out of the way, nevertheless, AI detection and containment procedures need to be worthy of their adversaries in this information-scraping war. The agility of AI-driven scraping bots is enhanced by their ability to mimic human behavior patterns. They can simulate human-like browsing, clicking, scrolling, and interaction, making it difficult for security systems to distinguish between legitimate use and nefarious activity. The dynamic behavior of these bots means that they can seamlessly navigate through websites, follow links, access hidden data, and extract information from various sources. AI detection tools are essential but are engaged in an endless game of cat-and-mouse.


Another area of Openness relates more specifically to AI models themselves. While all AI Models have susceptibilities and risks associated with their lack of transparency, and questions will persist around AI Safety across the entire market, in certain parts of the open-source market, there is additional potential for harm due to the unregulated environment. The strategic reasoning behind the maintenance of the open-source market is explained by, among other things, the nascent nature of this technology and important geopolitical concerns. These are outside of this review. What it does mean, however, is that, once again, organizations need to be on their guard about the particular threats posed to some of these models and, hence, those who use them.


While an extensive subject, Generative AI is the subject of multiple threats, both innately at the model level itself and in its use. First, at the model level. Multiple studies have shown that all models are exposed to various techniques designed by bad actors to change the behavior of the language models themselves. Generative models now come in various sizes and degrees of sophistication and hail from many different groups, but they are all potentially subject to these risks. However, those from the open-source environment and those that are not otherwise under proprietary control, like Meta’s Llama products, are inevitably more vulnerable to these attacks.


Once again, NIST offers background information in a report on these risks, along with some mitigation strategies. The report considers the four major types of attacks on these systems: evasion (symbolic tampering), data poisoning (corrupting training data), privacy (post-deployment reverse engineering), and abuse attacks (secondary source data contamination). It also classifies them according to multiple criteria, such as the attacker’s objectives, capabilities, and knowledge. In addition, there is also the practice of what is known as “jailbreaking,” which is the practice of creating other conditions designed to disrupt the model’s guardrails of behavior, such as those against revealing dangerous, toxic, and sensitive information.


It’s become somewhat of a game in certain quarters to try to jailbreak AI models, getting them to say or do bad things. But while some might find it amusing to get ChatGPT to make controversial statements, it’s less entertaining if your customer support chatbot, branded with your name and logo, spews bile or gives away the store. However, this practice can be especially dangerous for AI systems that have access to tools, as we have seen. Next stop: an AI finds a way to execute an SQL query that corrupts your data.


To combat all of this, guardrails should be placed on your system so that no harmful actions can be automatically executed. For example, no SQL queries should be possible that can insert, delete, or update sensitive data without human approval. The downside of this added security, of course, is that it can slow down your system. A small price to pay?


The main takeaway here is that there are multiple attack vectors on models, both before and after deployment. “Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author of the report Alina Oprea, a professor at Northeastern University. NIST computer scientist Apostol Vassilev, another of the report’s authors, goes on to say, “Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences.”


Several reports indicate that there are many open-source models with back-door vulnerabilities that may expose them to attack. This is particularly concerning when you consider how these models may be used in conjunction with other more sophisticated attacks, using the models as weapons. Researchers have discovered a new security threat for vision transformers, a popular AI model. The attack, named SWARM, uses a "switch token" to secretly activate backdoor behavior in a model, making it highly stealthy and dangerous for users.


Indeed, HuggingFace, the open-source data science and machine learning platform for AI developers, was determined by one group to have in excess of 100 malicious AI and Machine Learning models accessible. Now, that does not translate to them being inside your firewall. But they are out there. What if a similar model - or its associated stealth features - makes itself more widely available without detection? What if it is then used to access accounts, steal data, and make promises to customers that result in legal consequences? These are purely rhetorical questions - all these events have already occurred.


Stemming from this concept of Openness, thousands of servers storing AI workloads and network credentials have been hacked in an ongoing attack campaign targeting a reported vulnerability in Ray, a computing framework used by OpenAI, Uber, and Amazon. The attacks, which have been active for many months since September 2023, involved tampering with AI models. They resulted in the compromise of network credentials, allowing access to internal networks and databases and tokens for accessing accounts on platforms including OpenAI, Hugging Face, Stripe, and Azure. Besides corrupting models and stealing credentials, attackers installed cryptocurrency miners on compromised infrastructure, which typically provides huge amounts of computing power. Attackers also installed reverse shells, which are text-based interfaces for remotely controlling servers.


Lastly, there is also an increasing tendency in the market towards smaller models, given their improvement in efficiency and lower energy costs. This makes it increasingly likely that there will be much more to track.


So, what can we do in response to all of this? Well, there are various services offered that specialize in direct adversarial testing of models -red teaming- with the purpose of exposing their vulnerabilities. These can help to fortify models and reduce developer and deployer risks. Since its first publication in 2014, one of the most important cybersecurity documents has been the (NIST) Cybersecurity Framework. This framework helps organizations identify, protect against, detect, respond to, and recover from cybersecurity risks. On the content side, synthetic (hypothetical) data may also help in some circumstances to train models while minimizing the risk of actual privacy harm. Algorithmic assessment services are also an essential component before and during any deployment.



However, the first step in dealing with any problem is to understand it. This is not a fix-it and forget-it situation. The problems identified persist all through the supply chain and may well not be caught at either the developer or the independent testing levels. We simply cannot be certain that all risks have been identified, even with exhaustive testing. The problems could well be lying in wait, playing the long game. And tomorrow is another day with a new set of conditions. Perhaps the last word on this should come from Apostol Vassilev, who sums it up perfectly, “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”



PERMISSIONS



Shifting gears to Permissions, maybe we should take stock of where we are. We've looked at the backdrop and the stats, and we’ve considered the stack and the open terrain. Let’s step down a few rungs of the ladder into the action of deployment and the kinds of issues that surface on a day-to-day use basis. The reason for Permissions as a category is that it draws attention to the identity of who has access to which systems and how they should be using them. This is not a new concept in cybersecurity. In fact, it’s fundamental to the compartmentalization of data, access controls, and business workflow.



My view is that the area of Permissions requires a re-evaluation, particularly in the context of Generative AI, but also more generally, given that AI attacks are becoming more prevalent and sophisticated. Who has access to what information? For what purposes and with what frequency? What are the safeguards in place? In truth, some of the standard AI attack vectors at the individual level, such as social engineering, which is the act of psychologically manipulating people into unwittingly compromising their or an organization’s information security, or other scams, such as deep fakes or impersonations, are threats that face every individual within an organization. That said, all organizations should re-assess the roles of employees and other stakeholders in the light of new AI capabilities to minimize successful attacks.



We’ve all had experience with fake emails and all; if we’re honest, at times, we have wondered whether some of them are legitimate or not, even possibly responding. According to IBM, “phishing, the most common type of social engineering attack, is also the most common data breach attack vector, accounting for 16% of breaches. Phishing scams use fraudulent emails, text messages, social media content, or websites to trick users into sharing credentials or downloading malware”. Spear-fishing is a targeted phishing attack on a specific individual. The target is usually someone with privileged access to sensitive data or special authority that the scammer can exploit, such as a finance manager who has the authority to move money from company accounts.



The MGM case in 2023 is a case in point, which reportedly cost the organization $100m dollars despite MGM’s refusal to pay any ransom. The attackers reportedly sent messages to targeted employees claiming they needed to re-authenticate their identities or update account information They then installed multiple versions of remote monitoring and management tools, providing them backup access to a system if they initially get caught. The case clearly highlights that it makes sense to confirm that procedures and access controls remain fit for purpose, perhaps with a working assumption that an AI bot has already infiltrated and not yet been detected.


People from all walks of life, especially the elderly, suffer from this pernicious activity. There are scammers everywhere. Apparently, there is even a product sold on the dark web called FraudGPT, which allows criminals to make content to facilitate a range of frauds, including creating bank-related phishing emails, or to custom-make scam web pages designed to steal personal information (why am I not surprised?). But, while individuals do not have sufficient interest to protect themselves,it seems, organizations have very different risk profiles and ethical responsibilities.



There is also increasing sophistication in AI voice cloning and deep fakes. Just when you thought you’d seen enough, this seems even more alarming. I dread to think what kind of exploitation we could be in for with these techniques. A finance worker at a multinational firm was recently tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police. Investing in deepfake detection software that utilizes machine learning algorithms to analyze video and audio for inconsistencies now seems essential. Once again, leaning into the latest technological safeguards is still not a guarantee of success.



Lastly, with respect to the use of AI models internally, there are other considerations around Permissions. As is well documented, fabricated output from models (hallucinations), inaccuracies, and even biased and/or toxic output are all possibilities that cannot be discounted. It is important to have a multi-level Permissions structure to ensure human review and possibly cross-validation to maintain output quality and to prevent the release of inappropriate content.



However, Permissions also impact the prompt request process, i.e. when asking models for their feedback. It is important for organizations to ensure that assigned individuals are following defined procedures in order to minimize the success of any interposing “prompt injection attacks” originating from bad actors seeking access. Trained users and monitors will be better prepared to prevent resulting harms. This is above and beyond the basic yet fundamental issue of not training your AI models on any sensitive data and not allowing them, by reference to training guardrails, to retrieve sensitive data.

Reportedly, OpenAI has developed a new technique called "instruction hierarchy" for its GPT-4o Mini model to prevent users from tricking the AI with unauthorized prompts, giving precedence to the developer's original instructions.



ERRORS



Let’s face it, even given the best will in the world, mistakes happen. Errors can surface in many areas, including defective access controls, failures to update, or bad coding. But the biggest common denominator culprit is garden variety human error. No system is failsafe. It’s important to think about systems as the guardian at the gate to catch users when they fall. This is where Crowdstrike went wrong. There wasn’t malintent - just a lack of routine procedures to catch the errors in transit (with devastating consequences). An ounce of prevention is worth a pound of cure, as they say.



One of the most reported incidents was when Samsung employees accidentally put Samsung’s proprietary information into ChatGPT, leaking the company’s secrets. It’s unclear how Samsung discovered this leak, nor how the leaked information was used against Samsung. However, the information is practically irretrievable from the black box, and the incident was considered serious enough for Samsung to ban the use of ChatGPT in May 2023. Established policy and prompt request procedures can help to minimize these kinds of events, as well as others. Prompt templates and review procedures with sensitive information can also help with obtaining accurate, reliable responses from models. They can also help reduce the risk of model hallucination and prompt hacking, including jailbreaking and the release of sensitive data.



Finally, let’s talk about passwords. Now, you may say that these kinds of security procedures are not related to Errors - but you’d be wrong. Authentication procedures are an identified nightmare when it comes to cybersecurity. It's because the average user (and I include myself) is so lazy when it comes to regularly updating passwords (who’s got the time?) that systems need to accommodate these errors and come up with better mousetraps. Research carried out by My1Login found that even among employees who have received cybersecurity training, 85% still reused passwords, and training had no effect at all on users choosing to write their passwords down. This has to be an urgent issue, given that an AI bot can spend from here to eternity engaging in educated guesswork at ridiculous speed without sleep!



Single sign-on (SSO) and Passwordless Authentication procedures can replace passwords with secure tokens using open security standards such as SAML and OIDC, or some can incorporate Enterprise Password Management to automatically generate strong, unique passwords which are undisclosed to users, and automatically enter them into login forms. This prevents many of the most common methods attackers may use to gain unauthorized access to applications, by taking the issue out of the hands of users. User-generated passwords are an Error waiting to happen.



Uninvited vs. Invited AI



Traditional AI has been around for quite some time - and as we’ve seen, there are AI supported tools that are already becoming essential for effective cybersecurity. While there are a number of frameworks out there for approaching cybersecurity threats, such as MITRE ATT$CK and STRIDE, in addition to those at NIST, the forces ranged against us do not stand still. As technology advances, so do the modus operandi of bad actors using uninvited AI. AI's ability to process vast amounts of data, learn from patterns, adapt its behavior and play the long game, has provided cybercriminals with a potent instrument to orchestrate complex attacks from crafting convincing phishing emails to optimizing malware distribution. AI enables adversaries to streamline operations, evade detection, and exploit vulnerabilities - such as APIs and poor data hygiene - with heightened precision.



This growing role of AI in cyberattacks underscores the urgent need for robust and adaptive security strategies that can counteract these emerging threats. The adoption of AI-powered bots for web scraping attacks introduces a paradigm shift in the capabilities of cybercriminals. These intelligent agents bring a slew of advantages that empower malicious actors in unprecedented ways. Effectively addressing and preventing security breaches, therefore, requires a comprehensive and proactive approach to cybersecurity, encompassing more robust technical measures, employee training, and continuous monitoring.



The last element to talk about in all of this isInvitedAI, i.e. those systems you have chosen or are proposing to deploy. Understanding the generally maladapted backdrop is critical context information for business leaders, helping them to develop thorough and updated action plans, including with these implementations. Assuming these other areas are being addressed, however, when it comes to deploying the new wave of LLM and other Generative AI systems, it is also important to have specific policies in place and to develop a permission-based architecture with tight procedures, to ensure that these systems can be utilized with confidence. Simply put, because these systems are generative, which means that they require additional information security, review and operational tracking. Once evaluated for selection, control mechanisms enable you to optimize Generative AI to particular use cases, provided, of course, your data is suitably clean, structured and de-identified with secure anonymization procedures.



You might as well assume that you have invited Generative AI into your operation by default if you do not have appropriate procedures in place to marshal it appropriately. This is not A N Other self-standing technology. Anyone can create an account with Generative AI. It is everywhere, with everyone, all at once. Whether authorized or unauthorized, it will be pumping (possibly incorrect and/or toxic) information into your organization at a rate you will be unable to keep up with. As such, it represents an entirely different risk management proposition. As such, what you need is a platform and a plan backed up by policy. If you have no permissions structure on AI use within your organization, you will have effectively given a nod to uninvited AI to “come on in”. The statistics on unauthorized use within organizations are staggering - and AI threat detection tools alone will not get the job done.



Personally, I also think the days of adding new apps to the stack without serious consideration are over. AI is rewriting the rules on how things work and what is possible - all with associated risks. I am not known as a merchant of the apocalypse. In fact, I am disposed to be quite the opposite. But any sane individual would need to be either a willful denier or an ostrich to ignore this situation. Maybe we should have all thought about this prospect collectively before even entertaining AI - but it’s here now. If you have come this far in this piece, I hope you found it useful. Thank you for your attention and determination, in equal measure. This is a lot to digest. First, we rest - but not for long. None of us can afford to languish on the slippery SLOPE!

R. Scott Jones

About R. Scott Jones

I am a Partner in Generative Consulting, an attorney and CEO of Veritai. I am a frequent writer on matters relating to Generative AI and its successful deployment, both from a user perspective and that of the wider community.

DISCLAIMER

The content here is for informational purposes only and does not constitute tax, business, legal nor investment advice. Protect your interests and consult your own advisors as necessary.