
The EU AI Act - Turning AI Literacy into Compliance

19 min read

This week, the European Union’s Artificial Intelligence Act (the AI Act) marks a significant step in regulating AI: it is the first legislation to establish a comprehensive framework for ensuring ethical AI use while safeguarding the fundamental rights of EU residents. Hailed as landmark legislation, the Act is expected to have a far-reaching impact on AI governance worldwide and is already shaping it. Published in the Official Journal on 12 July 2024 and effective from 1 August 2024, it introduces strict rules on the sale and use of specific AI systems while prohibiting others entirely. Its provisions come into operation gradually over the next 6 to 36 months.


The new Act is part of a broader EU digital framework regulating different aspects of the digital economy, which includes the General Data Protection Regulation (GDPR), the Digital Services Act, and the Digital Markets Act. As such, the AI Act does not address data protection, online platforms, or content moderation, which are already covered by existing provisions. As specific operational standards for the new AI Act are developed over the next twelve to eighteen months by a working group the EU has established for this purpose, one of the challenges will be to determine the AI Act’s interplay with existing legislation.


The AI Act applies to providers, deployers, and importers of AI systems within the EU, but it also contains extra-territorial provisions: it applies to organizations located outside the EU that do business within it and could, therefore, impact individuals residing in the EU. The Act is comprehensive in application and adopts a risk-based approach. At the top of the scale sit prohibited, unacceptable-risk AI systems, whose use would amount to a serious violation of human rights. These include systems used for social scoring by governments and those with a significant potential for manipulation, whether through subliminal messaging and stimuli or by exploiting vulnerabilities such as socio-economic status, disability, or age.


The AI Act’s provisions impose detailed and nuanced obligations on organizations in order to balance commercial enterprise with ethics, rights, and security in the arena of high-risk AI systems. These obligations distinguish between providers of AI systems and deployers of those systems within their organizations. High-risk systems include those used in critical infrastructure, biometric identification, education and vocational training, employment, law enforcement, migration, and more.


By way of example, if a company develops an AI screening tool that assists in the interview and hiring process (e.g., by summarizing resumes and scoring applicants), it would generally be a provider of a high-risk system under the AI Act. Any company that licensed the tool and used it for recruitment would generally be a deployer. It is worth noting that an organization may be deemed both a provider and a deployer under this legislation.


Aside from risk assessment, a further concept appears in the AI Act, one the framers consider equally important to the functioning of the entire AI ecosystem: “AI Literacy.” The Literacy provisions of the AI Act come into effect on 2 February 2025 (i.e., in six months, as of writing), in the Act’s early stages. Requiring covered organizations to ensure that relevant personnel are familiar with and trained in the fundamental principles of legislation is not an unprecedented approach. Even so, the existence of the AI Literacy provisions is a sure sign that the European Commission wants to hit the ground running.


Many resources are available that catalog and review the specific provisions of the Act. The purpose of this article is not to examine these in detail but to review the Literacy provisions in light of some of the substantive provisions regulating high-risk systems (the rules for which go into effect in 24 months). Given employment’s horizontal relevance to virtually all organizations, we will focus on the use of AI systems in that specific area and its related practices.


The reason for this approach is to illustrate how these Literacy provisions cannot be properly understood without reference to the practical dimension of implementing the AI Act. Many of the standards and regulations that will underpin it are currently in development. In this author’s view, protocols, technology tools, and solutions in the market will all be required to turn literacy into compliance with the AI Act. Many of these tools are not yet widely available, and as such, literacy in this context will need to be an ongoing process. After all, literacy is merely a concept in service of compliance. If the Act is to be successful in its mission, the market will need to develop operational teeth.


All that said, literacy begins with initial familiarity and consciousness. So, let’s look at the definitions used in the literacy provisions, why they are essential, and what they signify for the future deployment of AI within organizations in the employment arena and, by extrapolation, more widely.


AI Literacy - Definition


Chapter 1, Article 4 of the AI Act states as follows:

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”


Article 3(56) further defines the term:

“‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.”


The upshot is that all those involved in supplying and using these systems are required to have sufficient training to make informed decisions about them and, depending upon their role, to be aware of the technical aspects, opportunities, and risks associated with their use. Clearly, the EU wishes to avoid some of the mistakes made in the past, when technologies appeared in society without sufficient awareness or training for people to use them safely and effectively. It is to be applauded for that.


However, as with all these things, the devil is in the details for those who are required to comply. Certainly, policy development and training will be required, but language is by its nature imprecise. In Article 4, what is meant by “to their best extent”? What do the “measures” to be taken look like in practice? How are all these mechanisms to be varied depending upon the “context the AI systems are to be used in”?


Action Steps


It is common for legislation to put the onus on those required to comply. Here’s the law - you figure it out! Although the EU is currently developing standards that should bring more operational clarity, there will no doubt be significant questions in the meantime from business leaders, HR training groups, and Learning & Development departments. At this stage, perhaps the best definitive advice is to give it your best shot on certain key action steps whilst thinking about the context you are operating in.


Training should be both organization-wide and role-specific. Soft skills like fact-checking and critical thinking need to be emphasized. In certain instances, however, more detailed and robust training programs that develop a deeper understanding should be considered, especially where AI solutions are integrated into employee workflows as a critical function.


Let’s make it a little more concrete. Here are five key action steps that will be required (an illustrative sketch of how they might be tracked follows the list):



  • Assess Current AI Literacy Levels: Identify the gaps and plan for upskilling
  • Establish Policies and Procedures: Roll them out and monitor their efficacy
  • Develop and Implement AI Literacy Training: Customize the approach - technical, ethical, and evolving
  • Collaborate with Industry and Regulatory Bodies: Engage and adopt best practices
  • Documentation and Reporting: Do more than act - keep records that explain what you do
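
To make the documentation and assessment steps a little more tangible, here is a minimal sketch, in Python, of how a deployer might record literacy training against roles and surface the remaining gaps. The Act prescribes no particular format or tooling, so every class, field, and role name below is illustrative only.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingRecord:
    """One completed AI literacy training event for a staff member."""
    person: str
    role: str              # e.g. "recruiter", "HR systems admin"
    module: str            # e.g. "AI Act fundamentals", "bias review"
    completed_on: date
    assessed_level: str    # e.g. "basic", "role-specific", "expert"

@dataclass
class LiteracyRegister:
    """Illustrative register supporting the Documentation and Reporting step."""
    records: list[TrainingRecord] = field(default_factory=list)

    def log(self, record: TrainingRecord) -> None:
        self.records.append(record)

    def gaps(self, required_roles: set[str]) -> set[str]:
        """Roles with no recorded training - candidates for upskilling (step 1)."""
        trained = {r.role for r in self.records}
        return required_roles - trained

# Usage: identify which roles still need training before 2 February 2025.
register = LiteracyRegister()
register.log(TrainingRecord("A. Jones", "recruiter", "AI Act fundamentals",
                            date(2024, 9, 1), "basic"))
print(register.gaps({"recruiter", "HR systems admin"}))  # {'HR systems admin'}
```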



But yet more is needed...


The foundational steps suggested or implied by the AI Act are clearly necessary, but Literacy also needs to be adequately connected to the substantive requirements under the Act, particularly in the deployment of high-risk AI systems. This is where the “how” actually gets done. Where the rubber meets the road.


Let’s first consider the requirements in the employment arena. As discussed, employment practices such as hiring, setting terms, and managing workers are considered high-risk under the Act, which specifically calls for safeguards against unchecked automated decision-making, bias, and discrimination. The legislation lists examples in this context, including the placement of targeted job advertisements, the analysis and filtering of employment applications, the evaluation of candidates, and ongoing employee management.


AI systems intended to be used to make decisions affecting the terms of work-related relationships, promotion and termination of work-related contractual relationships, the allocation of tasks based on individual behavior or personal traits or characteristics, and the monitoring and evaluation of performance and behavior of personnel, will all likely be considered high-risk AI systems.


The legislators have said that such AI systems “may appreciably impact future career prospects, livelihoods of these persons and workers’ rights” and that there is a risk that their use could “perpetuate historical patterns of discrimination” or undermine “fundamental rights to data protection and privacy.”


Because they are considered high-risk AI systems, all of these activities will be subject to strict requirements, including data quality, transparency, record-keeping, human oversight, and documentation of the AI system. Additionally, conformity assessments will need to be completed before placing such systems on the EU market.
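
Read together, those headline requirements resemble a pre-market gate. Below is one way a provider might track them internally, as a hedged sketch: the check names and the pass/fail structure are this author’s assumptions, not a format prescribed by the Act.

```python
# Illustrative pre-market gate for a high-risk AI system. The check names
# mirror the Act's headline requirements; the structure is an assumption.
REQUIRED_CHECKS = [
    "data_quality_reviewed",
    "transparency_information_available",
    "record_keeping_enabled",
    "human_oversight_assigned",
    "technical_documentation_complete",
    "conformity_assessment_passed",
]

def ready_for_eu_market(checklist: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return overall readiness plus any outstanding items."""
    missing = [c for c in REQUIRED_CHECKS if not checklist.get(c, False)]
    return (not missing, missing)

ok, outstanding = ready_for_eu_market({
    "data_quality_reviewed": True,
    "conformity_assessment_passed": False,
})
print(ok, outstanding)  # False, with every unfinished check listed
```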


Now, let’s connect the AI Literacy requirements specifically to the obligations related to high-risk deployment in the employment arena.


Recital 91 of the AI Act, in excerpt, states the following:

“Deployers should in particular take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. Furthermore, deployers should ensure that the persons assigned to implement the instructions for use and human oversight as set out in this Regulation have the necessary competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks.”


This is where the vital convergence between legislative regulation and self-regulation occurs. I have argued previously that legal regulation and self-regulation are not only essential for successful AI deployment but are, in fact, the key to unlocking AI’s reward potential. According to a November 2023 Thomson Reuters C-Suite Survey, 93% of professionals in the legal, tax & accounting, corporate, and government spaces believe AI needs regulation.


Here is a perfect illustration of what I mean. Meeting the fairness standards of the Act starts with selecting appropriate AI systems, built on quality data, that have already undergone conformity assessment at the model-provider level. Algorithmic assessment is required to ensure fairness of output. The deploying organization then needs to avoid automated decision-making without human oversight: in our example, human review will be required to ensure that employment-related decisions are free from discrimination. To achieve this, the deployer needs effective internal mechanisms that put the requirement into practice.
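
To suggest what such an internal mechanism might look like, here is a small sketch in which the model’s score is recorded but can never become the decision of record until a named human reviewer signs off. All identifiers are hypothetical; nothing here comes from the Act itself.

```python
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate_id: str
    model_score: float                  # output of the hypothetical screening tool
    human_reviewed: bool = False
    human_decision: str | None = None   # e.g. "advance" or "reject"

def record_decision(result: ScreeningResult, reviewer: str, decision: str) -> dict:
    """A human reviewer finalizes the decision; the raw model score is
    never the decision of record on its own."""
    result.human_reviewed = True
    result.human_decision = decision
    # Record-keeping: capture who decided, what they decided, and what
    # the model suggested, so the decision can be audited later.
    return {
        "candidate": result.candidate_id,
        "model_score": result.model_score,
        "reviewer": reviewer,
        "decision": decision,
    }

def final_decision(result: ScreeningResult) -> str:
    # Guard against fully automated decision-making.
    if not result.human_reviewed:
        raise RuntimeError("no employment decision without human oversight")
    return result.human_decision
```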


Let’s look at some of the language used more closely. Recital 91 requires using systems “in accordance with the instructions of use,” but it is silent as to what those instructions actually are; this will no doubt differ depending upon the AI system being used. According to Article 26, deployers will be required to use high-risk AI systems in accordance with the instructions of use issued by the providers, comply with any applicable sectoral legislation, ensure that the input data is appropriate, monitor the high-risk AI system’s compliance with those instructions, and suspend and report its use when they identify any serious incidents of harm (e.g., bias or a personal data breach).
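
Operationally, a deployer might wire these duties into a simple monitoring hook along the following lines. The incident categories and the suspend-then-report flow are assumptions made for this sketch; the Act leaves the mechanics to the deployer.

```python
# Illustrative monitoring hook for the Article 26-style deployer duties.
SERIOUS_INCIDENTS = {"suspected_bias", "personal_data_breach"}  # assumed labels

class DeployedSystem:
    def __init__(self, name: str):
        self.name = name
        self.suspended = False
        self.incident_log: list[str] = []

    def report_incident(self, incident: str) -> None:
        self.incident_log.append(incident)   # record-keeping duty
        if incident in SERIOUS_INCIDENTS:
            self.suspended = True            # halt use pending review
            print(f"{self.name}: suspended; notify provider/authority about {incident}")

tool = DeployedSystem("resume-screener")
tool.report_incident("suspected_bias")
assert tool.suspended
```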


The scheme seems clear in principle. Nevertheless, absent more detail, these admonitions leave open the question of what degree of latitude and interpretation the instructions permit in implementation. Whether fixed or flexible, conformity with the Act will be judged on the instructions’ efficacy. In addition, operational practice at the deployer level will need to conform consistently to those instructions - a separate but equally important issue in effective, practical implementation. Lastly, in addition to “human oversight,” detailed “record-keeping” is required so that any questions that arise can be answered. Once again, the details lower down the chain matter.


Here are a few questions to consider as a deployer in the interaction between the Literacy provisions and compliant implementation:



  • How will you select the models for your high-risk AI systems?
  • What explainability criteria will you apply in your selection process?
  • What form will the instructions you give need to take within your organization?
  • How will you secure permitted use while preventing unauthorized use?
  • How will you integrate human oversight and record-keeping into the workflow?
  • What internal technology solutions will you require to manage the process?



Penalties


A word about penalties as they relate to AI literacy. While there are no specific fines for failing to ensure AI literacy, non-compliance with the literacy provisions will impact the extent of enforcement measures taken against organizations concerning other AI Act infringements. Supplying incorrect, incomplete, or misleading information to notified bodies or national authorities may result in fines up to €7,500,000 or 1% of total worldwide annual turnover for the preceding financial year, whichever is higher. Non-compliance is, therefore, potentially very costly.
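
The “whichever is higher” formula is simple to work through. A one-function illustration (the turnover figures are invented):

```python
# Fine cap for supplying incorrect, incomplete, or misleading information:
# the higher of EUR 7,500,000 or 1% of total worldwide annual turnover.
def fine_cap(annual_turnover_eur: float) -> float:
    return max(7_500_000, 0.01 * annual_turnover_eur)

print(fine_cap(2_000_000_000))  # EUR 2bn turnover -> cap of EUR 20,000,000
print(fine_cap(300_000_000))    # EUR 300m turnover -> flat cap of EUR 7,500,000
```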


US Initiatives


It’s fair to say that the US does not take the same comprehensive approach as the EU to laws governing technology and data. The US does not yet have federal legislation regulating AI systems within organizations, although an increasing number of individual states are taking action themselves. In its broader conception, however, AI literacy is gaining some traction.


In December 2023, in recognition of AI's growing impact on education and society more broadly, two members of the US Congress introduced the Artificial Intelligence Literacy Act. This bill recognizes that as AI becomes increasingly ubiquitous, AI literacy will become as important a concept as digital literacy.


Effective AI literacy initiatives encompass technical training and comprehensive education about the potential benefits and risks associated with AI technologies. They should engage stakeholders and communities at all stages, with specific outreach to communities disproportionately impacted by the digital divide, including minority and rural communities.


Although the bill is still in a subcommittee and has not been passed, it both defines and signals the importance of AI literacy and acknowledges the impact of the digital divide. The bill may help popularize the term AI literacy and solidify its place in the public consciousness in the USA.


It's a wrap - for now...


AI literacy supports the greater mission of compliance. It reinforces accountability by cascading knowledge throughout deploying organizations.


The AI Act represents a comprehensive effort by the EU to regulate AI technologies and to protect fundamental rights. Ensuring AI literacy is about more than avoiding penalties. It fosters trust while promoting ethical AI practices.


When taking proactive steps to assess, educate, and monitor AI literacy within your organization, consider the operational questions. These will become increasingly important—and likely the defining difference—as you navigate these regulations effectively and maintain a competitive edge in a rapidly evolving landscape. Execution will be critical as you turn literacy into compliance.

R. Scott Jones

About R. Scott Jones

I am a Partner in Generative Consulting, an attorney and CEO of Veritai. I am a frequent writer on matters relating to Generative AI and its successful deployment, both from a user perspective and that of the wider community.

DISCLAIMER

The content here is for informational purposes only and does not constitute tax, business, legal, or investment advice. Protect your interests and consult your own advisors as necessary.