EU AI Act and AI Literacy
The EU Artificial Intelligence Act (“EU AI Act”), one of the world’s first comprehensive legal frameworks for AI, entered into force on 1 August 2024. It aims to promote the uptake of, and innovation in, AI across the EU while ensuring adequate protection of fundamental rights.
The EU AI Act is being implemented on a phased basis. The provisions prohibiting certain AI practices and the AI literacy requirements began to apply on 2 February 2025, and the remaining provisions will take effect over the next two years.
AI Classification under the EU AI Act
The EU AI Act distinguishes between two AI concepts: “AI systems” and “general-purpose AI (GPAI) models”.
An “AI system” is a machine-based system that operates with varying levels of autonomy and may adapt after deployment. It infers from the inputs it receives how to generate outputs, such as predictions, recommendations or decisions, which can influence physical or virtual environments. AI systems are classified according to the risks they present to fundamental rights into four categories: prohibited, high-risk, limited-risk and minimal-risk. The obligations on organisations vary depending on the classification of the AI tool they provide or deploy in the EU.
A GPAI model is an AI model, typically trained on large datasets using self-supervision, that can perform a broad range of distinct tasks across many applications. GPAI models often form the basis of, or are integrated into, other AI systems. The definition excludes models used solely for research, development or prototyping activities before release to the market. The large language models underlying tools such as ChatGPT or Microsoft 365 Copilot are examples of GPAI models.
AI Literacy Obligations
Article 4 of the EU AI Act requires organisations to ensure that any staff or other persons using AI systems on their behalf have a sufficient level of “AI literacy”, taking into account their technical knowledge, experience, education and training, and the context in which the systems are to be used.
“AI literacy” encompasses the skills, knowledge and understanding that enable those who develop, use, or are impacted by AI systems to make well-informed decisions about their deployment. It also involves recognising both the potential benefits and the risks, including any harm that AI might present, while taking into account each individual’s rights and responsibilities under the EU AI Act.
The goal of AI literacy is to help organisations harness the advantages of AI while ensuring that fundamental rights, safety, and democratic values are protected. By improving AI literacy, employees gain the knowledge they need to make well-informed decisions about AI usage, support compliance efforts, and uphold the proper application of the EU AI Act.
How Can Organisations Become AI Literate?
Article 4 does not set out the specific steps organisations must take to comply with the AI literacy obligations. This affords companies flexibility, but it can also make it difficult for organisations to determine exactly what is required of them. At a minimum, Article 4 requires organisations to ensure a general understanding of AI within the organisation and to shape their AI literacy measures around the organisation’s role and the risks associated with the AI systems being used.
To ensure compliance with the AI literacy requirements, organisations should consider taking the following steps:
- Evaluate AI Use in the Organisation:
This evaluation should identify all AI tools currently in use across the organisation, as well as any AI systems that may become relevant in the future.
- Assess the Current Level of AI Literacy in the Organisation:
Organisations should evaluate the existing level of AI literacy among their staff. While there may be a general awareness of AI throughout the organisation, this may not be adequate to meet the AI literacy requirements. The assessment should identify gaps in knowledge and highlight where further training is necessary.
- Design and Implement Tailored AI Training:
This training should be role-specific, taking account of each staff member’s technical knowledge, experience, education and training, and the context in which they will be using AI in the organisation. Training will therefore differ between groups: executives, for example, will likely require different training from product developers or human resources staff. The training should cover the technical aspects of AI use, ethical concerns, compliance and risk management, and should be updated continuously to keep pace with technological advances and legislative developments.
- Establish Monitoring Mechanisms:
Organisations should establish clear processes for monitoring and documenting compliance with the AI literacy requirements, alongside risk management frameworks and AI impact assessments. This includes maintaining detailed records of all AI-related training programmes and materials, attendance and assessment outcomes. Records should be readily accessible, and organisations should be prepared to provide documentation to regulatory authorities on request. Organisations will also likely need to develop or update policies that translate AI principles and use into actionable guidelines, including privacy policies and terms of use.
Enforcement and Penalties
The EU AI Act does not prescribe specific fines or penalties for non-compliance with the AI literacy requirements in Article 4. However, any non-compliance will likely influence the severity of enforcement measures taken against organisations for other infringements of the EU AI Act.
Article 99, which came into effect on 2 August 2025, requires member states to lay down rules on penalties and other enforcement measures, including warnings and non-monetary measures, for non-compliance with the provisions of the EU AI Act. These must be effective, proportionate and dissuasive. Notably, Article 99 also provides that the supply of incorrect, incomplete or misleading information to regulatory authorities in response to a request may result in administrative fines of up to €7,500,000 or 1% of the offender’s total worldwide annual turnover for the preceding financial year, whichever is higher. Organisations must keep this in mind when providing regulatory authorities with information on AI literacy.
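By way of a hypothetical illustration of the “whichever is higher” mechanism: for an organisation with a worldwide annual turnover of €2 billion in the preceding financial year, 1% amounts to €20 million, so the maximum fine would be €20 million rather than €7,500,000; for an organisation with a turnover of €200 million, 1% is only €2 million, and the €7,500,000 figure would instead set the maximum.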
Separately, civil liability may arise if harm is caused by untrained staff using AI systems improperly, and organisations will likely attract regulatory scrutiny in investigations where a lack of AI literacy is evident. Moreover, indirect consequences may include reputational damage or increased legal exposure in the event of incidents involving AI misuse.
Karen Gallagher, Isabel Humburg and Peter Watts