An Introduction to the AI Act: What You Need to Know


Whether your organisation uses AI-driven chatbots for customer enquiries, develops predictive algorithms for credit risk, or builds image recognition software for security purposes, the legal obligations attached to deploying certain artificial intelligence (AI) technologies under the EU’s landmark AI Act may have a significant impact on your data handling practices.

A team of experts in data protection services and AI governance has written the following article to help businesses understand the requirements of the AI Act and what your organisation will need to do to achieve compliance.

What is the AI Act?

The AI Act establishes a regulatory and legal framework for the deployment, development and use of AI systems within the EU. The legislation takes a risk-based approach, categorising AI systems according to their potential impact on safety, human rights and societal well-being. Some systems are banned entirely, while systems deemed ‘high-risk’ are subject to stricter requirements and assessments before deployment.

AI systems are categorised into different risk levels based on their potential impact, with the burden of compliance increasing in proportion to the risk. The three main categories are prohibited, high risk and low risk.

Prohibited systems are banned entirely, because their potential for negative consequences is deemed unacceptable. High-risk systems are those with a significant impact on people’s safety, wellbeing and rights; they are permitted, but subject to stricter requirements. Low-risk systems pose minimal danger and therefore carry fewer compliance obligations.

How will the AI Act and the GDPR work together?

“In many cases, the AI Act and the GDPR will complement each other”, comments one UK-based data protection specialist. “The AI Act is essentially product safety legislation, designed to ensure the responsible and non-harmful deployment of AI systems. The GDPR is a principles-based law, protecting fundamental privacy rights”.

Where are we at the moment?

The AI Act was approved by the Council of the European Union on 21 May 2024, with a phased implementation schedule designed to give organisations time to make the necessary changes to their systems and practices.

The Act will apply to public and private organisations that develop, deploy, or use AI systems within the EU’s single market. This includes companies, institutions, government bodies, research organisations and any other bodies involved in AI-related activities.

When will the AI Act apply?

The AI Act’s finalised text will be published in the Official Journal of the European Union and will officially enter into force twenty days after publication – expected by late June or early July 2024. The majority of its provisions will then apply two years later, in 2026.

The European Commission has also established the EU AI Office. From 16 June 2024, the AI Office will support the implementation of the AI Act across all Member States.

Timeline and Important Deadlines

  • The AI Act becomes law (expected late June to early July 2024)
  • 6 months later
    AI practices with unacceptable risks to health and safety or fundamental human rights will be banned.
  • 9 months later
    The AI Office will finalise the codes of practice for developers and deployers of AI systems.
  • 12 months later
    Rules for General Purpose AI (GPAI) providers come into effect. Annual review of prohibited AI applications begins.
  • 18 months later
    Implementing acts for high-risk AI providers are introduced, including mandatory monitoring plans.
  • 24 months later
    The remainder of the AI Act applies, including the rules on high-risk AI systems listed in Annex III (e.g., biometrics, facial recognition).
  • 36 months later
    The rules on high-risk AI systems covered by Annex I take effect.

Conclusion

The AI Act represents a substantial legislative shift for organisations that use artificial intelligence in the EU, and organisations must prepare for strict requirements and conformity assessments, particularly for systems classed as high risk. As the AI landscape continues to develop, staying informed and adaptable will be key for businesses seeking to harness AI’s potential while adhering to new legal obligations.