How Businesses Can Comply with the EU’s Artificial Intelligence Act

By Nigel Douglas - APRIL 30, 2024


On March 13, 2024, the European Parliament marked a significant milestone by adopting the Artificial Intelligence Act (AI Act), setting a precedent with the world’s first extensive horizontal legal regulation dedicated to AI. 

Encompassing EU-wide regulations on data quality, transparency, human oversight, and accountability, the AI Act introduces stringent requirements that carry significant extraterritorial impacts and potential fines of up to €35 million or 7% of global annual revenue, whichever is greater. This landmark legislation is poised to influence a vast array of companies engaged in the EU market. The official document of the AI Act adopted by the European Parliament can be found here.

Originating from a proposal by the European Commission in April 2021, the AI Act underwent extensive negotiations, culminating in a political agreement in December 2023, detailed here. With the Parliament’s vote complete, the Act is now on the cusp of becoming enforceable, pending formal endorsement by the Council and publication in the EU’s Official Journal, which will initiate a crucial preparatory phase for organizations to align with its provisions.

AI adoption has quickly gone from a nice-to-have to a global disruption, and there is now a global race to ensure it happens ethically and safely.


A Risk-Based Approach

The AI Act emphasizes a risk-based regulatory approach and targets a broad range of entities, including AI system providers, importers, distributors, and deployers. It distinguishes between AI applications by the level of risk they pose, from unacceptable and high-risk categories that demand stringent compliance, to limited and minimal-risk applications with fewer restrictions. 

The EU’s AI Act website features an interactive tool, the EU AI Act Compliance Checker, designed to help users determine whether their AI systems will be subject to the new regulatory requirements. However, as the Act’s final text has not yet entered into force, the tool currently serves only as a preliminary guide for estimating potential legal obligations under the forthcoming legislation.

Meanwhile, businesses are increasingly deploying AI workloads with potential vulnerabilities into their cloud-native environments, exposing themselves to attack. Here, an “AI workload” refers to a containerized application that includes any of several well-known AI software packages, including but not limited to the following (a minimal detection sketch follows the list):

“transformers”

“tensorflow”

“NLTK”

“spaCy”

“OpenAI”

“keras”

“langchain”

“anthropic”
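
To make this concrete, here is a minimal sketch of how such packages might be detected inside a container. It simply checks the environment’s installed Python packages against the names above; the pip-based approach and helper names are illustrative assumptions, not Sysdig’s actual detection logic.

```python
import subprocess

# Python packages commonly associated with AI workloads (the list above).
AI_PACKAGES = {
    "transformers", "tensorflow", "nltk", "spacy",
    "openai", "keras", "langchain", "anthropic",
}

def installed_packages() -> set[str]:
    """Return lowercased names of packages installed in this environment.

    Assumes `pip` is available on PATH inside the container.
    """
    out = subprocess.run(
        ["pip", "list", "--format=freeze"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.split("==")[0].lower() for line in out.splitlines() if "==" in line}

matched = AI_PACKAGES & installed_packages()
print(f"AI workload: {bool(matched)} (matched packages: {sorted(matched)})")
```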

Understanding Risk Categorization

Key to the AI Act’s approach is the differentiation of AI systems by risk category, introducing specific prohibitions for AI practices deemed unacceptable because of the threat they pose to fundamental rights and privacy. In particular, high-risk AI systems are subject to comprehensive requirements aimed at ensuring safety, accuracy, and cybersecurity. The Act also addresses the emergent field of generative AI, introducing categories for general-purpose AI models based on their risk and impact.

General-purpose AI systems are versatile, designed to perform a broad array of tasks across multiple fields, often requiring minimal adjustments or fine-tuning. Their commercial utility is on the rise, fueled by an increase in available computational resources and innovative applications developed by users. Despite their growing prevalence, there is scant regulation to prevent these systems from accessing sensitive business information, potentially violating established data protection laws like the GDPR.
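
As a concrete illustration of the kind of control that data protection laws push toward, the sketch below redacts one obvious category of personal data (email addresses) from text before it is sent to a general-purpose model. The pattern and function name are hypothetical, and real GDPR compliance requires far broader PII handling than this.

```python
import re

# Naive email pattern; production systems would use a dedicated
# PII-detection library covering names, phone numbers, IDs, etc.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_pii(text: str) -> str:
    """Replace email addresses with a placeholder before text leaves the organization."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

prompt = "Summarize this complaint from jane.doe@example.com about her invoice."
print(redact_pii(prompt))  # -> Summarize this complaint from [REDACTED_EMAIL] about her invoice.
```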

Thankfully, this pioneering legislation does not stand in isolation but operates in conjunction with existing EU laws on data protection and privacy, including the GDPR and the ePrivacy Directive. The AI Act’s enactment will represent a critical step toward balanced legislation that encourages AI innovation and technological advancement while fostering trust and protecting the fundamental rights of European citizens.

GenAI Adoption Has Created Cybersecurity Opportunities

For organizations, particularly cybersecurity teams, adhering to the AI Act involves more than mere compliance; it’s about embracing a culture of transparency, responsibility, and continuous risk assessment. To effectively navigate this new legal landscape, organizations should consider conducting thorough audits of their AI systems, investing in AI literacy and ethical AI practices, and establishing robust governance frameworks to manage AI risks proactively. 
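
One lightweight way to start such an audit is to keep an explicit inventory that maps each AI system to its expected AI Act risk tier and an accountable owner. The record structure below is a hypothetical sketch under that assumption, not an official compliance schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # The AI Act's four broad risk categories.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: RiskTier
    owner: str          # team accountable for the system
    last_reviewed: str  # ISO date of the most recent risk assessment

inventory = [
    AISystemRecord("cv-screener", "rank job applicants", RiskTier.HIGH, "hr-eng", "2024-04-01"),
    AISystemRecord("support-chatbot", "answer product FAQs", RiskTier.LIMITED, "cx-eng", "2024-03-15"),
]

# High-risk systems carry the heaviest obligations, so review them first.
for rec in sorted(inventory, key=lambda r: r.risk_tier is not RiskTier.HIGH):
    print(f"{rec.name}: {rec.risk_tier.value} risk (owner: {rec.owner})")
```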

According to Gartner, “AI assistants like Microsoft Security Copilot, Sysdig Sage, and CrowdStrike Charlotte AI exemplify how these technologies can improve the efficiency of security operations. Security TSPs can leverage embedded AI capabilities to offer differentiated outcomes and services. Additionally, the need for GenAI-focused security consulting and professional services will arise as end users and TSPs drive AI innovation.”1


Conclusion

Engaging with regulators, joining industry consortiums, and adhering to best practices in AI security and ethics are crucial steps for organizations to not only comply with the AI Act, but also foster a reliable AI ecosystem. Sysdig is committed to assisting organizations on their journey to secure AI workloads and mitigate active AI risks. We invite you to join us at the RSA Conference on May 6 – 9, 2024, where we will unveil our strategy for real-time AI Workload Security, with a special focus on our AI Audit capabilities that are essential for adherence to forthcoming compliance frameworks like the EU AI Act.

  1. Gartner, “Quick Answer: How GenAI Adoption Creates Cybersecurity Opportunities,” Mark Wah, Lawrence Pingree, Matt Milone.
