The European AI Office has introduced the First Draft of the General-Purpose AI Code of Practice under the EU AI Act. This draft aims to guide the development and deployment of general-purpose AI (GPAI) models, addressing risks and promoting compliance. If your business uses AI or plans to, here's what you need to know about this important development.
What is the AI Code of Practice About?
The Code focuses on two types of AI providers:
General-purpose AI models: AI models trained on broad data and capable of performing a wide range of distinct tasks.
General-purpose AI models with systemic risk: highly capable models whose misuse or failure could cause widespread harm or disruption.
Key themes include:
Transparency: Clear documentation of AI models for regulators and downstream providers.
Risk Mitigation: Measures to identify, assess, and minimise risks.
Governance: Best practices for responsible AI management.
Core Provisions
1. Transparency Rules
Providers must document:
How their AI models were developed, tested, and distributed.
Information required for downstream users to understand the model's capabilities and risks.
This encourages trust and enables users to comply with their legal responsibilities when integrating AI.
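As a rough illustration of what structured documentation might look like in practice, the sketch below models a transparency record as a simple data structure. The field names are hypothetical, chosen to mirror the items listed above; they are not taken from the Code itself.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Hypothetical transparency record for a GPAI model.

    Field names are illustrative only, loosely mapped to the draft's
    themes: development, testing, distribution, capabilities, and risks.
    """
    model_name: str
    development_summary: str = ""          # how the model was developed
    testing_summary: str = ""              # how it was tested and evaluated
    distribution_channels: list[str] = field(default_factory=list)
    known_capabilities: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Check that every documentation field has been filled in."""
        return all([
            self.development_summary,
            self.testing_summary,
            self.distribution_channels,
            self.known_capabilities,
            self.known_limitations,
        ])
```

Keeping documentation in a machine-checkable form like this makes it easier to spot missing items before sharing records with regulators or downstream providers.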
2. Systemic Risk Management
Providers of higher-risk models must:
Continuously assess risks using structured frameworks.
Address issues like data misuse, biases, and cybersecurity vulnerabilities.
3. Safety and Security Framework (SSF)
Businesses need to adopt policies for:
Mitigating risks at every stage of the AI lifecycle.
Safeguarding sensitive AI-related assets, including training data and algorithms.
4. Copyright Compliance
The Code requires compliance with EU copyright laws, particularly around using third-party content for AI training.
Implications for Businesses
Opportunities
Innovation with Integrity: Demonstrating alignment with ethical practices enhances brand reputation.
Access to EU Markets: Adherence positions businesses for opportunities within the EU market.
Competitive Advantage: Early adoption of these measures could distinguish companies from competitors.
Challenges
Documentation Burdens: Smaller firms may struggle with the administrative effort required.
Risk Assessment Costs: Systemic risk evaluations and mitigations can be resource-intensive.
Copyright Scrutiny: Businesses must ensure their training datasets comply with intellectual property laws.
Steps to Prepare
Audit Existing AI Systems
Assess compliance with transparency, risk, and copyright rules.
Identify gaps and plan remedial measures.
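The audit step above can be sketched as a simple gap analysis: record which obligations each AI system already meets, and list the remainder for remediation. The audit areas and function below are hypothetical examples, not terms defined by the Code.

```python
# Hypothetical audit areas, loosely based on the themes discussed above.
AUDIT_AREAS = [
    "transparency documentation",
    "systemic risk assessment",
    "copyright review",
]

def find_gaps(system_status: dict[str, bool]) -> list[str]:
    """Return the audit areas a system has not yet addressed."""
    return [area for area in AUDIT_AREAS if not system_status.get(area, False)]

# Example: one system with partial coverage.
status = {"transparency documentation": True, "systemic risk assessment": False}
print(find_gaps(status))  # the remaining gaps to plan remediation for
```

Even a lightweight checklist like this gives legal and technical teams a shared view of where compliance work is still outstanding.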
Implement Governance Policies
Create or update AI governance frameworks to align with the Code.
Include whistleblowing mechanisms and incident reporting processes.
Engage Stakeholders
Involve technical teams, legal advisors, and compliance officers early.
Educate employees and partners about new responsibilities.
Utilise External Expertise
Collaborate with AI governance consultants to reduce compliance burdens.
Explore tools and platforms that simplify documentation and risk assessment.
What’s Next?
The Code is still in its draft phase, with the final version expected by May 2025, so now is the time to get ahead.
At Quantum Risk Solutions, we specialise in helping organisations navigate regulatory changes like the EU AI Act. From developing AI governance frameworks to conducting systemic risk assessments, we’re here to ensure your business not only complies but thrives in this new regulatory era.
Get in touch today to future-proof your AI initiatives!