
Drafting of the European Union’s AI Code of Practice, meant to guide AI companies in complying with the AI Act, has suffered delays as questions have been raised over the code’s alignment with EU copyright law, risk-assessment measures and other issues. Delivery of the fourth and final version of the draft code, being developed by a committee of experts appointed by the European Commission, has been pushed back by at least a month, until May. According to a report in Euronews, however, development of the technical standards meant to ensure that manufacturers’ products and processes comply with the Act has fallen even further behind schedule.
The standards were scheduled to be ready by August, when provisions of the AI Act begin to apply, to allow for delivery of compliant equipment by the end of the year. But “based on the current project plans, the work will extend into 2026,” CEN-CENELEC told the outlet.
CEN, the European Committee for Standardization, comprises 34 national standards bodies, including those of the 27 EU member states, the U.K. and other members of the European Economic Area. CENELEC, its counterpart for electrotechnical standardization, works alongside it, and the two bodies were charged by the Commission with developing the technical guidance for hardware makers.
Once standards are finalized at the European level, CEN member organizations are then required to implement them at the national level, a process that adds more time to the schedule.
“Standardization processes normally take many years. We certainly think that it needs to be stepped up,” Sven Stevenson, director of coordination and supervision on algorithms at the Dutch privacy watchdog Autoriteit Persoonsgegevens, told Euronews. “The standards are a way to create certainty for companies, and for them to demonstrate compliance. There is still a lot of work to be done before those standards are ready.”
EU member countries have until August 2026 to establish their national regulators under the AI Act. Most, as the Netherlands has done, are expected to assign the task to the existing data privacy regulators created under the GDPR. The Autoriteit Persoonsgegevens has already dealt with cases in which AI tools were found to violate the GDPR. Last year, it fined facial recognition company Clearview AI €30.5 million for creating an illegal database of photos and unique biometric data on EU citizens.
The AI Act, which includes several data transparency provisions, is expected to complement the GDPR in that it also addresses data processing.
“The AI Act would apply in the sense that it’s about product safety,” Stevenson said. “If we prohibit this in the Netherlands, it will need to be consistent between the member states.”
Source: Euronews