European lawmakers just rewrote the timeline for the world's most ambitious AI regulation. In a vote that passed with broad support, the European Parliament pushed back compliance deadlines for the EU AI Act, giving companies building high-risk AI systems until December 2027 to meet the law's requirements. But the reprieve comes with a catch: legislators simultaneously moved to ban nudify apps that use AI to create non-consensual intimate images, signaling they won't compromise on systems that weaponize deepfakes against individuals.
The timing shift is significant for companies racing to comply with what has become the world's template for AI regulation. High-risk AI systems (those deemed to pose a "serious risk" to health, safety, or fundamental rights) now have until December 2027 to meet the law's requirements. That's a substantial extension from earlier deadlines that had Google, Microsoft, Meta, and OpenAI scrambling to overhaul systems touching EU users.
The picture is even more complicated for certain sectors. Companies developing AI systems covered by existing safety regulations, such as medical devices, toys, or automotive applications, won't need to comply until August 2028. The European Parliament's official press release confirms that the staggered approach reflects the technical complexity of retrofitting AI systems with the transparency and safety controls the law demands.
The delays aren't unlimited, though. Rules requiring AI providers to watermark synthetic content and other baseline transparency measures are still moving forward on the original timeline. And the nudify app ban shows regulators aren't backing down on what they consider clear-cut harms.