EU AI Act Delays and Germany’s New Oversight: A Turning Point for AI Regulation
The European Union’s ambitious AI Act is facing revised timelines, while Germany is establishing a national framework for oversight with the KI-MIG (AI Market Monitoring and Innovation Act). This dual development signals a complex balancing act between fostering innovation and ensuring responsible AI development.
EU Parliament Pushes Back Critical AI Deadlines
Rules for high-risk AI systems will come into effect later than initially planned. EU Parliament committees voted on March 18, 2026, to postpone the implementation date. Companies now have until December 2027 to comply with stringent requirements, a shift from the previously scheduled August 2026. Even stricter rules for AI used as a safety component in products are delayed until August 2, 2028.
The delay stems from the lack of finalized technical standards and testing tools. AI systems in critical infrastructure, law enforcement, and human resources are particularly affected.
Tackling “Nudifier” Apps and Content Transparency
Despite the delays, lawmakers are strengthening transparency rules. A strict ban on “nudifier” applications – software creating realistic, intimate images without consent – has been approved. Systems lacking safeguards against such generation will be prohibited.
Requirements for labeling AI-generated content, including audio, video, and text, are also being accelerated. Providers must implement watermarks by November 2, 2026, earlier than originally proposed by the EU Commission.
Germany Appoints Bundesnetzagentur as AI Oversight Authority
Alongside the European timeline adjustments, Germany is building its national oversight structure. The draft KI-MIG is currently before the Bundestag. This law doesn’t introduce new substantive rules but designates authorities and establishes penalties.
The Bundesnetzagentur (BNetzA) will serve as the central market surveillance authority and the primary point of contact for the new EU AI Office. A planned AI service desk will support small and medium-sized enterprises (SMEs) and advise authorities on risk assessment.
The law allows for practical testing of high-risk AI outside official test environments, with approvals granted automatically if authorities don’t respond within 30 days. However, oversight of AI used by state authorities will fall to the individual federal states, creating a hybrid model.
The Debate: Innovation vs. Protection
The extended timelines have reignited the debate over whether Europe is hindering its own AI industry with regulation or whether the delays undermine the protection of fundamental rights.
Industry associations welcome the flexibility, arguing it prevents a gap between regulation and implementation and allows time for harmonized standards.
Consumer and digital rights organizations express concern, warning against a weakening of essential safeguards. Legal experts note that individuals facing algorithmic discrimination still face significant hurdles in proving harm. The European Data Protection Board has cautioned against compromising fundamental rights protection.
Recent Court Ruling on AI-Generated Art
A recent ruling in Munich on March 18, 2026, denied copyright protection to several AI-generated logos. The court determined the technical contribution of the AI outweighed the human creative input.
Significant Penalties Remain Despite Delays
The postponement is not a reprieve. Penalties for violations remain substantial: up to three percent of global annual revenue or millions of euros, whichever is higher.
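To illustrate the "whichever is higher" mechanism, the sketch below compares a fixed fine against the revenue-based cap. The fixed amount used here is an illustrative placeholder, not a figure taken from the Act, and the function is a simplified model of how such a cap works, not legal guidance.

```python
def penalty_cap(global_annual_revenue_eur: float,
                fixed_cap_eur: float = 15_000_000) -> float:
    """Return the higher of a fixed fine and 3% of global annual revenue.

    fixed_cap_eur is an assumed placeholder for illustration only;
    the actual amounts depend on the violation category in the Act.
    """
    return max(fixed_cap_eur, 0.03 * global_annual_revenue_eur)

# For a company with EUR 2 billion in revenue, 3% (EUR 60 million)
# exceeds the assumed fixed cap, so the revenue-based figure applies.
print(penalty_cap(2_000_000_000))  # 60000000.0
```

For smaller firms the fixed amount dominates, which is why the "whichever is higher" clause matters: the ceiling never drops below the fixed sum, regardless of revenue.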
Parts of the EU AI Act are already in effect. Practices classified as posing an unacceptable risk have been banned since February 2025, and AI literacy training for employees is mandatory. Providers of general-purpose AI models have been subject to transparency and governance requirements since August 2025.
Experts advise companies to use the additional time to implement automated monitoring systems, establish internal control bodies, and await final standards from European standardization organizations.
What’s Next?
The EU “Omnibus” proposal requires further approval from the Parliament’s plenary session, followed by negotiations with the Council and the Commission. A final vote is expected in June 2026, with publication of the revised version anticipated in July.
In Germany, the Bundesnetzagentur will prepare for its new responsibilities. By the end of 2027, Germany must also establish regulatory sandboxes for AI testing, as required by the EU.
FAQ
Q: What is the KI-MIG?
A: The KI-MIG (AI Market Monitoring and Innovation Act) is a German law that designates authorities and sets penalties for enforcing the EU AI Act.
Q: What are the new deadlines for the EU AI Act?
A: Most high-risk AI systems now have a compliance deadline of December 2027, while AI used as a safety component has until August 2, 2028.
Q: What is the role of the Bundesnetzagentur?
A: The Bundesnetzagentur will be the central market surveillance authority and the primary contact point for the EU AI Office in Germany.
Q: What are the potential penalties for non-compliance?
A: Penalties can reach up to three percent of a company’s global annual revenue or a fixed amount in the millions of euros, whichever is higher.
Did you know? The EU AI Act is the first comprehensive law of its kind globally, aiming to regulate AI based on risk levels.
Pro Tip: Begin documenting your AI systems now, even before the deadlines, to demonstrate compliance and mitigate potential risks.
What are your thoughts on the EU AI Act delays? Share your perspective in the comments below!
