
Secure Edge AI under the EU AI Act: What Product Teams Must Do Now
Artificial Intelligence is reshaping how people interact with embedded and electronic devices, from the tools that power factories and monitor energy systems to the instruments that assist doctors and guide autonomous technologies. But as AI moves from cloud servers to physical devices, new regulatory and security challenges emerge.
The European Union AI Act and the Cyber Resilience Act (CRA) form the twin pillars of this new compliance landscape. Together, they define how AI must be developed, deployed, and secured, especially when it runs in edge devices that operate outside protected environments.
The AI Act focuses on the ethical use of AI, emphasizing principles such as transparency, accountability, and “do no harm.” Many edge devices are safety-critical, including industrial control systems, autonomous transport, and medical devices, where AI decisions can directly affect people’s safety and well-being. That’s why these Edge AI systems are also covered under the EU AI Act, which sets rules to ensure they operate safely, transparently, and ethically.
Edge AI is often chosen for privacy reasons, since data is processed locally rather than in the cloud. However, even when operating on-device, these systems still fall under the AI Act because they can impact human users and safety. This is why compliance applies not only to cloud AI but equally to Edge AI devices.
“Edge AI means running algorithms in devices that are physically in someone else’s hands and not in a protected data center.”
Jaakko Ala-Paavola
Technology Director at Etteplan
Edge AI reduces dependency on connectivity and enables real-time performance in offline or critical environments, but it also means devices must be designed to operate securely, traceably, and ethically throughout their lifecycle.
Learn how compliance requirements translate into practical design decisions in Building Trusted and Compliant AI Devices with Secure Edge AI.
Understanding the Regulatory Landscape

Between 2024 and 2027, several key EU frameworks come into force: the NIS2 Directive (EU 2022/2555) on cybersecurity, the Cyber Resilience Act (CRA), the Artificial Intelligence Act (AI Act, Regulation (EU) 2024/1689), and the Data Act (Regulation (EU) 2023/2854) on harmonized rules for data access and use.
Together, these frameworks establish secure-by-design as a mandatory principle for both organizations and products operating within the European Union. This means that connected and AI-powered devices must demonstrate traceable documentation, strong data governance, and full software lifecycle control before entering the EU market.
For companies preparing to meet these standards in their product design, see AI-Empowered Products – Turning Intelligence into Value.
- The EU AI Act
The AI Act is the world’s first horizontal framework regulating artificial intelligence. It categorizes AI applications by risk level and imposes duties accordingly, from complete prohibitions for “unacceptable risk” systems to full documentation and audit requirements for “high-risk” ones. The goal is to ensure AI systems in Europe are safe, transparent, and respectful of fundamental rights.
- The Cyber Resilience Act (CRA)
Running in parallel, the CRA establishes cybersecurity requirements for digital products and connected devices. Manufacturers must design, develop, and maintain products with security-by-design and by-default principles, disclose vulnerabilities, and provide security updates throughout the product lifecycle.
For Edge AI, both acts intersect:
- The AI Act governs how the AI logic behaves (ethics, risk management, data quality).
- The CRA governs how the device stays secure (software integrity, patching, protection from tampering).
Compliance with both is essential for any company selling AI-enabled products in the EU market.
Explore how choosing the right hardware platform ensures compliance from the start in Choosing the Right Hardware for Edge AI.
The EU Regulatory Timeline: What Actually Takes Effect and When
| PHASE | KEY PROVISIONS | EFFECTIVE |
| --- | --- | --- |
| Phase 1 | Prohibitions on banned AI practices + AI literacy requirement for organizations | Feb 2025 |
| Phase 2 | Obligations for General-Purpose AI (GPAI) models | Aug 2025 |
| Phase 3 | Full compliance for High-Risk AI systems | Aug 2026 (Aug 2027 for AI embedded in regulated products) |
Exemptions include national security, defense, scientific research, and non-professional (hobby) use. The Act requires that all employees who design, deploy, or use AI understand its capabilities, risks, and limitations. HR and Learning & Development teams should start building this competence now.
AI Act Risk Categories

- Unacceptable Risk – Banned
Manipulating human behavior, social scoring, real-time remote biometric identification in public spaces, emotion recognition in workplaces and educational settings, or predicting criminal intent.
- High Risk
Product-safety or control systems; critical infrastructure; and systems that evaluate people in areas such as education, healthcare, recruitment, credit scoring, migration, and justice.
- Limited Risk
Content generation or modification where transparency is required; for example, chatbots and AI-altered media must disclose that AI was used.
- Minimal Risk
Entertainment and low-impact tools like games or spam filters.
An example could be “Mood Recognition” in a research project analysing passenger emotions in trams. Such a system might appear harmless, yet under the AI Act it could be classed as banned (behavioural manipulation) or high-risk (emotion recognition). Context determines compliance.
What “High Risk” Actually Requires
A | System or Product Requirements
- End-to-end risk management
- Data governance & data-quality controls
- Technical documentation & event logs (see the logging sketch after these requirements)
- Transparency & human oversight
- Robustness, accuracy, and cybersecurity
B | Organizational or Process Requirements
- Quality-management system (QMS) for AI
- Record-keeping & corrective-action logs
- Cooperation with authorities and notified bodies
- Authorized representative (where required)
- Fundamental Rights Impact Assessment (FRIA) – evaluate how your system affects human, social, and economic rights.
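To make the event-log requirement concrete, below is a minimal sketch in Python of an append-only, hash-chained audit log for on-device AI decisions. The field names, file format, and chaining scheme are illustrative assumptions, not formats mandated by the AI Act; the point is that each logged decision is timestamped, tied to a specific model version, and tamper-evident.

```python
# Minimal sketch of a tamper-evident, append-only event log for an Edge AI device.
# Field names and the hash-chaining scheme are illustrative assumptions; the AI Act
# requires logging of events relevant for traceability, not a specific schema.
import hashlib
import json
import time

def append_event(log_path: str, event: dict, prev_hash: str) -> str:
    """Append one inference event and chain it to the previous entry's hash."""
    record = {
        "timestamp": time.time(),   # when the decision was made
        "event": event,             # e.g. model id, input digest, output, confidence
        "prev_hash": prev_hash,     # links entries so deletion or edits are detectable
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record_hash = hashlib.sha256(payload).hexdigest()
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps({**record, "hash": record_hash}) + "\n")
    return record_hash

# Example: log one anonymised inference result from a hypothetical vision model.
h = append_event(
    "audit_log.jsonl",
    {
        "model_version": "defect-detector-1.4.2",
        "input_digest": hashlib.sha256(b"<frame bytes>").hexdigest(),
        "decision": "reject",
        "confidence": 0.93,
        "operator_override": False,  # supports the human-oversight requirement
    },
    prev_hash="GENESIS",
)
```

Chaining each entry to the hash of the previous one means an auditor can detect removed or altered records without trusting the device operator, which is useful when logs must survive in the field rather than in a protected data center.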
10-Step AI Act Readiness Checklist
- Define your AI use case and risk class.
- Map data sources and governance responsibilities.
- Create a threat model covering device, model, and data.
- Ensure full traceability of model versions and training sets (a minimal manifest sketch follows this checklist).
- Establish a verification & validation plan.
- Provide human-in-the-loop or override capability.
- Implement comprehensive logging & audit trails.
- Document a Fundamental Rights Impact Assessment.
- Plan post-market monitoring and incident reporting.
- Verify suppliers / third-party component compliance.
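For the traceability step above, one lightweight approach is a release manifest that ties the deployed model binary to the exact training-data snapshot and code revision it came from. The sketch below is a hypothetical layout, assuming illustrative names such as write_manifest and release_manifest.json; any equivalent record that an auditor or notified body can verify serves the same purpose.

```python
# Minimal sketch of a model/dataset traceability manifest. File layout and field
# names are assumptions for illustration, not a prescribed AI Act format.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: str) -> str:
    """Content hash so the deployed artifact can be matched to this record."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def write_manifest(model_path: str, dataset_path: str, out_path: str) -> dict:
    manifest = {
        "model_version": "1.4.2",                  # hypothetical release tag
        "model_sha256": sha256_of(model_path),
        "training_data_snapshot": dataset_path,
        "training_data_sha256": sha256_of(dataset_path),
        "training_code_commit": "0000000",         # fill in from your version control
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "approved_by": "release-board",            # human sign-off for oversight
    }
    Path(out_path).write_text(json.dumps(manifest, indent=2), encoding="utf-8")
    return manifest

# Example: generate the manifest next to the artifacts before shipping.
# write_manifest("model.onnx", "dataset-2025-03.tar", "release_manifest.json")
```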
Ready to de-risk your Edge AI roadmap?
Our multidisciplinary team combines AI engineers, security architects, and regulatory specialists to make compliance a competitive advantage, not an obstacle.
Our experts will map your use case, define your security architecture, and prepare the documentation you will need for certification. Book an AI Act Readiness Review with Etteplan. Contact us!
"We help you comply"
Jaakko Ala-Paavola
