EU AI Act: obligations and responsibilities for companies
What is the EU AI Act and why has Europe decided to regulate AI?
The EU AI Act is the European Union's primary regulation dedicated to Artificial Intelligence. In force since August 1, 2024, this legislation establishes specific rules for the development and use of AI, based on an innovative approach that classifies systems according to the risks they pose.
The need for regulation arises from the increasingly rapid spread of AI in critical areas such as work, education, and essential services. The EU AI Act therefore aims to prevent dangerous or irresponsible uses of the technology, while also defining responsibilities in cases where automated decisions may have significant consequences for individuals and society.
Objectives and principles of the AI Act in Europe
The AI Act aims to create a clear and uniform regulatory framework throughout the European Union, balancing technological innovation with the protection of citizens. The provisions support a single market for artificial intelligence, ensuring that systems respect the EU’s fundamental values: human oversight, transparency, and privacy.
At the center of the regulation is the risk-based principle, which divides AI systems into four categories: unacceptable, high, limited, and minimal.
- Unacceptable risk: AI systems that threaten safety and fundamental rights in the EU are prohibited (e.g., social scoring, behavioral or cognitive manipulation).
- High risk: systems with potentially serious impacts on health, safety, or rights are allowed but subject to strict obligations (e.g., biometrics, employment, education, critical infrastructure).
- Limited risk: low-impact applications subject to transparency obligations, such as informing users (e.g., chatbots).
- Minimal or no risk: systems with negligible impact, subject to no specific obligations (e.g., anti-spam filters, simple video games).
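The four tiers above determine how a system is treated under the regulation. As a rough illustration only (the tier names and the mapping below are a simplified summary of this list, not a legal classification tool), the scheme can be sketched as:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed, but under strict obligations
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical mapping from tier to regulatory treatment,
# paraphrasing the bullet list above.
TREATMENT = {
    RiskTier.UNACCEPTABLE: "prohibited (e.g., social scoring)",
    RiskTier.HIGH: "permitted with strict obligations (e.g., biometrics)",
    RiskTier.LIMITED: "permitted with transparency obligations (e.g., chatbots)",
    RiskTier.MINIMAL: "no specific obligations (e.g., anti-spam filters)",
}

def treatment_for(tier: RiskTier) -> str:
    """Return the simplified regulatory treatment for a risk tier."""
    return TREATMENT[tier]

print(treatment_for(RiskTier.HIGH))
```

Deciding which tier a real system falls into requires legal analysis of the Act's annexes; the point of the sketch is only that the tier, once determined, fixes the set of obligations.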
Obligations of the EU AI Act for companies, developers and users
The AI Act applies to a wide range of actors, including providers, deployers (users), importers, distributors, and authorized representatives, including those established outside the EU when their systems are used within the Union.
The obligations vary depending on the system’s risk level. Providers of high-risk systems must comply with strict requirements, such as:
- implementing a continuous risk management process;
- adopting data governance practices to prevent bias;
- maintaining detailed technical documentation;
- ensuring human oversight of automated decisions.
For limited-risk systems, the focus is mainly on transparency: users must be informed when interacting with automated tools (such as an AI chatbot) or when digital content has been generated or modified by AI.
EU AI Act and Generative AI Models: what changes
The AI Act introduces specific rules for general-purpose AI models (GPAI, General-Purpose Artificial Intelligence), including generative AI models. Providers of these systems must adopt policies to comply with EU copyright laws and make detailed summaries of the datasets used for training available.
If a GPAI model is classified as “systemic risk,” providers must comply with additional obligations, including:
- documenting and reporting serious incidents;
- implementing adequate cybersecurity measures.
Furthermore, AI-generated or AI-manipulated content, such as deepfakes, must be clearly labeled as such in a machine-readable format.
Margot: the AI Agent at the Service of Businesses
Margot is an AI agent designed to support daily activities, adapting to the specific needs of users and automating the most repetitive or time-consuming tasks.
The platform operates in full compliance with the General Data Protection Regulation (GDPR), ensuring transparency, security, and data processing within the European Union. Margot provides personalized solutions that deliver immediate, concrete gains in operational efficiency and productivity.
Contact our sales team to activate Margot and her customizable AI agents.