- Unacceptable Risk: prohibited
AI systems whose impact on fundamental rights is incompatible with EU values; they may not be placed on the market or put into use. Examples include social scoring by public authorities and systems that use subliminal techniques to manipulate behavior.
- High Risk: permitted, but with strict obligations
Systems with a significant impact on health, safety, or fundamental rights, e.g. in employment, education, justice, essential services, or democratic processes.
They require:
- risk management
- data quality and governance
- documentation and logging
- appropriate human oversight
- robust cybersecurity
- Limited Risk: transparency obligations
Covers systems such as chatbots and tools that generate synthetic content (deepfakes). The main obligation is to inform users that they are interacting with AI, or that the content has been artificially generated or manipulated.
- Minimal Risk: most applications
Everyday AI systems with no significant impact on rights or safety. They can be used freely with no extra requirements.
#InteligenciaArtificial #LeyIA #AIAct #UE #AI #AndroidDev
