The European Parliament endorsed new transparency and risk-management rules for AI systems, adopting a draft negotiating mandate for the first-ever rules on Artificial Intelligence. The rules follow a risk-based approach, establishing obligations for providers and users according to the level of risk an AI system can generate. AI systems posing an unacceptable risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring. MEPs also want to strengthen citizens’ right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly affect their rights.
- What are the new transparency and risk-management rules for AI systems endorsed by the European Parliament?
- How do the rules treat AI systems that pose an unacceptable level of risk to people’s safety?
- How does the new law promote regulatory sandboxes and boost citizens' rights to file complaints about AI systems?
- FoL Position paper