- Ada is recommending amendments to ensure that those affected by AI are 
recognised and empowered in the AI Act.
- Centring those affected by AI, Ada recommends enshrining legal rights to 
complaint and collective action, and giving civil society a voice in 
standards setting.
- Ada recommends expanding and reshaping the role of risk in the Act. Risk 
should be assessed on the basis of ‘reasonably foreseeable’ purpose and 
extended beyond individual rights and safety to include systemic and 
environmental risks.

The Ada Lovelace Institute, an independent research institute based in the UK 
and Brussels, has today published a series of proposed amendments to the EU AI 
Act aimed at recognising and empowering those affected by AI, expanding and 
reshaping the meaning of ‘risk’ and accurately reflecting the nature of AI 
systems and their lifecycle.

As the world’s first comprehensive attempt to regulate AI, the Act has the 
potential to set a global standard for AI regulation and serve as inspiration 
for legislative initiatives elsewhere.

Ada recommends empowering people by building ‘affected persons’ into the Act 
and enshrining their legal rights to complaint and collective action. Civil 
society’s voice should also be strengthened by building representation for 
civil society organisations into the EU standards-setting process, which to 
date has addressed technical rather than societal issues.

Risk forms the foundation of the AI Act, and Ada is proposing changes both to 
how risk is determined in the Act and to its categories of risk. The 
amendments recommend establishing a process for adding new types of AI to the 
‘high-risk’ list and assessing risk based on the ‘reasonably foreseeable 
purpose’ of AI systems, rather than their ‘intended purpose’.

...

The policy briefing builds on an expert legal opinion commissioned by the Ada 
Lovelace Institute and authored by Professor Lilian Edwards, a leading academic 
in the field of internet law, which addresses substantial questions about AI 
regulation in Europe and looks towards a global standard.

https://www.adalovelaceinstitute.org/press-release/ai-act-must-recognise-those-affected-by-ai/
