Zero Trust AI Governance Framework

Ahead of DEF CON 31, Accountable Tech, AI Now, and EPIC offer an antidote to 
self-regulation for artificial intelligence.

Rapid advances in AI, the frenzied deployment of new systems, and the 
surrounding hype cycle have generated a swell of excitement about AI’s 
potential to transform society for the better.

But we are not on course to realize those rosy visions. AI’s trajectory is 
being dictated by a toxic arms race amongst a handful of unaccountable Big Tech 
companies – surveillance giants who serve as the modern gatekeepers of 
information, communications, and commerce.

The societal costs of this corporate battle for AI supremacy are already 
stacking up as companies rush unsafe systems to market – like chatbots prone to 
confidently spewing falsehoods – and recklessly integrate them into flagship 
products and services.

Near-term harms include turbocharging election manipulation and scams, 
exacerbating bias and discrimination, and eroding privacy and autonomy, among 
many others. Additional systemic threats loom in the medium and longer terms, 
like steep environmental costs, large-scale workforce disruptions, and further 
consolidation of power by Big Tech across the digital economy.

Industry leaders have gone even further, warning of the threat of extinction as 
they publicly echo calls for much-needed regulation – all while privately 
lobbying against meaningful accountability measures and continuing to release 
increasingly powerful new AI systems. Given the monumental stakes, blind trust 
in their benevolence is not an option.

Indeed, a closer examination of the regulatory approaches they’ve embraced – 
namely ones that forestall action with lengthy processes, hinge on overly 
complex and hard-to-enforce regimes, and foist the burden of accountability 
onto those who have already suffered harm – informed the three overarching 
principles of this Zero Trust AI Governance framework:

  1.  Time is of the essence – start by vigorously enforcing existing laws.
  2.  Bold, easily administrable, bright-line rules are necessary.
  3.  At each phase of the AI system lifecycle, the burden should be on 
companies to prove their systems are not harmful.

Absent swift federal action to alter the current dynamics – by vigorously 
enforcing laws on the books, finally passing strong federal privacy legislation 
and antitrust reforms, and enacting robust new AI accountability measures – the 
scope and severity of harms will only intensify.

Continue reading here:

https://accountabletech.org/research/zero-trust-ai-governance-framework/
