Google's AI policy is incredibly dangerous

Google is claiming they haven't changed their policy regarding high-risk
applications, that they're just "clarifying" it -- though Google's
existing policy is already less stringent than other firms' in key
respects.

Let me be 100% clear about this. These AI systems are being deployed,
but the firms -- so far as I know -- are refusing to take ANY
responsibility for impacts on people's lives that result from the
decisions and errors of those systems. These firms should be
required to take 100% responsibility in these areas. 100%. Otherwise,
stop providing the services while trying to cover your asses with
"it's all on you!" disclaimers. If the firms won't take
responsibility, they should be required to do so by LAW. Before
millions of people are hurt. With AI Agents on the horizon, this is
more important now than ever before.

Also see:

https://techcrunch.com/2024/12/17/google-says-customers-can-use-its-ai-in-high-risk-domains-so-long-as-theres-human-supervision/

- - -
--Lauren--
Lauren Weinstein [email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Founder: Network Neutrality Squad: https://www.nnsquad.org
        PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
google-issues mailing list
https://lists.vortex.com/mailman/listinfo/google-issues