This is the script of my national network radio report from yesterday
regarding the use of generative AI by government agencies in ways that
could be risky to millions of people. As always, there were some very
minor wording changes from this script as I presented the report live.
- - -
Well, a lot of people who track developments in AI, especially
generative AI, are very concerned about this, and you don't need to be
an AI expert to understand why. Anyone who has spent any time
interacting with these generative AI systems now knows how they can
misinterpret and confuse information, and generate answers and reports
that may seem superficially convincing but are often completely wrong.
Worse still, they may be partly right and partly wrong, a recipe for
even more trouble, because there may be enough correct information
present to cause you to miss the crucial errors mixed in.
Now given the overall sorry state of these error-prone AI systems, I
think most of us would agree that one place we don't want to depend on
them right now is in government agencies, where the effects of AI
errors could drastically affect people's lives in potentially awful
ways. But unfortunately, as we had feared would be the case, there are
already instances of agencies rushing to adopt these AI systems as
time-savers that largely replace human work. And while in theory these
deployments have human oversight, experts are very concerned that the
systems will not be properly supervised, and that the results could be
very problematic to say the least.
So here are just two examples. Police departments are beginning to use
AI systems that replace human-written police reports about contacts
with the public and all sorts of incidents, substituting AI-generated
reports created from the audio of officer body cameras. Now if you've
ever watched bodycam footage on YouTube or elsewhere, you know it can
be confusing enough for humans, with a lifetime of experience, to
figure out what's going on even when you have both the audio and the
video. Yes, writing up a police report can be time-consuming, but at
least then you know that these crucial documents (which courts may
often have limited ability, if any, to treat as anything other than
accurate) were generated by a human being, not by an AI system that
doesn't really think or understand anything in the ways we typically
use those terms.
Here's another example. A U.S. state has signed a contract with Google
to use its AI -- which Google admits makes mistakes -- don't we know
it! -- to create reports that will determine whether people receive
unemployment benefits, benefits that can be crucial to their even
being able to afford necessities like food and shelter during highly
stressful periods in their lives. And it seems that even though there
are concerns about this, Google apparently gave quite the sales pitch
and got the contract.
It doesn't take a lot of imagination to see how confused AIs in both
of these situations could create reports that severely damage innocent
people's lives if those reports are not completely accurate and fully
trustworthy. The sorts of errors that may be merely an annoyance in a
Search AI Overview could be devastating in a police report or an
unemployment benefits determination.
Now in theory, humans are supposed to review these reports before
they're made official. One method for trying to enforce this is to
embed some deliberately obvious, loony statements in the reports to
see whether the human reviewers catch them. But we know that the urge
to give a cursory glance at an AI report, click the "Looks good to
me!" box, and move on is going to be irresistible in so many cases.
The Big Tech AI firms are great at giving sales pitches to
governments, in most cases to government officials who are not experts
on AI. Those officials may find the pitches so attractive that they're
willing to use the public essentially as guinea pigs for AI
applications that may not be ready for prime time, applications that
many actual AI experts feel should not be used where errors could harm
people's lives in significant ways.
Again, our own experiences with these AI systems demonstrate how
disturbingly often they get things wrong, and this expansion of their
use in these ways should be of enormous concern to us all as a
society.
- - -
L
- - -
--Lauren--
Lauren Weinstein
[email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility