This is the script of my national radio tech report last night on how
Big Tech seems to not often be taking steps to improve AI safety for
users, and in fact appears to be making things worse. As always, there
may have been minor wording variations from this script as I presented
this report live on air.
- - -
Yeah, with Big Tech under so much scrutiny regarding abuses in their
LLM generative AI systems, you'd think that they'd be working
proactively to avoid more problems, but that just doesn't seem to
often be the case. We've talked about the misinformation/hallucination
aspects of these systems, the wrong and misleading and even sometimes
potentially dangerous answers they can give.
Google's Search AI Overviews in particular are notorious in this
regard, sometimes giving nonsensical or just plain wrong answers to
various simple math questions, or questions about the current date,
and so on. I saw two more examples just today: AI Overviews giving
different answers to whether or not a particular letter was in a word,
depending on whether you asked the question with the lowercase or
uppercase form of that letter. Far more serious, AI Overviews were
answering a straightforward question about VPN configuration with an
authoritative-looking, completely wrong answer that would have deleted
the entire VPN configuration, causing massive problems!
Another serious one came after the recent crash of the Air India 787
that took the lives of all but one person aboard. Google's AI
Overviews were sometimes saying that the plane was built by Airbus,
not the correct answer of Boeing.
It's not just Google. There have been new problems with people's
personal information showing up publicly in Meta's AI chatbots. Meta
has reportedly now put up a warning box telling users that entries
they make could end up being seen publicly -- that should have been
done in the first place!
Big Tech is in such a rush to push out these substandard, flawed
systems, that only after damage has been done are their execs willing
to take these kinds of steps to protect users.
Then there are the new generative AI video creation systems, which are
advancing to the point where even experts can't necessarily tell the
difference between fake AI video and real video. These systems are
already being used for all sorts of scams, and that's only going to
get worse. Some scammers are even trying to trick YouTube creators
into losing control of their own YouTube channels.
One clue, for the moment, specifically regarding Google's new and very
advanced system, is that it can currently create only very short
videos. So some scammers take these less-than-eight-second segments
and stitch them together into longer videos with quick fades between
them to hide visible discontinuities. But you can't necessarily even
depend on the segment fade clues, because scammers could put some sort
of information slide or something similar between the segments that
would look far more natural.
It really does appear that Google and the other Big Tech firms are so
convinced that this is the future, and so desperate to make money back
from the billions they've invested in these systems, that they just
don't seem to care at all about how much damage they're actually
doing.
Personally I don't believe that we can depend on these firms doing the
right thing on their own, so both federal and state legislation is
probably going to be needed to rein them in as the damage they cause
increases. And right now, Congress seems to be moving in the opposite
direction, actually banning many types of AI regulation.
Reasonable people can of course disagree about these issues, but
letting these firms run amok with these AI deployments at this time in
such an unregulated manner seems ill-advised and frankly, to a lot of
onlookers, quite frightening as well.
- - -
--Lauren--
Lauren Weinstein
[email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Signal: By request on need to know basis
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
google-issues mailing list
https://lists.vortex.com/mailman/listinfo/google-issues