This is the text of my national radio report yesterday on the topic of
whether or not you should choose to use AI. As always, there may have
been minor wording variations from this script as I presented this
report live on air.
- - -
Yes, so as we all know by now, Big Tech is in a desperate rush to push
their large language model generative AI systems into seemingly every
service, every app, and more and more hardware devices. Will the
kitchen sink be next? Don't bet against it.
And we've talked in the past about all the wrong answers and
misleading answers and misinformation and other garbage that can spew
forth from these systems, be they chatbots or summarization systems or
writing systems, or search overviews, or whatever. And Big Tech admits that these systems can make errors, and that you'd better check what they tell you before you depend on their answers being true. And this of course raises the question: if you have to do your own research to verify the Search AI Overview or whatever it is, what's the real point of having bothered with it in the first place?
So the term that's coming into popular usage now for this mess is "AI Slop," which can be text or images or video, and more and more of this kind of content is flooding the Web. And it's getting increasingly difficult to avoid it, or to turn off these systems that firms like Google and Microsoft and Meta and the others are stuffing everywhere, because they're desperate to find ways to make a profit given the many billions of dollars they're spending on all this.
Let's be clear about the term AI. We all know that stands for
artificial intelligence. But there's no real intelligence there -- not
the way we normally use the term intelligence. And in fact, the term
itself is really now a hype term, a marketing term that has no
rigorous definition.
And this means that when we talk about individually deciding whether or not to use AI, it's important to be clear about what we're actually talking about in any given case. For
example, lots of AI goes on behind the scenes without our knowing it.
A great deal of what's commonly called machine learning falls into
that category.
So to name just one example: medical test scanning applications. And when you speak to a system that uses speech recognition to know what you're saying, machine learning -- that is, an AI system, such as what are called neural networks -- can be involved in processing your speech. And these have been real breakthroughs in terms of accuracy. You may remember years ago when the systems were very crude compared with today, often limited to just spoken numbers and a few other words, and every word had to be spoken separately with a pause for speech to be recognized properly, and often even then there were recognition errors.
So these are not the kinds of AI where you can choose whether you want
to use AI or not, nor are they normally a problem. But the other kind
of AI, large language model generative AI systems, are another story.
This is where we get hallucinating chatbots and misinformation-laden
Google Search AI Overviews and all the rest. At the same time, as I mentioned, these firms typically warn you not to trust their answers, while usually refusing to take responsibility for inaccuracies in those answers or for any bad things that might happen as a result of those answers.
Many observers feel that part of what's going on is that these firms
want users to become dependent on these systems, to read for them, to
write for them, to create for them, irrespective of the often low
quality of the results. And while many of these are free or cheap now,
we can already see the game plan once users are dependent on these
systems. For example, Google is now charging a cool $250/month for
their top AI package. Google is now also pushing out a new free "AI Mode" Search, which won't even show the normal search links at all, and personally I'd recommend avoiding it, because it appears to take you even farther away from those sites where actual accurate information resides.
The character of Admiral Ackbar in "Return of the Jedi" famously
declared "It's a trap" -- and while that might be a bit strong when
referring to generative AI overall, it's obvious that Big Tech firms want to tighten their leash on us all. So when you can say no to
generative AI, it's definitely an option very much worth at least
considering.
- - -
L
- - -
--Lauren--
Lauren Weinstein
[email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Signal: By request on need to know basis
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
google-issues mailing list
https://lists.vortex.com/mailman/listinfo/google-issues