This is the script from my national radio report yesterday on
generative AI mistakes and Google's continuing efforts to push AI onto
users, whether they want it or not. As always, there may have been
minor wording variations from this script as I presented the report
live on air.
- - -
So yeah, this topic keeps coming back because the issues surrounding
generative "artificial intelligence" -- generative AI, based on what
are called LLMs, large language models -- keep getting more and more
bizarre and, to many observers, increasingly alarming.
One useful analogy would be a fictional restaurant that has a sign up
saying: "Some of the food at this restaurant looks great and tastes
great. Some of it is really bad, and it even looks so bad that you
won't want to taste it. But other items look good and taste good and
will make you very ill. It's up to you to test the food before eating
it to decide whether you should eat it. The restaurant takes no
responsibility for any of this -- you eat at your own risk!"
Big Tech seems to be taking an attitude rather like that toward its AI
Search Overviews, chatbots, summary systems, writing systems, and the
rest. If there are errors, Big Tech usually isn't taking
responsibility. And given the way these firms are pushing these AI
systems into everything, it's not reasonable to expect busy,
nontechnical people to research everything an AI system tells them to
try to determine whether it was accurate, wrong, or -- worst of all --
a confusing mix of correct and incorrect information.
Obviously, not all incorrect generative AI answers are equally
serious. A few days ago, when asked whether it is 2025, Google Search
kept pushing out an AI Overview incorrectly saying that it is 2024 --
a question most humans could get right. Google eventually fixed this
but didn't say what the actual problem was. Another recent case
involved asking whether there would be garbage pickup on Memorial Day;
Google's AI confidently said there wouldn't be, even though the
information page for that city's trash services clearly showed that
Memorial Day was a normal pickup day.
OK, we know what year it is, and missing one garbage pickup could be
pretty inconvenient but usually not a disaster. Yet these are the easy
ones, where the errors were obvious and not terribly consequential. A
case that has raised more concerns very recently is a big report just
released by the federal government's Department of Health and Human
Services. It turns out that the original version as released had lots
of errors that should have been caught before release -- and these
were the types of errors that are very common with generative AI
systems: citing studies that didn't exist, associating individuals
with studies they had nothing to do with, and even some URL links with
specific clues suggesting AI use. As far as I've heard, HHS has so far
declined to answer queries about whether or not AI was actually used
to write that report.
I think we can all agree that health reports are an area where we
really do want the best accuracy possible, and that "AI slop", as it's
called, is not something we want in important documents -- health
reports, court documents, police reports, and so on. You get the idea.
But Big Tech really, really wants us to use these AI systems, so
they're stuffing them into document editors, browsers, and seemingly
pretty much everything else. And as I've pointed out in the past, they
can make it very difficult, sometimes impossible, to turn this stuff
off.
And yeah, now comes word that Google is going to be feeding Gmail into
the gaping mouth of its AI, apparently enabled by default, to create
"summaries" of some email threads -- which, given the error-prone
output we see in its AI Overviews, does not exactly inspire
confidence. For now this apparently applies only to paid users, but it
seems a good bet that it will find its way down to everyone
eventually. Apparently the summaries can be turned off (though perhaps
not the ingestion of the Gmail itself) by turning off Gmail "smart
features", but that disables other Gmail functions as well.
Enough is enough. Generative AI is not ready for prime time. Maybe it
will be some day, maybe not. But for these firms to constantly push
this flawed tech into people's faces is disrespectful, and may suggest
what amounts to disdain for their users and the community at large.
- - -
L
- - -
--Lauren--
Lauren Weinstein
[email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Signal: By request on need to know basis
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility