This is the script of my national radio report last night on
technology ethics experts suggesting that the dominance of AI in our
lives is not necessarily inevitable -- and how I feel about this
analysis. As always, there may have been minor wording changes from
this script as I presented the report live on air.
- - -
Yeah, so the basic question here is whether or not
AI-based subjugation of so many aspects of our lives -- the way we
read and write and communicate and the way products are promoted and
the way we buy things and down the nearly limitless list -- whether
this widely predicted future is inevitable.
And there are folks who study the field of technology ethics who
apparently feel that no, this isn't inevitable, even though at the
moment Big Tech is pretty constantly yelling at us that there's no
escape from their seemingly endless deployments of often error-prone
AI systems.
In the first instance of course we need to define at least a bit about
what we're really talking about since the term "AI" has become a
catch-all for all manner of systems and related products, none of
which, not a single one, is actually intelligent in the way most
people would use that word.
And we know that under this enormous rhetorical umbrella of "AI" there
are machine learning systems being used for medicine and science and
other areas where in some cases they are able to work through enormous
amounts of medical imagery or other related data more quickly and
sometimes -- but not always -- more accurately than humans, looking
for signs of disease and more. Similarly they can be used to more
quickly deal with the massive volumes of weather-related data -- all
sorts of good stuff.
And we know about the generative AI systems, the large language models
that are the subject of stratospheric hype by these firms as they try
to outdo each other with chatbots and AI Overview Search Answers and
summarization services, and whatever else their executives figure will
capture media attention and convince the public that these decidedly
non-intelligent systems are somehow actually intelligent after all.
So with the enormous, astronomical amounts of money being poured into
these systems, especially generative AI, and with the massive
financial pressures to try to make them pay off, there tends to be an
assumption that all of this is going to continue along the same curve
in an unstoppable way.
And what some of these technology ethics experts are saying is that
there are examples where technologies seemed to be taking over
completely but as their disadvantages and collateral negative effects
became better understood, the public's attitudes changed and the tech
was somewhat reined in.
Now when it comes to AI, again especially generative AI, so far (and
of course this could change) it seems that none of these firms have
been particularly successful at finding ways to have these systems
output money faster than they chew up the money being fed into their
development, in other words, to find the ultra big profits that
they're all hoping for.
So one viewpoint is that if these enormous profits don't materialize
for any number of reasons, or if the negative effects of these systems
on particular groups of people or society at large reach a critical
mass, there might be a retrenchment and slowing down of spending
increases on these systems, perhaps gradually or one might postulate
rather more quickly -- with potentially pretty awful consequences for
the bottom lines of these firms.
So in such cases the predicted inevitability of AI smothering so much
of our lives may, perhaps, not take place as so many current
predictions seem to suggest.
Personally, I'm not sure -- I suspect that a lot more money is going
to be plowed into these systems and if a downward inflection is going
to occur it's probably not in the immediate future at least. So my
guess is that we can't anticipate an escape from the AI hype machine
anytime soon -- that is, if we're really ever able to escape it at
all.
- - -
L
- - -
--Lauren--
Lauren Weinstein
[email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
google-issues mailing list
https://lists.vortex.com/mailman/listinfo/google-issues