This is the script of my national network radio report yesterday
discussing the implications of a Univ. of Penn. study raising
concerns about the potential dangers of using AI/LLMs to control
robots. As always, there may have been minor wording variations from
this script as I presented this report live.
- - -
So this involves a study from University of Pennsylvania researchers.
And this is simultaneously pretty scary AND a rather fun topic. Scary
because we really don't want robots going on destructive rampages, but
fun because I get to bring some sci-fi films into this discussion.
We know that the concept of dangerous robots is pretty much as old as
or even older than the beginnings of sci-fi, and the idea of evil
computers controlling robots and causing them to do terrible things is
probably almost as old. And when you bring up this topic, usually the first
example that people will mention is the HAL 9000 from 2001: A Space
Odyssey. But really, as we learned in the sequel to 2001, HAL wasn't
actually evil, he was suffering a mental breakdown due to the
conflicting instructions he had been given by his HUMAN programmers.
Perhaps more to the point is an earlier, rather obscure, but still
quite enjoyable 1954 sci-fi film called Gog - G-O-G. And in this
film we're at a secret underground military installation where they
have a pair of utility robots. And it turns out that an enemy foreign
power is able to gain control of the computer that controls those
robots, causing them -- even though they were designed to be
fail-safe -- not only to commit murder but to very nearly destroy the entire
installation.
Which brings us around to the reality of today and tomorrow and how
robots can be misused. The reason this ties in with AI Large Language
Models is that efforts are underway to use these systems -- essentially
similar to the unfortunately all-too-familiar generative AI
chatbots -- to control robots. Now we've talked in the past about how
these chatbots can spew misinformation. And we've discussed how it
frequently turns out to be possible for users to bypass the guardrails
that system designers build into these systems to try to prevent their
being used, for example, to get potentially harmful information.
It's pretty clear that having hazardous misinformation or
disinformation spewing from a chatbot or other query-response
generative AI system is bad enough. But when you connect these
flawed systems to robots, whether industrial robots or so-called
humanoid robots or whatever, a failure of guardrails could
potentially create PHYSICALLY dangerous situations.
And what these researchers are reporting is that yeah, this often
turns out to be possible, and they are pretty concerned about it.
Because now we're not just talking about
typed or spoken output that might be of concern, we're dealing with
robots that interact directly with the physical world -- that's pretty
much the whole point of robots.
The kinds of mistakes that might just be upsetting if read or heard
from an AI system could be far more serious if they cause robots to
perform dangerous actions in the real world, whether accidentally or,
as was the focus of this research, under the direction of an adversary
who finds a way to bypass the protections present in the system and
the robots themselves. This applies whether we're talking about robots
attached to other advanced AI systems or robots that have significant
AI capabilities of their own.
So yes, the combination of AI and robots would seem to be the
proverbial double-edged sword. There could be major benefits, AND
major dangers -- which is pretty much the usual story with technology,
whether you're looking at sci-fi or the real world. But it would be
incredibly irresponsible for us to permit these systems and robots to
be deployed unless enough serious attention has been focused on the
many possible ways that they could potentially go very wrong indeed.
- - -
L
- - -
--Lauren--
Lauren Weinstein
[email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility