Even More on Trust & Safety and AI
In answer to some questions I've received, let me put it this way. The
firms pushing out these AI chat systems seem to lack an understanding
of how ordinary people exposed to them will react to and use them. This
is not altogether surprising; we've seen this pattern in tech repeatedly
over many years, especially (but not exclusively) on the Internet.
While the firms have generally included disclaimers on these AI
chat systems, expecting those disclaimers to be fully understood in
context by random users of these systems is both unreasonable and
potentially dangerous.
Attempting to pause or stop AI training or other related research is
neither practical nor desirable. But better communication with the public
is absolutely necessary. These systems need to be explained in ways
that non-technical, busy persons will appreciate in the context of
their own lives and experiences. The technologists designing these
systems need to realize that if sufficient resources are not dedicated
to direct public communication and education, the firms will be ever
more targeted by politically motivated attacks, and will risk having
their work mischaracterized by entities with political agendas of their
own, to the detriment of the firms, their users, and the community at
large.
This must be understood and acted upon immediately, or the benefits of
AI will be consumed by false narratives and it will be too late for
much more than painful regrets.
- - -
--Lauren--
Lauren Weinstein
[email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Twitter: https://twitter.com/laurenweinstein
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
Tel: +1 (818) 225-2800