Generative AI's fundamental problem -- the executives pushing it

I want to be very clear about this.
The fundamental problem is not the Generative AI systems themselves and
their often utterly wrong -- or, even worse, partly wrong (but oh so nicely
written and so convincing!) -- "answers".

The problem is that executives at Google and other Generative AI Tech
firms are putting these tools out there in ways that encourage their use
by the nontechnical public as "answer machines", when we (and presumably
the execs) know that those answers can be dangerously wrong. Disclaimers
saying "This is an experiment, there may be wrong answers, be sure to
check for yourself blah blah blah" are utterly worthless except perhaps
to satisfy their lawyers.

This situation is deeply unethical, even if we put aside their stealing
text for answers word for word from sites -- usually with no credit
given or links back.

It's pretty much the most alarming thing I've seen on the Internet so
far in my entire career, in terms of the potential damage that could be
done to websites and the public at large over time. -L

- - -
--Lauren--
Lauren Weinstein [email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Twitter: https://twitter.com/laurenweinstein
Mastodon: https://mastodon.laurenweinstein.org/@lauren
T2: https://t2.social/laurenweinstein
Founder: Network Neutrality Squad: https://www.nnsquad.org
        PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
Tel: +1 (818) 225-2800
_______________________________________________
pfir mailing list
https://lists.pfir.org/mailman/listinfo/pfir