This is a DANGER ZONE that should be avoided right now, IMHO.

Look at the world today: we have this political system that has gone haywire 
and media outlets that hand anybody a microphone and five minutes to spout off 
anything they want without fact-checking it or even taking any responsibility 
for its accuracy or relevance. If someone wants to talk about how they have 
found “reliable sources” who claim the moon is made of green cheese, there are 
plenty of media outlets that are happy to give them air time, since anything 
controversial “sells” and boosts their ad revenues. 

They feel their only obligation is to give five minutes of air time to someone 
at NASA or some PhD at a university who might try to debunk it. But at the end 
of the day, they end up creating a lot of confusion because they themselves are 
NOT “experts” in anything they cover, and they’ve lost their journalistic 
compass that tells them how to deal with this stupidity. Walter Cronkite is 
turning in his grave.

AI is in no better a position than today’s media outlets — you can ask it a 
question and it will offer up answers, but it has no idea if the answers it 
spits out are accurate or even relevant. And anybody who’s NOT a bona fide 
“expert” in the subject will have no frigging clue or any way to tell. Who ya 
gonna trust?

“Hey, Fox just ran a 15-minute segment where they had someone talk about how 
the moon really IS made of green cheese! The two people they interviewed were 
very convincing, and the guys from NASA made no sense at all. So I’m inclined 
to believe that it really IS made of green cheese now.”

Welcome to the world of AI, where most people refuse to even question nonsense 
being given air time by major media outlets as if it’s real, legitimate, 
factual information. 

The sad fact of the matter is that after enough stories asserting that the 
moon IS made of green cheese get published or aired (even just online), guess 
what will happen? ChatGPT will start generating answers 
supporting that totally bogus viewpoint and people will start believing it even 
more.

If you’re not enough of an expert to know it’s giving you bogus info, then DO 
NOT TRUST IT!

It’s that simple.

Believe me when I say there WILL be folks who will start feeding these AI 
systems nonsense like adding arsenic to brownies just to see what happens, and 
they will start reporting brownie recipes that require arsenic — along with 
instructions on how to extract it from rat poison — and people who don’t know 
any better WILL TRUST IT.  :-O

Then Fox will do a segment where they show that ChatGPT is offering up brownie 
recipes that contain arsenic, along with warnings from a couple of doctors who 
say it’ll kill you. And then they’ll interview someone who will claim that 
arsenic in small doses is actually an aphrodisiac and improves your mental 
health, and let YOU decide. They call that “balanced reporting” today.

-David Schwartz
> On Jun 6, 2023, at 1:49 PM, James Mcphee via PLUG-discuss 
> <[email protected]> wrote:
> 
> When I'm actually an expert at the thing I ask chatGPT for, yeah, like an 
> intern.  It'll say something that will prompt me to go down some road or 
> other, and I ignore the obviously wrong answers.  When I'm an amateur at the 
> thing, it sounds authoritative and I don't have the ability to know better.  
> But then, we're not paying for it to be RIGHT.  It's still a solution we're 
> trying to find a problem for.

---------------------------------------------------
PLUG-discuss mailing list: [email protected]
To subscribe, unsubscribe, or to change your mail settings:
https://lists.phxlinux.org/mailman/listinfo/plug-discuss
