Big Tech Needs to Vastly Improve Their Public Communications -- or
Potentially Face a Political Train Wreck Over AI (and More)
https://lauren.vortex.com/2023/04/11/big-tech-public-comms-train-wreck
In several of my recent posts:
The "AI Crisis": Who Is Responsible?
https://lauren.vortex.com/2023/04/09/the-ai-crisis-who-is-responsible
State and Federal Internet ID Age Requirements Are Hell-Bent on Turning
the Internet Into a Chinese-Style Internet Nightmare
https://lauren.vortex.com/2023/03/23/government-internet-id-nightmare
Giving Creators and Websites Control Over Generative AI
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai
and others in various venues, I have expressed concerns over the
"perfect storm" now gathering around "Big Tech" from both sides of the
political spectrum, with both Republicans and Democrats proposing
(sometimes jointly, sometimes in completely opposing ways)
"solutions" to various Internet-related issues -- some of these
issues being real, and others unrealistically hyped.
The latest flash point is AI -- Artificial Intelligence -- especially
what's called generative AI, seen publicly mainly in the form of
so-called AI chatbots.
I'm not going to repeat the specifics of my discussions on these
various topics here, except in one respect.
For many (!) years I have asserted that these Big Tech firms (notably
Google, but the others as well to one degree or another) have been
negligently deficient in their public communications, failing to
adequately ensure that ordinary non-technical people -- and the
politicians they elect -- understand the true nature of these
technologies.
This means both the positive and negative aspects of tech. But the
important point is that the public needs to understand the reality of
these systems, and not be misguided by the misinformation and often
politically biased disinformation that fill the information vacuum
left by these firms -- a vacuum created largely out of a misguided and
self-destructive fear of so-called "Streisand Effects" that the firms
worry will occur if they discuss these issues in any depth.
It is clear that such fears have done continuing damage to these firms
over the years, while robust public communications and public
education -- not talking down to people, but helping them to
understand! -- could instead have done enormous good.
I've long called for the hiring of "ombudspersons" or liaisons -- or
whatever you want to call them -- to fill these important
communications roles. These need to be dedicated positions created
specifically for this purpose.
The situation has become so acute that it may now be necessary to
create roles specific to AI-related public communications, to help
avoid the worst of the looming public relations and political
catastrophes that could decimate the positive aspects of these
systems, and over time seriously damage the firms themselves.
But far more importantly, it is society at large that will inevitably
suffer when politics and fear win out over a true understanding of
these technologies and how they actually impact our world -- again,
both positively and negatively, both now and into the future.
The firms need to do this now. Right now. All of the greatest
engineering in the world will not save them (and us!) if their abject
public communications failures continue as they have to date.
- - -
--Lauren--
Lauren Weinstein
[email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Twitter: https://twitter.com/laurenweinstein
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
Tel: +1 (818) 225-2800
_______________________________________________
pfir mailing list
https://lists.pfir.org/mailman/listinfo/pfir