We seem to be inadvertently empowering some VERY wrong people.

If you haven't noticed, a significant fraction of the population now
believes AGI is already here in a BIG way - not in the form people here are
working toward, but in the ways depicted in movies, etc. This appears to be
leading in some BAD directions.

In replacing God with fake AGI, there is significant collateral damage to
Buddhism and other ethics-based belief systems. This is now twisting our
society in some strange ways - just look at prime-time TV, now FULL of
crime dramas that clearly present the proposition that might makes right.

Some well-meaning people on this forum have inadvertently contributed to
this with crazy-optimistic predictions that unbridled AGI was about to
emerge.

I once made the rounds speaking at various colleges, explaining how it was
physically impossible to shoot down sub-orbital warheads between their
launch and re-entry phases, in an effort to stop the crazy spending on SDI.
Time has proven me 100% correct, but time has also proven that the goal of
SDI was to bankrupt Russia, NOT to shoot down warheads. Somewhere in
Russia, my counterpart was probably saying the same things and being
ignored - or worse. In short, I was right, but fortunately I failed to get
my message generally accepted.

My point here is that correctness and social benefit often have
little/nothing to do with each other. Hence, we all need to do some
self-examination to see whether we are the social equivalent of 3-year-olds
with loaded guns.

I see NOTHING good coming from publicly promoting full AGI at this time.
However, it might be possible to reframe the discussion for everyone's
benefit, e.g. by dissecting the AGI concept enough to identify which parts
are socially responsible to discuss in public, and which parts will only
further twist our society once the screenwriters get hold of them.

In short, I think we should be attending to the ethics of our
not-yet-a-profession. I would start with something like "AGI appears to be
as potentially dangerous as cold fusion" and show surely-safe paths
forward. It is one thing to have an intelligent problem solver, and quite
another to arm a problem solver to enforce its (final?) solutions.

So, is anyone here interested in discussing ethics?

Steve

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T992b9674ba947ee9-M08d84f95b1fd75fbe6c176e1
Delivery options: https://agi.topicbox.com/groups/agi/subscription
