On Saturday, November 2, 2024 at 2:33:46 PM UTC-6 Alan Grayson wrote:

On Saturday, November 2, 2024 at 12:48:22 PM UTC-6 John Clark wrote:

On Sat, Nov 2, 2024 at 2:01 PM Alan Grayson <[email protected]> wrote:

*> What form of crisis involving AI do you envisage? AG *


*Well for one thing humans will no longer be the ones calling the shots, 
but because an AI in 2028 or 2029 would be much more intelligent than I am 
I don't know what he would decide to do with us. He may figure we're more 
trouble than we're worth.  It's hard to predict the behavior of something 
that's much smarter than you are, if we're lucky Mr. AI might think we're 
cute pets and keep us around. *


If some nation state puts AI in control of our nuclear weapons/ 
deterrence, it could make bad decisions from humanity's POV. Other than 
that, I see no ostensible danger. AG 

 
Do you see any other AI/human interface that would pose a danger to 
humanity? AG 


*  John K Clark    See what's on my new list at  Extropolis 
<https://groups.google.com/g/extropolis>*

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/everything-list/6654d257-ee10-4906-b782-64bb4c51e83en%40googlegroups.com.
