On Saturday, November 2, 2024 at 11:28:08 PM UTC-6 Alan Grayson wrote:

On Saturday, November 2, 2024 at 10:21:39 PM UTC-6 Brent Meeker wrote:

On 11/2/2024 7:37 PM, Alan Grayson wrote:

On Saturday, November 2, 2024 at 5:52:14 PM UTC-6 Brent Meeker wrote:

On 11/2/2024 3:41 PM, Alan Grayson wrote:

On Saturday, November 2, 2024 at 2:42:04 PM UTC-6 Brent Meeker wrote:

On 11/2/2024 9:47 AM, Alan Grayson wrote:

On Saturday, November 2, 2024 at 10:26:56 AM UTC-6 John Clark wrote:

On Sat, Nov 2, 2024 at 12:03 PM Alan Grayson <[email protected]> wrote:

*> I don't think transsexual men should be allowed in women's sports.*


*I don't give a damn if transsexual men are in women's sports or not. 
Compared to the huge cosmic upheaval that we are 1000 days or less away 
from, the entire transsexual issue is of trivial importance. I have more 
important things to worry about. *


You mean global nuclear war? AG 


Yes, or even limited nuclear war, and global warming, pandemics, and 
economic inequality.

Brent


Could AI *independently* attach itself to our war-fighting nuclear weapons 
and hypothetically disconnect any recall codes? Or could any nation state 
be so stupid as to deliberately connect it, and render human intervention 
null and void? AG 


You have to ask yourself: what is the "motivation" of this AI?  Generally 
they are just answering questions.  To create the kind of problem you're 
contemplating, an AI would have to be given some motive for global nuclear 
war.

Brent


Does, or can, AI have intentions or motives? I saw a video of Noam Chomsky, 
and he was very skeptical that AIs have intentions, or even language as we 
know it. IYO, do we really have anything to fear from AI? AG 

Well, for LLMs, their motivation is just to answer a question posed to them 
or produce a piece of artwork.   But suppose you're working for Putin and 
prompt Claude: "Write a news-reporter piece interviewing a Haitian 
immigrant about how to cook small animals like cats and dogs, and put it on 
X."  That's not global nuclear war, but it might contribute to it.  Then 
prompt it again: "Write another one as if interviewing a Russian spymaster 
who has been paying Doug Emhoff for years to steal secrets."  Here the AI 
doesn't have any nefarious motive, but it could have a big effect.

Brent  


In this model, AI just answers questions, for good or evil. Can you 
imagine a scenario where AI directly runs, controls, or in any respect 
operates our nuclear weapons, or those of other nation states? AG 


Or, can you imagine a scenario where AI would want, or strive, to be 
situated as described above with respect to any state's nuclear weapons? 
Is there any evidence it can have desires and intend to do anything? JC is 
worried about a crisis roughly 1000 days away, one apparently and 
deliberately orchestrated by AI. Would this be willful or accidental? What 
is the presumed method or form of action? AG 
