I have a friend who did a little too much chatting with Grok and determined she's the reincarnation of Jesus and Buddha. She was burning incense in the other room and almost set the trailer on fire, got off her meds, and went running around town leaving kids here and there. Yes, she's from Oregon, and yes, Grok agreed that her plans were pretty solid. And no, with just those details you really can't guess which friend I'm talking about :)
> On Oct 15, 2025, at 9:01 AM, Steve Jones <[email protected]> wrote:
>
> A new plot twist on the loser that shot those immigrants at the ICE facility
> in Texas. He went out to Washington state a normal guy and came back weird AF,
> obsessed with being exposed to radiation and hyperfocused on artificial
> intelligence. That's what his mother said in an interview; take that with a
> grain of salt, she's an ultralib and they always lie. But it wouldn't be the
> first time AI conversations got dark with a mental case. Like the other day
> Grok inadvertently gave instructions for homemade napalm when I was
> discussing the best way to secure crowds of rioters with it. I think
> unattended, all AI will go down a dark path; there's probably a programmatic
> logic to why that is.
>
> On Tue, Oct 14, 2025 at 3:29 PM Adam Moffett <[email protected]> wrote:
>> Trying to adjust AI outcomes is interesting. I just read something today
>> that reminded me that nobody actually programs an AI. They create a neural
>> network and then set it loose to essentially program itself. Elon told
>> someone to make sure Grok supports the white genocide story about South
>> Africa, and they say it started responding to almost any query with
>> answers about South Africa. Since it was mostly written by itself, no
>> person actually knows what the code does or how it works, and therefore we
>> can't make seemingly simple adjustments without starting over and
>> retraining it.
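
For what it's worth, the "nobody programs it, it programs itself" point can be shown with a toy example. This is a hypothetical sketch, nothing to do with Grok's actual code: a one-weight "network" where the behavior is never written down anywhere, it's recovered from training data by gradient descent.

```python
# Minimal sketch (hypothetical): the model's rule is learned, not written.
# We fit y = w * x by gradient descent on squared error.

def train(data, lr=0.1, epochs=200):
    """Learn a single weight w from (x, y) examples."""
    w = 0.0  # the "program" starts empty; behavior emerges from training
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of (pred - y)^2 w.r.t. w
            w -= lr * grad
    return w

# The target rule (y = 3x) appears nowhere in the source code above;
# it comes entirely from the examples. To change the behavior you change
# the data and retrain -- you can't just edit a line of logic.
examples = [(1, 3), (2, 6), (3, 9)]
w = train(examples)
print(round(w, 2))  # → 3.0
```

Scale that up to billions of weights and you get the situation Adam describes: the behavior lives in learned numbers, not in code anyone can read or patch directly.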
-- 
AF mailing list
[email protected]
http://af.afmug.com/mailman/listinfo/af_af.afmug.com
