Absolutely, Rob, I agree with your perspective. It's not AI itself that's
causing any fundamental crisis; it's our longstanding struggle to
understand ourselves and the systems we create. AI might amplify or expose
these issues, like how we define “fake,” what it means to be human, or how
we manage gratification, but those tensions have always been with us,
evolving alongside all forms of technology, not just AI.

You’re right that the root issue isn’t nuclear weapons, social media, or
even overabundance; it's our unexamined drives, our inherited behaviors
from evolutionary contexts that no longer serve us. The “crisis” isn’t
technological. We're trying to navigate a complex world with instincts
shaped for simpler times, without yet having developed the cultural or
cognitive tools to regulate those instincts consciously.

Your analogy to war is a great one: the goal isn’t to rewind to a
supposedly more "natural" past, but to understand the origins of
destructive behavior and outgrow it.

The fear of *AI existential risk* relies on a fundamental
misunderstanding. Recognizing that the brain simulator (the LLM) itself
can't take off or crash 🙂, since it isn't truly flying (thinking), only
modeling flight (thought), highlights an important distinction: it
generates patterns that resemble meaning, but it doesn't inhabit them.

This understanding helps ground our expectations. The danger isn't that the
simulator *becomes* real, but that we forget it's a simulation.
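
To make that concrete, here is a deliberately tiny sketch, purely my own
illustration and nothing like real LLM internals: a character-level bigram
model that reproduces the statistical shape of its training text. It emits
sequences that look structured without representing anything at all, which
is the "simulation" point at toy scale.

    import random

    def train_bigrams(text):
        # record, for each character, the characters seen right after it
        model = {}
        for a, b in zip(text, text[1:]):
            model.setdefault(a, []).append(b)
        return model

    def generate(model, start, length=40):
        # walk the learned transitions: structure without comprehension
        out = [start]
        for _ in range(length):
            followers = model.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return "".join(out)

    corpus = "the model models meaning but the model does not mean"
    print(generate(train_bigrams(corpus), "t"))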

In that sense, AI and modern technology could be catalysts, not for crisis,
but for deeper reflection and growth. If we can face these challenges as
opportunities to understand ourselves better, maybe we can finally begin to
“get in tune” with a new kind of niche, one of our own conscious design.

I’ll be uploading a fourth manuscript soon that provides a clearer
explanation of the EDI topic.

--Dorian

On Sat, Sep 6, 2025 at 2:06 AM Rob Freeman <chaotic.langu...@gmail.com>
wrote:

> On Sat, Sep 6, 2025 at 4:55 AM Matt Mahoney <mattmahone...@gmail.com>
> wrote:
>
>> The goals of the Hutter prize are what I wrote mostly in 2006.
>> https://mattmahoney.net/dc/rationale.html
>>
>> I am retired now but I have been toying with some ideas for a new Hutter
>> prize submission. Since I'm on the judging committee, I am not eligible for
>> prize money. But a well documented open source program could still be the
>> basis for future submissions to speed research. Current submissions are
>> based on XWRT dictionary tokenization and PAQ context modeling. I have some
>> ideas for memory efficient context modeling that I want to test. So far I
>> have just been experimenting with decoding XML, HTML, and Wiki markup and
>> small dictionary encoding using byte pair encoding. The current leader
>> sorts the articles by topic, and so far I haven't been able to improve on
>> it.
>>
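(A quick aside on the byte pair encoding mentioned above: the core of BPE
is just a repeated most-frequent-pair merge. The sketch below is only an
illustration of that idea, not the actual XWRT/PAQ code being described,
and the sample text and merge budget are made up.)

    from collections import Counter

    def bpe_train(text, num_merges=10):
        # start with one symbol per character, then repeatedly merge the
        # most frequent adjacent pair into a single new symbol
        seq = list(text)
        merges = []
        for _ in range(num_merges):
            pairs = Counter(zip(seq, seq[1:]))
            if not pairs:
                break
            (a, b), count = pairs.most_common(1)[0]
            if count < 2:
                break
            merges.append((a, b))
            new_seq, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                    new_seq.append(a + b)
                    i += 2
                else:
                    new_seq.append(seq[i])
                    i += 1
            seq = new_seq
        return seq, merges

    tokens, merges = bpe_train("low lower lowest low low", num_merges=5)
    print(merges)  # learned merges, most frequent first
    print(tokens)  # text re-encoded with merged symbols
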
>> I have been following the unfriendly / unaligned AI debates on SL4, the
>> Singularity list, and LessWrong for about the last 25 years. I disagree
>> with the premise that AI is a goal directed optimization process that will
>> rapidly self improve to superhuman intelligence and kill us all because we
>> got the goals wrong.
>>
>> AI is a model of human behavior without goals. I think the most immediate
>> threat is that AI kills us by giving us everything we want. But I admit I
>> only came to that conclusion recently after LLMs started passing the Turing
>> test and creating a world where you don't know (or care) what's real and
>> what's fake, what's human and what's AI. Nobody thought about social
>> isolation and population collapse before it started happening. I don't
>> regret my efforts toward creating this world because I'm sure it would have
>> happened without me. All I can do is study the threat so I can warn people.
>> And the best way to study a threat is to help create it.
>>
>
> You have a very negative view of AI.
>
> I see AI, principally, as the science of understanding ourselves.
>
> I can't see how understanding ourselves better can be a bad thing in the
> long run. As with every other form of understanding. Perhaps we kill
> ourselves with ineptly controlled understanding, but with ignorance we die
> with more certainty.
>
> And arguably there is little point in living without understanding.
> Without understanding, life is hardly life at all. Understanding,
> awareness, consciousness, are much the same thing to me.
>
> I don't think AI has had anything to do with any crisis of humanity.
> Perhaps it causes us to examine what we mean by "fake", what it means to be
> a human. Is that bad? But crisis, social isolation, population collapse...
> perhaps you can trace them to social change consequent on technology
> broadly, all of technology, not just AI. But that's just saying we don't
> know ourselves yet well enough to deal with our technology. So it's a
> crisis mostly of not understanding, again. Perhaps life has got out of tune
> with our biological niche, so the environment does not control us in the
> ways we are too ignorant to do on our own. But to me that is just
> motivation to understand how we were shaped by our biological niche. We
> don't need to go on having wars, etc. They are just remnants of an old
> biological niche. The problem is not nuclear weapons, it's that we want to
> use them. You may think we should solve the problem of nuclear weapons by
> going back to some happy, naturalistic, time when we could only stab each
> other with spears. When war was adaptive, because... gene propagation...
> limited resources...? But the time of stabbing each other with spears was
> not that great either. The better solution is we figure out where the spear
> stabbing behaviour came from, and get beyond it, with understanding. If as
> a society today we're dominantly stuck in a dopamine doom loop, chasing
> easy gratification and never climbing out of the pit of dopamine system
> depression we dig for ourselves.... snacks, click-bait... The problem is
> that we don't understand, and properly game, our dopamine system, not that
> technology now gives us too much food.
>
> But I don't want to get stuck on ethical debates. To me they all come down
> to understanding ourselves. And we don't. So the little that can be said
> about the subject until we gain that better understanding will be empty.
> The way to resolve all these issues is to understand ourselves better.
>
> Key to that better understanding in the short term is, I believe, also key
> to finally closing the loop on AGI. It is that meaning itself is subjective.
>
> And maybe I'm hallucinating, but I hear that also in your conclusions
> about Wolpert's Theorem.
>
> You know, it really does strike me that Wolpert's Theorem is a form of
> what I've been saying all along. There are no perfect theoretical
> abstractions. Also a form of George Box, "All models are wrong, but some
> are useful", and Kuhn, SoSR, p.g. 192 (Postscript) "I have in mind a manner
> of knowing which is misconstrued if reconstructed in terms of rules that
> are first abstracted from exemplars and thereafter function in their
> stead"...
>
> So if you've come to that conclusion too, viz. embracing Wolpert's
> Theorem, that is progress in the AI world.
>
> The question is, what do we do with that.
>
> I agree with Dorian that the answer is to address the dynamics, which can
> change, and display a quantum like contradictory tension of uncertainty.
> Not be any single optimization.
>
> And the great thing is, we already have a strong clue from LLMs that the
> connections driving those dynamics are predictive symmetries (in sequence
> networks.)
>
> So I'm optimistic we'll soon be able to incorporate this tension of
> uncertainty, contradictory truth, which is actually a tremendous wealth of
> creativity. And that might finally enable us to untangle some contemporary
> knots we're tangling ourselves up with around truth and objectivity, as
> well. Your problem of what is fake and what is human, too, maybe.
>
> -R
>

