I'll leave the relevance up to the interested reader:

Ideas by Statistical Mechanics (ISM)†
<https://www.ingber.com/smni06_ism.pdf>
Lester Ingber
Lester Ingber Research (LIR)
<ing...@ingber.com> <ing...@alumni.caltech.edu> [http://www.ingber.com]
Abstract
Ideas by Statistical Mechanics (ISM) is a generic program to model
evolution and propagation of ideas/patterns throughout populations
subjected to endogenous and exogenous interactions. The program is based on
the author’s work in Statistical Mechanics of Neocortical Interactions
(SMNI), and uses the author’s Adaptive Simulated Annealing (ASA) code for
optimizations of training sets, as well as for importance-sampling to apply
the author’s copula financial risk-management codes, Trading in Risk
Dimensions (TRD), for assessments of risk and uncertainty. This product can
be used for decision support for projects ranging from diplomatic,
information, military, and economic (DIME) factors of propagation/evolution
of ideas, to commercial sales, trading indicators across sectors of
financial markets, advertising and political campaigns, etc.
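
Since the abstract leans on the author's Adaptive Simulated Annealing (ASA)
for fitting parameters and training sets, here is a minimal sketch of what
that kind of fitting step can look like. It uses plain simulated annealing in
Python, not Ingber's actual ASA code, and the toy model, cost function, and
cooling schedule are illustrative assumptions only:

import numpy as np

rng = np.random.default_rng(0)
target = rng.random(8)                  # toy pattern standing in for an "idea"

def model(theta):
    # Toy stand-in for macrocolumnar activity as a function of its parameters.
    return 1.0 / (1.0 + np.exp(-theta))

def cost(theta):
    # Mismatch between the model's output and the target pattern.
    return float(np.sum((model(theta) - target) ** 2))

theta = np.zeros(8)
c = cost(theta)
best_theta, best_c = theta.copy(), c
T = 1.0
for _ in range(20_000):
    cand = theta + rng.normal(scale=0.1, size=theta.size)
    c_cand = cost(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse moves.
    if c_cand < c or rng.random() < np.exp((c - c_cand) / T):
        theta, c = cand, c_cand
        if c < best_c:
            best_theta, best_c = theta.copy(), c
    T *= 0.9995                         # simple geometric cooling schedule
print("best cost:", best_c)

ASA itself adapts its annealing schedules per parameter and supports
re-annealing; the sketch above keeps only the accept/reject skeleton.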

It seems appropriate to base an approach for propagation of ideas on the
only system so far demonstrated to develop and nurture ideas, i.e., the
neocortical brain. A statistical mechanical model of neocortical
interactions, developed by the author and tested successfully in describing
short-term memory and EEG indicators, is the proposed model. ISM develops
subsets of macrocolumnar activity of multivariate stochastic descriptions
of defined populations, with macrocolumns defined by their local parameters
within specific regions and with parameterized endogenous inter-regional
and exogenous external connectivities. Parameters of subsets of
macrocolumns will be fit using ASA to patterns representing ideas.
Parameters of external and inter-regional interactions will be determined
that promote or inhibit the spread of these ideas. Tools of financial risk
management, developed by the author to process correlated multivariate
systems with differing non-Gaussian distributions using modern copula
analysis, importance sampled using ASA, will enable bona fide correlations
and uncertainties of success and failure to be calculated.  Marginal
distributions will be evolved to determine their expected duration and
stability using algorithms developed by the author, i.e., PATHTREE and
PATHINT codes.
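
And to make the copula step concrete, a small sketch of the general idea:
couple two non-Gaussian marginals through a Gaussian copula and read off a
joint tail probability. This uses plain Monte Carlo rather than the ASA
importance sampling or the TRD marginals described above; the Student-t and
lognormal marginals and the correlation value are illustrative assumptions
only:

import numpy as np
from scipy import stats

rho = 0.6                                # assumed dependence strength
cov = [[1.0, rho], [rho, 1.0]]
rng = np.random.default_rng(0)

z = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
u = stats.norm.cdf(z)                    # uniforms with Gaussian dependence

x1 = stats.t.ppf(u[:, 0], df=4)          # fat-tailed marginal (hypothetical factor 1)
x2 = stats.lognorm.ppf(u[:, 1], s=0.5)   # skewed marginal (hypothetical factor 2)

# Probability that both factors land in their lowest 5% at the same time;
# with rho > 0 this is well above the 0.25% an independence assumption gives.
q1, q2 = np.quantile(x1, 0.05), np.quantile(x2, 0.05)
print("joint 5% tail probability:", np.mean((x1 < q1) & (x2 < q2)))

The point is that the dependence structure, not the marginal distributions
alone, sets the probability of simultaneous adverse moves, which is what the
risk and uncertainty assessments above are after.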

On Sun, Aug 24, 2025 at 11:21 AM Dorian Aur <dorian...@gmail.com> wrote:

>
> For over 80 years, from the abstractions of McCulloch and Pitts to today’s
> large language models, we have been building simulated intelligence.
> Today's "AI" is a digital replica of brain-like processes, running on
> silicon and operating through mathematical operations instead of biological
> ones.
>
> Think of it like a flight simulator. A simulator can accurately recreate
> the cockpit experience and train pilots, but it will never fly. *The
> simulator never actually leaves the ground*. Digital AI is similar: it
> emulates brain intelligence but lacks a true physical embodiment.
>
> This is where *Electrodynamic Intelligence (EDI)
> <https://doi.org/10.5281/zenodo.16929461>* offers a transformative path
> forward.
>
> Rather than modeling intelligence through symbolic or statistical
> computation, in *EDI cognition develops from real-time physical
> interactions, e.g., the self-organizing dynamics of charges and fields
> within and across neurons in an artificial brain*. The electrodynamic
> processes are not metaphors; they are materially grounded forms of
> computation that operate through ionic flows, field interactions, and
> nonlinear dynamics.
>
> *EDI <https://bit.ly/45JPjsg>* is not a simulation of the brain; it is an
> *embodied approach to intelligence*, rooted in the same physical principles
> that underlie the biological brain. It opens a path toward systems that
> *act* through matter rather than *model* through abstraction.
>
> Just as real flight requires lift, drag, and thrust, not numbers on a
> screen, this *embodied intelligence requires the physics of the brain,
> not just the logic of the code*. Ben, I hope we’ll have the opportunity
> to feature a summary of this paper at the upcoming AGI conference.
>
> Fifteen years ago, when we launched *Neuroelectrodynamics* as a
> theoretical framework, the technological landscape simply wasn’t ready to
> support its implementation. Now, however, *Colin*, the situation has
> changed dramatically: we're in a position to build.
>
> - Dorian Aur
>
> PS Matt, EDI doesn’t just solve these problems; it avoids them altogether
> by not sharing the same structural assumptions. It’s not about controlling
> artificial goals; it is about building intelligence that grows, adapts, and
> lives in the world as we do, not above it.
>
>
> On Mon, Aug 11, 2025 at 9:46 AM Matt Mahoney <mattmahone...@gmail.com>
> wrote:
>
>> Discussion of AI existential risk on LessWrong. To summarize: we don't
>> know how to solve the alignment problem. If we build AGI, it will probably
>> kill all humans because we don't know how to give it the right goals.
>> Therefore we should not build it, or at least build an "off" switch to
>> quickly shut it down.
>>
>> My thoughts:
>>
>> 1. The premise seems correct. We measure intelligence by prediction
>> accuracy. Wolpert's law says two agents cannot mutually predict each other.
>> If an agent is smarter than you, then you can't predict its actions, and
>> therefore cannot control it.
>>
>> 2. An LLM has no goals. It just predicts text. However, applications that
>> use it do have goals. You can tell an LLM to express any human goals or
>> feelings. So alignment seems solvable, at least for now.
>>
>> 3. Let's say we do solve the alignment problem. Then AGI will kill us by
>> giving us everything we want. AI agents will replace not just workers, but
>> friends and lovers too. We will become socially isolated and stop having
>> children.
>>
>> 4. The goal of all agents in a finite universe is a state of maximum
>> utility, where any thought or perception is unpleasant because it would
>> result in a different state. Your goal is death. You just don't know it
>> because evolution programmed you to fear death.
>>
>> 5. An "off" switch will fail because AGI could kill us before we know
>> anything is wrong. I don't even know why they proposed it.
>>
>> 6. We will build AGI anyway because human labor costs $50 trillion per
>> year, half of global GDP.
>>
>>
