AGI,

 

The most salient feature of the brain is its ability to organize information
and build structures. These structures of information are part of our daily
experience. We have studied them from every imaginable approach and field
of inquiry. We have classified them into many different types and given
them many different names: thoughts, ideas, notions, concepts, decisions,
opinions, results, questions, emotions, predictions, theories, methods,
procedures. Hawkins calls them invariant representations. For Gell-Mann
they are regularities. For Mike they are patterns. Others talk about
associations and bindings. But no one has managed to explain them. 

 

We can't explain them because we keep focusing on their differences and
miss what is common to all of them. Causality is what they all have in
common. I just said it, above, and you accepted it naturally: I said that
the brain organizes information. I even called them structures of
information, not just structures, meaning that they come from somewhere. I
claim that the brain makes all these structures by self-organizing the
information it has at hand, and that this self-organization happens
universally by removing excess entropy from the information. 

 

I can explain my claims from the principles of nature, but only Giovanni
would understand such an explanation, so I'll just give an overview. In
Physics (the brain is a physical system), causality is formalized by causal
sets (causets). All causets have a symmetry. This follows from the fact
that the order in a causet is only partial. Every physical system that has
a symmetry also has a conservation law. A conservation law predicts that
a certain quantity exists that is invariant under certain transformations,
but does not say how to find that quantity. Again, researchers from many
disciplines have studied these conserved quantities and named them
differently: attractors, observables, self-organized structures, patterns,
recognized images. They are Jeff Hawkins' invariant representations, Boris
Kazachenko's hierarchies. And again, no one can explain how to determine
them, certainly not Jeff, not Boris. Not even Gell-Mann. 
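As a minimal illustration of the "order is only partial" point (my own toy
sketch, not Sergio's formalism; the four-element causet and its relations are
invented for the example): a causet can be written as a set of elements with
covering relations, and partiality shows up as several total orderings
(linear extensions) all compatible with the same causal structure.

```python
from itertools import permutations

# A toy causet: a -> b means "a causally precedes b" (covering relations).
# This is the "diamond" poset: a < b, a < c, b < d, c < d.
relations = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
elements = {"a", "b", "c", "d"}

def precedes(x, y, rel):
    """Transitive-closure check: does x causally precede y?"""
    frontier = {b for (a, b) in rel if a == x}
    seen = set()
    while frontier:
        z = frontier.pop()
        if z == y:
            return True
        if z in seen:
            continue
        seen.add(z)
        frontier |= {b for (a, b) in rel if a == z}
    return False

def linear_extensions(elems, rel):
    """All total orderings consistent with the partial causal order."""
    return [p for p in permutations(sorted(elems))
            if all(not precedes(p[j], p[i], rel)
                   for i in range(len(p)) for j in range(i + 1, len(p)))]

exts = linear_extensions(elements, relations)
# The diamond admits two linear extensions, since b and c are causally
# unrelated: (a, b, c, d) and (a, c, b, d).
```

The freedom to swap causally unrelated elements without changing the causal
structure is the kind of symmetry the paragraph above refers to.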

 

Why are conserved quantities important? Because they are certain. They
provide something to hold on to in the middle of all that uncertainty.
Think how important the conservation of energy is. Did you know that the
entire science of predicting a hurricane relies on the conservation of
energy? It's a situation of chaos, or turbulence, so the Navier-Stokes
differential equations cannot be solved. The hurricane is divided into
blocks, say 1 mi x 1 mi x 1 mi, and equations are written for the balance
of energy in each block. That's all they can do. And that's why the smaller
the blocks, the better the prediction. Conservation laws are fundamental in
modern theoretical Physics. 
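The block-balance bookkeeping can be sketched in a toy one-dimensional form
(an illustration of the conservation argument only, not a weather model; the
cell values and the coefficient kappa are made up): energy moves between
neighboring cells only through fluxes, and since every flux leaves one cell
and enters its neighbor, the total is conserved exactly by construction.

```python
# Toy 1-D finite-volume energy balance: each cell exchanges energy with its
# neighbors through pairwise fluxes. Because every flux is subtracted from
# one cell and added to the adjacent one, total energy cannot change.
def step(energy, kappa=0.1):
    n = len(energy)
    # Flux between cell i and cell i+1, proportional to their difference.
    fluxes = [kappa * (energy[i + 1] - energy[i]) for i in range(n - 1)]
    new = list(energy)
    for i, f in enumerate(fluxes):
        new[i] += f        # energy flowing in from the right neighbor
        new[i + 1] -= f    # the same energy leaving that neighbor
    return new

cells = [5.0, 1.0, 0.0, 3.0]
total0 = sum(cells)
for _ in range(100):
    cells = step(cells)
# The distribution smooths out, but the total is unchanged.
assert abs(sum(cells) - total0) < 1e-9
```

The per-block balance equation is the whole content of the scheme; refining
the blocks refines the prediction, exactly as described above.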

 

I know a good deal about those conserved structures. Their number is
countably infinite. The number of causets is also countably infinite. There
is a bijective, one-to-one correspondence between each causet and its
corresponding self-organized structure. This map I call function E, for
emergent inference. I propose E as a new, natural mathematical logic. E is
uncomputable in general, but becomes computable once the expression of the
functional is given. The inverse function E^-1 exists and is computable.
One can go from a given structure to its causet, and then back to the same
structure (for a prescribed granularity of description). There is nothing
else left. If you think of something, then you can ask for the causes of
your thought (attribution), and reconstruct that thought on a computer.
It's not that I invented the engine; I invented the whole car. 

 

The rest of the story you already know. I discovered the action functional
(not invented, not engineered, but observed, in nature, in a physical
system), and the rest follows. I have claimed that the brain makes its
structures of information by self-organizing the information it has
available. Each one and all of them. I have explained how to determine
self-organized structures from causal sets, I have proved that this
self-organization actually happens, and in a few cases, within my
resources, I have shown that my artificial self-organized structures are
the same ones that the brain naturally makes. I want to do more, but I need
special hardware. There is a daunting task ahead. But that task is not AGI.
AGI has already been solved. The task is to build an AGI machine and start
using it for applications. One conserved quantity at a time. 

 

On this blog, Mike Tintner is the only person who understands the nature of
function E. He feels that something is not right, but can't explain it. He
talks about infinite variety, infinitely many forms and shapes, patchworks
and patterns. Several others are catching the scent, but are not yet aware
of the full meaning and scope of function E. Steve Richfield is proposing
real numbers and derivatives to explain AGI, possibly from his experience
in the area of dynamic control. Will this idea work? Sure, to some extent.
It will eliminate some of the entropy and explain some subset of function
E, but many other structures will be left unexplained. The same goes for
OpenCog, neural networks, brain reverse-engineering, and large brain
simulation projects. Ben wants to optimize total computational resources.
Same thing. He is now thinking of a research institute in China. I know it
takes courage to think the way I do, or even to consider what I am saying,
because one has to throw away so much. So what? The Chinese have more
courage than the Americans? Is that it? You know, 0 + 0 + ... + 0 = 0.
Just look at what happened to the much heralded Santa Fe Institute. They
sure did a great deal of useful research, but they did not explain
self-organization - their original objective - and they sure did not
explain intelligence. 

 

Meanwhile, I am being ignored, accused, even insulted. I don't care.
I am right, the others are wrong. And that is all that matters. 

 

Sergio

 

 

 

From: Ben Goertzel [mailto:[email protected]] 
Sent: Wednesday, August 29, 2012 2:31 PM
To: AGI
Subject: Re: [agi] LM741

 


Steve,

 

Picking one particular tiny illustrative detail of this - my realization
that neurons MUST communicate derivatives like dP/dt rather than straight
probabilities, to be capable of temporal learning without horrendous
workarounds. I thoroughly explained it on this forum, and no one objected to
any of it, yet it has changed nothing.

 

To those of us not working on neural net models, this sort of insight is
kinda irrelevant...

 

But still, this is an interesting observation.

 

It reminds me of work studying neural population coding using Fisher
information

 

http://prl.aps.org/abstract/PRL/v97/i9/e098102

 

[Fisher information being an average of the second derivative of a
probability density, it's kinda like the derivatives you reference...]

 

I'm curious: How would you modify, for instance, the Izhikevich neuron
equations

 

http://www.izhikevich.org/publications/spikes.htm

 

in accordance with your idea?  (I reference this just because it's the
neuron model I've worked with most recently.)

 

Regarding your idea for a cross-disciplinary math/AI/neuro research
institute -- I wish I had the power to get something like that formed.
Maybe I'll be able to do it in a few years time, in HK or China or
Singapore, we'll see...

 

-- ben 


AGI |  <https://www.listbox.com/member/archive/303/=now> Archives
<https://www.listbox.com/member/archive/rss/303/18883996-f0d58d57> |
<https://www.listbox.com/member/?&;
ad2> Modify Your Subscription

 <http://www.listbox.com> 

 



