Adam,

ADAM> What sort of bottlenecks prevent emulating more neurons? Could you
reach human-brain level complexity with current technology (and your Entropy
Processor) with sufficient funding? What sort of resources would you need? 

SERGIO> The EP is an FPGA with a bunch of "neurons" and connections all
working at the same time. The computer sends in a high-entropy causal set
and out comes a low-entropy algorithm representing the invariant behavior.
The input comes from sensors, for example a camera. The output goes directly
to actuators, for example the limbs of a robot. 
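To make the entropy-removal step concrete, here is a toy sketch in Python. It is not the EP's actual algorithm; it assumes (purely for illustration) that "entropy" can be measured as the total distance between causally linked elements in a linear arrangement, and it simply brute-forces the legal arrangements that minimize that total.

```python
from itertools import permutations

# Toy causal set: (cause, effect) pairs over elements a..e.
causal_set = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("d", "e")]
elements = sorted({x for pair in causal_set for x in pair})

def is_linear_extension(order):
    """A valid ordering must place every cause before its effect."""
    pos = {x: i for i, x in enumerate(order)}
    return all(pos[c] < pos[e] for c, e in causal_set)

def action(order):
    """Total distance between causally linked elements (the 'entropy' proxy)."""
    pos = {x: i for i, x in enumerate(order)}
    return sum(pos[e] - pos[c] for c, e in causal_set)

# Brute-force search: keep only the legal orderings with minimal action.
legal = [p for p in permutations(elements) if is_linear_extension(p)]
best = min(map(action, legal))
minimal = [p for p in legal if action(p) == best]
print(best, minimal)
```

The surviving low-action orderings play the role of the low-entropy output; a real processor would of course search this space in parallel rather than by enumeration.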

There are many things that need to be learned before considering
human-level emulation. That's why I want to build a device with just enough power to
run some decent experiments, say 50K or 100K neurons. It will be at the
bottom of the scale, 1D (3D will be needed), and using only first-order
causal logic (higher order needed). The purpose is to determine basic
parameters: how many gates per neuron, the maximum number of connections per
neuron, and so on. Then, start using the EP for something simple such as image
recognition and gain some public exposure. 

To do that I need a computer engineer (my ex-student, who is looking for
work right now, is a candidate) and somebody to work on the image
recognition. As it is, I only have two other students who may want to build
the EP as a master thesis but will decide next summer. Of course, the EP can
be placed in the context of a larger research project, of which the first EP
will be part. I am a physicist and I can do my part, but I can't do
neuroscience. You are a neuroscientist; you can do the neuroscience. What do
you have in mind? 


STEVE> I myself am just a neuroscientist/psychologist, and am interested in
learning more about your work.
SERGIO> Sure. I can suggest some additional reading, and you can also visit
my website (scicontrols.com). In the references (just click any reference to
my name in the homepage), you can see my publications. My only major papers
published so far are (2009a) and (2011a). They'll give you background, but
keep in mind that much of what we talk about is still unpublished. You will
also find references to several of my talks at NASA. This is the Workshop on
Automation and Robotics, which meets once a year. This workshop has served as
a powerful source of inspiration (something I cannot say about AGI). 

Oh, just remembered. A great review about embodiment was recently published
by a team of psychologists, Pezzulo, Barsalou, et al. You can find the
reference on that same references page. I also wrote some comments about the
review; follow the link to "The mechanics of Embodiment" from my home page. 


STEVE> It seems that the "topographic organization" of sensory inputs is an
essential part of embodiment, but the multi-modal nature of the inputs
referring to a particular system (i.e., the body) provides rich,
time-varying, correlated inputs--where changes result from
endogenously generated outputs to effector systems--that make it a
particularly good toe-hold for bootstrapping a mind. Ideally, I would like
to see your algorithm/implementation used as a control system for an
embodied agent as it couples with its environment. But I am not familiar
with your theory, so perhaps I am misunderstanding what you are proposing. 

SERGIO> You are understanding very well. Maybe you and I should write a
paper and publish it in one of the journals Friston publishes in. That
would be one way to let him know we exist. And maybe, if we could get a
virtual reality guy, then it would be possible to use the EP to control an
avatar. 


STEVE> multilevel evolutionary developmental optimization
STEVE> ... Another mechanism for neural Darwinism that I don't mention--and
which I don't believe Edelman discusses either--is synaptic homeostasis. The
brain builds up an excess of connections each day through Hebbian/Hayekian
learning, which are then pruned back during sleep. Theoretically, since the
PFC (prefrontal cortex) is a topologically central hub, this buildup
of connections (i.e., variation/replication), and then pruning during sleep
(i.e., selection) could help to implement an evolutionary optimization
process for minimizing connection lengths.
http://www.ncbi.nlm.nih.gov/pubmed/16376591

SERGIO> Kauffman says that self-organization plays a very important role in
evolution. Tononi seems to suggest that the brain learns and accumulates
entropy during wakefulness, and sheds the excess entropy during sleep (i.e.,
self-organizes). I myself have experiences in that direction all the time. I
always have my best ideas when I begin to awake, and they go away in seconds
after full awakening. So I write them down while still semi-asleep. Later,
when reading what I wrote, it looks like a Picasso, but the ideas are there
and I only have to spell them out. This is how I developed much of my work. 


I still have to read the most interesting part: your attachments. I'll be
honored to collaborate with you. 

Sergio

-----Original Message-----
From: Adam Safron [mailto:[email protected]] 
Sent: Friday, August 17, 2012 11:48 PM
To: AGI
Subject: Re: [agi] Uncertainty, causality, entropy, self-organization, and
Schroedinger's cat.

Comments interspersed below:


On Aug 17, 2012, at 3:57 PM, Sergio Pissanetzky <[email protected]>
wrote:

> Adam,
> 
> I appreciate the detailed information you are posting. There are a couple
of critical points - embodiment is one of them - that I would like to
discuss. I can't promise much more; I'll do what time permits. I'll start
with a short introduction about what is happening here. 

A: I completely understand being time-limited, but am glad to be conversing.

> I have the massively parallel algorithm ready (see reply to Jim) and plans
to build a prototype Entropy Processor based on the theory and implemented
on an FPGA controlled by a PC. I was hoping for a USB device but that's
unsure. I was also aiming at 1M "neurons" but I'll be lucky if we make it to
50,000 or 100,000. The only purpose of the processor is to remove entropy
from a causal set, the PC does the rest. The processor is general purpose
and works very differently from Google's patents on entropy processors, which
are specialized for video. 

A: That's very exciting news. What sort of bottlenecks prevent emulating
more neurons? Could you reach human-brain level complexity with current
technology (and your Entropy Processor) with sufficient funding? What sort
of resources would you need?

> The first student on this subject at U of H CL graduated last month with a
Master's in Computer Engineering; his thesis was an application of entropy
processing to parallel programming. I was the thesis advisor. The purpose of
the prototype is to demonstrate some practical applications and apply for
funding for a full-size prototype. My level of participation is that I
continue developing the theory, advise on the project, and try to "entice"
the scientific community with possible applications to their particular
disciplines. AGI, neuroscience, parallel programming, image processing, and
the other GUAPs are among the various possible applications. Of course, the
specific work will have to be done by experts in each field, I am myself
just a physicist and not an expert in any of them. 

A: I myself am just a neuroscientist/psychologist, and am interested in
learning more about your work.

> ADAM> Having convergent validity with Friston is a good sign, because he
is a preternaturally brilliant scientist. As you review his work, I'll be
looking forward to hearing more about the ways in which your model is
compatible/incompatible with Friston's version of the "Bayesian brain." 
> SERGIO> I will try to continue posting on this blog. 

A: Wonderful.

> ADAM> In general, I've been compelled by the idea that cortex is a
particular kind of self-organizing Bayes network, where the symbolic level
is continuous with -- and emerges from via experience -- the sub-symbolic
level. 
> SERGIO> If you were to say that the cortex is a particular kind of
self-organizing causal network, then you would have exactly my theory. It is
easy to see that the sensory organs send their information already organized
as causal sets. Douglas Hofstadter has written "The major question of AI is
this: What in the world is going on to enable you to convert 100,000,000
retinal dots into one single word `mother' in one tenth of a second?" The
100M dots of light are a causal set, and it gets transmitted to the brain
via the optical nerve. Embodiment and space perception come from the fact
that those retinal dots are located at fixed positions, and the causal set
is precisely the set of those associations. The same happens with hearing,
touch, etc. Chemical signals that the brain is sensitive to are also causal.
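As an illustration only (a hypothetical encoding, not taken from the theory), one can represent a tiny one-dimensional "retina" as a causal set whose pairs tie each reading to the next reading at the same fixed position, so the set itself carries the spatial layout Sergio describes:

```python
# A 1-D "retina" of fixed positions sampled over two time steps.
frames = [
    [0, 1, 1, 0],  # light readings at t = 0
    [1, 1, 0, 0],  # light readings at t = 1
]

def retina_causal_set(frames):
    """Each reading is an event (t, position). Causal pairs link each
    event to the next reading at the same fixed position, so the set
    itself encodes the retina's spatial layout."""
    pairs = []
    for t in range(len(frames) - 1):
        for pos in range(len(frames[t])):
            pairs.append(((t, pos), (t + 1, pos)))
    return pairs

cs = retina_causal_set(frames)
print(len(cs))  # 4 positions x 1 transition = 4 causal pairs
```

Scaling the same encoding to Hofstadter's 100M retinal dots changes nothing structurally; only the size of the set grows.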


A: It seems that the "topographic organization" of sensory inputs is an
essential part of embodiment, but the multi-modal nature of the inputs
referring to a particular system (i.e., the body) provides rich,
time-varying, correlated inputs--where changes result from
endogenously generated outputs to effector systems--that make it a
particularly good toe-hold for bootstrapping a mind. Ideally, I would like
to see your algorithm/implementation used as a control system for an
embodied agent as it couples with its environment. But I am not familiar
with your theory, so perhaps I am misunderstanding what you are proposing.

> So your sub-symbolic level seems to be the causal set. This is the level
where experience from the environment first comes in, and with it, energy,
entropy, and uncertainty, and all that is dumped into the cortex
(disregarding some preprocessing that takes place in the retina itself).
Now, causal sets, as I proved in the theory, can self-organize and converge
to attractors (Hofstadter's 'mother'), which is, it seems to me, your
symbolic level. The cortex is causal itself, so it is no surprise at all
that it behaves as a self-organizing network. 

A: I think that would be the symbolic level I was considering. Specifically,
I was thinking of the "manipulation" of explicit representations--both in
terms of temporal-lobe semantic networks and fronto-parietal
simulation of visuo-spatial-tactile experiences--which seem like they would
be collections of attractors.

> But here is the critical question. Causal sets self-organize if a process
exists which removes the excess entropy. This requires making the
inter-neuron connections as short as possible. But how do neurons actually
do that? I can think of several possible ways. It may be that neurons make
so many connections (10,000 per neuron), just to test the condition of
"shortest." They make the connections, test them by sending signals, and
keep the shortest ones that satisfy Hebbian learning AND the length
optimization condition.

A: Length optimization might be achieved by virtue of the small-world
connectivity of the brain, with mostly local connections and fewer distant
connections.
http://en.wikipedia.org/wiki/Small-world_network#Small-world_neural_networks_in_the_brain

As you say, the brain does start out with an over-abundance of connections
and then selects (i.e., doesn't prune away) only those connections that
successfully contribute to coherent functional ensembles.
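To illustrate why small-world wiring bears on length optimization, here is a minimal Watts-Strogatz-style sketch (standard library only; the graph size, neighbourhood width, and rewiring probability are arbitrary choices, not brain parameters). Rewiring a small fraction of local edges into random shortcuts sharply reduces the average path length while the wiring stays mostly local:

```python
import random
from collections import deque

random.seed(1)

def ring_lattice(n, k):
    """Each node connects to its k nearest neighbours on each side."""
    return {i: {(i + d) % n for d in range(-k, k + 1) if d != 0}
            for i in range(n)}

def rewire(adj, p):
    """Watts-Strogatz step: turn a fraction p of local edges into shortcuts."""
    n = len(adj)
    for i in list(adj):
        for j in list(adj[i]):
            if j > i and random.random() < p:
                new = random.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new);   adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable pairs (BFS from each node)."""
    total, count = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values()); count += len(dist) - 1
    return total / count

lattice = ring_lattice(100, 3)
small_world = rewire(ring_lattice(100, 3), 0.1)
print(avg_path_length(lattice))      # purely local wiring: long paths
print(avg_path_length(small_world))  # a few shortcuts: much shorter paths
```

The point of the sketch is only that a mostly local graph plus a handful of long-range connections gets most of the benefit of full connectivity at a fraction of the total wire length.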

> Now, Friston says that all three existing brain theories have only one
thing in common: they all recognize some form of optimization. I do too.
This is an important agreement. But I go one step further. I not only
indicate exactly what needs to be optimized, but also a candidate process
for neurons to do the optimization. 

A: In my mind, multilevel evolutionary developmental optimization, to be
precise.

> This is as far as I can go. This is what I want Friston to know. I asked
two other neuroscientists, one is delighted with the idea but provides no
further input, the other says that there is no experimental evidence. One
possible reason why there is no evidence is that neuroscientists are not
looking for it. And this is the second thing I want Friston to hear, where
to look for the missing evidence. If solid experimental evidence were
obtained that neurons can indeed shorten their connections some way, then we
would have a complete, functioning theory of the brain based on first
principles. Do you realize what this would mean? We are only a small step
away from that goal!

A: A functioning brain theory based on first principles would mean...
unification/integration of the vast quantity of data being obtained from
neuroscientists around the world. It would also help people to self-organize
in asking better questions as scientific communities. Eventually, it could
lead to an artificial brain, which would mean... a new world.

Hopefully Friston will respond when you contact him. Probably even more
difficult to reach--but perhaps also relevant--is the work of Gerald
Edelman. He has done extensive research on the specific mechanisms
underlying his theory of neural Darwinism. I've attached some of my own
thoughts on the issue (2 pages of text). Another mechanism for neural
Darwinism that I don't mention--and which I don't believe Edelman discusses
either--is synaptic homeostasis. The brain builds up an excess of
connections each day through Hebbian/Hayekian learning, which are then pruned
back during sleep. Theoretically, since the PFC is a topologically central
hub, this buildup of connections (i.e., variation/replication), and then
pruning during sleep (i.e., selection) could help to implement an
evolutionary optimization process for minimizing connection lengths.

http://www.ncbi.nlm.nih.gov/pubmed/16376591
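The overproduce-then-prune mechanism can be sketched as a toy selection process (illustrative only; the one-snapshot Hebbian test and the 25% survival cutoff are arbitrary assumptions, not measured values): neurons at fixed positions form many random connections, and only short, co-active ones survive the "sleep" phase.

```python
import random

random.seed(0)
N = 50
positions = [random.random() for _ in range(N)]        # fixed neuron locations
activity  = [random.choice([0, 1]) for _ in range(N)]  # one snapshot of firing

# "Wake" phase: over-connect, each neuron trying many random partners.
candidates = [(i, j) for i in range(N)
              for j in random.sample(range(N), 10) if i != j]

def hebbian(i, j):
    """Crude Hebbian test: both neurons active together in the snapshot."""
    return activity[i] == 1 and activity[j] == 1

def length(i, j):
    return abs(positions[i] - positions[j])

# "Sleep" phase: of the connections that fire together, keep the shortest 25%.
correlated = [c for c in candidates if hebbian(*c)]
correlated.sort(key=lambda c: length(*c))
kept = correlated[: len(correlated) // 4]

# Pruning shortens the average surviving connection substantially.
avg = lambda cs: sum(length(*c) for c in cs) / len(cs)
print(avg(candidates), avg(kept))
```

Iterating the two phases would implement exactly the variation-then-selection loop described above, with wire length as the fitness being minimized.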

The theory is discussed in depth by Edelman's former student, Giulio Tononi.
He has also presented a fascinating "integrated information theory" of
"consciousness" which might be compatible with your framework. I have
attached another brief piece where I describe that theory (1 page of text).
So you might want to try contacting him as well, although I imagine his
dance card is quite full; he is currently collaborating with researchers at
IBM who are attempting to build neuromorphic processors (mostly using
results obtained from Henry Markram).

> I generally do not agree with building the theory directly from
observational experience gained by observing and cataloging the brain. I
believe that causality alone is sufficient for the theory. For example, how
did Friston know that the cortex can minimize free energy and maximize other
utility sequences? How did Hawkins develop his HTM model? How could you
formulate your hypothesis? My answer is, from learning by experiment and
observation, and by self-organizing the learned information into attractors,
and the attractors are the free energy, the HTM model, the hypothesis, etc.
If one were to build an AGI machine and hardcode the free energy, the HTM
model, and your hypothesis into it, then that machine would not be able to
self-organize the experimental information it received and derive the free
energy, the HTM model, and your hypothesis. My AGI machine has no computer
in it, and no program. It only has an optimization process that removes
entropy from learned information and generates the attractors. Just one
process, always the same, independent of any particular problem or domain.
Then this machine can learn and get the same results the cortex does.

A: In that case, I want to use your machine as a control system for an
embodied agent--a virtual embodiment might be sufficient, given a powerful
enough physics engine--raise the mind-child until it develops a human-like
mind, and beyond. But how long would I have to wait and how much would it
cost to do such a thing? If I were using an HTM-style emulated cortex--or
perhaps some sort of memristor system when that becomes available--then I
would just keep adding on additional memory-prediction units to the top of
the cortical heterarchy, and let them be incorporated into semantic
attractor-networks of the self-organizing mind. Could I do the same thing
with your machine?

> I've got to go. I need a neuroscientist to collaborate with; I can do the
theory but I can't do neuroscience. 

A: I need an engineer/programmer/physicist to collaborate with. I can do the
neuroscience--given funding--but I don't know how to implement this
knowledge in a mechanical system.

> Sergio





