Sergio,
Here are some findings about neural connectivity and how neurons minimize
the length and volume occupied by their physical connections.

http://www.kurzweilai.net/simple-mathematical-pattern-describes-shape-of-neuron-jungle

I love your approach to AGI (and world modelling in general), which takes
physical constraints and the optimization of brain operation as a
foundation for what the brain does. But besides energy and entropy as
fundamental organizing factors, perhaps one should also consider limited
resources, such as volume and building materials in the brain, as other
important physical parameters that force the brain to become an
optimization machine.

For example, one of the principles underlying slow-wave sleep is the
renormalization of synaptic strength. During slow-wave sleep the brain
gets rid of connections that are not important (those that were not used
often during the day, or that did not have a strong signal-to-noise
ratio). The main driving force of this process is that the brain needs to
be careful about resource allocation, and not only from an energetic point
of view.
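As a rough sketch of that idea (my own toy illustration, not a model from the literature; the scale factor and survival floor are made-up numbers), synaptic renormalization can be pictured as a multiplicative downscaling of all weights followed by pruning of whatever falls below a survival floor:

```python
def downscale_and_prune(weights, scale=0.8, floor=0.25):
    """Multiplicatively downscale all synaptic weights, then drop any
    that fall below a survival floor (the weak / rarely-used synapses).
    Toy illustration of synaptic renormalization in slow-wave sleep."""
    survivors = {}
    for synapse, w in weights.items():
        w_scaled = w * scale
        if w_scaled >= floor:        # strong (high-SNR) synapses survive
            survivors[synapse] = w_scaled
    return survivors

# Synapses tagged with the strength they reached during waking.
day_weights = {"s1": 1.0, "s2": 0.9, "s3": 0.3, "s4": 0.25, "s5": 0.8}
night_weights = downscale_and_prune(day_weights)
print(sorted(night_weights))  # → ['s1', 's2', 's5']
```

The point is only that the surviving set is decided by a resource budget (the floor), not by energy accounting alone.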

What minimizes energy usage and entropy in a system that has no volume or
material constraints may not be what works for a system like the brain,
which is necessarily confined by these parameters.

Giovanni

On Fri, Aug 17, 2012 at 3:57 PM, Sergio Pissanetzky
<[email protected]>wrote:

> Adam,
>
> I appreciate the detailed information you are posting. There are a couple
> of critical points - embodiment is one of them - that I would like to
> discuss. I can't promise much more; I'll do what time permits. I'll start
> with a short introduction about what is happening here.
>
> I have the massively parallel algorithm ready (see reply to Jim) and plans
> to build a prototype Entropy Processor based on the theory and implemented
> on an FPGA controlled by a PC. I was hoping for a USB device but that's
> unsure. I was also aiming at 1M "neurons" but I'll be lucky if we make it
> to 50,000 or 100,000. The only purpose of the processor is to remove
> entropy from a causal set; the PC does the rest. The processor is general
> purpose and works very differently from Google's patents on entropy
> processors, which are specialized for video.
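For concreteness, here is a toy sketch (my own illustration, not the actual processor design) of what "removing entropy from a causal set" can mean: given a few elements and cause-effect pairs, find the causally consistent orderings that minimize total "wiring length". The brute-force search and all names are illustrative assumptions:

```python
from itertools import permutations

def action(seq, relations):
    """Total wiring length: sum of index distances between each
    cause/effect pair when elements are laid out in the sequence."""
    pos = {e: i for i, e in enumerate(seq)}
    return sum(abs(pos[a] - pos[b]) for a, b in relations)

def least_action_orders(elements, relations):
    """Brute-force search (fine only for tiny sets) for causally
    consistent orderings minimizing the action functional."""
    valid = [p for p in permutations(elements)
             if all(p.index(a) < p.index(b) for a, b in relations)]
    best = min(action(p, relations) for p in valid)
    return [p for p in valid if action(p, relations) == best]

# Tiny causal set: a->c, b->c, c->d
rels = [("a", "c"), ("b", "c"), ("c", "d")]
print(least_action_orders("abcd", rels))
```

A real processor would of course replace the factorial search with a massively parallel heuristic, but the objective (shortest total connection length consistent with causality) is the same kind of quantity.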
> The first student on this subject at U of H CL graduated last month with a
> Master's in Computer Engineering; the thesis is an application of entropy
> processing to parallel programming. I was the thesis advisor. The purpose
> of the prototype is to demonstrate some practical applications and apply
> for funding for a full-size prototype. My level of participation is
> that I continue developing the theory, advise on the project, and try to
> "entice" the scientific community with possible applications to their
> particular disciplines. AGI, neuroscience, parallel programming, image
> processing, and the other GUAPs are among the various possible
> applications. Of course, the specific work will have to be done by experts
> in each field, I am myself just a physicist and not an expert in any of
> them.
>
>
> ADAM> Having convergent validity with Friston is a good sign, because he
> is a preternaturally brilliant scientist. As you review his work, I'll be
> looking forward to hearing more about the ways in which your model is
> compatible/incompatible with Friston's version of the "Bayesian brain."
> SERGIO> I will try to continue posting on this blog.
>
>
> ADAM> In general, I've been compelled by the idea that cortex is a
> particular kind of self-organizing Bayes network, where the symbolic level
> is continuous with -- and emerges from via experience -- the sub-symbolic
> level.
> SERGIO> If you were to say that the cortex is a particular kind of
> self-organizing causal network, then you would have exactly my theory. It
> is easy to see that the sensory organs send their information already
> organized as causal sets. Douglas Hofstadter has written "The major
> question of AI is this: What in the world is going on to enable you to
> convert 100,000,000 retinal dots into one single word `mother' in one tenth
> of a second?" The 100M dots of light are a causal set, and it gets
> transmitted to the brain via the optical nerve. Embodiment and space
> perception come from the fact that those retinal dots are located at fixed
> positions, and the causal set is precisely the set of those associations.
> The same happens with hearing, touch, etc. Chemical signals that the brain is
> sensitive to are also causal.
>
> So your sub-symbolic level seems to be the causal set. This is the level
> where experience of the environment first comes in, and with it, energy,
> entropy, and uncertainty, and all that is dumped into the cortex
> (disregarding some preprocessing that takes place in the retina itself).
> Now, causal sets, as I proved in the theory, can self-organize and converge
> to attractors (Hofstadter's 'mother'), which is, it seems to me, your
> symbolic level. The cortex is causal itself, so it is no surprise at all
> that it behaves as a self-organizing network.
>
> But here is the critical question. Causal sets self-organize if a process
> exists which removes the excess entropy. This requires making the
> inter-neuron connections as short as possible. But how do neurons actually
> do that? I can think of several possible ways. It may be that neurons make
> so many connections (10,000 per neuron), just to test the condition of
> "shortest." They make the connections, test them by sending signals, and
> keep the shortest ones that satisfy Hebbian learning AND the length
> optimization condition.
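That candidate process can be sketched in a few lines (my own toy rendering of the proposal; the positions, candidate pairs, and the keep-fraction are all made-up illustrative values): among candidate connections between co-active neurons (the Hebbian criterion), retain only the shortest ones (the length-optimization criterion).

```python
import math

def select_connections(positions, coactive_pairs, keep_fraction=0.5):
    """Toy version of the proposed rule: rank candidate connections
    between co-active neurons by physical length, keep the shortest."""
    def dist(pair):
        (x1, y1), (x2, y2) = positions[pair[0]], positions[pair[1]]
        return math.hypot(x1 - x2, y1 - y2)
    ranked = sorted(coactive_pairs, key=dist)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

# Neurons at 2-D positions; candidates already passed the Hebbian test.
pos = {"n1": (0, 0), "n2": (1, 0), "n3": (5, 0), "n4": (0, 1)}
candidates = [("n1", "n2"), ("n1", "n3"), ("n1", "n4"), ("n2", "n3")]
print(select_connections(pos, candidates))  # → [('n1', 'n2'), ('n1', 'n4')]
```

In the biological picture, "making 10,000 connections and testing them" plays the role of the exhaustive candidate list here.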
>
> Now, Friston says that all three existing brain theories have only one
> thing in common: they all recognize some form of optimization. I do too.
> This is an important agreement. But I go one step further. I not only
> indicate exactly what needs to be optimized, but also a candidate process
> for neurons to do the optimization.
>
> This is as far as I can go. This is what I want Friston to know. I asked
> two other neuroscientists, one is delighted with the idea but provides no
> further input, the other says that there is no experimental evidence. One
> possible reason why there is no evidence is that neuroscientists are not
> looking for it. And this is the second thing I want Friston to hear, where
> to look for the missing evidence. If solid experimental evidence were
> obtained that neurons can indeed shorten their connections some way, then
> we would have a complete, functioning theory of the brain based on first
> principles. Do you realize what this would mean? We are only a small step
> away from that goal!
>
> I generally do not agree with building the theory directly from
> observational experience gained by observing and cataloging the brain. I
> believe that causality alone is sufficient for the theory. For example, how
> did Friston know that the cortex can minimize free energy and maximize the
> utility sequences? How did Hawkins develop his HTM model? How could you
> formulate your hypothesis? My answer is, from learning by experiment and
> observation, and by self-organizing the learned information into
> attractors, and the attractors are the free energy, the HTM model, the
> hypothesis, etc. If one were to build an AGI machine and hardcode the free
> energy, the HTM model, and your hypothesis into it, then that machine would
> not be able to self-organize the experimental information it received and
> derive the free energy, the HTM model, and your hypothesis. My AGI machine
> has no computer in it, and no program. It only has an optimization process
> that removes entropy from learned information and generates the attractors.
> Just one process, always the same, independent of any particular problem or
> domain. Then this machine can learn and get the same results the cortex
> does.
>
> I've got to go. I need a neuroscientist to collaborate with; I can do the
> theory but I can't do neuroscience.
>
>
> Sergio
>
>
>
> -----Original Message-----
> From: Adam Safron [mailto:[email protected]]
> Sent: Thursday, August 16, 2012 4:43 PM
> To: AGI
> Subject: Re: [agi] Uncertainty, causality, entropy, self-organization, and
> Schroedinger's cat.
>
> Sergio: "He also uses Bayesian statistical methods, which I don't agree
> with, because Bayes was a human and I want to know what in his brain made
> it possible for him to develop such a wonderful theory, not the theory
> itself. But Friston uses Bayesian methods because he doesn't know about my
> work, the entropy principle, or the inference that follows."
>
> A: Having convergent validity with Friston is a good sign, because he is a
> preternaturally brilliant scientist. As you review his work, I'll be
> looking forward to hearing more about the ways in which your model is
> compatible/incompatible with Friston's version of the "Bayesian brain." In
> general, I've been compelled by the idea that cortex is a particular kind
> of self-organizing Bayes network, where the symbolic level is continuous
> with--and emerges from via experience--the sub-symbolic level.
>
> However, I think that machine learning approaches will fail to develop
> sufficiently robust causal reasoning for broad applicability unless they
> copy the human model by making their learning systems embodied. Embodiment
> provides useful inductive constraints that provide a toehold for the
> bootstrapping process that overcomes the challenge of impoverished stimuli
> and eventually leads to the full flourishing of higher level cognition. I
> suspect that organisms are such effective learners because they begin with
> a sense of their own embodiment as a kind of prototypical object, from
> which they can partially generalize to other dynamics in the world. I think
> they pay attention to this object because it is directly connected to the
> mechanisms of reinforcement. The body provides an initial set of values
> that constrains which of the countless aspects of evolving generative
> models will be optimized. I think they're able to learn the invariant
> properties of this object in the first place through hierarchical pattern
> abstraction, which is only sufficiently powerful in light of the fact that
> the cortical heterarchy allows for
> triangulation/mutual-constraints/useful-priors from multiple sensory
> channels, and more specifically an integration of these multimodal inputs
> through sensorimotor coupling.
>
>
>
> Sergio: "In the interest of science, I think it would be important for him
> to know. Do you know him, can you introduce me to him?"
>
> A:  Unfortunately, I don't know him personally, and I don't even know his
> work in depth (it's on my reading list). However, I am compelled by the
> idea of the brain as a control system that uses
> free-energy-minimization/successful-prediction-maximization for an embodied
> agent as it engages in sensorimotor--broadly construed--coupling to
> navigate the environment in which it is embedded.
>
> Theoretically, cortex could efficiently select for utility maximizing
> sequences by minimizing the "free energy" of the underlying processes
> (Friston, 2010; Hawkins, 2011; Kozma, Puljic, Balister, Bollobas, &
> Freeman, 2004). In Hawkins' HTM model (2004), if a minicolumn's inputs are
> predicted in advance via stimulation of specific inhibitory interneurons
> within the column, then only those neurons without their respective
> inhibitory interneurons activated will increase their firing rates.
> However, if a sufficient number of non-predicted inputs occur, and a
> percolation threshold is surpassed, the entire column will become active,
> resulting in a cascade of activity-predictions in functionally connected
> columns.
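The column behaviour described above can be sketched as follows (a minimal toy, assuming made-up input names and a made-up 0.5 burst threshold; it is not Numenta's actual algorithm): predicted inputs are silenced in advance, and if the fraction of unpredicted inputs crosses a percolation-style threshold, the whole column bursts.

```python
def minicolumn_response(inputs, predicted, burst_threshold=0.5):
    """Cells whose input was predicted (inhibited in advance) stay
    quiet; if too many inputs arrive unpredicted, the percolation
    threshold is crossed and the entire column becomes active."""
    unpredicted = [i for i in inputs if i not in predicted]
    if len(unpredicted) / len(inputs) > burst_threshold:
        return "burst"             # whole column active -> cascade
    return sorted(unpredicted)     # only un-inhibited cells fire

print(minicolumn_response({"i1", "i2", "i3", "i4"}, {"i1", "i2", "i3"}))
print(minicolumn_response({"i1", "i2", "i3", "i4"}, {"i4"}))
```

The first call is mostly predicted, so only the surprising cell fires; the second is mostly unpredicted, so the column bursts.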
>
> My hypothesis: Depending on the degree of functional connectivity with
> inhibitory interneurons of midbrain neuromodulatory nuclei (Watabe-Uchida,
> Zhu, Ogawa, Vamanrao, & Uchida, 2012), any dynamic that causes overall
> activity to be reduced should result in decreased inhibition of the
> production of these neuromodulators. This net disinhibition would enhance
> the most robustly active patterns, strengthen the connections underlying
> these patterns (i.e., reinforcement), and thus increase the efficiency of
> the dynamics contributing to successful prediction (i.e., minimized error
> signals).
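Numerically, the hypothesized loop might look like this (a speculative toy of my own, with arbitrary baseline and learning-rate values, intended only to show the direction of the effect): when overall activity drops below baseline, inhibition of the neuromodulatory nuclei is relieved, and the released neuromodulators strengthen whichever patterns remain most active.

```python
def update_weights(weights, activities, overall_activity,
                   baseline=1.0, lr=0.1):
    """Toy disinhibition rule: low overall activity releases
    neuromodulators, which reinforce the still-active patterns."""
    disinhibition = max(0.0, baseline - overall_activity)
    return {p: w + lr * disinhibition * activities[p]
            for p, w in weights.items()}

w = update_weights({"A": 1.0, "B": 1.0}, {"A": 0.9, "B": 0.2},
                   overall_activity=0.4)
print(w)  # the robustly active pattern A is strengthened more than B
```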
>
> Although this activity-minimizing algorithm could potentially result in
> stasis, regulatory nuclei of the hypothalamus and midbrain would stimulate
> these inhibitory interneurons to the degree that action is needed to
> restore homeostatic balances. Thus an organism could not remain permanently
> inactive, as physiological signals such as hunger would result in
> stimulation of these regulatory nuclei, whose activity can be thought of as
> signifying the distance from homeostatic set points, or as signifiers of
> biologically specified predictions for which deviations result in error
> signals. Over time, cortical dynamics resulting in the minimization of
> error signals from these regulatory nuclei will become distributed across
> the cortical heterarchy as habitual predictions. The impact of these
> habitual predictions on overall functioning would constitute the evolving
> utility function of the organism.
>
> Friston, K. (2010). The free-energy principle: a unified brain theory?
> Nature Reviews Neuroscience, 11(2), 127–138. doi:10.1038/nrn2787
>
> Hawkins. (2011). Hierarchical Temporal Memory: including HTM Cortical
> Learning Algorithms. Whitepaper Numenta Inc. Retrieved from
> http://www.numenta.com/htm-overview/education/HTM_CorticalLearningAlgorithms.pdf
>
> Kozma, R., Puljic, M., Balister, P., Bollobas, B., & Freeman, W. (2004).
> Neuropercolation: A Random Cellular Automata Approach to Spatio-temporal
> Neurodynamics. In P. Sloot, B. Chopard, & A. Hoekstra (Eds.), Cellular
> Automata, Lecture Notes in Computer Science (Vol. 3305, pp. 435–443).
> Springer Berlin / Heidelberg. Retrieved from
> http://www.springerlink.com/content/jq3d3uj89p9ql7cf/abstract/
>
> Watabe-Uchida, M., Zhu, L., Ogawa, S. K., Vamanrao, A., & Uchida, N.
> (2012). Whole-Brain Mapping of Direct Inputs to Midbrain Dopamine Neurons.
> Neuron, 74(5), 858–873. doi:10.1016/j.neuron.2012.03.017
>
> -A
>
> On Aug 15, 2012, at 11:50 AM, Sergio Pissanetzky <[email protected]>
> wrote:
>
> >
> > Adam,
> >
> > Thanks a lot. You are right on target. In the following weeks or months
> I will be studying Karl Friston's work. He is a theoretical neuroscientist
> interested in that gray area between Physics and Neuroscience, and
> therefore of direct interest to me. Here is a quote from a paper by
> Daunizeau et al., speaking about and in the context of Friston's seminal
> work:
> >
> > "...the functional role played by any brain component (e.g., cortical
> area, sub-area, neuronal population or neuron) is defined largely by its
> connections ... In other terms, function emerges from the flow of
> information among brain areas ... effective connectivity refers to causal
> effects, i.e., the directed influence that system elements exert on each
> other (see Friston et al. 2007a for a comprehensive discussion)."
> >
> > This is, precisely, the kind of things that I can predict for the brain.
> Predictions that I have made, and published, are nearly identical to
> Friston's, except that mine came from Physics and his came from
> observation. Agreement between experiment and prediction is a strong
> confirmation of both. When it is inter-disciplinary, it becomes fundamental.
> >
> > I note that Friston has recognized the role of causality, of the flow of
> information, the principle of free energy for action and perception, of
> active inference, in the brain. He uses causal models to infer architecture
> of the brain. I have been trying to draw conclusions from Physics about
> these same things, and so far, it seems to me, I have not been too far off. He
> also uses Bayesian statistical methods, which I don't agree with, because
> Bayes was a human and I want to know what in his brain made it possible for
> him to develop such a wonderful theory, not the theory itself. But Friston
> uses Bayesian methods because he doesn't know about my work, the entropy
> principle, or the inference that follows. In the interest of science, I
> think it would be important for him to know. Do you know him, can you
> introduce me to him?
> >
> >
> > Jim,
> >
> > So far, I have only made four claims, one corollary, and two
> conjectures. They are listed in Section 2 of my Complexity paper. I also
> apply the four fundamental principles of nature: causality,
> self-organization (or symmetry), least action, and entropy (the 2nd law of
> Thermodynamics). These are discussed further on my home page. I believe
> this pretty much takes care of all of Physics. If you know any law or
> experiment that contradicts my assumptions, the correct action would be for
> you to publish a paper explaining your views and let the scientific
> community decide. Note that in Physics, one single experiment that
> contradicts a theory may mean the collapse of the entire theory. Or, more
> usually, the emergence of a new theory of which the old one is a particular
> case.
> >
> > You ask me to prove all I say before saying it. You should tell the same
> to the AGI people. AI started 60 years ago, under the assumption that
> intelligence would be conquered by computers. With no proof. So they devoted
> themselves to writing programs. Sixty years later, AGI emerges, and is
> still using the same assumption. With no proof. You post your study of an
> algorithm on an AGI blog. Why would you do that? Because you think the
> study is a contribution to AGI. There is no proof of that. Science doesn't
> work like that. There is a thing called scientific discourse, where
> scientists communicate freely about their ideas. You are essentially
> telling me to butt out because you seem to dislike my conclusions, or else.
> I can't hide in a hole, sorry.
> >
> > Isn't it time to try something different? Please, be patient, and keep
> trying to understand what I am saying. I know it is not easy and I
> appreciate your efforts to remain calm. If it is any consolation, it was
> very difficult for me too, back in 2005.
> >
> > I believe the outcome of my post - Adam telling us about Friston -
> overrides everything else you've said. Had I not advanced my hypotheses
> about the brain, this contact with Adam would not have been established.
> You would have undermined my chance to participate in our quest for
> understanding what we are, and the chance of Science to advance one more
> step ahead.
> >
> > Sergio.
> >
> >
> > From: Adam Safron [mailto:[email protected]]
> > Sent: Wednesday, August 15, 2012 9:56 AM
> > To: AGI
> > Subject: Re: [agi] Uncertainty, causality, entropy, self-organization,
> and Schroedinger's cat.
> >
> > You have already acknowledged the fact that the brain uses a lot of
> energy so why would you continue to insist that you know exactly how the
> brain acts to conserve energy without any experience in the field of neural
> science?
> >
> > Karl Friston's work may be relevant to this discussion:
> > http://www.fil.ion.ucl.ac.uk/~karl/#_Free-energy_principle
> >
> > Best,
> > -Adam
> >
> > On Aug 15, 2012, at 4:49 AM, Jim Bromer <[email protected]> wrote:
> >
> >
> > Sergio,
> > I am making an effort to try to understand what you are saying.  I am
> also trying to avoid making personal attacks.  However, I have major
> problems when someone claims that he has -the answer- when he does not have
> -the proof-.  So I have been making more personal criticisms about your
> attitude about your own theory, not to win the argument or to personally
> trounce you, but to see if you are able to acknowledge that you cannot
> possibly be certain about your theory without actually making it do what
> you say it can do.  Once you acknowledge some serious uncertainty about the
> theory, or I come to the conclusion that you are unable to do that, I want
> to try to figure out what your theory is about.
> >
> > I did not understand this at first, but now I think that you are saying
> that the response a person makes in situations where some uncertainty
> exist, will be an invariant given those situations.  Is that right or is it
> wrong?  Regardless of the knowledge someone has about what might follow,
> the response that a person chooses in the face of uncertainty is one in
> which the entropy of the information that the person has about the
> situation will be minimized so that the useful information is retained.  Is
> this essentially right?  It should be obvious that this is going to be an
> imperfect process given that some situations are more complicated than
> others. Isn't that right?
> >
> > Is it possible that your theory is only a physical-reaction-of-the-brain
> response to a problem of overwhelming uncertainty and therefore not a sound
> theory derived from insight?
> >
> > Two more criticisms.
> > One is that you are choosing some of the laws of physics while ignoring
> others and then claiming that these laws that you have chosen explain how
> the brain works.  The brain is obviously a complicated organ, so how can
> you claim that your choice of abstractions from physics can explain it?
> >
> > Secondly.  We learn from previous experiences.  We learn that we do have
> choices.  And we learn that many of the choices we have can be made without
> immediately threatening our survival.  Why aren't my choices based on
> insight (right or wrong)?  Knowledge that is only derived from the essence
> of an abstract system is usually pretty frail. Isn't it possible that the
> mind is a physical organ capable of dealing with insight and therefore
> capable of reacting in ways that are less efficient than your theory is
> suggesting? You have already acknowledged the fact that the brain uses a
> lot of energy so why would you continue to insist that you know exactly how
> the brain acts to conserve energy without any experience in the field of
> neural science?  (I am not saying that we must not talk about such things,
> I am only saying that we cannot honestly claim that our knowledge of the
> basics of neural science are absolutely correct.)
> >
> > Jim Bromer
> > AGI | Archives | Modify Your Subscription
> >
>
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/18883996-f0d58d57
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>
>


