Sergio,
I am listening to you.
However, I am very skeptical.
So far you haven't explained anything other than a few ideas that are
interesting but do not constitute convincing evidence.  I wish I could
understand what you are getting at more efficiently.

Let's try again.  You said:

Until very recently, explaining self-organization was not possible.
Recalling that we are talking about a physical system, there is a principle
in Physics that actually explains self-organization. It says that every
dynamical system that has symmetries also has a conservation law that
applies to a "conserved quantity." The conserved quantity is something that
is: 1. a property of the system, and 2. remains invariant under the
dynamics. In other words, it is what we call an attractor. There are two
ways to calculate the conserved quantity: Noether's 1918 theorem (and its
many extensions), and my recent work with causal sets. Noether's theorem
and its extensions are limited to Lagrangian systems and of little interest
in AGI. My theorem is general and contains Noether's theorem and its
extensions (so far I have proved only one particular case). My theorem says
that every causal system has symmetries and establishes a general procedure
to obtain the attractors.


As a result, I now consider self-organization as fully explained.
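The symmetry-to-conservation link being discussed can be made concrete with a
textbook physics case. The sketch below is my own illustration, not anything
from the thread, with arbitrary mass and spring constant: a frictionless
harmonic oscillator is invariant under time translation, and Noether's theorem
then identifies total energy as the conserved quantity. Integrating the
dynamics and watching the energy stay (nearly) fixed shows the invariant
numerically.

```python
# Illustrative sketch: time-translation symmetry of a frictionless harmonic
# oscillator implies conservation of total energy E = p^2/(2m) + k*x^2/2.
# We integrate with a symplectic (kick-drift-kick leapfrog) step and check
# that E remains invariant up to integrator error.

m, k = 1.0, 4.0          # mass and spring constant (arbitrary choices)
x, p = 1.0, 0.0          # initial position and momentum
dt = 0.001               # time step

def energy(x, p):
    return p * p / (2 * m) + k * x * x / 2

e0 = energy(x, p)
for _ in range(100_000):          # integrate for 100 time units
    p -= 0.5 * dt * k * x         # half kick (force = -k*x)
    x += dt * p / m               # drift
    p -= 0.5 * dt * k * x         # half kick

drift = abs(energy(x, p) - e0) / e0
print(f"relative energy drift after 100,000 steps: {drift:.2e}")
```

The energy here plays the role of the invariant of the dynamics that the text
calls an attractor; nothing beyond plain Python is assumed.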



So give me a simple example of fully explained self-organization.

Jim





On Thu, Aug 16, 2012 at 4:18 PM, Sergio Pissanetzky
<[email protected]>wrote:

> Jim,
>
> I would like to expand my previous reply as follows.
>
> JIM> The problem, as I see it, is that the complexity of the level of
> general knowledge that is (probably) required for a basic AGI program is
> too great for any of these methods to work with anything other than
> superficial or at best simple examples.  So if the day came when one method
> worked then a great many methods would probably work, because it would mean
> that someone had figured out how to contain combinatorial complexity or had
> developed hardware sufficient to attain minimal AI.
>
> SERGIO> This opinion completely ignores the fact that the brain is a
> physical system, and physical systems are capable of self-organizing. I
> know that self-organization is not well understood. I also know about the
> controversy surrounding the role of self-organization in evolution and
> biology. But these facts do not mean that self-organization is not a
> possible answer to either the structural or the functional complexity of
> the brain. The possibility that self-organization could explain both is
> still open.
>
> Of course, one can argue that such a possibility is very remote. But I
> just said that self-organization itself, and its role in the anatomy and
> physiology of the brain, are not well understood. Under such circumstances,
> nobody can say that self-organization *does not* play a significant role
> in the evolution of the brain, just as nobody can say that it *does*. So,
> I'd say, there are two options: try to explain the complexity, or try to
> explain the self-organization, and both deserve a reasonable chance.
>
> Until very recently, explaining self-organization was not possible.
> Recalling that we are talking about a physical system, there is a principle
> in Physics that actually explains self-organization. It says that every
> dynamical system that has symmetries also has a conservation law that
> applies to a "conserved quantity." The conserved quantity is something that
> is: 1. a property of the system, and 2. remains invariant under the
> dynamics. In other words, it is what we call an attractor. There are two
> ways to calculate the conserved quantity: Noether's 1918 theorem (and its
> many extensions), and my recent work with causal sets. Noether's theorem
> and its extensions are limited to Lagrangian systems and of little interest
> in AGI. My theorem is general and contains Noether's theorem and its
> extensions (so far I have proved only one particular case). My theorem says
> that every causal system has symmetries and establishes a general procedure
> to obtain the attractors.
>
> As a result, I now consider self-organization as fully explained. Like
> everything, my conclusions are subject to scientific scrutiny, and a long,
> arduous process will have to follow to actually apply the theory to a
> myriad of particular cases, the brain being only one of them, the GUAPs
> being another, and biology a third. This will unify the theory of
> self-organization, or so I hope.
>
> To conclude this presentation, I would like to explain why I was so happy
> yesterday when I learned about Karl Friston's work. I quote again from
> "Dynamic causal modelling: A critical review of the biophysical and
> statistical foundations" by J. Daunizeau, O. David, K.E. Stephan,
> NeuroImage 58 (2011) 312–322,
> <http://www.fil.ion.ucl.ac.uk/spm/doc/papers/Daunizeau_NeuroImage_58_312_2011.pdf>:
>
> "... the functional role played by any brain component (e.g., cortical
> area, sub-area, neuronal population or neuron) is defined largely by its
> connections ... In other terms, function emerges from the flow of
> information among brain areas ... effective connectivity refers to causal
> effects, i.e., the directed influence that system elements exert on each
> other."
>
> This statement describes results obtained by neuroscientists from
> observation, mostly with imaging techniques such as fMRI. My conclusions
> come from theoretical considerations about the mathematical properties of
> causal sets. And the two are nearly identical. This is strong agreement
> between theory and experiment.
>
> At this point, I believe that the second option, that self-organization
> plays the largest role in the evolution of the brain, is the most likely
> one and needs careful consideration. There is no need to abandon the
> traditional "reason and code" approach to AGI. But you need to carefully
> rethink your complexity argument.
>
> Sergio
>
> PS. I don't mean to disregard the rest of your presentations. I just reply
> as I can.
>
> *From:* Jim Bromer [mailto:[email protected]]
> *Sent:* Thursday, August 16, 2012 6:47 AM
> *To:* AGI
> *Subject:* Re: [agi] Uncertainty, causality, entropy, self-organization,
> and Schroedinger's cat.
>
> I said:
>
> I have thought about taking actions that can be used to minimize the
> complications of numerous unknowns, but since this strategy has to be based
> on some method of avoiding the worst outcomes, that means that the strategy
> cannot be based on a simplistic way to minimize "entropy".
>
> --------------
>
> A probability method is a form of narrow AI. It is based on the presumption
> that the probability of what is being measured will be both obvious and
> simple.  Neither of these conditions actually holds for AGI developers.
> So simple binary examples, where the state of the result can be easily
> observed, are fine for an introductory explanation, but they are not
> adequate as a solution to the real problem.
>
> AGI and AI programs do not attain the level of child-like intelligence
> because many fundamental stages of learning (or of situational analysis)
> are too complicated for a contemporary computer program. So while we can
> imagine these different methods attaining human-like intelligence, it is
> not feasible at this time.  Another variant, like free-action agency, is
> unlikely to break through that barrier.
>
> When combined with a simplistic theory, like the Markov process, Sergio's
> theory that the physical stratum is the place where the minimization of
> information entropy is taking place is similar to those theories which
> posit that all reasoning takes place through the syntax of expressions.
> These kinds of theories have proven inadequate to even model the necessary
> types of subject matter.  It is clearly the semantics (or meaning) of
> expressions which is of interest to the AGI programmer or AGI dreamer.  So
> while we can say that the mind must be developing insight with the
> 'physics' of the brain, we must still be able to recognize that it must be
> doing so on the basis of the meaning of the objects that the information
> refers to.  The attempt to declare that all processes of mind must be
> reduced to the terms of purely physical processes is inadequate because it
> is presumptuous (people do not know exactly how the brain works) and
> blatantly false (a computer program operates on the basis of its electronic
> components, but at the same time it is programmable, so it can be used to
> do things that we want it to do - within limits.  The computer is not just
> operating on the basis of the physics of its components but also because
> people have decided to use it in certain ways.  This duality of causation
> may be understood as a result of other kinds of physical reactions, but the
> theory of everything does not actually refer to everything, only to a frail
> conjecture about a conjectured prototype. Without understanding everything
> we are left with dualities and multiplicities).
>
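Since the paragraph above turns on "minimization of information entropy" over
a Markov process, a minimal sketch of the quantity itself may help. This is
my own illustration with a made-up 3-state transition matrix, not Sergio's
construction: it finds the chain's stationary distribution by power iteration
and computes that distribution's Shannon entropy.

```python
# Minimal illustration (hypothetical numbers): the Shannon entropy
# H = -sum_i p_i * log2(p_i) of a Markov chain's stationary distribution,
# the kind of "information entropy" under discussion.
import math

# P[i][j] = probability of moving from state i to state j.
P = [[0.8, 0.1, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.1, 0.8]]

# Power iteration: repeatedly apply P to an initial distribution until it
# converges to the stationary distribution pi (satisfying pi = pi * P).
pi = [1.0 / 3.0] * 3
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

H = -sum(p * math.log2(p) for p in pi if p > 0)
print("stationary distribution:", [round(p, 3) for p in pi])
print(f"entropy: {H:.3f} bits (max for 3 states is log2(3) ~ 1.585)")
```

Whether minimizing such a quantity suffices for reasoning is exactly the
point in dispute in this thread; the sketch only pins down what the quantity
is.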
> It seems to me that the problem of multiplicities is inherently
> complicated, and these complications will doom many over-simplifications
> of non-essential reductionism.
>
> Jim Bromer
>
> On Wed, Aug 15, 2012 at 9:55 PM, Jim Bromer <[email protected]> wrote:
>
> Well, I am still skeptical of the theory for one basic reason.  The
> problem, as I see it, is that the complexity of the level of general
> knowledge that is (probably) required for a basic AGI program is too great
> for any of these methods to work with anything other than superficial or at
> best simple examples.  So if the day came when one method worked then a
> great many methods would probably work, because it would mean that someone
> had figured out how to contain combinatorial complexity or had
> developed hardware sufficient to attain minimal AI.
>
> Weighted reasoning was at first believed to be a solution to combinatorial
> complexity.  Why didn't it work?  There were a few problems.  One was that
> not everything can be expressed in terms of a range of values.
> Secondly, a weighting would, by the very means through which it simplifies
> complex relationships, represent different kinds of things, and these
> different things get mushed together and produce subpar results. Finally,
> these systems have no way to integrate different kinds of relations wisely.
>
> Of course, since weighted reasoning is usually used with correlations, one
> might imagine that correlation points could be developed to represent when
> these complicated relations can be fused, when they should be divided,
> and how they can be structured and integrated.  This would require some
> trial-and-error methods to learn how to apply these techniques to
> real-world modelling, but few people actually talk about stuff like this.
> And there is no reason to think that this sort of method can produce
> intelligence without first transcending the combinatorial complexity
> problem.
>
> I have thought about taking actions that can be used to minimize the
> complications of numerous unknowns, but since this strategy has to be based
> on some method of avoiding the worst outcomes, that means that the strategy
> cannot be based on a simplistic way to minimize "entropy".  Taking an
> action when facing multiple unknowns has to be derived from a biased method
> that helps the entity avoid the worst outcomes.  This in turn implies that
> biasing strategies could also be used in the hope of increasing the chances
> of better outcomes, based on the projection of insights about the kinds of
> situations that the intelligent device thought it might be in.
>
> Jim Bromer
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
