Jim,

 

JIM> I am listening to you. However, I am very skeptical. 

SERGIO> Fair enough. 

 

 

JIM> So far you haven't explained anything other than a few ideas that are
interesting but do not constitute convincing evidence. 

SERGIO> It seems to me that you are more accustomed to a top-down
approach, where you start from complexity and try to simulate it. My
approach, instead, is constructivist, and this may be the source of your
feeling that I haven't explained anything yet. I start from simplicity and
build up. I try to explain how complex things come to be. My foundation is
the four fundamental principles of nature. I was trying to explain this first
and tell you how I got there. I tried to start with the Schroedinger's cat
example, but it didn't work very well, so I tried something else. 

 

Of course the two approaches, bottom-up and top-down, must meet somewhere.
Frister's approach is bottom-up, and he is meeting the top-down
observational results. Mine is more fundamental than his, so his work makes
mine a lot easier: now my top is the bottom he's working from (in these
matters only, I mean, not necessarily in AGI). 

 

I suggest you watch how I build, from the ground up. You need to watch where
I am going, of course, but much more importantly you need to watch the
limits of the theory. If you read it carefully, you will realize that there
aren't any. There is nothing in the theory about the size of the causal sets
that could stop it from being valid. It is not as if it applies to sets of
size 14,703 but nothing can be proved for 14,704 or larger. That
realization should put to rest your concern, expressed in another post,
that many approaches work for toy problems but not for real-world ones.
That is possibly because they rely on assumptions or approximations that
apply only under certain conditions. A classical example is Fluid Mechanics,
where fluids are treated as continuous by way of differential equations such
as the Stokes and Navier-Stokes equations, but the theory cannot explain heat
conductivity or viscosity. These are molecular phenomena, and when you
change scale, the continuous theory collapses. There is nothing like that in
causal sets, because they are fractals: they are scale-invariant. The only
limit will be the size of the computer you have. 

 

And no, there is no limit on time of execution either. This is still
unpublished, so I can only give you a hint. Assuming a neural-network
computer simulation where each element of the causal set is represented by
exactly one individual neuron, and assuming near-neighbor coupling, the time
of execution is constant and independent of size. This is massive
parallelism. I am just curious: are you still with me? May I pose a quiz to
verify? You don't have to answer; the answer is below. Aside from the
obvious fact that this is going to be fast, what is the real, profound
significance of this result? 
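To make the parallelism claim concrete, here is a minimal sketch of the general idea (my own illustration, not the unpublished construction): with one processor per element and near-neighbor coupling only, every element computes its next state from a constant number of neighbors, so one synchronous step takes constant wall-clock time regardless of the size of the set.

```python
# Hypothetical sketch: synchronous near-neighbor update.
# With one processor per element, each element reads only a constant
# number of neighbors, so the wall-clock time of one step does not
# grow with the size of the network.

def step(state):
    """One synchronous update: each cell looks only at its two neighbors."""
    n = len(state)
    # Each of these n computations is independent of the others; on n
    # processors they would all run in the same constant time.
    return [(state[(i - 1) % n] + state[(i + 1) % n]) % 2
            for i in range(n)]

small = step([1, 0, 1, 1])        # 4 elements
large = step([1, 0] * 1000)       # 2000 elements
# The per-element work is identical in both cases; only the number of
# independent (hence parallelizable) computations differs.
print(len(small), len(large))
```

The point of the sketch is only that the update rule is local; everything beyond that (the specific rule, the topology) is illustrative.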

 

So this is, in a nutshell, where I am going. For convincing evidence you
should go directly to my Complexity paper. If you haven't read it, you can't
blame me for that. If you can't access or understand the paper, you can
always ask for help. 

 

 

JIM> I wish I could understand what you are getting at more efficiently. 

SERGIO> Again, the efficient way would be for you to read my paper and
comment on it, or publish another rebutting me, or work with me to resolve
the differences. I consider and respect the possibility that you may come
from a different discipline and lack the background or inclination to read
my paper, and that is the reason why I attempted a more expanded and detailed
approach. Tell you what: I'll try bigger steps, and you stop me if I go too
far. Again, there is no rush at all; I'll wait one year if I have to. 

 

 

I SAID> As a result, I now consider self-organization as fully explained. 

I ALSO SAID IMMEDIATELY AFTER> As with everything, my conclusions are subject
to scientific scrutiny, and a long, arduous process will have to follow to
actually apply the theory to a myriad of particular cases, the brain being
only one of them, the GUAPs being another. 

 

JIM> So give me a simple example of fully explained self-organization.

SERGIO> I already have. I said: "My theorem says that every causal system
has symmetries and establishes a general procedure to obtain the
attractors." The theory is published in my Complexity paper, where Section 4
presents a fully-developed simple example and summarizes hundreds of
computer experiments I carried out within the limitations of my computer. 

 

You may also want to see my (2009a) paper
<http://www.scicontrols.com/ReferencesForThisWebsite.htm>, where Sections
IV and V cover a fully developed real-world case study: a Java program used
in European universities to teach refactoring to students in Computer
Science. The study includes learning (with a teacher in this case, but that
makes no difference), the increase of entropy and uncertainty caused by
learning (I didn't use those precise terms because my target audience
wouldn't understand them), the conversion of the program to causal-set
format (I used a canonical matrix, but that is just a notation for causal
sets), and the application of my theorem to the causal set, resulting in
self-organization, which in this case is the block system illustrated in
Fig. 6 of the paper. I discussed, but did not publish in that paper, the
resulting object-oriented design and UML diagram. 
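For readers unfamiliar with the matrix notation: a causal set is a finite partially ordered set, and one common way to write it down is as a boolean precedence matrix, where entry (i, j) is 1 when element i causally precedes element j. Here is a minimal sketch of that encoding (my own illustration of the general idea, not the specific canonical form used in the paper):

```python
# Minimal sketch: a causal set written as a boolean precedence matrix.
# M[i][j] == 1 means "event i causally precedes event j".
# The names and layout are illustrative, not the paper's canonical form.

def matrix_from_relations(n, relations):
    """Build the precedence matrix, closing it under transitivity."""
    M = [[0] * n for _ in range(n)]
    for i, j in relations:
        M[i][j] = 1
    # Warshall's algorithm: causal precedence is transitive, so if
    # i precedes k and k precedes j, then i precedes j.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if M[i][k] and M[k][j]:
                    M[i][j] = 1
    return M

# Four events: 0 precedes 1 and 2, and both of those precede 3.
M = matrix_from_relations(4, [(0, 1), (0, 2), (1, 3), (2, 3)])
for row in M:
    print(row)
```

The matrix and the partial order carry exactly the same information, which is why the matrix can be treated as "just a notation" for the causal set.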

 

This reply would not be complete if I didn't explain why causal sets. In
brief (most of this is published too, but some is still in press), three
steps: (1) causal set = algorithm = computer program. Any algorithm that
halts, or any computer program that halts, is a causal set. Therefore,
anything I say about causal sets, such as that they can self-organize,
applies to computer programs that halt; (2) computer programs have been used
to simulate virtually everything; and (3) virtually everything can
self-organize, whether living (such as Hawkins' invariant representations)
or not (such as a hurricane). 
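Step (1) can be illustrated directly: record the events of a terminating program together with which event's output feeds which event's input, and the result is a finite partial order, i.e., a causal set. The following is a hypothetical sketch of that extraction (the read/write dependency model is my own illustration, not the published construction):

```python
# Hypothetical sketch: extracting a causal set (a finite partial order
# of events) from the trace of a program that halts. Each assignment is
# an event; event B causally depends on event A when B reads a variable
# that A last wrote.

def trace_to_causal_set(trace):
    """trace: list of (event, reads, writes). Returns precedence pairs."""
    last_writer = {}          # variable -> event that last wrote it
    precedes = set()
    for name, reads, writes in trace:
        for var in reads:
            if var in last_writer:
                precedes.add((last_writer[var], name))
        for var in writes:
            last_writer[var] = name
    return precedes

# A tiny halting program:  a = 1;  b = 2;  c = a + b
trace = [
    ("a=1", [], ["a"]),
    ("b=2", [], ["b"]),
    ("c=a+b", ["a", "b"], ["c"]),
]
print(sorted(trace_to_causal_set(trace)))
# "a=1" and "b=2" are causally unrelated; both precede "c=a+b".
```

Because the program halts, the trace is finite and the resulting order is a finite causal set, which is the identification step (1) relies on.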

Quiz answer: the more profound significance is that the brain has the same
property, particularly noticeable in vision. Since my approach is bottom-up,
and my algorithm comes from the bottom, this is one point where the
bottom-up approach meets a direct and so far unexplained observation. This
result also explains why our brains need to be so large. 

 

It is now time to begin the long and arduous process I mentioned above. I am
trying to jump-start this process, and that is what I am getting at. If I
don't hear anything from you, I shall continue my presentations shortly. 

 

 

Sergio

 

 

From: Jim Bromer [mailto:[email protected]] 
Sent: Thursday, August 16, 2012 5:55 PM
To: AGI
Subject: Re: [agi] Uncertainty, causality, entropy, self-organization, and
Schroedinger's cat.

 

Sergio,

I am listening to you.

However, I am very skeptical.

So far you haven't explained anything other than a few ideas that are
interesting but do not constitute convincing evidence.  I wish I could
understand what you are getting at more efficiently.

 

Let's try again.  You said 

Until very recently, explaining the self-organization was not possible.
Recalling that we are talking about a physical system, there is a principle
in Physics that actually explains self-organization. It says that every
dynamical system that has symmetries, also has a conservation law that
applies to a "conserved quantity." The conserved quantity is something that
is: 1. a property of the system, and 2. remains invariant under the
dynamics. In other words, it is what we call an attractor. There are two
ways to calculate the conserved quantity: Noether's 1918 theorem (and its
many extensions), and my recent work with causal sets. Noether's theorem and
extensions are limited to Lagrangian systems and of little interest in AGI.
My theorem is general, and contains Noether's theorem and extensions (so far
I have proved only one particular case). My theorem says that every causal
system has symmetries and establishes a general procedure to obtain the
attractors. 

As a result, I now consider self-organization as fully explained. 

 So give me a simple example of fully explained self-organization.

Jim

 

