Derek,

 

Thanks for the comments. My web site is a work in progress; I wish I had
more time for the "progress" part. 

 

Maybe "property" is not the best word, but here is what I was trying to say.
Given a causal set, any causal set, and just because it is a causal set,
there exists structure hidden in it, and that structure is unique: it is a
hierarchy of partitions of the causal set. That's what I mean by property.
It's like having matter, and the matter has a mass unique to it. We say
that the mass is a property of the matter, and likewise the structure is a
property of the causet. 

 

The problem is, the structure is hidden. It is not visible; it cannot be
detected or computed by any algorithm. That's where the
inference comes in. The known fact is the given causal set, which represents
knowledge about the world. The new fact, after the inference, is the
structure, which has now been produced as a conclusion from the premise of
the causet. 

 

The reason why I say EI is uncomputable (I believe it is, though I haven't
proved it) is different. EI requires a functional to be minimized. So where
is the functional? I am trying to say that you need something *else*,
besides the causet, if you want to find the structures. This functional has
a mathematical expression, and the *form* of that expression is
uncomputable. That form, I have discovered. I now know that the expression
is actually very simple, and it contains nothing but the causet itself, so
again I say that the form of the expression of the functional is a property
of causets and is unique to causets. 

 

The expression happens to be very simple, and you can implement it on your
PC very easily. You will then need a program to minimize its value, and you
will be ready to calculate the structures for any given causet. 
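Derek's summary below describes the minimization concretely: find a permutation of the elements that respects the partial order and minimizes an energy on the permutation. The actual functional is not given in this thread, so the sketch below uses a stand-in energy, my assumption: the sum of order-distances between related elements, which rewards "placing related elements near each other." Brute force only works for tiny causets; it just illustrates the shape of the computation.

```python
from itertools import permutations

def linear_extensions(elements, relations):
    """Yield orderings of `elements` consistent with the partial order.

    `relations` is a set of (a, b) pairs meaning a causally precedes b.
    """
    for perm in permutations(elements):
        pos = {e: i for i, e in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            yield perm

def energy(perm, relations):
    """Stand-in functional (assumption, not Sergio's actual expression):
    sum of distances, in the permutation, between related elements."""
    pos = {e: i for i, e in enumerate(perm)}
    return sum(abs(pos[a] - pos[b]) for a, b in relations)

def best_ordering(elements, relations):
    """Brute-force minimization -- feasible only for very small causets,
    since the number of permutations grows factorially."""
    return min(linear_extensions(elements, relations),
               key=lambda p: energy(p, relations))
```

For example, for the diamond causet a < b, a < c, b < d, c < d, `best_ordering` returns one of the two linear extensions (a, b, c, d) or (a, c, b, d), which tie under this stand-in energy; a real functional would presumably discriminate further.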

 

Your explanation of EI is nearly perfect; the only detail is where you say
"find a permutation of a set." That would be a permutation of *the* set,
the same given set, or more precisely, of the elements of the given set.
"Placing related elements near each other" is exactly correct, but a little
dangerous. Doing that is known as association, also as binding, and it is
"very difficult" to do, to say the least. At AGI-11, there was a paper
precisely on this subject. 

 

I'll use a camera instead of the retina. When light hits a pixel in that
camera, an electric signal is produced and travels to the brain, I mean the
computer. That's it, that's the causal relation: light + pixel (and the
pixel has a position, which is how spatial information gets encoded) cause
signal. Multiply that by 1 million pixels, and you have a big causal set.
From the signals alone, you can't tell that the camera is looking at your
mother's face (in Hofstadter's words). But if you display the signals on a
screen, your brain will immediately recognize the image. That's EI. I did it
on a small scale on my PC, and I now want to do it on a larger scale. 
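A hedged sketch of how such pixel events could be assembled into a causal set. The relation used here, an event precedes another if a signal moving one pixel per time step could reach it, is a toy discrete light cone of my own choosing, not the actual construction:

```python
def camera_causet(events):
    """Build causal relations from camera pixel events.

    Each event is (time, x, y): light hitting pixel (x, y) at `time`.
    Assumption: event i precedes event j when a signal travelling at
    most one pixel per time step could get from i to j (a toy discrete
    light cone; the real relation would come from the physics).
    """
    relations = set()
    for i, (t1, x1, y1) in enumerate(events):
        for j, (t2, x2, y2) in enumerate(events):
            if t1 < t2 and abs(x1 - x2) + abs(y1 - y2) <= (t2 - t1):
                relations.add((i, j))  # event i causally precedes event j
    return relations
```

With a million pixels firing over many frames, the same construction yields the "big causal set" described above.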

 

Attached is a mini-article I wrote earlier today about how EI relates to
Physics. It is meant for the AGI community, so I just attached it here.

 

Sergio

 

From: Derek Zahn [mailto:[email protected]] 
Sent: Friday, July 13, 2012 4:17 PM
To: AGI
Subject: RE: [agi] Analog Computation

 

Hi Sergio,

I think you are doing it exactly right.  I am as baffled by Emergent
Inference as anybody else, but if you keep trying to explain it more and
more clearly and expand it so that others can understand your claims for it,
we will catch on (assuming its value is real).

In case it helps you, a couple of comments:

On your web site you say that EI is a type of inference, and that it is a
mathematical property, and that it cannot be run on a computer because its
structure is just there in the data.   But this is bizarre.  Inference is a
method for producing conclusions from premises.  If EI isn't that, then
don't call it inference.  If it is that, then it isn't a "property"... It is
like saying deduction (another type of inference) can't be implemented on a
computer because the conclusions are inherent in the premises.  It is very
confusing, and we haven't even started yet.

As I understand it, EI is this: given a partially ordered set, find a
permutation of a set that satisfies the partial order and minimizes a sort
of energy function on the permutation order, which (according to your
interpretation and somewhat reasonably) reveals the inherent hierarchical
structure by placing related elements near each other.  Is that right?

You talk about a retina experiment.  How is a retinal input represented as a
partially ordered set?  How is the resulting permutation used for anything?


Derek Zahn

  _____  

From: [email protected]
To: [email protected]
Subject: RE: [agi] Analog Computation
Date: Fri, 13 Jul 2012 14:38:04 -0500

Agree. I am doing my level best to write and publish the best papers I can.
This process can take 25 years. Taking into account that someone must
continue developing the theory, and I am the only one doing it, what else
should I do? 

 

Sergio

 

From: Derek Zahn [mailto:[email protected]] 
Sent: Friday, July 13, 2012 12:28 AM
To: AGI
Subject: RE: [agi] Analog Computation

 

Steve Richfield wrote:

> Any ideas for a good solution?

Of course.  Lots and lots of people get their ideas implemented, and many of
them are not rich or overly privileged.  Emulate them.

You either get a good solid start on building something useful yourself, or
you make a convincing case to people who have resources they are willing to
invest in new ideas.  If such people are not convinced by your case, find
out why and address those points in the way they need you to address them.
Unconvincingness is not a problem with the uncaring world; it's a symptom of
a defective case.   If your points are not penetrating, it does no good to
blame the audience; sharpen the points!

  _____  

Date: Thu, 12 Jul 2012 18:57:20 -0700
Subject: Re: [agi] Analog Computation
From: [email protected]
To: [email protected]

Sergio,

This entire debate reminds me of the late pre-micro era. I had my own plan
to build the first microcomputer. It was to be a bipolar chip that
implemented a bit-serial architecture. It would have been about the same
speed as the early MOS micros, but would have modern-day word lengths and
hardware multiply/divide. In short, it was a better way that was never
built.

Since then, I have met two other people who had their own plans to build the
first microcomputer, each of which was quite different from the others, and
all of which were MUCH better than any of the early micros.

So, why did they waste good silicon building garbage like the 4004 and 8008?
Because we were on the OUTSIDE. Our proposals were being rejected by the
same sorts of folks who were working on the 4004, and so the proposals had
to be killed lest they compete. We failed because we couldn't get past the
front door. The 4004 succeeded because its makers had easily avoided the
greatest barrier of all - the front door.

Here we fail because we are outsiders to all of the corporations who
desperately need what we know how to do. Of course we can always throw
proposals over their transoms, only to find their way to the very people who
would be threatened by them.

In short, this is a people problem, and not a technological problem.

Any ideas for a good solution?

Steve


AGI | Archives: <https://www.listbox.com/member/archive/303/=now> | RSS:
<https://www.listbox.com/member/archive/rss/303/4027887-e37ac021> | Modify
Your Subscription: <https://www.listbox.com/member/?&;> |
<http://www.listbox.com>



                                  07-15-2012


This is an article for my web site. 



Thermodynamics = thermos + dynamics, where thermos = heat = energy 
and dynamics = in motion. Thermodynamics is the study of energy 
in motion. 

A physical system can have a certain amount of *free energy*, 
free = unbound. There are several mechanisms that allow that 
energy to travel inside the system from one point to another. 
But if the system is open, energy can also travel across the 
boundary in either direction, resulting in a net loss or gain 
of free energy for the system. 

In Physics, travelling energy is known as *action*. 
At any particular instant of time there is a certain amount of 
energy travelling inside the system. This amount is the action 
in the system. If the system is open and hotter than the environment, 
there is going to be a net outgoing flow of energy, or a net outgoing 
action. The system is said to be dissipative. 

In a dissipative system, the value of the internal action decreases 
until it reaches a stationary point or a global minimum. At that point, 
the system cannot lose any more action, or any more of its free energy, 
and is said to be *conservative*. 

Action is a transformation that causes the state of the system to change. 
In the state space, where each state is represented by a single point, action 
causes the system to follow a certain *trajectory*. The Principle of Least 
Action states that the trajectory followed by the system between any given 
pair of points is one of least action. Or, to be more precise, one of 
stationary action. 
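In the standard notation of mechanics, this principle reads:

```latex
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0
```

The physical trajectory q(t) between the two endpoints is the one for which the action S is stationary under small variations of the path.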

The important physical quantity known as *entropy* is also associated 
with action, and with order-disorder in the system. A "hot" system, one 
that has a large internal action, also has a large entropy and is very 
disorganized. As the internal action drops, the entropy also drops, and 
the system becomes more and more organized. When the action reaches a 
stationary point, the entropy also reaches a stationary point, and the 
system becomes conservative and organized. This is the point where 
*self-organized structures* appear. The system is still dynamic, but 
its dynamics is steady, and the system is said to be in a steady state. 
This steady state is known as an *attractor*. Other names are: conserved
quantities, invariant representations, hierarchical structures. 

For more details, the reader can refer to a magnificent paper published 
by Thomas Pernu and Arto Annila in February 2012 in the 57th Volume of 
the journal Complexity, pp. 1-4. The title is "Natural Emergence."

The Principle of Symmetry, one of the most fundamental in theoretical 
Physics, associates the attractors with *symmetries* of the physical 
system. The principle does not say how to *calculate* the attractors. 
Noether's theorem allows us to calculate the attractors in one 
particular, very narrow, but also very important case. 
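For reference, the textbook statement of Noether's theorem: if the Lagrangian L is invariant under a continuous transformation of the coordinates, q -> q + e dq, then the quantity

```latex
Q = \frac{\partial L}{\partial \dot{q}}\, \delta q
```

is conserved along the trajectory. This is the particular, narrow case referred to above: it requires both a Lagrangian and a continuous symmetry.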

In causal set theory, the functional I have proposed represents action 
in the physical system. When the value of the functional is high, the 
system is "hot." Its entropy is high, its degree of organization is low. 
There are no stable or recognizable structures. 
When the functional is minimized and attains its least value, the system 
has cooled down and become conservative, its entropy is at a minimum, and 
stable, recognizable self-organized structures show up. Causal set theory 
allows one to calculate these structures by application of the principle 
of symmetry. The difference from Noether's theorem is that this 
calculation is completely general and applies to all physical systems. 











