Ben,

A causal set, or causet, is a finite, acyclic poset (details left out). It is
allowed to grow (learning), but it always remains finite. Causets were
introduced in quantum gravity physics in the '90s as a special case of
posets, and they have been largely monopolized by that discipline. I never mentioned
the term in any of my publications. Here is the story. For years, I had been
working with "canonical matrices," because they had interesting
self-organizing properties. I knew that canonical matrices and posets were
equivalent, thought that posets were more fundamental, and published my
paper in Complexity. After publication, I was contacted by scientists from
many countries, and the Russians pointed out to me that the posets I had
been using were actually causets, and that I had been working with
causality. One could say I rediscovered causets independently. I am not
claiming that, but I will never again omit "causet" from any future
publication. 
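For concreteness, here is a minimal sketch of a causet as a finite set of
events with an acyclic "precedes" relation. The names and structure are my
own, illustrative only; the boolean reachability matrix it builds is one
matrix form of the same poset (whether it matches the "canonical matrices"
above is my assumption, not a claim from the papers):

```python
# A causet sketched as a finite set of events with an acyclic "precedes"
# relation. Illustrative only; names and structure are my own.

def transitive_closure(edges, n):
    """Full precedence relation (Warshall's algorithm) from direct links."""
    reach = [[False] * n for _ in range(n)]
    for a, b in edges:
        reach[a][b] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    return reach

def is_causet(edges, n):
    """Acyclicity check: in a causet no event may precede itself."""
    reach = transitive_closure(edges, n)
    return all(not reach[i][i] for i in range(n))

# Four events: 0 causes 1 and 2, both of which cause 3 (a "diamond").
links = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(is_causet(links, 4))              # True: acyclic, a valid causet
print(is_causet(links + [(3, 0)], 4))   # False: 0 < 1 < 3 < 0 is a cycle
```

Growth ("learning") in this picture would amount to appending new events
whose incoming edges point only from events already present, so the set
stays finite and acyclic at every step.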

There are only two causet experts in the world: Rafael Sorkin and
Tommaso Bolognesi. Sorkin specializes in quantum gravity; Bolognesi
published "Causal Sets from Simple Models of Computation" in 2010, where he
proposed:
        "Causal sets are the only objects of physical significance and
        relevance to be considered under the 'computational universe'
        perspective, and the appropriate abstraction for shielding the
        unessential details of many different computationally universal
        candidate models."
This statement, originally intended for quantum gravity applications, is
also true for macroscopic systems, and is positioned at the very core of
AGI. 

I will now address your concerns about "why causets?" Months ago, I posted
something like the following:
        "Consider a computer program, any program. It could be OpenCog, or
        it could be spaghetti code of the worst kind you can possibly
        imagine, with numerous GOTOs, breaking into and out of the range of
        other GOTOs or FORs or WHILEs. However, that poor devil the
        programmer has managed to make it work. You compile the program and
        run it on a regular computer with only one thread. And the program
        runs sequentially."
The compiler has converted the program into a causal set, written in
assembly language or machine language. There is in fact an isomorphism
between causets and algorithms or computer programs that halt (close
connection with the Turing halting problem; we can discuss this later). I
studied the isomorphism and, a couple of years ago, published the
transformations between software and causets, in both directions. It can all
be automated, so for all practical purposes software and causets can be
considered equivalent.
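To illustrate the program-to-causet direction, here is a toy sketch of my
own devising, not the published transformation: treat each executed
instruction of a sequential trace as an event, and let instruction i
precede instruction j whenever j reads a value that i wrote:

```python
# Toy extraction of a causet from a sequential execution trace.
# Each instruction is an event; i precedes j if j reads a variable that i
# last wrote. A hypothetical sketch, not the published transformation.

trace = [
    ("a", set()),        # a = input()
    ("b", set()),        # b = input()
    ("c", {"a", "b"}),   # c = a + b
    ("d", {"c"}),        # d = c * 2
]

def causet_from_trace(trace):
    """Return the direct causal links (i, j) implied by data dependencies."""
    edges = set()
    last_writer = {}                 # variable name -> event that wrote it
    for j, (written, reads) in enumerate(trace):
        for var in reads:
            if var in last_writer:
                edges.add((last_writer[var], j))
        last_writer[written] = j
    return edges

print(sorted(causet_from_trace(trace)))   # [(0, 2), (1, 2), (2, 3)]
```

Note that events 0 and 1 come out unordered with respect to each other: the
causet records only the causal dependencies, which is precisely the
information a one-thread schedule flattens into a total order.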

Now, think of the state of the art in computer modeling. Just about anything
from Philosophy to hard Science to Engineering to Government to Politics has
been modeled by software. Correction, by causets. Causets are not difficult,
or complicated, or rare. They are everywhere; we use them all the time. Any
computer program is a giant causet, and our eyes and ears, and our sensors as
well, capture causets. We get causets when we read Braille. We work with
them when we develop algorithms, or theories. Or when we predict, or read a
mystery novel. I hope this answers a common objection that causet modeling
would be too difficult. Causets have an awesome power of representation of
the world. 

BEN SAID> Quantum mechanics certainly has no use for causality.
SERGIO REPLIES> So a transistor or a CMOS junction is not a causal device?
Perhaps microprocessors are not deterministic? 

BEN SAID> we feel a boulder knocking into a tree is a plausible causal
mechanism for knocking down the tree, because we can imagine ourselves in
the position of the boulder, knocking down the tree...
SERGIO REPLIES> Boulder < knocking down the tree (where `<' means precedes)
is the causality. Nothing to do with feeling, or with us. Then: I saw a
boulder knocking down a tree < can I knock down a tree? To say nothing of
the "saw" part. 

BEN SAID> A difference in our approaches is that I tend to begin with
phenomenology rather than physics.  I tend to view the "physical world" as a
model that the mind builds up to explain certain subjective observations in
its memory. Whereas you seem to take this particular model as the
foundational starting point...
SERGIO REPLIES> I believe the physical world exists independently of us, and
that we are physical objects that are part of it. We interact with that physical
world via our senses and muscles. I see the brain as a machine that helps us
to survive. This is my foundational starting point. Can you explain yours a
little more? Do you mean the physical world does not exist, that we only
imagine it? 


Sergio


-----Original Message-----
From: Ben Goertzel [mailto:[email protected]] 
Sent: Saturday, June 09, 2012 3:56 PM
To: AGI
Subject: Re: [agi] The Visual Alphabet

Can you remind me which of your papers discusses "causets"?  I don't
remember them from the stuff of yours that I read before...

I think I understand causality as the concept is used in physics.  The most
compelling use of the concept is in special relativity theory, with light
cones and such

Quantum mechanics certainly has no use for causality

By and large I think causality can be thought of as "temporal precedence,
plus conditional probability, plus existence of a plausible causal mechanism
as judged by a particular mind".....

Roughly "existence of a plausible causal mechanism" seems to come down to
analogical reasoning based on what actions the  mind itself feels itself as
being able to do.  For instance, we feel a boulder knocking into a tree is a
plausible causal mechanism for knocking down the tree, because we can
imagine ourselves in the position of the boulder, knocking down the tree...

So I see temporal precedence and probability as somewhat foundational, and
the *feeling* of subjectively impelling/willing something as somewhat
foundational -- but the assignation of causation to observed events as a
derived, conjectural, psychological thing, rather than something
foundational to base a theory of mind upon...

A difference in our approaches is that I tend to begin with phenomenology
rather than physics.  I tend to view the "physical world" as a model that
the mind builds up to explain certain subjective observations in its memory.
Whereas you seem to take this particular model as the foundational starting
point...

ben g

On Sat, Jun 9, 2012 at 3:03 PM, Sergio Pissanetzky <[email protected]>
wrote:
> Ben,
>
> you answered in bulk, and you are still ignoring important facts. And 
> you are barking up the wrong tree.
>
>
> BEN SAID> The mathematics of EI is pleasant enough, though my 
> poset-theory-expert friend commented that it largely consists of stuff 
> that poset theorists know already...
> SERGIO REPLIES> Alright, I'll check on that and report later. 
> Everything that applies to posets applies also to causets. But the 
> converse is not true. There are facts that apply to causets but not to 
> posets. EI is one example. The difference is in the Turing halting 
> problem. Causets always halt, posets may not. I don't know if your 
> expert friend noticed this tiny detail. BTW, "largely" is not good enough.
> The devil is in the details.
>
>
> BEN SAID> Yeah, just as birds define flying... right... ;p SERGIO 
> REPLIES> Centuries ago, anyone who thought about flight thought about 
> birds, or about throwing a rock. Today, anyone who thinks about 
> intelligence thinks about the brain. There are notions such as 
> intelligence, meaning, emotions, that just cannot be defined without 
> a reference to the brain. Just check repeated but failed attempts by
> "experts."
>
>
> BEN SAID> I don't remember what a "causet" is, but if I replace it 
> with "packets of information" then the above statements seem obvious.
> SERGIO REPLIES> Good. You are touching a critical point. The 
> statements in their short form apply equally well to causets or packets of
> information.
> However, information on arrival to the retina, or to a camera, is 
> causal, and the retina responds in a causal way. If you capture only a 
> "packet of information" that disregards causality, you are leaving
> information behind.
> This is the core reason why experts in image recognition have failed 
> for decades to recognize images. The type of causal info they leave 
> behind corresponds, precisely, to grounding and embodiment. Surprise?
>
>
> BEN SAID> I don't find the notion of causation particularly useful in 
> a scientific context, it strikes me as mainly a "folk psychology" 
> concept, like "free will" ...
> SERGIO REPLIES> Really? You haven't convinced me that you know or care 
> to know much about causality (or causation). You have a mental loop: 
> causation is not useful, so why bother to learn about it, but then you 
> don't know how causation is useful. You are not listening, Ben. I am 
> telling you in a loud voice, causation is of the essence. Do not throw it
> away!
>
>
> BEN SAID> But its importance for AI or neural modeling is a different 
> story, which I don't yet buy into...
> SERGIO REPLIES> You will.
>
>
> BEN SAID> I don't think poset theory "looks like the brain" very much 
> at all.
> SERGIO REPLIES> That's possibly correct. And that's why I am not using 
> poset theory.
>
>
> BEN SAID> If I had to pick a branch of math to cite in this context -- 
> Nonlinear dynamical systems theory looks a lot more like the brain, 
> and has a lot more demonstrated use for modeling brain function. Look 
> at Izhikevich's book on the geometry of biologically realistic neural 
> nets, for example.
> SERGIO REPLIES> Good pick. Did I ever say that causets exhibit the 
> properties of nonlinear dynamical systems? They have emergence and 
> self-organization, they have attractors, butterfly effect, 
> deterministic chaos, potential wells with energy levels... just to 
> mention a few. This is for causet+functional, causets are 
> mathematical, but Physics enters via the functional, and suddenly 
> causets behave just like nonlinear dynamical systems.
>
> BTW, I am about to post a statement about data structures and 
> representations, where I emphasize how a Physicist and a Mathematician 
> think differently about information. It will not be addressed to you, 
> but please read it.
>
>
> BEN SAID> From my perspective, since I genuinely think I *am* (on a 
> plausible path to AGI), it would be irresponsible for me to hide in a 
> hole and shut up about it ;) SERGIO REPLIES> Please don't shut up, but 
> you also need to listen more. You are an honest man. I believe I am 
> too, and I am bothered you don't seem to trust me in the least. You 
> don't have to, but then you have to check for yourself.
>
> Sergio
>
>
> -----Original Message-----
> From: Ben Goertzel [mailto:[email protected]]
> Sent: Friday, June 08, 2012 4:17 PM
> To: AGI
> Subject: Re: [agi] The Visual Alphabet
>
>> Still, you are ignoring a number of facts:
>> 1. The brain is the only known "intelligent" system. This defines 
>> intelligence.
>
> Yeah, just as birds define flying... right... ;p
>
>> 2. Sensory organs generate causets and feed them to afferent nerves 
>> and the brain.
>> 3. Muscles receive causets from brain/efferent nerves.
>> 4. Unless you believe in magic, or in what Kauffman says about 
>> Quantum Mechanics in the brain, or something else, the brain is a 
>> complex causal physical system. Physics envy or not.
>> 5. Causal systems have properties. For example, they can learn (grow).
>> It is not wise to dismiss these properties as "not fundamental."
>
> I don't remember what a "causet" is, but if I replace it with "packets 
> of information"
> then the above statements seem obvious
>
> I don't find the notion of causation particularly useful in a 
> scientific context, it strikes me as mainly a "folk psychology" 
> concept, like "free will" ...
>
>> 6. EI is a new type of inference. It is inference because it allows 
>> one to derive new facts from known facts. It is not wise to disregard 
>> EI because "I" am or am not well informed. What does "I" have to do 
>> with
> EI?
>> 7. EI does not linearize anything. It dissipates energy, which is 
>> something all physical systems can do, even the brain.
>> 8. EI is not heuristic.
>> 9. EI is a function that maps from a countably infinite set to 
>> another, the set of "raw" causets, as they come in from sensors or 
>> senses, broken into tiny pieces, to the set of "organized" causets.
>> Actually the two sets are the same, they are the same causets, but 
>> the
> organization is a new fact.
>
> The mathematics of EI is pleasant enough, though my 
> poset-theory-expert friend commented that it largely consists of stuff 
> that poset theorists know already, explained using eccentric
> terminology...
>
> But its importance for AI or neural modeling is a different story, 
> which I don't yet buy into...
>
>> 10.  2-9 look a lot like the brain. Certainly more than any other 
>> type of inference that we know.
>
> I don't think poset theory "looks like the brain" very much at all.
>
> If I had to pick a branch of math to cite in this context -- Nonlinear 
> dynamical systems theory looks a lot more like the brain, and has a 
> lot more demonstrated use for modeling brain function.   Look at 
> Izhikevich's book on the geometry of biologically realistic neural 
> nets, for example
>
>
>> The reason why chemists can design chemicals, or aeronautical 
>> engineers can design aircraft, is because they understand the 
>> principles
> of their science.
>> And once they understand the principles, they can use them in 
> ingenious and creative ways. Otherwise it would be alchemy or kite 
>> flying. AGI does not have a principle. This does not mean that 
>> "anything goes." It only means that AGI needs a principle, and we all 
>> ought to be trying to find it. Only then will we be able to engineer
> intelligent systems.
>
> Chemistry and biology don't have simple, elegant unifying principles 
> in the sense that physics does.  They  have multiple principles on 
> various levels with various levels of certitude....  I suspect the 
> science of intelligence will be the same way.  And we are gradually 
> building those principles as we do AGI and cognitive science.  There 
> will be no "quick fix", no simple elegant set of mathematical 
> principles of intelligence that lets you formulaically design an AGI
> system on the back of an envelope.
>
>> Ben, it seems you still don't understand EI, and/or don't believe 
>> that EI is inference, and is new. Just look no further than my 
>> section on Small Systems in my paper. Any sensible person, 
>> particularly one who is searching for machine intelligence, should be 
> wondering how that happened, and what can one do with it.
>
> I read that, and
> I really don't see what those mathematical games have to say about 
> general intelligence....
>
>>I am sorry if I am hurting your interests, but I already  warned 
>>months ago about the responsibility of claiming AGI. If this one  
>>fails, there may not be another for a long time.
>
> The only way you're "hurting my interests" is by occupying a small 
> fraction of my time on a not-so-productive email thread... ;p
>
> Regarding "claiming AGI" --- nobody sane that I know is claiming to 
> have created AGI.   Claiming to be on a plausible path to AGI is a 
> different thing.
>
> From your standpoint, since you think I'm doomed to fail due to my not 
> embracing the cosmic truth of EI, I guess it's unfortunate that I 
> claim to be on a plausible path to AGI.
>
> From my perspective, since I genuinely think I *am*, it would be 
> irresponsible for me to hide in a hole and shut up about it ;)
>
> -- Ben G
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: 
> https://www.listbox.com/member/archive/rss/303/18883996-f0d58d57
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> d2
> Powered by Listbox: http://www.listbox.com
>
>
>
>
>



--
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche







