[agi] Universal intelligence test benchmark

2008-12-23 Thread Matt Mahoney
I have been developing an experimental test set along the lines of Legg and 
Hutter's universal intelligence ( 
http://www.idsia.ch/idsiareport/IDSIA-04-05.pdf ). They define general 
intelligence as the expected reward of an AIXI agent in a Solomonoff 
distribution of environments (simulated by random Turing machines). AIXI is 
essentially a compression problem (find the shortest program consistent with 
the interaction so far). Thus, my benchmark is a large number (10^6) of small 
strings (1-32 bytes) generated by random Turing machines. The benchmark is 
here: http://cs.fit.edu/~mmahoney/compression/uiq/
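
For those who don't want to dig through the report, their measure of an agent pi is, roughly (see the report for the precise definition),

  \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V^\pi_\mu

where E is the set of computable environments (with suitably bounded rewards), K(mu) is the Kolmogorov complexity of the environment mu, and V^\pi_\mu is the expected total reward of pi interacting with mu. The 2^{-K(mu)} weighting toward simple environments is what makes the whole thing look like a compression problem.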

I believe I have solved the technical issues related to experimental 
uncertainty and ensuring the source is cryptographically random. My goal was to 
make it an open benchmark with verifiable results while making it impossible to 
hard-code any knowledge of the test data into the agent. Other benchmarks solve 
this problem by including the decompressor size in the measurement, but my 
approach makes this unnecessary. However, I would appreciate any comments.

A couple of issues arose in designing the benchmark. One is that compression 
results are highly dependent on the choice of universal Turing machine, even 
though all machines are theoretically equivalent. The problem is that even 
though any machine can simulate any other by appending a compiler or 
interpreter, this small constant is significant in practice where the 
complexity of the programs is already small. I tried to create a simple but 
expressive language based on a 2-tape machine (a working tape plus an output 
tape, both one-sided and binary) and an instruction set that outputs a bit with each 
instruction. There are, of course, many options. I suppose I could use an 
experimental approach of finding languages that rank compressors in the same 
order as other benchmarks. But there doesn't seem to be a guiding principle.
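
To give a concrete (if toy) picture of what "short strings from random programs on a simple 2-tape machine" means, here is a Python sketch. It is NOT the benchmark's actual language -- the real instruction set and sampling procedure are documented on the benchmark page -- just an illustration of the idea that every instruction emits one output bit:

import random

# Toy opcode set (hypothetical, not the benchmark's): every instruction
# appends exactly one bit to the output tape.
OPS = ["OUT0", "OUT1", "COPY", "FLIP", "RIGHT", "JUMPZ"]

def run(program, max_bits=256, max_steps=10000):
    tape = [0] * 256              # one-sided binary working tape
    head, pc, out = 0, 0, []
    for _ in range(max_steps):
        if pc >= len(program) or len(out) >= max_bits:
            break
        op, arg = program[pc]
        if op == "OUT0":
            out.append(0)
        elif op == "OUT1":
            out.append(1)
        elif op == "COPY":
            out.append(tape[head])
        elif op == "FLIP":
            tape[head] ^= 1
            out.append(tape[head])
        elif op == "RIGHT":
            head = (head + 1) % len(tape)
            out.append(tape[head])
        elif op == "JUMPZ":           # backward jump if the current cell is 0
            out.append(tape[head])
            if tape[head] == 0:
                pc = arg
                continue
        pc += 1
    return out

def random_program(n):
    ops = [random.choice(OPS) for _ in range(n)]
    return [(op, random.randrange(max(1, i)) if op == "JUMPZ" else 0)
            for i, op in enumerate(ops)]

# Collect a handful of outputs in the 1-32 byte range (the real benchmark
# has about 10^6 of them).
sample = []
while len(sample) < 10:
    bits = run(random_program(random.randrange(2, 64)))
    if 8 <= len(bits) <= 256:
        sample.append(bits)

Every design decision in a sketch like this -- the opcode set, how jumps are encoded, how program lengths are sampled -- is exactly where the choice-of-machine problem shows up.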

Also, it does not even seem possible to sample a Solomonoff distribution. Legg 
proved in http://arxiv.org/abs/cs.AI/0606070 that there are strings that are 
hard to learn, but the time to create them grows as fast as the busy beaver 
function. Of course I can't create such strings in my benchmark. I can 
create algorithmically complex sources, but they are necessarily easy to learn 
(for example, 100 random bits followed by all zero bits).
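
As a toy illustration of that last point (again hypothetical Python, not part of the benchmark): the string below carries roughly 100 bits of algorithmic complexity, yet after the random header an ideal predictor never errs again, and a generic compressor spends most of its output on the header:

import random, zlib

header = bytes(random.getrandbits(8) for _ in range(13))   # ~100 random bits
s = header + bytes(1000)                                   # followed by zeros

# Only the incompressible header costs anything; the long zero run is cheap.
print(len(s), "->", len(zlib.compress(s, 9)))

A hard-to-learn string in Legg's sense would have to keep surprising the predictor all the way through, and that is what seems to require busy-beaver amounts of computation to construct.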

Is it possible to test the intelligence of an agent without having at least as 
much computing power? Legg's paper seems to say "no".

-- Matt Mahoney, matmaho...@yahoo.com




Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
Criticizing AGI for not being neuroscience, and criticizing AGI programs for
not trying to precisely emulate humans, is really a bit silly.

One can of course make and test scientific hypotheses about the behavior of
AGI systems, quite independent of their potential relationship to human
beings.

AGI systems ultimately are physical systems, and not necessarily less
scientifically interesting than human physical systems.

-- Ben G

On Tue, Dec 23, 2008 at 7:54 PM, Colin Hales wrote:

>  Ed,
> Comments interspersed below:
>
> Ed Porter wrote:
>
>  Colin,
>
>
>
> Here are my comments re  the following parts of your below post:
>
>
>
> ===Colin said==>
>
> I merely point out that there are fundamental limits as to how computer
> science (CS) can inform/validate basic/physical science - (in an AGI
> context, brain science). Take the Baars/Franklin "IDA" project…. It predicts
> nothing neuroscience can poke a stick at…
>
>
>
> ===ED's reply===>
>
> Different AGI models can have different degrees of correspondence to, and
> different explanatory relevance to, what is believed to take place in the
> brain.  For example, Thomas Serre's PhD thesis "Learning a Dictionary of
> Shape-Components in Visual Cortex:  Comparison with Neurons, Humans and
> Machines," at
> http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf, 
> is a computer simulation which is rather similar to my concept of how a
> Novamente-like AGI could perform certain tasks in visual perception, and yet
> it is designed to model the human visual system to a considerable degree.
> It shows that a certain model of how Serre and Poggio think a certain aspect
> of the human brain works, does in fact work surprisingly well when simulated
> in a computer.
>
>
>
> A surprisingly large number of brain science papers are based on computer
> simulations, many of which are substantially simplified models, but they do
> give neuroscientists a way to poke a stick at various theories they might
> have for how the brain operates at various levels of organization.  Some of
> these papers are directly relevant to AGI.  And some AGI papers are directly
> relevant to providing answers to certain brain science questions.
>
> You are quite right! Realistic models can be quite informative and feed
> back - suggesting new empirical approaches. There can be great
> cross-fertilisation.
>
> However the point is irrelevant to the discussion at hand.
>
>  The phrase "does in fact work surprisingly well when simulated in a
> computer" illustrates the confusion. 'work'? according to whom? "surprisingly
> well"? by what criteria? The tacit assumption is that the model's thus
> implemented on a computer will/can 'behave' indistinguishably from the real
> thing, when what you are observing is a model of the real thing, not the
> real thing.
>
> If you are targeting AGI with a benchmark/target of human intellect
> or problem solving skills, then the claim made on any/all models is that
> models can attain that goal. A computer implements a model. To make a claim
> that a model  completely captures the reality upon which it was based, you
> need to have a solid theory of the relationships between models and reality
> that is not wishful thinking or assumption, but solid science. Here's where
> you run into the problematic issue that basic physical sciences have with
> models.
>
> There's a boundary to cross - when you claim to have access to human level
> intellect - then you are demanding an equivalence with a real human, not a
> model of a human.
>
>
>
> ===Colin said==>
>
> I agree with your :
>
> "*At the other end of things, physicists are increasingly viewing physical
> reality as a computation, and thus the science of computation (and
> communication which is a part of it), such as information theory, have begun
> to play an increasingly important role in the most basic of all sciences.*
> "
>
>
> ===ED's reply===>
>
> We are largely on the same page here
>
>
>
> ===Colin said==>
>
> I disagree with:
>
> "But the brain is not part of an eternal verity.  It is the result of the
> engineering of evolution. "
>
> Unless I've missed something ... The natural evolutionary 'engineering'
> that has been going on has *not* been the creation of a MODEL (aboutness)
> of things - the 'engineering' has evolved the construction of the *actual*
> things. The two are not the same. The brain is indeed 'part of an eternal
> verity' - it is made of natural components operating in a fashion we attempt
> to model as 'laws of nature'...
>
>
>
> ===ED's reply===>
>
> If you define engineering as a process that involves designing something in
> the abstract --- i.e., in your "a MODEL (aboutness of things)" --- before
> physically building it, you could claim evolution is not engineering.
>
>
>
> But if you define engineering as the designing of things (by a process that
> has intelligence what ever method) to solve a set of problems or
> constra

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Colin Hales

Ed,
Comments interspersed below:

Ed Porter wrote:


Colin,

 


Here are my comments re  the following parts of your below post:

 


===Colin said==>

I merely point out that there are fundamental limits as to how 
computer science (CS) can inform/validate basic/physical science - (in 
an AGI context, brain science). Take the Baars/Franklin "IDA" 
project... It predicts nothing neuroscience can poke a stick at...


 


===ED's reply===>

Different AGI models can have different degrees of correspondence to, 
and different explanatory relevance to, what is believed to take place 
in the brain.  For example, Thomas Serre's PhD thesis "Learning a 
Dictionary of Shape-Components in Visual Cortex:  Comparison with 
Neurons, Humans and Machines," at 
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf 
, is a computer simulation which is rather similar to my concept of 
how a Novamente-like AGI could perform certain tasks in visual 
perception, and yet it is designed to model the human visual system to 
a considerable degree.  It shows that a certain model of how Serre and 
Poggio think a certain aspect of the human brain works, does in fact 
work surprisingly well when simulated in a computer.


 

A surprisingly large number of brain science papers are based on 
computer simulations, many of which are substantially simplified 
models, but they do give neuroscientists a way to poke a stick at 
various theories they might have for how the brain operates at various 
levels of organization.  Some of these papers are directly relevant to 
AGI.  And some AGI papers are directly relevant to providing answers 
to certain brain science questions.


You are quite right! Realistic models can be quite informative and feed 
back - suggesting new empirical approaches. There can be great 
cross-fertilisation.


However the point is irrelevant to the discussion at hand.

The phrase "does in fact work surprisingly well when simulated in a 
computer" illustrates the confusion. 'work'? according to whom? 
"surprisingly well"? by what criteria? The tacit assumption is that the 
models thus implemented on a computer will/can 'behave' 
indistinguishably from the real thing, when what you are observing is a 
model of the real thing, not the real thing.


If you are targeting AGI with a benchmark/target of human intellect 
or problem solving skills, then the claim made on any/all models is that 
models can attain that goal. A computer implements a model. To make a 
claim that a model  completely captures the reality upon which it was 
based, you need to have a solid theory of the relationships between 
models and reality that is not wishful thinking or assumption, but solid 
science. Here's where you run into the problematic issue that basic 
physical sciences have with models. 

There's a boundary to cross - when you claim to have access to human 
level intellect - then you are demanding an equivalence with a real 
human, not a model of a human.


 


===Colin said==>

I agree with your :

"/At the other end of things, physicists are increasingly viewing 
physical reality as a computation, and thus the science of computation 
(and communication which is a part of it), such as information theory, 
have begun to play an increasingly important role in the most basic of 
all sciences./"



===ED's reply===>

We are largely on the same page here

 


===Colin said==>

I disagree with:

"But the brain is not part of an eternal verity.  It is the result of 
the engineering of evolution. "


Unless I've missed something ... The natural evolutionary 
'engineering' that has been going on has /not/ been the creation of a 
MODEL (aboutness) of things - the 'engineering' has evolved the 
construction of the /actual/ things. The two are not the same. The 
brain is indeed 'part of an eternal verity' - it is made of natural 
components operating in a fashion we attempt to model as 'laws of 
nature'...


 


===ED's reply===>

If you define engineering as a process that involves designing 
something in the abstract --- i.e., in your "a MODEL (aboutness of 
things)" --- before physically building it, you could claim evolution 
is not engineering. 

 

But if you define engineering as the designing of things (by a process 
that has intelligence what ever method) to solve a set of problems or 
constraints, evolution does perform engineering, and the brain was 
formed by such engineering.


 

How can you claim the human brain is an eternal verity, since it is 
only believed to have existed in anything close to its current 
form in the last 30 to 100 thousand years, and there is no guarantee 
how much longer it will continue to exist.  Compared to much of what 
the natural sciences study, its existence appears quite fleeting.


I think this is just a terminology issue. The 'laws of nature' are the 
eternal verity, to me. The dynamical output they represent - of course 
that does whatever i

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ed Porter
Richard,

 

You originally totally trashed Tononi's paper, including its central core,
by saying:

 

"It is, for want of a better word, nonsense.  And since people take me to 

task for being so dismissive, let me add that it is the central thesis 

of the paper that is "nonsense":  if you ask yourself very carefully 

what it is he is claiming, you can easily come up with counterexamples 

that make a mockery of his conclusion."

 

When asked to support your statement that 

 

"you can easily come up with counterexammples that make a mockery of his
conclusion " 

 

you refused.  You did so by grossly mis-describing Tononi's paper (for
example it does not include "pages of ... math", of any sort, and particularly
not "pages of irrelevant math") and implying its mis-described faults so
offended your delicate sense of AGI propriety that re-reading it enough to
find support for your extremely critical (and perhaps totally unfair)
condemnation would be either too much work or too emotionally painful.

 

You said the counterexamples to the core of this paper were easy to come up
with, but you can't seem to come up with any. 

 

Such stunts have the appearance of being those of a pompous windbag.

 

Ed Porter

 

P.S. Your postscript is not sufficiently clear to provide much support for
your position.

P.P.S. You below  

 

 

-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com] 
Sent: Tuesday, December 23, 2008 9:53 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

 

Ed Porter wrote:

> Richard, 

> 

> Please describe some of the counterexamples, that you can easily come up

> with, that make a mockery of Tononi's conclusion.

> 

> Ed Porter

 

Alas, I will have to disappoint.  I put a lot of effort into 

understanding his paper first time around, but the sheer agony of 

reading (/listening to) his confused, shambling train of thought, the 

non sequiturs, and the pages of irrelevant math ... that I do not need 

to experience a second time.  All of my original effort only resulted in 

the discovery that I had wasted my time, so I have no interest in 

wasting more of my time.

 

With other papers that contain more coherent substance, but perhaps what 

looks like an error, I would make the effort.  But not this one.

 

It will have to be left as an exercise for the reader, I'm afraid.

 

 

 

Richard Loosemore

 

 

P.S.   A hint.  All I remember was that he started talking about 

multiple regions (columns?) of the brain exchanging information with one 

another in a particular way, and then he asserted a conclusion which, on 

quick reflection, I knew would not be true of a system resembling the 

distributed one that I described in my consciousness paper (the 

molecular model).  Knowing that his conclusion was flat-out untrue for 

that one case, and for a whole class of similar systems, his argument 

was toast.

 

 

 

 

 

 

 

 

 

> -Original Message-

> From: Richard Loosemore [mailto:r...@lightlink.com] 

> Sent: Monday, December 22, 2008 8:54 AM

> To: agi@v2.listbox.com

> Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a

> machine that can learn from experience

> 

> Ed Porter wrote:

>> I don't think this AGI list should be so quick to dismiss a $4.9 million 

>> dollar grant to create an AGI.  It will not necessarily be "vaporware." 

>> I think we should view it as a good sign.

>>

>>  

>>

>> Even if it is for a project that runs the risk, like many DARPA projects 

>> (like most scientific funding in general) of not necessarily placing its 

>> money where it might do the most good --- it is likely to at least 

>> produce some interesting results --- and it just might make some very 

>> important advances in our field.

>>

>>  

>>

>> The article from http://www.physorg.com/news148754667.html said:

>>

>>  

>>

>> ".a $4.9 million grant.for the first phase of DARPA's Systems of 

>> Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

>>

>>  

>>

>> Tononi and scientists from Columbia University and IBM will work on the 

>> "software" for the thinking computer, while nanotechnology and 

>> supercomputing experts from Cornell, Stanford and the University of 

>> California-Merced will create the "hardware." Dharmendra Modha of IBM is 

>> the principal investigator.

>>

>>  

>>

>> The idea is to create a computer capable of sorting through multiple 

>> streams of changing data, to look for patterns and make logical
decisions.

>>

>>  

>>

>> There's another requirement: The finished cognitive computer should be 

>> as small as the brain of a small mammal and use as little power as a 

>> 100-watt light bulb. It's a major challenge. But it's what our brains do 

>> every day.

>>

>>  

>>

>> I have just spent several hours reading a Tononi paper, "An information 

>> integration theo

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ed Porter
Colin,

 

Here are my comments re  the following parts of your below post:

 

===Colin said==>

I merely point out that there are fundamental limits as to how computer
science (CS) can inform/validate basic/physical science - (in an AGI
context, brain science). Take the Baars/Franklin "IDA" project... It predicts
nothing neuroscience can poke a stick at...

 

===ED's reply===>

Different AGI models can have different degrees of correspondence to, and
different explanatory relevance to, what is believed to take place in the
brain.  For example, Thomas Serre's PhD thesis "Learning a Dictionary of
Shape-Components in Visual Cortex:  Comparison with Neurons, Humans and
Machines," at

http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf
, is a computer simulation which is rather similar to my concept of how a
Novamente-like AGI could perform certain tasks in visual perception, and yet
it is designed to model the human visual system to a considerable degree.
It shows that a certain model of how Serre and Poggio think a certain aspect
of the human brain works, does in fact work surprisingly well when simulated
in a computer.

 

A surprisingly large number of brain science papers are based on computer
simulations, many of which are substantially simplified models, but they do
give neuroscientists a way to poke a stick at various theories they might
have for how the brain operates at various levels of organization.  Some of
these papers are directly relevant to AGI.  And some AGI papers are directly
relevant to providing answers to certain brain science questions. 

 

===Colin said==>

I agree with your :

"At the other end of things, physicists are increasingly viewing physical
reality as a computation, and thus the science of computation (and
communication which is a part of it), such as information theory, have begun
to play an increasingly important role in the most basic of all sciences."


===ED's reply===>

We are largely on the same page here

 

===Colin said==>

I disagree with:

"But the brain is not part of an eternal verity.  It is the result of the
engineering of evolution. "

Unless I've missed something ... The natural evolutionary 'engineering' that
has been going on has not been the creation of a MODEL (aboutness) of things
- the 'engineering' has evolved the construction of the actual things. The
two are not the same. The brain is indeed 'part of an eternal verity' - it
is made of natural components operating in a fashion we attempt to model as
'laws of nature'...

 

===ED's reply===>

If you define engineering as a process that involves designing something in
the abstract --- i.e., in your "a MODEL (aboutness of things)" --- before
physically building it, you could claim evolution is not engineering.  

 

But if you define engineering as the designing of things (by a process that
has intelligence what ever method) to solve a set of problems or
constraints, evolution does perform engineering, and the brain was formed by
such engineering.

 

How can you claim the human brain is an eternal verity, since it is only
believed to have existed in anything close to its current form in the
last 30 to 100 thousand years, and there is no guarantee how much longer it
will continue to exist.  Compared to much of what the natural sciences
study, its existence appears quite fleeting.

 

===Colin said==>

Anyway, for these reasons, folks who use computer models to study human
brains/consciousness will encounter some difficulty justifying, to the basic
physical sciences, claims made as to the equivalence of the model and
reality. That difficulty is fundamental and cannot be 'believed away'. 

 

===ED's reply===>

If you attend brain science lectures and read brain science literature, you
will find that computer modeling is playing an ever increasing role in brain
science --- so this basic difficulty that you describe largely does not
exist.

 

===Colin said==>

The intelligence originates in the brain. AGI and brain science must be
literally joined at the hip or the AGI enterprise is arguably scientifically
impoverished wishful thinking. 

===ED's reply===>

I don't know what you mean by "joined at the hip," but I think it is being
overly anthropomorphic to think an artificial mind has to slavishly model a
human brain to have great power and worth.  

 

But I do think it would probably have to accomplish some of the same general
functions, such as automatic pattern learning, credit assignment, attention
control, etc.

 

Ed Porter

 

-Original Message-
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au] 
Sent: Monday, December 22, 2008 11:36 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

 

Ed,
I wasn't trying to justify or promote a 'divide'. The t

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
I mentioned it because I looked at the book again recently and was pleasantly
surprised at how well his ideas seemed to have held up ... In other words,
although there are points on which I think he's probably wrong, his
decade-old ideas *still* seem more sensible and insightful than most of the
theoretical speculations one reads in the neuroscience literature... and I
can't really think of any recent neuroscience data that refutes any of his
key hypotheses...

On Tue, Dec 23, 2008 at 10:36 AM, Richard Loosemore wrote:

> Ben Goertzel wrote:
>
>>
>> Richard,
>>
>> I'm curious what you think of William Calvin's neuroscience hypotheses as
>> presented in e.g. "The Cerebral Code"
>>
>> That book is a bit out of date now, but still, he took complexity and
>> nonlinear dynamics quite seriously, so it seems to me there may be some
>> resonance between his ideas and your own
>>
>> I find his speculative ideas more agreeable than Tononi's, myself...
>>
>> thx
>> ben g
>>
>
> Yes, I did read his book (or part of it) back in 98/99, but ...
>
> From what I remember, I found resonance, as you say, but he is one of those
> people who is struggling to find a way to turn an intuition into something
> concrete.  It is just that he wrote a book about it before he got to
> Concrete Operations.
>
> It would be interesting to take a look at it again, 10 years later, and see
> whether my opinion has changed.
>
> To put this in context, I felt like I was looking at a copy of myself back
> in 1982, when I struggled to write down my intuitions as a physicist coming
> to terms with psychology for the first time.  I am now acutely embarrassed
> by the naivete of that first attempt, but in spite of the embarrassment I
> know that I have since turned those intuitions into something meaningful,
> and I know that in spite of my original hubris, I was on a path to something
> that actually did make sense.  To cognitive scientists at the time it looked
> awful, unmotivated and disconnected from reality (by itself, it was!), but
> in the end it was not trash because it had real substance buried inside it.
>
> With people like Calvin (and others) I see writings that look somewhat
> speculative and ungrounded, just like my early attempts, so I am mixed
> between a desire to be lenient (because I was like that once) and a
> feeling that they really need to be aware that their thoughts are still
> ungelled.
>
> Anyhow, that's my quick thoughts on him.  I'll see if I can dig out his
> book at some point.
>
>
>
>
> Richard Loosemore
>
>
>
>
>
>
>
>
>
>> On Tue, Dec 23, 2008 at 9:53 AM, Richard Loosemore <r...@lightlink.com> wrote:
>>
>>Ed Porter wrote:
>>
>>Richard,
>>Please describe some of the counterexamples, that you can easily
>>come up
>>with, that make a mockery of Tononi's conclusion.
>>
>>Ed Porter
>>
>>
>>Alas, I will have to disappoint.  I put a lot of effort into
>>understanding his paper first time around, but the sheer agony of
>>reading (/listening to) his confused, shambling train of thought,
>>the non-sequiteurs, and the pages of irrelevant math  that I do
>>not need to experience a second time.  All of my original effort
>>only resulted in the discovery that I had wasted my time, so I have
>>no interest in wasting more of my time.
>>
>>With other papers that contain more coherent substance, but perhaps
>>what looks like an error, I would make the effort.  But not this one.
>>
>>It will have to be left as an exercise for the reader, I'm afraid.
>>
>>
>>
>>Richard Loosemore
>>
>>
>>P.S.   A hint.  All I remember was that he started talking about
>>multiple regions (columns?) of the brain exchanging information with
>>one another in a particular way, and then he asserted a conclusion
>>which, on quick reflection, I knew would not be true of a system
>>resembling the distributed one that I described in my consciousness
>>paper (the molecular model).  Knowing that his conclusion was
>>flat-out untrue for that one case, and for a whole class of similar
>>systems, his argument was toast.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>-Original Message-
>>From: Richard Loosemore [mailto:r...@lightlink.com]
>>Sent: Monday, December 22, 2008 8:54 AM
>>To: agi@v2.listbox.com
>>Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
>>machine that can learn from experience
>>
>>Ed Porter wrote:
>>
>>I don't think this AGI list should be so quick to dismiss a
>>$4.9 million dollar grant to create an AGI.  It will not
>>necessarily be "vaporware." I think we should view it as a
>>good sign.
>>
>>Even if it is for a project that runs the risk,
>> like many
>>DARPA projects (like most scientific fund

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Richard Loosemore

Ben Goertzel wrote:


Richard,

I'm curious what you think of William Calvin's neuroscience hypotheses 
as presented in e.g. "The Cerebral Code"


That book is a bit out of date now, but still, he took complexity and 
nonlinear dynamics quite seriously, so it seems to me there may be some 
resonance between his ideas and your own


I find his speculative ideas more agreeable than Tononi's, myself...

thx
ben g


Yes, I did read his book (or part of it) back in 98/99, but ...

From what I remember, I found resonance, as you say, but he is one of 
those people who is struggling to find a way to turn an intuition into 
something concrete.  It is just that he wrote a book about it before he 
got to Concrete Operations.


It would be interesting to take a look at it again, 10 years later, and 
see whether my opinion has changed.


To put this in context, I felt like I was looking at a copy of myself 
back in 1982, when I struggled to write down my intuitions as a 
physicist coming to terms with psychology for the first time.  I am now 
acutely embarrassed by the naivete of that first attempt, but in spite 
of the embarrassment I know that I have since turned those intuitions 
into something meaningful, and I know that in spite of my original 
hubris, I was on a path to something that actually did make sense.  To 
cognitive scientists at the time it looked awful, unmotivated and 
disconnected from reality (by itself, it was!), but in the end it was 
not trash because it had real substance buried inside it.


With people like Calvin (and others) I see writings that look somewhat 
speculative and ungrounded, just like my early attempts, so I am mixed 
between a desire to be lenient (because I was like that once) and a 
feeling that they really need to be aware that their thoughts are still 
ungelled.


Anyhow, that's my quick thoughts on him.  I'll see if I can dig out his 
book at some point.





Richard Loosemore










On Tue, Dec 23, 2008 at 9:53 AM, Richard Loosemore wrote:


Ed Porter wrote:

Richard,
Please describe some of the counterexamples, that you can easily
come up
with, that make a mockery of Tononi's conclusion.

Ed Porter


Alas, I will have to disappoint.  I put a lot of effort into
understanding his paper first time around, but the sheer agony of
reading (/listening to) his confused, shambling train of thought,
the non sequiturs, and the pages of irrelevant math ... that I do
not need to experience a second time.  All of my original effort
only resulted in the discovery that I had wasted my time, so I have
no interest in wasting more of my time.

With other papers that contain more coherent substance, but perhaps
what looks like an error, I would make the effort.  But not this one.

It will have to be left as an exercise for the reader, I'm afraid.



Richard Loosemore


P.S.   A hint.  All I remember was that he started talking about
multiple regions (columns?) of the brain exchanging information with
one another in a particular way, and then he asserted a conclusion
which, on quick reflection, I knew would not be true of a system
resembling the distributed one that I described in my consciousness
paper (the molecular model).  Knowing that his conclusion was
flat-out untrue for that one case, and for a whole class of similar
systems, his argument was toast.









-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com]
Sent: Monday, December 22, 2008 8:54 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
machine that can learn from experience

Ed Porter wrote:

I don't think this AGI list should be so quick to dismiss a
$4.9 million dollar grant to create an AGI.  It will not
necessarily be "vaporware." I think we should view it as a
good sign.

 
Even if it is for a project that runs the risk, like many

DARPA projects (like most scientific funding in general) of
not necessarily placing its money where it might do the most
good --- it is likely to at least produce some interesting
results --- and it just might make some very important
advances in our field.

 
The article from http://www.physorg.com/news148754667.html said:


 
".a $4.9 million grant.for the first phase of DARPA's

Systems of Neuromorphic Adaptive Plastic Scalable
Electronics (SyNAPSE) project.

 
Tononi and scientists from Columbia University and IBM will

work on the "software" for the thinking computer, while
nanotechnology and 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
Richard,

I'm curious what you think of William Calvin's neuroscience hypotheses as
presented in e.g. "The Cerebral Code"

That book is a bit out of date now, but still, he took complexity and
nonlinear dynamics quite seriously, so it seems to me there may be some
resonance between his ideas and your own

I find his speculative ideas more agreeable than Tononi's, myself...

thx
ben g

On Tue, Dec 23, 2008 at 9:53 AM, Richard Loosemore wrote:

> Ed Porter wrote:
>
>> Richard,
>> Please describe some of the counterexamples, that you can easily come up
>> with, that make a mockery of Tononi's conclusion.
>>
>> Ed Porter
>>
>
> Alas, I will have to disappoint.  I put a lot of effort into understanding
> his paper first time around, but the sheer agony of reading (/listening to)
> his confused, shambling train of thought, the non sequiturs, and the pages
> of irrelevant math ... that I do not need to experience a second time.  All
> of my original effort only resulted in the discovery that I had wasted my
> time, so I have no interest in wasting more of my time.
>
> With other papers that contain more coherent substance, but perhaps what
> looks like an error, I would make the effort.  But not this one.
>
> It will have to be left as an exercise for the reader, I'm afraid.
>
>
>
> Richard Loosemore
>
>
> P.S.   A hint.  All I remember was that he started talking about multiple
> regions (columns?) of the brain exchanging information with one another in a
> particular way, and then he asserted a conclusion which, on quick
> reflection, I knew would not be true of a system resembling the distributed
> one that I described in my consciousness paper (the molecular model).
>  Knowing that his conclusion was flat-out untrue for that one case, and for
> a whole class of similar systems, his argument was toast.
>
>
>
>
>
>
>
>
>
>  -Original Message-
>> From: Richard Loosemore [mailto:r...@lightlink.com] Sent: Monday,
>> December 22, 2008 8:54 AM
>> To: agi@v2.listbox.com
>> Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
>> machine that can learn from experience
>>
>> Ed Porter wrote:
>>
>>> I don't think this AGI list should be so quick to dismiss a $4.9 million
>>> dollar grant to create an AGI.  It will not necessarily be "vaporware." I
>>> think we should view it as a good sign.
>>>
>>>
>>> Even if it is for a project that runs the risk, like many DARPA projects
>>> (like most scientific funding in general) of not necessarily placing its
>>> money where it might do the most good --- it is likely to at least produce
>>> some interesting results --- and it just might make some very important
>>> advances in our field.
>>>
>>>
>>> The article from http://www.physorg.com/news148754667.html said:
>>>
>>>
>>> ".a $4.9 million grant.for the first phase of DARPA's Systems of
>>> Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.
>>>
>>>
>>> Tononi and scientists from Columbia University and IBM will work on the
>>> "software" for the thinking computer, while nanotechnology and
>>> supercomputing experts from Cornell, Stanford and the University of
>>> California-Merced will create the "hardware." Dharmendra Modha of IBM is the
>>> principal investigator.
>>>
>>>
>>> The idea is to create a computer capable of sorting through multiple
>>> streams of changing data, to look for patterns and make logical decisions.
>>>
>>>
>>> There's another requirement: The finished cognitive computer should be as
>>> small as the brain of a small mammal and use as little power as a 100-watt
>>> light bulb. It's a major challenge. But it's what our brains do every day.
>>>
>>>
>>> I have just spent several hours reading a Tononi paper, "An information
>>> integration theory of consciousness" and skimmed several parts of his book
>>> "A Universe of Consciousness" he wrote with Edleman, whom Ben has referred
>>> to often in his writings.  (I have attached my mark up of the article, which
>>> if you read just the yellow highlighted text, or (for more detail) the red,
>>> you can get a quick understanding of.  You can also view it in MSWord
>>> outline mode if you like.)
>>>
>>>
>>> This paper largely agrees with my notion, stated multiple times on this
>>> list, that consciousness is an incredibly complex computation that interacts
>>> with itself in a very rich manner that makes it aware of itself.
>>>
>>
>> For the record, this looks like the paper that I listened to Tononi talk
>> about a couple of years ago -- the one I mentioned in my last message.
>>
>> It is, for want of a better word, nonsense.  And since people take me to
>> task for being so dismissive, let me add that it is the central thesis of
>> the paper that is "nonsense":  if you ask yourself very carefully what it is
>> he is claiming, you can easily come up with counterexamples that make a
>> mockery of his conclusion.
>>
>>
>>
>> Richard Loosemore
>>
>>
>> ---
>> agi
>> Archives:

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Richard Loosemore

Ed Porter wrote:
Richard, 


Please describe some of the counterexamples, that you can easily come up
with, that make a mockery of Tononi's conclusion.

Ed Porter


Alas, I will have to disappoint.  I put a lot of effort into 
understanding his paper first time around, but the sheer agony of 
reading (/listening to) his confused, shambling train of thought, the 
non sequiturs, and the pages of irrelevant math ... that I do not need 
to experience a second time.  All of my original effort only resulted in 
the discovery that I had wasted my time, so I have no interest in 
wasting more of my time.


With other papers that contain more coherent substance, but perhaps what 
looks like an error, I would make the effort.  But not this one.


It will have to be left as an exercise for the reader, I'm afraid.



Richard Loosemore


P.S.   A hint.  All I remember was that he started talking about 
multiple regions (columns?) of the brain exchanging information with one 
another in a particular way, and then he asserted a conclusion which, on 
quick reflection, I knew would not be true of a system resembling the 
distributed one that I described in my consciousness paper (the 
molecular model).  Knowing that his conclusion was flat-out untrue for 
that one case, and for a whole class of similar systems, his argument 
was toast.











-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com] 
Sent: Monday, December 22, 2008 8:54 AM

To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

Ed Porter wrote:
I don't think this AGI list should be so quick to dismiss a $4.9 million 
dollar grant to create an AGI.  It will not necessarily be "vaporware." 
I think we should view it as a good sign.


 

Even if it is for a project that runs the risk, like many DARPA projects 
(like most scientific funding in general) of not necessarily placing its 
money where it might do the most good --- it is likely to at least 
produce some interesting results --- and it just might make some very 
important advances in our field.


 


The article from http://www.physorg.com/news148754667.html said:

 

".a $4.9 million grant.for the first phase of DARPA's Systems of 
Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.


 

Tononi and scientists from Columbia University and IBM will work on the 
"software" for the thinking computer, while nanotechnology and 
supercomputing experts from Cornell, Stanford and the University of 
California-Merced will create the "hardware." Dharmendra Modha of IBM is 
the principal investigator.


 

The idea is to create a computer capable of sorting through multiple 
streams of changing data, to look for patterns and make logical decisions.


 

There's another requirement: The finished cognitive computer should be 
as small as the brain of a small mammal and use as little power as a 
100-watt light bulb. It's a major challenge. But it's what our brains do 
every day.


 

I have just spent several hours reading a Tononi paper, "An information 
integration theory of consciousness" and skimmed several parts of his 
book "A Universe of Consciousness" he wrote with Edleman, whom Ben has 
referred to often in his writings.  (I have attached my mark up of the 
article, which if you read just the yellow highlighted text, or (for 
more detail) the red, you can get a quick understanding of.  You can 
also view it in MSWord outline mode if you like.)


 

This paper largely agrees with my notion, stated multiple times on this 
list, that consciousness is an incredibly complex computation that 
interacts with itself in a very rich manner that makes it aware of itself.


For the record, this looks like the paper that I listened to Tononi talk 
about a couple of years ago -- the one I mentioned in my last message.


It is, for want of a better word, nonsense.  And since people take me to 
task for being so dismissive, let me add that it is the central thesis 
of the paper that is "nonsense":  if you ask yourself very carefully 
what it is he is claiming, you can easily come up with counterexamples 
that make a mockery of his conclusion.




Richard Loosemore

