RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-26 Thread Ed Porter
Richard, 

 

Since you are clearly in the mode you routinely get into when you start
loosing an argument on this list --- as has happened so many times before
--- i.e., of ceasing all further productive communication on the actual
subject of the argument --- this will be my last communication with you on
this subject --- that is --- unless you actually come of up with some
reasonable support for your brash statement that the central core of
Tonini's paper, which I cited, was nonsense.

 

I have better things to do than get into extended arguments with people who
are as intellectually dishonest as you become when you start loosing an
argument.

 

Ed Porter

 

 

-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com] 
Sent: Friday, December 26, 2008 1:03 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

 

Ed Porter wrote:

 Why is it that people who repeatedly and insultingly say other people's 

 work or ideas are total nonsense -- without any reasonable 

 justification -- are still allowed to participate in the discussion on 

 the AGI list?

 

Because they know what they are talking about.

 

And because they got that way by having a low tolerance for fools, 

nonsense and people who can't tell the difference between the critique 

of an idea and a personal insult.

 

;-)

 

 

 

 

Richard Loosemore

 

 

 

---

agi

Archives: https://www.listbox.com/member/archive/303/=now

RSS Feed: https://www.listbox.com/member/archive/rss/303/

Modify Your Subscription:
https://www.listbox.com/member/?;

Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-25 Thread Richard Loosemore

Ed Porter wrote:
Why is it that people who repeatedly and insultingly say other people’s 
work or ideas are total nonsense -- without any reasonable 
justification -- are still allowed to participate in the discussion on 
the AGI list?


Because they know what they are talking about.

And because they got that way by having a low tolerance for fools, 
nonsense and people who can't tell the difference between the critique 
of an idea and a personal insult.


;-)




Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-24 Thread Ed Porter

===Colin said==

The tacit assumption is that the model's thus implemented on a computer
will/can 'behave' indistinguishably from the real thing, when what you are
observing is a model of the real thing, not the real thing.

===ED's reply===

I was making no assumption that the model would be behave indistinquisably
from the real thing, but instead only that there were meaningful --- and,
from a cross-fertilization standpoint, informative --- levels of description
at which the computer model and the corresponding brain behavior were
similar.

 


===Colin said==

There's a boundary to cross - when you claim to have access to human level
intellect - then you are demanding a equivalence with a real human, not a
model of a human. 

===ED's reply===

When I, and presumably many other AGIers, say human-level AGI, we do not
mean an exact functional replica of the human brain or mind.  Rather we mean
an AGI that can do things like speak and understand natural language, see
and understand the meaning of its visual surroundings, reason from the rough
equivalent of human-level world knowledge, have common sense, do creative
problems solving, and other mental tasks --- substantially as well as most
people.  Its methods of computation do not have to be exactly like those
used in the mind, the major issue is that its competencies be at least
roughly as good over a range of talents. 

 

===Colin said==

I don't think there's any real issue here. Mostly semantics being mixed a
bit.

Gotta get back to xmas! Yuletide stuff to you. 

===ED's reply===

Agreed.

 

Ed Porter

 

 

-Original Message-
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au] 
Sent: Tuesday, December 23, 2008 7:55 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

 

Ed,
Comments interspersed below:

Ed Porter wrote: 

Colin,

 

Here are my comments re  the following parts of your below post:

 

===Colin said==

I merely point out that there are fundamental limits as to how computer
science (CS) can inform/validate basic/physical science - (in an AGI
context, brain science). Take the Baars/Franklin IDA project.. It predicts
nothing neuroscience can poke a stick at.

 

===ED's reply===

Different AGI models can have different degrees of correspondence to, and
different explanatory relevance to, what is believed to take place in the
brain.  For example the Thomas Serre's PhD thesis Learning a Dictionary of
Shape-Components in Visual Cortex:  Comparison with Neurons, Humans and
Machines, at from
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf

http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf
, is a computer simulation which is rather similar to my concept of how a
Novamente-like AGI could perform certain tasks in visual perception, and yet
it is designed to model the human visual system to a considerable degree.
It shows that a certain model of how Serre and Poggio think a certain aspect
of the human brain works, does in fact work surprisingly well when simulated
in a computer.

 

A surprisingly large number of brain science papers are based on computer
simulations, many of which are substantially simplified models, but they do
given neuroscientists a way to poke a stick at various theories they might
have for how the brain operates at various levels of organization.  Some of
these papers are directly relevant to AGI.  And some AGI papers are directly
relevant to providing answers to certain brain science questions.

You are quite right! Realistic models can be quite informative and feed back
- suggesting new empirical approaches. There can be great
cross-fertilisation.

However the point is irrelevant to the discussion at hand.

 The phrase does in fact work surprisingly well when simulated in a
computer illustrates the confusion. 'work'? according to whom?
surprisingly well? by what criteria? The tacit assumption is that the
model's thus implemented on a computer will/can 'behave' indistinguishably
from the real thing, when what you are observing is a model of the real
thing, not the real thing.

HERE If you targeting AGI with a benchmark/target of human intellect or
problem solving skills, then the claim made on any/all models is that models
can attain that goal. A computer implements a model. To make a claim that a
model  completely captures the reality upon which it was based, you need to
have a solid theory of the relationships between models and reality that is
not wishful thinking or assumption, but solid science. Here's where you run
into the problematic issue that basic physical sciences have with models.  

There's a boundary to cross - when you claim to have access to human level
intellect - then you are demanding a equivalence with a real human, not a
model of a human. 

 

===Colin said==

I agree with your :

At the other end of things, 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-24 Thread Richard Loosemore



Why is it that people who repeatedly resort to personal abuse like this 
are still allowed to participate in the discussion on the AGI list?





Richard Loosemore





Ed Porter wrote:

Richard,

 

You originally totally trashed Tononi's paper, including its central 
core, by saying:


 


It is, for want of a better word, nonsense.  And since people take me to

task for being so dismissive, let me add that it is the central thesis

of the paper that is nonsense:  if you ask yourself very carefully

what it is he is claiming, you can easily come up with counterexammples

that make a mockery of his conclusion.\

 


When asked to support your statement that

 

you can easily come up with counterexammples that make a mockery of his 
conclusion 


 

you refused.  You did so by grossly mis-describing Tononi’s paper (for 
example it does not include “pages of …math”, of any sort, and 
particularly not “pages of irrelevant math”) and implying its 
mis-described faults so offended your delicate sense of AGI propriety 
that re-reading it enough to find support for your extremely critical 
(and perhaps totally unfair) condemnation would be either too much work 
or too emotionally painful.


 

You said the counterexamples to the core of this paper were easy to come 
up with, but you can’t seem to come up with any.


 


Such stunts have the appearance of being those of a pompous windbag.

 


Ed Porter

 

P.S. Your postscript is not sufficiently clear to provide much support 
for your position.


P.P.S. You below  

 

 


-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com]
Sent: Tuesday, December 23, 2008 9:53 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a 
machine that can learn from experience


 


Ed Porter wrote:


 Richard,







 Please describe some of the counterexamples, that you can easily come up



 with, that make a mockery of Tononi's conclusion.







 Ed Porter


 


Alas, I will have to disappoint.  I put a lot of effort into

understanding his paper first time around, but the sheer agony of

reading (/listening to) his confused, shambling train of thought, the

non-sequiteurs, and the pages of irrelevant math  that I do not need

to experience a second time.  All of my original effort only resulted in

the discovery that I had wasted my time, so I have no interest in

wasting more of my time.

 


With other papers that contain more coherent substance, but perhaps what

looks like an error, I would make the effort.  But not this one.

 


It will have to be left as an exercise for the reader, I'm afraid.

 

 

 


Richard Loosemore

 

 


P.S.   A hint.  All I remember was that he started talking about

multiple regions (columns?) of the brain exchanging information with one

another in a particular way, and then he asserted a conclusion which, on

quick reflection, I knew would not be true of a system resembling the

distributed one that I described in my consciousness paper (the

molecular model).  Knowing that his conclusion was flat-out untrue for

that one case, and for a whole class of similar systems, his argument

was toast.

 

 

 

 

 

 

 

 

 


 -Original Message-



 From: Richard Loosemore [mailto:r...@lightlink.com]



 Sent: Monday, December 22, 2008 8:54 AM



 To: agi@v2.listbox.com



 Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a



 machine that can learn from experience







 Ed Porter wrote:



 I don't think this AGI list should be so quick to dismiss a $4.9 million



 dollar grant to create an AGI.  It will not necessarily be vaporware.



 I think we should view it as a good sign.






 







 Even if it is for a project that runs the risk, like many DARPA projects



 (like most scientific funding in general) of not necessarily placing its



 money where it might do the most good --- it is likely to at least



 produce some interesting results --- and it just might make some very



 important advances in our field.






 







 The article from http://www.physorg.com/news148754667.html said:






 







 .a $4.9 million grant.for the first phase of DARPA's Systems of



 Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.






 







 Tononi and scientists from Columbia University and IBM will work on the



 software for the thinking computer, while nanotechnology and



 supercomputing experts from Cornell, Stanford and the University of



 California-Merced will create the hardware. Dharmendra Modha of IBM is



 the principal investigator.






 







 The idea is to create a computer capable of sorting through multiple


 streams of changing data, to look for patterns and make logical 

decisions.





 







 There's another requirement: The finished cognitive computer should be



 as small as a the brain of a small mammal and use as little power as a



 100-watt light bulb. It's a major 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Richard Loosemore

Ed Porter wrote:
Richard, 


Please describe some of the counterexamples, that you can easily come up
with, that make a mockery of Tononi's conclusion.

Ed Porter


Alas, I will have to disappoint.  I put a lot of effort into 
understanding his paper first time around, but the sheer agony of 
reading (/listening to) his confused, shambling train of thought, the 
non-sequiteurs, and the pages of irrelevant math  that I do not need 
to experience a second time.  All of my original effort only resulted in 
the discovery that I had wasted my time, so I have no interest in 
wasting more of my time.


With other papers that contain more coherent substance, but perhaps what 
looks like an error, I would make the effort.  But not this one.


It will have to be left as an exercise for the reader, I'm afraid.



Richard Loosemore


P.S.   A hint.  All I remember was that he started talking about 
multiple regions (columns?) of the brain exchanging information with one 
another in a particular way, and then he asserted a conclusion which, on 
quick reflection, I knew would not be true of a system resembling the 
distributed one that I described in my consciousness paper (the 
molecular model).  Knowing that his conclusion was flat-out untrue for 
that one case, and for a whole class of similar systems, his argument 
was toast.











-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com] 
Sent: Monday, December 22, 2008 8:54 AM

To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

Ed Porter wrote:
I don't think this AGI list should be so quick to dismiss a $4.9 million 
dollar grant to create an AGI.  It will not necessarily be vaporware. 
I think we should view it as a good sign.


 

Even if it is for a project that runs the risk, like many DARPA projects 
(like most scientific funding in general) of not necessarily placing its 
money where it might do the most good --- it is likely to at least 
produce some interesting results --- and it just might make some very 
important advances in our field.


 


The article from http://www.physorg.com/news148754667.html said:

 

.a $4.9 million grant.for the first phase of DARPA's Systems of 
Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.


 

Tononi and scientists from Columbia University and IBM will work on the 
software for the thinking computer, while nanotechnology and 
supercomputing experts from Cornell, Stanford and the University of 
California-Merced will create the hardware. Dharmendra Modha of IBM is 
the principal investigator.


 

The idea is to create a computer capable of sorting through multiple 
streams of changing data, to look for patterns and make logical decisions.


 

There's another requirement: The finished cognitive computer should be 
as small as a the brain of a small mammal and use as little power as a 
100-watt light bulb. It's a major challenge. But it's what our brains do 
every day.


 

I have just spent several hours reading a Tononi paper, An information 
integration theory of consciousness and skimmed several parts of his 
book A Universe of Consciousness he wrote with Edleman, whom Ben has 
referred to often in his writings.  (I have attached my mark up of the 
article, which if you read just the yellow highlighted text, or (for 
more detail) the red, you can get a quick understanding of.  You can 
also view it in MSWord outline mode if you like.)


 

This paper largely agrees with my notion, stated multiple times on this 
list, that consciousness is an incredibly complex computation that 
interacts with itself in a very rich manner that makes it aware of itself.


For the record, this looks like the paper that I listened to Tononi talk 
about a couple of years ago -- the one I mentioned in my last message.


It is, for want of a better word, nonsense.  And since people take me to 
task for being so dismissive, let me add that it is the central thesis 
of the paper that is nonsense:  if you ask yourself very carefully 
what it is he is claiming, you can easily come up with counterexammples 
that make a mockery of his conclusion.




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
Richard,

I'm curious what you think of William Calvin's neuroscience hypotheses as
presented in e.g. The Cerebral Code

That book is a bit out of date now, but still, he took complexity and
nonlinear dynamics quite seriously, so it seems to me there may be some
resonance between his ideas and your own

I find his speculative ideas more agreeable than Tononi's, myself...

thx
ben g

On Tue, Dec 23, 2008 at 9:53 AM, Richard Loosemore r...@lightlink.comwrote:

 Ed Porter wrote:

 Richard,
 Please describe some of the counterexamples, that you can easily come up
 with, that make a mockery of Tononi's conclusion.

 Ed Porter


 Alas, I will have to disappoint.  I put a lot of effort into understanding
 his paper first time around, but the sheer agony of reading (/listening to)
 his confused, shambling train of thought, the non-sequiteurs, and the pages
 of irrelevant math  that I do not need to experience a second time.  All
 of my original effort only resulted in the discovery that I had wasted my
 time, so I have no interest in wasting more of my time.

 With other papers that contain more coherent substance, but perhaps what
 looks like an error, I would make the effort.  But not this one.

 It will have to be left as an exercise for the reader, I'm afraid.



 Richard Loosemore


 P.S.   A hint.  All I remember was that he started talking about multiple
 regions (columns?) of the brain exchanging information with one another in a
 particular way, and then he asserted a conclusion which, on quick
 reflection, I knew would not be true of a system resembling the distributed
 one that I described in my consciousness paper (the molecular model).
  Knowing that his conclusion was flat-out untrue for that one case, and for
 a whole class of similar systems, his argument was toast.









  -Original Message-
 From: Richard Loosemore [mailto:r...@lightlink.com] Sent: Monday,
 December 22, 2008 8:54 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
 machine that can learn from experience

 Ed Porter wrote:

 I don't think this AGI list should be so quick to dismiss a $4.9 million
 dollar grant to create an AGI.  It will not necessarily be vaporware. I
 think we should view it as a good sign.


 Even if it is for a project that runs the risk, like many DARPA projects
 (like most scientific funding in general) of not necessarily placing its
 money where it might do the most good --- it is likely to at least produce
 some interesting results --- and it just might make some very important
 advances in our field.


 The article from http://www.physorg.com/news148754667.html said:


 .a $4.9 million grant.for the first phase of DARPA's Systems of
 Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.


 Tononi and scientists from Columbia University and IBM will work on the
 software for the thinking computer, while nanotechnology and
 supercomputing experts from Cornell, Stanford and the University of
 California-Merced will create the hardware. Dharmendra Modha of IBM is the
 principal investigator.


 The idea is to create a computer capable of sorting through multiple
 streams of changing data, to look for patterns and make logical decisions.


 There's another requirement: The finished cognitive computer should be as
 small as a the brain of a small mammal and use as little power as a 100-watt
 light bulb. It's a major challenge. But it's what our brains do every day.


 I have just spent several hours reading a Tononi paper, An information
 integration theory of consciousness and skimmed several parts of his book
 A Universe of Consciousness he wrote with Edleman, whom Ben has referred
 to often in his writings.  (I have attached my mark up of the article, which
 if you read just the yellow highlighted text, or (for more detail) the red,
 you can get a quick understanding of.  You can also view it in MSWord
 outline mode if you like.)


 This paper largely agrees with my notion, stated multiple times on this
 list, that consciousness is an incredibly complex computation that interacts
 with itself in a very rich manner that makes it aware of itself.


 For the record, this looks like the paper that I listened to Tononi talk
 about a couple of years ago -- the one I mentioned in my last message.

 It is, for want of a better word, nonsense.  And since people take me to
 task for being so dismissive, let me add that it is the central thesis of
 the paper that is nonsense:  if you ask yourself very carefully what it is
 he is claiming, you can easily come up with counterexammples that make a
 mockery of his conclusion.



 Richard Loosemore


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com



 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Richard Loosemore

Ben Goertzel wrote:


Richard,

I'm curious what you think of William Calvin's neuroscience hypotheses 
as presented in e.g. The Cerebral Code


That book is a bit out of date now, but still, he took complexity and 
nonlinear dynamics quite seriously, so it seems to me there may be some 
resonance between his ideas and your own


I find his speculative ideas more agreeable than Tononi's, myself...

thx
ben g


Yes, I did read his book (or part of it) back in 98/99, but 

From what I remember, I found resonance, as you say, but he is one of 
those people who is struggling to find a way to turn an intuition into 
something concrete.  It is just that he wrote a book about it before he 
got to Concrete Operations.


It would be interesting to take a look at it again, 10 years later, and 
see whether my opinion has changed.


To put this in context, I felt like I was looking at a copy of myself 
back in 1982, when I struggled to write down my intuitions as a 
physicist coming to terms with psychology for the first time.  I am now 
acutely embarrassed by the naivete of that first attempt, but in spite 
of the embarrassment I know that I have since turned those intuitions 
into something meaningful, and I know that in spite of my original 
hubris, I was on a path to something that actually did make sense.  To 
cognitive scientists at the time it looked awful, unmotivated and 
disconnected from reality (by itself, it was!), but in the end it was 
not trash because it had real substance buried inside it.


With people like Calvin (and others) I see writings that look somewhat 
speculative and ungrounded, just like my early attempts, so I am mixed 
between a desire to be lenient (because I was that like that once) and a 
feeling that they really need to be aware that their thoughts are still 
ungelled.


Anyhow, that's my quick thoughts on him.  I'll see if I can dig out his 
book at some point.





Richard Loosemore










On Tue, Dec 23, 2008 at 9:53 AM, Richard Loosemore r...@lightlink.com 
mailto:r...@lightlink.com wrote:


Ed Porter wrote:

Richard,
Please describe some of the counterexamples, that you can easily
come up
with, that make a mockery of Tononi's conclusion.

Ed Porter


Alas, I will have to disappoint.  I put a lot of effort into
understanding his paper first time around, but the sheer agony of
reading (/listening to) his confused, shambling train of thought,
the non-sequiteurs, and the pages of irrelevant math  that I do
not need to experience a second time.  All of my original effort
only resulted in the discovery that I had wasted my time, so I have
no interest in wasting more of my time.

With other papers that contain more coherent substance, but perhaps
what looks like an error, I would make the effort.  But not this one.

It will have to be left as an exercise for the reader, I'm afraid.



Richard Loosemore


P.S.   A hint.  All I remember was that he started talking about
multiple regions (columns?) of the brain exchanging information with
one another in a particular way, and then he asserted a conclusion
which, on quick reflection, I knew would not be true of a system
resembling the distributed one that I described in my consciousness
paper (the molecular model).  Knowing that his conclusion was
flat-out untrue for that one case, and for a whole class of similar
systems, his argument was toast.









-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com
mailto:r...@lightlink.com] Sent: Monday, December 22, 2008 8:54 AM
To: agi@v2.listbox.com mailto:agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was 
Building a
machine that can learn from experience

Ed Porter wrote:

I don't think this AGI list should be so quick to dismiss a
$4.9 million dollar grant to create an AGI.  It will not
necessarily be vaporware. I think we should view it as a
good sign.

 
Even if it is for a project that runs the risk, like many

DARPA projects (like most scientific funding in general) of
not necessarily placing its money where it might do the most
good --- it is likely to at least produce some interesting
results --- and it just might make some very important
advances in our field.

 
The article from http://www.physorg.com/news148754667.html said:


 
.a $4.9 million grant.for the first phase of DARPA's

Systems of Neuromorphic Adaptive Plastic Scalable
Electronics (SyNAPSE) project.

 
Tononi and scientists from Columbia University and IBM will

work on the software for the thinking computer, while

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
I mentioned it because looked at the book again recently and was pleasantly
surprised at how well his ideas seemed to have held up  In other words,
although there are point on which I think he's probably wrong, his
decade-old ideas *still* seem more sensible and insightful than most of the
theoretical speculations one reads in the neuroscience literature... and I
can't really think of any recent neuroscience data that refutes any of his
key hypotheses...

On Tue, Dec 23, 2008 at 10:36 AM, Richard Loosemore r...@lightlink.comwrote:

 Ben Goertzel wrote:


 Richard,

 I'm curious what you think of William Calvin's neuroscience hypotheses as
 presented in e.g. The Cerebral Code

 That book is a bit out of date now, but still, he took complexity and
 nonlinear dynamics quite seriously, so it seems to me there may be some
 resonance between his ideas and your own

 I find his speculative ideas more agreeable than Tononi's, myself...

 thx
 ben g


 Yes, I did read his book (or part of it) back in 98/99, but 

 From what I remember, I found resonance, as you say, but he is one of those
 people who is struggling to find a way to turn an intuition into something
 concrete.  It is just that he wrote a book about it before he got to
 Concrete Operations.

 It would be interesting to take a look at it again, 10 years later, and see
 whether my opinion has changed.

 To put this in context, I felt like I was looking at a copy of myself back
 in 1982, when I struggled to write down my intuitions as a physicist coming
 to terms with psychology for the first time.  I am now acutely embarrassed
 by the naivete of that first attempt, but in spite of the embarrassment I
 know that I have since turned those intuitions into something meaningful,
 and I know that in spite of my original hubris, I was on a path to something
 that actually did make sense.  To cognitive scientists at the time it looked
 awful, unmotivated and disconnected from reality (by itself, it was!), but
 in the end it was not trash because it had real substance buried inside it.

 With people like Calvin (and others) I see writings that look somewhat
 speculative and ungrounded, just like my early attempts, so I am mixed
 between a desire to be lenient (because I was that like that once) and a
 feeling that they really need to be aware that their thoughts are still
 ungelled.

 Anyhow, that's my quick thoughts on him.  I'll see if I can dig out his
 book at some point.




 Richard Loosemore









 On Tue, Dec 23, 2008 at 9:53 AM, Richard Loosemore 
 r...@lightlink.commailto:
 r...@lightlink.com wrote:

Ed Porter wrote:

Richard,
Please describe some of the counterexamples, that you can easily
come up
with, that make a mockery of Tononi's conclusion.

Ed Porter


Alas, I will have to disappoint.  I put a lot of effort into
understanding his paper first time around, but the sheer agony of
reading (/listening to) his confused, shambling train of thought,
the non-sequiteurs, and the pages of irrelevant math  that I do
not need to experience a second time.  All of my original effort
only resulted in the discovery that I had wasted my time, so I have
no interest in wasting more of my time.

With other papers that contain more coherent substance, but perhaps
what looks like an error, I would make the effort.  But not this one.

It will have to be left as an exercise for the reader, I'm afraid.



Richard Loosemore


P.S.   A hint.  All I remember was that he started talking about
multiple regions (columns?) of the brain exchanging information with
one another in a particular way, and then he asserted a conclusion
which, on quick reflection, I knew would not be true of a system
resembling the distributed one that I described in my consciousness
paper (the molecular model).  Knowing that his conclusion was
flat-out untrue for that one case, and for a whole class of similar
systems, his argument was toast.









-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com
mailto:r...@lightlink.com] Sent: Monday, December 22, 2008 8:54
 AM
To: agi@v2.listbox.com mailto:agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was 
Building a
machine that can learn from experience

Ed Porter wrote:

I don't think this AGI list should be so quick to dismiss a
$4.9 million dollar grant to create an AGI.  It will not
necessarily be vaporware. I think we should view it as a
good sign.

Even if it is for a project that runs the risk,
 like many
DARPA projects (like most scientific funding in general) of
not necessarily placing its money where it might do the most
good --- it is likely to at least produce some interesting

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ed Porter
Colin,

 

Here are my comments re  the following parts of your below post:

 

===Colin said==

I merely point out that there are fundamental limits as to how computer
science (CS) can inform/validate basic/physical science - (in an AGI
context, brain science). Take the Baars/Franklin IDA project.. It predicts
nothing neuroscience can poke a stick at.

 

===ED's reply===

Different AGI models can have different degrees of correspondence to, and
different explanatory relevance to, what is believed to take place in the
brain.  For example the Thomas Serre's PhD thesis Learning a Dictionary of
Shape-Components in Visual Cortex:  Comparison with Neurons, Humans and
Machines, at from
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf

http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf
, is a computer simulation which is rather similar to my concept of how a
Novamente-like AGI could perform certain tasks in visual perception, and yet
it is designed to model the human visual system to a considerable degree.
It shows that a certain model of how Serre and Poggio think a certain aspect
of the human brain works, does in fact work surprisingly well when simulated
in a computer.

 

A surprisingly large number of brain science papers are based on computer
simulations, many of which are substantially simplified models, but they do
given neuroscientists a way to poke a stick at various theories they might
have for how the brain operates at various levels of organization.  Some of
these papers are directly relevant to AGI.  And some AGI papers are directly
relevant to providing answers to certain brain science questions. 

 

===Colin said==

I agree with your :

At the other end of things, physicists are increasingly viewing physical
reality as a computation, and thus the science of computation (and
communication which is a part of it), such as information theory, have begun
to play an increasingly important role in the most basic of all sciences.


===ED's reply===

We are largely on the same page here

 

===Colin said==

I disagree with:

But the brain is not part of an eternal verity.  It is the result of the
engineering of evolution. 

Unless I've missed something ... The natural evolutionary 'engineering' that
has been going on has not been the creation of a MODEL (aboutness) of things
- the 'engineering' has evolved the construction of the actual things. The
two are not the same. The brain is indeed 'part of an eternal verity' - it
is made of natural components operating in a fashion we attempt to model as
'laws of nature',,,

 

===ED's reply===

If you define engineering as a process that involves designing something in
the abstract --- i.e., in your a MODEL (aboutness of things) --- before
physically building it, you could claim evolution is not engineering.  

 

But if you define engineering as the designing of things (by a process that
has intelligence what ever method) to solve a set of problems or
constraints, evolution does perform engineering, and the brain was formed by
such engineering.

 

How can you claim the human brain is an eternal verity, since it is only
believed that it has existing in anything close to its current form in the
last 30 to 100 thousand years, and there is no guarantee how much longer it
will continue to exists.  Compared to much of what the natural sciences
study, its existence appears quite fleeting.

 

===Colin said==

Anyway, for these reasons, folks who use computer models to study human
brains/consciousness will encounter some difficulty justifying, to the basic
physical sciences, claims made as to the equivalence of the model and
reality. That difficulty is fundamental and cannot be 'believed away'. 

 

===ED's reply===

If you attend brain science lectures and read brain science literature, you
will find that computer modeling is playing an ever increasing role in brain
science --- so this basic difficulty that you describe largely does not
exist.

 

===Colin said==

The intelligence originates in the brain. AGI and brain science must be
literally joined at the hip or the AGI enterprise is arguably scientifically
impoverished wishful thinking. 

===ED's reply===

I don't know what you mean by joined at the hip, but I think it is being
overly anthropomorphic to think an artificial mind has to slavishly model a
human brain to have great power and worth.  

 

But I do think it would probably have to accomplish some of the same general
functions, such as automatic pattern learning, credit assignment, attention
control, etc.

 

Ed Porter

 

-Original Message-
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au] 
Sent: Monday, December 22, 2008 11:36 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

 

Ed,
I wasn't trying to justify or promote a 'divide'. The two worlds must be

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ed Porter
Richard,

 

You originally totally trashed Tononi's paper, including its central core,
by saying:

 

It is, for want of a better word, nonsense.  And since people take me to 

task for being so dismissive, let me add that it is the central thesis 

of the paper that is nonsense:  if you ask yourself very carefully 

what it is he is claiming, you can easily come up with counterexammples 

that make a mockery of his conclusion.\

 

When asked to support your statement that 

 

you can easily come up with counterexammples that make a mockery of his
conclusion  

 

you refused.  You did so by grossly mis-describing Tononi's paper (for
example it does not include pages of .math, of any sort, and particularly
not pages of irrelevant math) and implying its mis-described faults so
offended your delicate sense of AGI propriety that re-reading it enough to
find support for your extremely critical (and perhaps totally unfair)
condemnation would be either too much work or too emotionally painful.

 

You said the counterexamples to the core of this paper were easy to come up
with, but you can't seem to come up with any. 

 

Such stunts have the appearance of being those of a pompous windbag.

 

Ed Porter

 

P.S. Your postscript is not sufficiently clear to provide much support for
your position.

P.P.S. You below  

 

 

-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com] 
Sent: Tuesday, December 23, 2008 9:53 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

 

Ed Porter wrote:

 Richard, 

 

 Please describe some of the counterexamples, that you can easily come up

 with, that make a mockery of Tononi's conclusion.

 

 Ed Porter

 

Alas, I will have to disappoint.  I put a lot of effort into 

understanding his paper first time around, but the sheer agony of 

reading (/listening to) his confused, shambling train of thought, the 

non-sequiteurs, and the pages of irrelevant math  that I do not need 

to experience a second time.  All of my original effort only resulted in 

the discovery that I had wasted my time, so I have no interest in 

wasting more of my time.

 

With other papers that contain more coherent substance, but perhaps what 

looks like an error, I would make the effort.  But not this one.

 

It will have to be left as an exercise for the reader, I'm afraid.

 

 

 

Richard Loosemore

 

 

P.S.   A hint.  All I remember was that he started talking about 

multiple regions (columns?) of the brain exchanging information with one 

another in a particular way, and then he asserted a conclusion which, on 

quick reflection, I knew would not be true of a system resembling the 

distributed one that I described in my consciousness paper (the 

molecular model).  Knowing that his conclusion was flat-out untrue for 

that one case, and for a whole class of similar systems, his argument 

was toast.

 

 

 

 

 

 

 

 

 

 -Original Message-

 From: Richard Loosemore [mailto:r...@lightlink.com] 

 Sent: Monday, December 22, 2008 8:54 AM

 To: agi@v2.listbox.com

 Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a

 machine that can learn from experience

 

 Ed Porter wrote:

 I don't think this AGI list should be so quick to dismiss a $4.9 million 

 dollar grant to create an AGI.  It will not necessarily be vaporware. 

 I think we should view it as a good sign.



  



 Even if it is for a project that runs the risk, like many DARPA projects 

 (like most scientific funding in general) of not necessarily placing its 

 money where it might do the most good --- it is likely to at least 

 produce some interesting results --- and it just might make some very 

 important advances in our field.



  



 The article from http://www.physorg.com/news148754667.html said:



  



 .a $4.9 million grant.for the first phase of DARPA's Systems of 

 Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.



  



 Tononi and scientists from Columbia University and IBM will work on the 

 software for the thinking computer, while nanotechnology and 

 supercomputing experts from Cornell, Stanford and the University of 

 California-Merced will create the hardware. Dharmendra Modha of IBM is 

 the principal investigator.



  



 The idea is to create a computer capable of sorting through multiple 

 streams of changing data, to look for patterns and make logical
decisions.



  



 There's another requirement: The finished cognitive computer should be 

 as small as a the brain of a small mammal and use as little power as a 

 100-watt light bulb. It's a major challenge. But it's what our brains do 

 every day.



  



 I have just spent several hours reading a Tononi paper, An information 

 integration theory of consciousness and skimmed several parts of his 

 book A Universe of Consciousness he wrote with Edleman, whom Ben 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Colin Hales

Ed,
Comments interspersed below:

Ed Porter wrote:


Colin,

 


Here are my comments re  the following parts of your below post:

 


===Colin said==

I merely point out that there are fundamental limits as to how 
computer science (CS) can inform/validate basic/physical science - (in 
an AGI context, brain science). Take the Baars/Franklin IDA 
project It predicts nothing neuroscience can poke a stick at...


 


===ED's reply===

Different AGI models can have different degrees of correspondence to, 
and different explanatory relevance to, what is believed to take place 
in the brain.  For example the Thomas Serre's PhD thesis Learning a 
Dictionary of Shape-Components in Visual Cortex:  Comparison with 
Neurons, Humans and Machines, at from 
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf 
, is a computer simulation which is rather similar to my concept of 
how a Novamente-like AGI could perform certain tasks in visual 
perception, and yet it is designed to model the human visual system to 
a considerable degree.  It shows that a certain model of how Serre and 
Poggio think a certain aspect of the human brain works, does in fact 
work surprisingly well when simulated in a computer.


 

A surprisingly large number of brain science papers are based on 
computer simulations, many of which are substantially simplified 
models, but they do given neuroscientists a way to poke a stick at 
various theories they might have for how the brain operates at various 
levels of organization.  Some of these papers are directly relevant to 
AGI.  And some AGI papers are directly relevant to providing answers 
to certain brain science questions.


You are quite right! Realistic models can be quite informative and feed 
back - suggesting new empirical approaches. There can be great 
cross-fertilisation.


However the point is irrelevant to the discussion at hand.

The phrase does in fact work surprisingly well when simulated in a 
computer illustrates the confusion. 'work'? according to whom? 
surprisingly well? by what criteria? The tacit assumption is that the 
model's thus implemented on a computer will/can 'behave' 
indistinguishably from the real thing, when what you are observing is a 
model of the real thing, not the real thing.


*HERE *If you targeting AGI with a benchmark/target of human intellect 
or problem solving skills, then the claim made on any/all models is that 
models can attain that goal. A computer implements a model. To make a 
claim that a model  completely captures the reality upon which it was 
based, you need to have a solid theory of the relationships between 
models and reality that is not wishful thinking or assumption, but solid 
science. Here's where you run into the problematic issue that basic 
physical sciences have with models. 

There's a boundary to cross - when you claim to have access to human 
level intellect - then you are demanding a equivalence with a real 
human, not a model of a human.


 


===Colin said==

I agree with your :

/At the other end of things, physicists are increasingly viewing 
physical reality as a computation, and thus the science of computation 
(and communication which is a part of it), such as information theory, 
have begun to play an increasingly important role in the most basic of 
all sciences./



===ED's reply===

We are largely on the same page here

 


===Colin said==

I disagree with:

But the brain is not part of an eternal verity.  It is the result of 
the engineering of evolution. 


Unless I've missed something ... The natural evolutionary 
'engineering' that has been going on has /not/ been the creation of a 
MODEL (aboutness) of things - the 'engineering' has evolved the 
construction of the /actual/ things. The two are not the same. The 
brain is indeed 'part of an eternal verity' - it is made of natural 
components operating in a fashion we attempt to model as 'laws of 
nature',,,


 


===ED's reply===

If you define engineering as a process that involves designing 
something in the abstract --- i.e., in your a MODEL (aboutness of 
things) --- before physically building it, you could claim evolution 
is not engineering. 

 

But if you define engineering as the designing of things (by a process 
that has intelligence what ever method) to solve a set of problems or 
constraints, evolution does perform engineering, and the brain was 
formed by such engineering.


 

How can you claim the human brain is an eternal verity, since it is 
only believed that it has existing in anything close to its current 
form in the last 30 to 100 thousand years, and there is no guarantee 
how much longer it will continue to exists.  Compared to much of what 
the natural sciences study, its existence appears quite fleeting.


I think this is just a terminology issue. The 'laws of nature' are the 
eternal verity, to me. The dynamical output they represent - of course 
that does whatever it does. The 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
Criticizing AGI for not being neuroscience, and criticizing AGI programs for
not trying to precisely emulate humans, is really a bit silly.

One can of course make and test scientific hypotheses about the behavior of
AGI systems, quite independent of their potential relationship to human
beings.

AGI systems ultimately are physical systems, and not necessarily less
scientifically interesting than human physical systems.

-- Ben G

On Tue, Dec 23, 2008 at 7:54 PM, Colin Hales
c.ha...@pgrad.unimelb.edu.auwrote:

  Ed,
 Comments interspersed below:

 Ed Porter wrote:

  Colin,



 Here are my comments re  the following parts of your below post:



 ===Colin said==

 I merely point out that there are fundamental limits as to how computer
 science (CS) can inform/validate basic/physical science - (in an AGI
 context, brain science). Take the Baars/Franklin IDA project…. It predicts
 nothing neuroscience can poke a stick at…



 ===ED's reply===

 Different AGI models can have different degrees of correspondence to, and
 different explanatory relevance to, what is believed to take place in the
 brain.  For example the Thomas Serre's PhD thesis Learning a Dictionary of
 Shape-Components in Visual Cortex:  Comparison with Neurons, Humans and
 Machines, at from
 http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf, 
 is a computer simulation which is rather similar to my concept of how a
 Novamente-like AGI could perform certain tasks in visual perception, and yet
 it is designed to model the human visual system to a considerable degree.
 It shows that a certain model of how Serre and Poggio think a certain aspect
 of the human brain works, does in fact work surprisingly well when simulated
 in a computer.



 A surprisingly large number of brain science papers are based on computer
 simulations, many of which are substantially simplified models, but they do
 given neuroscientists a way to poke a stick at various theories they might
 have for how the brain operates at various levels of organization.  Some of
 these papers are directly relevant to AGI.  And some AGI papers are directly
 relevant to providing answers to certain brain science questions.

 You are quite right! Realistic models can be quite informative and feed
 back - suggesting new empirical approaches. There can be great
 cross-fertilisation.

 However the point is irrelevant to the discussion at hand.

  The phrase does in fact work surprisingly well when simulated in a
 computer illustrates the confusion. 'work'? according to whom? surprisingly
 well? by what criteria? The tacit assumption is that the model's thus
 implemented on a computer will/can 'behave' indistinguishably from the real
 thing, when what you are observing is a model of the real thing, not the
 real thing.

 *HERE *If you targeting AGI with a benchmark/target of human intellect
 or problem solving skills, then the claim made on any/all models is that
 models can attain that goal. A computer implements a model. To make a claim
 that a model  completely captures the reality upon which it was based, you
 need to have a solid theory of the relationships between models and reality
 that is not wishful thinking or assumption, but solid science. Here's where
 you run into the problematic issue that basic physical sciences have with
 models.

 There's a boundary to cross - when you claim to have access to human level
 intellect - then you are demanding a equivalence with a real human, not a
 model of a human.



 ===Colin said==

 I agree with your :

 *At the other end of things, physicists are increasingly viewing physical
 reality as a computation, and thus the science of computation (and
 communication which is a part of it), such as information theory, have begun
 to play an increasingly important role in the most basic of all sciences.*
 


 ===ED's reply===

 We are largely on the same page here



 ===Colin said==

 I disagree with:

 But the brain is not part of an eternal verity.  It is the result of the
 engineering of evolution. 

 Unless I've missed something ... The natural evolutionary 'engineering'
 that has been going on has *not* been the creation of a MODEL (aboutness)
 of things - the 'engineering' has evolved the construction of the 
 *actual*things. The two are not the same. The brain is indeed 'part of an 
 eternal
 verity' - it is made of natural components operating in a fashion we attempt
 to model as 'laws of nature',,,



 ===ED's reply===

 If you define engineering as a process that involves designing something in
 the abstract --- i.e., in your a MODEL (aboutness of things) --- before
 physically building it, you could claim evolution is not engineering.



 But if you define engineering as the designing of things (by a process that
 has intelligence what ever method) to solve a set of problems or
 constraints, evolution does perform engineering, and the brain was formed by
 such engineering.



 How can you 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Richard Loosemore

Ben Goertzel wrote:


I know Dharmendra Mohdha a bit, and I've corresponded with Eugene 
Izhikevich who is Edelman's collaborator on large-scale brain 
simulations.  I've read Tononi's stuff too.  I think these are all smart 
people with deep understandings, and all in all this will be research 
money well spent.


However, there is no design for a thinking machine here.  There is 
cool work on computer simulations of small portions of the brain.


I find nothing to disrespect in the scientific work involved in this 
DARPA project.  It may not be the absolute most valuable research path, 
but it's a good one. 

However, IMO the rhetoric associating it with thinking machine 
building is premature and borderline dishonest.  It's marketing 
rhetoric.


I agree with this last paragraph wholeheartedly:  this is exactly what I 
meant when I said Neuroscience vaporware.


I also know Tononi's work, because I listened to him give a talk about 
consciousness once.  It was *computationally* incoherent.




Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
Hi,



 So if the researcher on this project have been learning some of your ideas,
 and some of the better speculative thinking and neural simulations that have
 been done in brains science --- either directly or indirectly --- it might
 be incorrect to say that there is no 'design for a thinking machine' in
 SyNAPSE.



 But perhaps you know the thinking of the researchers involved enough to
 know that they do, in fact, lack such a design, other than what they have
 yet to learn by progress yet to be made by their neural simulations.


Well I talked to Dharmendra on this topic a couple months ago.  Believe me,
there is no grand AI architecture there.  You won't find one in their
publications, and they don't allude to one in their conversations.  You'd
have to be a heck of a conspiracy theorist to posit one...

I agree that one could make a neural-net-like design based on the underlying
conceptual principles of OpenCogPrime, and if I had a lot more free time
maybe I'd do it, but I'm more interested in putting my time into the current
design which IMO is better adapted to current computers.  I have a feeling
the neuroscientists have a lot of surprises for us coming up in the next 2
decades, so that it's premature to base AI designs on neuroscience
knowledge...

ANYWAY, I THINK WE SHOULD, AT LEAST, INVITE THEM TO AGI 2009.  I though one
 of the goal of AGI 2009 it to increase the attention and respect our
 movement receives from the AI community in general and AI funders in
 particular.


Please note that the AI community and the artificial brain / brain
simulation community are rather separate at this point (though not entirely
so).

We will have a number of recognized leaders from the AI field at AGI-09,
such as (to pick a few almost at random) John Laird, Marcus Hutter and
Juergen Schmidhuber

However, in spite of emailing and talking to some relevant folks, I didn't
seem to succeed in pulling brain simulation folks into AGI-09, at least they
didn't submit papers for presentation...

Perhaps for AGI-10 or 11 some different strategy will need to be taken if we
wish to help pull these communities together.  For instance, convince *one*
leader in that area to take charge of pulling his colleagues into a special
session on computational neuroscience modeling etc.

At the AAAI BICA symposium Alexei Samsonovich organized last month, a couple
neuroscience simulation guys (Steve Grossberg for example) showed up
alongside the AI guys ... probably because biology was in the title ;-)
... but still it was strongly AI-focused rather than brain-simulation
focused.

-- Ben G



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ed Porter
Richard, 

Please describe some of the counterexamples, that you can easily come up
with, that make a mockery of Tononi's conclusion.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com] 
Sent: Monday, December 22, 2008 8:54 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

Ed Porter wrote:
 I don't think this AGI list should be so quick to dismiss a $4.9 million 
 dollar grant to create an AGI.  It will not necessarily be vaporware. 
 I think we should view it as a good sign.
 
  
 
 Even if it is for a project that runs the risk, like many DARPA projects 
 (like most scientific funding in general) of not necessarily placing its 
 money where it might do the most good --- it is likely to at least 
 produce some interesting results --- and it just might make some very 
 important advances in our field.
 
  
 
 The article from http://www.physorg.com/news148754667.html said:
 
  
 
 .a $4.9 million grant.for the first phase of DARPA's Systems of 
 Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.
 
  
 
 Tononi and scientists from Columbia University and IBM will work on the 
 software for the thinking computer, while nanotechnology and 
 supercomputing experts from Cornell, Stanford and the University of 
 California-Merced will create the hardware. Dharmendra Modha of IBM is 
 the principal investigator.
 
  
 
 The idea is to create a computer capable of sorting through multiple 
 streams of changing data, to look for patterns and make logical decisions.
 
  
 
 There's another requirement: The finished cognitive computer should be 
 as small as a the brain of a small mammal and use as little power as a 
 100-watt light bulb. It's a major challenge. But it's what our brains do 
 every day.
 
  
 
 I have just spent several hours reading a Tononi paper, An information 
 integration theory of consciousness and skimmed several parts of his 
 book A Universe of Consciousness he wrote with Edleman, whom Ben has 
 referred to often in his writings.  (I have attached my mark up of the 
 article, which if you read just the yellow highlighted text, or (for 
 more detail) the red, you can get a quick understanding of.  You can 
 also view it in MSWord outline mode if you like.)
 
  
 
 This paper largely agrees with my notion, stated multiple times on this 
 list, that consciousness is an incredibly complex computation that 
 interacts with itself in a very rich manner that makes it aware of itself.

For the record, this looks like the paper that I listened to Tononi talk 
about a couple of years ago -- the one I mentioned in my last message.

It is, for want of a better word, nonsense.  And since people take me to 
task for being so dismissive, let me add that it is the central thesis 
of the paper that is nonsense:  if you ask yourself very carefully 
what it is he is claiming, you can easily come up with counterexammples 
that make a mockery of his conclusion.



Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter ewpor...@msn.com wrote:

  Ben,



 Thanks for the reply.



 It is a shame the brain science people aren't more interested in AGI.  It
 seems to me there is a lot of potential for cross-fertilization.



I don't think many of these folks have a principled or deep-seated
**aversion** to AGI work or anything like that -- it's just that they're
busy people and need to prioritize, like all working scientists

Similarly, not many AGI types show up at computational neuroscience modeling
type conferences

To create connections between fields there has to be some strong indication
of real value offered by one field to the other ... and preferably mutual
value...

But of course, the catch is that this value will only be demonstrated once
the researchers in the different fields actually start coming together more

I was involved w/ trying to build these kinds of links in the late 90s when
I co-founded two cross-disciplinary university cog sci degree programs.
It's hard because different people from different fields speak different
languages and have different ideas of what constitutes successful or
interesting research.

The problem of bringing together AI and neuroscience and psychology was
*partially* solved back when by the creation of cog sci as a discipline ...
but obviously the solution was only partial because cog sci to a
disturbing degree got sucked into cog psych, and now someone needs to work
again to pull AGI and brain-sim work together.

Obviously, there's only so much one maverick outsider researcher like me can
do to help nudge these two research communities together, but I'll do what I
can via the AGI conferences, anyways

ben g



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ed Porter
Colin,

 

From a quick read, the gist of what your are saying seems to be that AGI is
just engineering, i.e., the study of what man can make and the properties
thereof, whereas science relates to the eternal verities of reality.

 

But the brain is not part of an eternal verity.  It is the result of the
engineering of evolution.  

 

At the other end of things, physicists are increasingly viewing physical
reality as a computation, and thus the science of computation (and
communication which is a part of it), such as information theory, have begun
to play an increasingly important role in the most basic of all sciences.

 

And to the extent that the study of the human mind is a science, then the
study of the types of computation that are done in the mind is part of that
science, and AGI is the study of many of the same functions.

 

So your post might explain the reason for a current cultural divide, but it
does not really provide a justification for it.  In addition, if you attend
events at either MIT's brain study center or its AI center, you will find
many of the people who are there are from the other of these two centers,
and that there is a considerable degree of cross-fertilization there that I
have heard people at such event describe the benefits of.

 

Ed Porter

 

 

-Original Message-
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au] 
Sent: Monday, December 22, 2008 6:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

 

Ben Goertzel wrote: 

 

On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter ewpor...@msn.com wrote:

Ben,

 

Thanks for the reply.

 

It is a shame the brain science people aren't more interested in AGI.  It
seems to me there is a lot of potential for cross-fertilization.



I don't think many of these folks have a principled or deep-seated
**aversion** to AGI work or anything like that -- it's just that they're
busy people and need to prioritize, like all working scientists

There's a more fundamental reason: Software engineering is not 'science' in
the sense understood in the basic physical sciences. Science works to
acquire models of empirically provable critical dependencies (apparent
causal necessities). Software engineering never delivers this. The result of
the work, however interesting and powerful, is a model that is, at best,
merely a correlate of some a-priori 'designed' behaviour. Testing to your
own specification is a normal behaviour in computer science. This is not the
testing done in the basic physical science - they 'test' (empirically
examine) whatever is naturally there - which is, by definition, a-priori
unknown. 

No matter how interesting it may be, software tells us nothing about the
actual causal dependencies. The computer's physical hardware (semiconductor
charge manipulation), configured as per the software, is the actual and
ultimate causal necessitator of all the natural behaviour of hot rocks
inside your computer. Software is MANY:1 redundantly/degenerately related to
the physical (natural world) outcomes. The brilliantly useful
'hardware-independence' achieved by software engineering and essentially
analogue electrical machines behaving 'as-if' they were digital - so
powerful and elegant - actually places the status of the software activities
outside the realm of any claims as causal.

This is the fundamental problem that the  basic physical sciences have with
computer 'science'. It's not, in a formal sense a 'science'. That doesn't
mean CS is bad or irrelevant - it just means that it's value as a revealer
of the properties of the natural world must be accepted with appropriate
caution. 

I've spent 10's of thousands of hours testing software that drove all manner
of physical world equipment - some of it the size of a 10 storey building. I
was testing to my own/others specification. Throughout all of it I knew I
was not doing science in the sense that scientists know it to be. The mantra
is correlation is not causation and it's beaten into scientist pups from
an early age. Software is a correlate only - it 'causes' nothing. In
critical argument revolving around claims in respect of software as
causality  - it would be defeated in review every time. A scientist,
standing there with an algorithm/model of a natural world behaviour, knows
that the model does not cause the behaviour. However, the scientist's model
represents a route to predictive efficacy in respect of a unique natural
phenomenon. Computer software does not predict the causal origination of the
natural world behaviours driven by it. 10 compilers could produce 10
different causalities on the same computer. 10 different computers running
the same software would produce 10 different lots of causality.

That's my take on why the basic physical sciences may be under-motivated to
use AGI as a route to the outcomes demanded of their field of interest =
'Laws/regularities of Nature'. It may 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Colin Hales

Ed,
I wasn't trying to justify or promote a 'divide'. The two worlds must be 
better off in collaboration, surely? I merely point out that there are 
fundamental limits as to how computer science (CS) can inform/validate 
basic/physical science - (in an AGI context, brain science). Take the 
Baars/Franklin IDA project. Baars invents 'Global Workspace' = a 
metaphor of apparent brain operation. Franklin writes one. Afterwards, 
you're standing next to to it, wondering as to its performance. What 
part of its behaviour has any direct bearing on how a brain works? It 
predicts nothing neuroscience can poke a stick at. All you can say is 
that the computer is manipulating abstractions according to a model of 
brain material. At best you get to be quite right and prove nothing. If 
the beastie also underperforms then you have seeds for doubt that also 
prove nothing.


CS as 'science' has always had this problem. AGI merely inherits its 
implications in a particular context/speciality. There's nothing bad or 
good - merely justified limits as to how CS and AGI may interact via 
brain science.


I agree with your :

/At the other end of things, physicists are increasingly viewing 
physical reality as a computation, and thus the science of computation 
(and communication which is a part of it), such as information theory, 
have begun to play an increasingly important role in the most basic of 
all sciences./


I would advocate physical reality (all of it) as /literally /computation 
in the sense of information processing. Hold a pencil up in front of 
your face and take a look at it... realise that the universe is 
'computing a pencil'. Take a look at the computer in front of you: the 
universe is 'computing a computer'. The universe is literally computing 
YOU, too. The computation is not 'about' a pencil, a computer, a human. 
The computation IS those things. In exactly this same sense I want the 
universe to 'compute' an AGI (inorganic general intelligence). To me, 
then, this is /not/ manipulating abstractions ('aboutnesses') - which is 
the sense meant by CS generally and what actually happens in reality in CS.


So despite some agreement as to words  - it is in the details we are 
likely to differ. The information processing in the natural world is not 
that which is going on in a model of it. As Edelman said(1) /A theory 
to account for a hurricane is not a hurricane/. In exactly this way a 
computational-algorithmic process about intelligence cannot a-priori 
be claimed to be the intelligence of that which was modelled. Or - put 
yet another way: That {THING behaves 'abstract- RULE-ly'} does not 
entail that {anything manipulated according to abstract-RULE will become 
THING}. The only perfect algorithmic (100% complete information content) 
description of a thing is the actual thing, which includes all 
'information' at all hierarchical descriptive levels, simultaneously.


I disagree with:

But the brain is not part of an eternal verity.  It is the result of 
the engineering of evolution. 


Unless I've missed something ... The natural evolutionary 'engineering' 
that has been going on has /not/ been the creation of a MODEL 
(aboutness) of things - the 'engineering' has evolved the construction 
of the /actual/ things. The two are not the same. The brain is indeed 
'part of an eternal verity' - it is made of natural components operating 
in a fashion we attempt to model as 'laws of nature'. Those models, 
abstracted and shoehorned into a computer - are not the same as the 
original. To believe that they are is one of those Occam's Razor 
violations I pointed out before my xmas shopping spree (see previous-1 
post).

---

Anyway, for these reasons, folks who use computer models to study human 
brains/consciousness will encounter some difficulty justifying, to the 
basic physical sciences, claims made as to the equivalence of the model 
and reality. That difficulty is fundamental and cannot be 'believed 
away'. At the same time it's not a show-stopper; merely something to be 
aware of as we go about our duties. This will remain an issue - the only 
real, certain, known example of a general intelligence is the human. The 
intelligence originates in the brain. AGI and brain science must be 
literally joined at the hip or the AGI enterprise is arguably 
scientifically impoverished wishful thinking. Which is pretty much what 
Ben said...although as usual I have used too many damned words!


I expect we'll just have to agree to disagree... but there you have it :-)

colin hales
(1) Edelman, G. (2003). Naturalizing consciousness: A theoretical 
framework. Proc Natl Acad Sci U S A, 100(9), 5520--24.



Ed Porter wrote:


Colin,

 

From a quick read, the gist of what your are saying seems to be that 
AGI is just engineering, i.e., the study of what man can make and 
the properties thereof, whereas science relates to the eternal 
verities of reality.


 


Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
To add to this discussion, I'd like to point out that many AI systems have
been used and scientifically evaluated as *psychological* models, e.g.
cognitive models.

For instance, SOAR and ACT-R are among the  many systems that have been used
and evaluated this way.

The goal of that sort of research is to come up with simple, principled
explanations of human behaviors in psychology experiments, via coming up
with software systems that precisely simulate these behaviors.

So, one possible approach to AGI would be via cognitive modeling of this
sort.

This is quite different than brain simulation, and also quite different than
AGI which seeks generally but not precisely humanlike behavior.

I know there is some divergence in the SOAR community between whose who want
to use SOAR for scientific cognitive modeling, and those who want to use it
for building AGI that emulates human thought qualitatively but not precisely

-- Ben G



On Mon, Dec 22, 2008 at 11:36 PM, Colin Hales
c.ha...@pgrad.unimelb.edu.auwrote:

  Ed,
 I wasn't trying to justify or promote a 'divide'. The two worlds must be
 better off in collaboration, surely? I merely point out that there are
 fundamental limits as to how computer science (CS) can inform/validate
 basic/physical science - (in an AGI context, brain science). Take the
 Baars/Franklin IDA project. Baars invents 'Global Workspace' = a metaphor
 of apparent brain operation. Franklin writes one. Afterwards, you're
 standing next to to it, wondering as to its performance. What part of its
 behaviour has any direct bearing on how a brain works? It predicts nothing
 neuroscience can poke a stick at. All you can say is that the computer is
 manipulating abstractions according to a model of brain material. At best
 you get to be quite right and prove nothing. If the beastie also
 underperforms then you have seeds for doubt that also prove nothing.

 CS as 'science' has always had this problem. AGI merely inherits its
 implications in a particular context/speciality. There's nothing bad or good
 - merely justified limits as to how CS and AGI may interact via brain
 science.
 
 I agree with your :

 *At the other end of things, physicists are increasingly viewing physical
 reality as a computation, and thus the science of computation (and
 communication which is a part of it), such as information theory, have begun
 to play an increasingly important role in the most basic of all sciences.*
 

 I would advocate physical reality (all of it) as *literally *computation
 in the sense of information processing. Hold a pencil up in front of your
 face and take a look at it... realise that the universe is 'computing a
 pencil'. Take a look at the computer in front of you: the universe is
 'computing a computer'. The universe is literally computing YOU, too. The
 computation is not 'about' a pencil, a computer, a human. The computation IS
 those things. In exactly this same sense I want the universe to 'compute' an
 AGI (inorganic general intelligence). To me, then, this is *not*manipulating 
 abstractions ('aboutnesses') - which is the sense meant by CS
 generally and what actually happens in reality in CS.

 So despite some agreement as to words  - it is in the details we are likely
 to differ. The information processing in the natural world is not that which
 is going on in a model of it. As Edelman said(1) *A theory to account for
 a hurricane is not a hurricane*. In exactly this way a
 computational-algorithmic process about intelligence cannot a-priori be
 claimed to be the intelligence of that which was modelled. Or - put yet
 another way: That {THING behaves 'abstract- RULE-ly'} does not entail that
 {anything manipulated according to abstract-RULE will become THING}. The
 only perfect algorithmic (100% complete information content) description of
 a thing is the actual thing, which includes all 'information' at all
 hierarchical descriptive levels, simultaneously.
 
 I disagree with:

 But the brain is not part of an eternal verity.  It is the result of the
 engineering of evolution. 

 Unless I've missed something ... The natural evolutionary 'engineering'
 that has been going on has *not* been the creation of a MODEL (aboutness)
 of things - the 'engineering' has evolved the construction of the 
 *actual*things. The two are not the same. The brain is indeed 'part of an 
 eternal
 verity' - it is made of natural components operating in a fashion we attempt
 to model as 'laws of nature'. Those models, abstracted and shoehorned into a
 computer - are not the same as the original. To believe that they are is one
 of those Occam's Razor violations I pointed out before my xmas shopping
 spree (see previous-1 post).
 ---

 Anyway, for these reasons, folks who use computer models to study human
 brains/consciousness will encounter some difficulty justifying, to the basic
 physical sciences, claims made as to the equivalence of the model 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-21 Thread Ben Goertzel
I know Dharmendra Mohdha a bit, and I've corresponded with Eugene Izhikevich
who is Edelman's collaborator on large-scale brain simulations.  I've read
Tononi's stuff too.  I think these are all smart people with deep
understandings, and all in all this will be research money well spent.

However, there is no design for a thinking machine here.  There is cool
work on computer simulations of small portions of the brain.

I find nothing to disrespect in the scientific work involved in this DARPA
project.  It may not be the absolute most valuable research path, but it's a
good one.

However, IMO the rhetoric associating it with thinking machine building is
premature and borderline dishonest.  It's marketing rhetoric.  It's more
like interesting brain simulation research that could eventually play a
role in some future thinking-machine-building project, whose nature remains
largely unspecified.

Getting into the nitty-gritty a little more: until we understand way, way
more about how brain dynamics and structures lead to thoughts, and/or have
way, way better brain imaging data, we're not going to be able to build a
thinking machine via brain simulation.

-- Ben G

On Sat, Dec 20, 2008 at 5:25 PM, Ed Porter ewpor...@msn.com wrote:

  I don't think this AGI list should be so quick to dismiss a $4.9 million
 dollar grant to create an AGI.  It will not necessarily be vaporware. I
 think we should view it as a good sign.



 Even if it is for a project that runs the risk, like many DARPA projects
 (like most scientific funding in general) of not necessarily placing its
 money where it might do the most good --- it is likely to at least produce
 some interesting results --- and it just might make some very important
 advances in our field.



 The article from http://www.physorg.com/news148754667.html said:



 …a $4.9 million grant…for the first phase of DARPA's Systems of
 Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.



 Tononi and scientists from Columbia University and IBM will work on the
 software for the thinking computer, while nanotechnology and
 supercomputing experts from Cornell, Stanford and the University of
 California-Merced will create the hardware. Dharmendra Modha of IBM is
 the principal investigator.



 The idea is to create a computer capable of sorting through multiple
 streams of changing data, to look for patterns and make logical decisions.



 There's another requirement: The finished cognitive computer should be as
 small as a the brain of a small mammal and use as little power as a 100-watt
 light bulb. It's a major challenge. But it's what our brains do every day.




 I have just spent several hours reading a Tononi paper, An information
 integration theory of consciousness and skimmed several parts of his book
 A Universe of Consciousness he wrote with Edleman, whom Ben has referred
 to often in his writings.  (I have attached my mark up of the article, which
 if you read just the yellow highlighted text, or (for more detail) the red,
 you can get a quick understanding of.  You can also view it in MSWord
 outline mode if you like.)



 This paper largely agrees with my notion, stated multiple times on this
 list, that consciousness is an incredibly complex computation that interacts
 with itself in a very rich manner that makes it aware of itself.



 However, it is not clear to me --- from reading this paper or one full
 chapter of A Universe of Consciousness on Google Books and spending about
 fifteen minutes skimming the rest of it --- that either he or Edelman have
 anything approaching Novamente or OpenCog's detail description of how to
 build an AGI.



 I did not hear enough discussion of the role of grounding, and the need for
 proper selection in the spreading activation of a representational net so
 that the consciousness would be one of awareness of appropriate meaning.



 But Tononi is going to work with Dharmendra Modha of IBM, who is a leader
 in brain simulation, so they may well produce something interesting.



 I personally think it would be more productive to spend the money with a
 more Novamente-like approach, where we already seem to have good ideas for
 how to solve most of the hard problems (other than staying within a
 computational budget, and parameter tuning) --- but whatever it discovers
 should, at least, be relevant.



 Furthermore, what little I have read about the hardware side of this
 project is very exciting, since it provides a much more brain like platform,
 which if it could be made to work using Memsistors, or grapheme based
 technology, could enable artificial brains to be made for amazingly low
 prices, with energy costs 1/1000 to 1/30,000 that of CMOS machines with
 similar computational power.  Its goal is to develop a technology that will
 enable AGIs to be built small enough that we could carry them around like an
 iPhone (albeit with large batteries, at least for a decade or so).



 In any case, I think we 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-21 Thread Bob Mottram
2008/12/21 Ben Goertzel b...@goertzel.org:
 However, IMO the rhetoric associating it with thinking machine building is
 premature and borderline dishonest.  It's marketing rhetoric.  It's more
 like interesting brain simulation research that could eventually play a
 role in some future thinking-machine-building project, whose nature remains
 largely unspecified.


Yes, which would sound less dramatic.  Some time ago there was a
similar borderline dishonest report that a mouse brain had been
simulated on a supercomputer.  This sounded exciting, but it just
turns out that they've been able to simulate a number of neuron-like
elements (the Izhikevich spiking model, I think) similar in quantity
to a mouse-sized brain within some tractable amount of time, which is
not quite as impressive.

This kind of research is eventually doomed to succeed, but at present
we still don't know in detail how even a mouse brain is organized,
beyond a fairly gross level of anatomy.  Some of the newer techniques,
such as genetic modification which gives each neuron a unique colour,
should be helpful in this regard.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-21 Thread Ed Porter
Ben,

 

It would seem to me that a lot of the ideas in OpenCogPrime could be
implemented in neuromorphic hardware, particularly if you were to intermix
it with some traditional computing hardware.  This is particularly true if
such a system could efficiently use neural assemblies, because that would
appear to allow it to much more flexibly allocate representational resources
in given amount of neuromophic hardware.  (This is one of he reasons I have
asked so many questions about neural assemblies on this list.) 

 

So if the researcher on this project have been learning some of your ideas,
and some of the better speculative thinking and neural simulations that have
been done in brains science --- either directly or indirectly --- it might
be incorrect to say that there is no 'design for a thinking machine' in
SyNAPSE.  

 

But perhaps you know the thinking of the researchers involved enough to know
that they do, in fact, lack such a design, other than what they have yet to
learn by progress yet to be made by their neural simulations. 

 

(It should be noted that neuromophic hardware might be able to greatly
reduce the cost of, and speed up, many types of neural simulations,
increasing the rate at which they may be able to make progress with such an
approach.)

 

ANYWAY, I THINK WE SHOULD, AT LEAST, INVITE THEM TO AGI 2009.  I though one
of the goal of AGI 2009 it to increase the attention and respect our
movement receives from the AI community in general and AI funders in
particular.

 

Ed Porter

 

-Original Message-
From: Ben Goertzel [mailto:b...@goertzel.org] 
Sent: Sunday, December 21, 2008 12:17 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

 


I know Dharmendra Mohdha a bit, and I've corresponded with Eugene Izhikevich
who is Edelman's collaborator on large-scale brain simulations.  I've read
Tononi's stuff too.  I think these are all smart people with deep
understandings, and all in all this will be research money well spent.

However, there is no design for a thinking machine here.  There is cool
work on computer simulations of small portions of the brain.

I find nothing to disrespect in the scientific work involved in this DARPA
project.  It may not be the absolute most valuable research path, but it's a
good one.  

However, IMO the rhetoric associating it with thinking machine building is
premature and borderline dishonest.  It's marketing rhetoric.  It's more
like interesting brain simulation research that could eventually play a
role in some future thinking-machine-building project, whose nature remains
largely unspecified.

Getting into the nitty-gritty a little more: until we understand way, way
more about how brain dynamics and structures lead to thoughts, and/or have
way, way better brain imaging data, we're not going to be able to build a
thinking machine via brain simulation.  

-- Ben G

On Sat, Dec 20, 2008 at 5:25 PM, Ed Porter ewpor...@msn.com wrote:

I don't think this AGI list should be so quick to dismiss a $4.9 million
dollar grant to create an AGI.  It will not necessarily be vaporware. I
think we should view it as a good sign.

 

Even if it is for a project that runs the risk, like many DARPA projects
(like most scientific funding in general) of not necessarily placing its
money where it might do the most good --- it is likely to at least produce
some interesting results --- and it just might make some very important
advances in our field.

 

The article from http://www.physorg.com/news148754667.html said:

 

.a $4.9 million grant.for the first phase of DARPA's Systems of
Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project. 

 

Tononi and scientists from Columbia University and IBM will work on the
software for the thinking computer, while nanotechnology and
supercomputing experts from Cornell, Stanford and the University of
California-Merced will create the hardware. Dharmendra Modha of IBM is the
principal investigator. 

 

The idea is to create a computer capable of sorting through multiple streams
of changing data, to look for patterns and make logical decisions. 

 

There's another requirement: The finished cognitive computer should be as
small as a the brain of a small mammal and use as little power as a 100-watt
light bulb. It's a major challenge. But it's what our brains do every day. 

 

I have just spent several hours reading a Tononi paper, An information
integration theory of consciousness and skimmed several parts of his book
A Universe of Consciousness he wrote with Edleman, whom Ben has referred
to often in his writings.  (I have attached my mark up of the article, which
if you read just the yellow highlighted text, or (for more detail) the red,
you can get a quick understanding of.  You can also view it in MSWord
outline mode if you like.)

 

This paper largely agrees with my notion, stated multiple times on this
list, that