Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Richard Loosemore
Ed Porter wrote: Richard, Please describe some of the counterexamples that you can easily come up with, that make a mockery of Tononi's conclusion. Ed Porter Alas, I will have to disappoint. I put a lot of effort into understanding his paper first time around, but the sheer agony of

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
Richard, I'm curious what you think of William Calvin's neuroscience hypotheses as presented in e.g. The Cerebral Code That book is a bit out of date now, but still, he took complexity and nonlinear dynamics quite seriously, so it seems to me there may be some resonance between his ideas and

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Richard Loosemore
Ben Goertzel wrote: Richard, I'm curious what you think of William Calvin's neuroscience hypotheses as presented in e.g. The Cerebral Code That book is a bit out of date now, but still, he took complexity and nonlinear dynamics quite seriously, so it seems to me there may be some

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
I mentioned it because I looked at the book again recently and was pleasantly surprised at how well his ideas seemed to have held up. In other words, although there are points on which I think he's probably wrong, his decade-old ideas *still* seem more sensible and insightful than most of the

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ed Porter
Colin, Here are my comments regarding the following parts of your post below: ===Colin said== I merely point out that there are fundamental limits as to how computer science (CS) can inform/validate basic/physical science - (in an AGI context, brain science). Take the Baars/Franklin IDA

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ed Porter
Richard, You originally totally trashed Tononi's paper, including its central core, by saying: It is, for want of a better word, nonsense. And since people take me to task for being so dismissive, let me add that it is the central thesis of the paper that is nonsense: if you ask

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Colin Hales
Ed, Comments interspersed below: Ed Porter wrote: Colin, Here are my comments re the following parts of your below post: ===Colin said== I merely point out that there are fundamental limits as to how computer science (CS) can inform/validate basic/physical science - (in

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
Criticizing AGI for not being neuroscience, and criticizing AGI programs for not trying to precisely emulate humans, is really a bit silly. One can of course make and test scientific hypotheses about the behavior of AGI systems, quite independent of their potential relationship to human beings.

[agi] Universal intelligence test benchmark

2008-12-23 Thread Matt Mahoney
I have been developing an experimental test set along the lines of Legg and Hutter's universal intelligence ( http://www.idsia.ch/idsiareport/IDSIA-04-05.pdf ). They define general intelligence as the expected reward of an AIXI agent in a Solomonoff distribution of environments (simulated by
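For readers unfamiliar with the referenced definition, Legg and Hutter's universal intelligence measure can be sketched as follows (notation as in their report; this is a summary of their formula, not part of Mahoney's post):

```latex
% Universal intelligence of an agent \pi: the expected value
% achieved across all computable environments \mu in the class E,
% weighted by their Solomonoff prior 2^{-K(\mu)}, where K(\mu) is
% the Kolmogorov complexity of \mu and V_\mu^\pi is the expected
% total reward of agent \pi interacting with environment \mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Intuitively, simpler environments (low K) dominate the sum, so an agent scores highly by performing well across many simple environments while still gaining some credit for complex ones.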