RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-26 Thread Ed Porter
Richard, Since you are clearly in the mode you routinely get into when you start losing an argument on this list --- as has happened so many times before --- i.e., of ceasing all further productive communication on the actual subject of the argument --- this will be my last communication

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-25 Thread Richard Loosemore
Ed Porter wrote: Why is it that people who repeatedly and insultingly say other people’s work or ideas are total nonsense -- without any reasonable justification -- are still allowed to participate in the discussion on the AGI list? Because they know what they are talking about. And because

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-24 Thread Ed Porter
===Colin said== The tacit assumption is that the models thus implemented on a computer will/can 'behave' indistinguishably from the real thing, when what you are observing is a model of the real thing, not the real thing. ===ED's reply=== I was making no assumption that the model

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-24 Thread Richard Loosemore
Why is it that people who repeatedly resort to personal abuse like this are still allowed to participate in the discussion on the AGI list? Richard Loosemore Ed Porter wrote: Richard, You originally totally trashed Tononi's paper, including its central core, by saying: It

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Richard Loosemore
Ed Porter wrote: Richard, Please describe some of the counterexamples that you can easily come up with that make a mockery of Tononi's conclusion. Ed Porter Alas, I will have to disappoint. I put a lot of effort into understanding his paper first time around, but the sheer agony of

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
Richard, I'm curious what you think of William Calvin's neuroscience hypotheses as presented in, e.g., The Cerebral Code. That book is a bit out of date now, but still, he took complexity and nonlinear dynamics quite seriously, so it seems to me there may be some resonance between his ideas and

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Richard Loosemore
Ben Goertzel wrote: Richard, I'm curious what you think of William Calvin's neuroscience hypotheses as presented in, e.g., The Cerebral Code. That book is a bit out of date now, but still, he took complexity and nonlinear dynamics quite seriously, so it seems to me there may be some

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
I mentioned it because I looked at the book again recently and was pleasantly surprised at how well his ideas seemed to have held up. In other words, although there are points on which I think he's probably wrong, his decade-old ideas *still* seem more sensible and insightful than most of the

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ed Porter
Colin, Here are my comments re the following parts of your below post: ===Colin said== I merely point out that there are fundamental limits as to how computer science (CS) can inform/validate basic/physical science - (in an AGI context, brain science). Take the Baars/Franklin IDA

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ed Porter
Richard, You originally totally trashed Tononi's paper, including its central core, by saying: It is, for want of a better word, nonsense. And since people take me to task for being so dismissive, let me add that it is the central thesis of the paper that is nonsense: if you ask

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Colin Hales
Ed, Comments interspersed below: Ed Porter wrote: Colin, Here are my comments re the following parts of your below post: ===Colin said== I merely point out that there are fundamental limits as to how computer science (CS) can inform/validate basic/physical science - (in

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
Criticizing AGI for not being neuroscience, and criticizing AGI programs for not trying to precisely emulate humans, is really a bit silly. One can of course make and test scientific hypotheses about the behavior of AGI systems, quite independent of their potential relationship to human beings.

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Richard Loosemore
Ben Goertzel wrote: I know Dharmendra Modha a bit, and I've corresponded with Eugene Izhikevich, who is Edelman's collaborator on large-scale brain simulations. I've read Tononi's stuff too. I think these are all smart people with deep understandings, and all in all this will be research

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
Hi, So if the researchers on this project have been learning some of your ideas, and some of the better speculative thinking and neural simulations that have been done in brain science --- either directly or indirectly --- it might be incorrect to say that there is no 'design for a thinking

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ed Porter
Richard, Please describe some of the counterexamples that you can easily come up with that make a mockery of Tononi's conclusion. Ed Porter -Original Message- From: Richard Loosemore [mailto:r...@lightlink.com] Sent: Monday, December 22, 2008 8:54 AM To: agi@v2.listbox.com Subject:

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter ewpor...@msn.com wrote: Ben, Thanks for the reply. It is a shame the brain science people aren't more interested in AGI. It seems to me there is a lot of potential for cross-fertilization. I don't think many of these folks have a

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ed Porter
Colin, From a quick read, the gist of what you are saying seems to be that AGI is just engineering, i.e., the study of what man can make and the properties thereof, whereas science relates to the eternal verities of reality. But the brain is not part of an eternal verity. It is the

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Colin Hales
Ed, I wasn't trying to justify or promote a 'divide'. The two worlds must be better off in collaboration, surely? I merely point out that there are fundamental limits as to how computer science (CS) can inform/validate basic/physical science - (in an AGI context, brain science). Take the

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
To add to this discussion, I'd like to point out that many AI systems have been used and scientifically evaluated as *psychological* models, e.g. cognitive models. For instance, SOAR and ACT-R are among the many systems that have been used and evaluated this way. The goal of that sort of
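(A minimal sketch, not from the thread: when a system such as ACT-R or SOAR is evaluated as a psychological model, the usual pattern is to score its per-condition predictions against human data. The reaction-time numbers, and the choice of RMSE plus correlation as fit measures, are illustrative assumptions.)

import math

human_rt = [420.0, 515.0, 610.0, 700.0]   # mean human reaction times (ms) per experimental condition
model_rt = [400.0, 530.0, 590.0, 720.0]   # the cognitive model's predicted reaction times (ms)

def rmse(xs, ys):
    # root-mean-square error between prediction and data
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

def pearson_r(xs, ys):
    # Pearson correlation: does the model reproduce the *pattern* across conditions?
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print("RMSE = %.1f ms, r = %.3f" % (rmse(human_rt, model_rt), pearson_r(human_rt, model_rt)))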

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-21 Thread Ben Goertzel
I know Dharmendra Mohdha a bit, and I've corresponded with Eugene Izhikevich who is Edelman's collaborator on large-scale brain simulations. I've read Tononi's stuff too. I think these are all smart people with deep understandings, and all in all this will be research money well spent. However,
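(A minimal sketch, not from the thread: the single-neuron model behind the kind of large-scale brain simulation Izhikevich is known for. The parameters below are the standard "regular spiking" settings; the input current and run length are illustrative assumptions.)

def simulate_izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, steps=2000, dt=0.5):
    """Euler-integrate one Izhikevich neuron; return spike times in ms."""
    v, u = -65.0, b * -65.0          # membrane potential (mV) and recovery variable
    spikes = []
    for step in range(steps):
        t = step * dt
        if v >= 30.0:                # spike: record time, then reset
            spikes.append(t)
            v, u = c, u + d
        # dv/dt = 0.04 v^2 + 5 v + 140 - u + I ;  du/dt = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

print(simulate_izhikevich()[:5])     # first few spike times under constant input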

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-21 Thread Bob Mottram
2008/12/21 Ben Goertzel b...@goertzel.org: However, IMO the rhetoric associating it with thinking machine building is premature and borderline dishonest. It's marketing rhetoric. It's more like interesting brain simulation research that could eventually play a role in some future

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-21 Thread Ed Porter
Ben, It would seem to me that a lot of the ideas in OpenCogPrime could be implemented in neuromorphic hardware, particularly if you were to intermix it with some traditional computing hardware. This is particularly true if such a system could efficiently use neural assemblies, because that
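(A minimal sketch, not OpenCogPrime code: one reading of "neural assemblies" as Hebbian cell assemblies, where units that are repeatedly co-activated strengthen their mutual weights so that a partial cue later reactivates the whole group. All sizes and constants are illustrative assumptions.)

N = 20                         # total number of units
assembly = set(range(8))       # units we repeatedly co-activate (the assembly to be formed)
W = [[0.0] * N for _ in range(N)]
eta, decay = 0.1, 0.01         # Hebbian learning rate and weight decay

for _ in range(200):           # training: strengthen weights between co-active units
    for i in range(N):
        for j in range(N):
            if i != j:
                pre, post = (i in assembly), (j in assembly)
                W[i][j] += eta * pre * post - decay * W[i][j]

# Recall from a partial cue: activate half the assembly and threshold the summed input.
cue = set(list(assembly)[:4])
recalled = {j for j in range(N) if sum(W[i][j] for i in cue) > 1.0}
print(sorted(recalled))        # should recover the full assembly, units 0..7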