Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Richard Loosemore
Ben Goertzel wrote: I know Dharmendra Modha a bit, and I've corresponded with Eugene Izhikevich, who is Edelman's collaborator on large-scale brain simulations. I've read Tononi's stuff too. I think these are all smart people with deep understandings, and all in all this will be research
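For context: the large-scale brain simulations Izhikevich is known for rest on his 2003 "simple model" of spiking neurons, which reproduces many cortical firing patterns with just two coupled equations. A minimal Python sketch (forward-Euler integration, regular-spiking parameters from the 2003 paper; illustrative only, not the actual simulation code used in that work):

```python
# Izhikevich (2003) simple spiking neuron, integrated with forward Euler.
# dv/dt = 0.04 v^2 + 5 v + 140 - u + I;  du/dt = a (b v - u)
# On a spike (v >= 30 mV): v is reset to c and u is incremented by d.

def simulate(I=10.0, T=200.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Return spike times (ms) for a regular-spiking cell under constant input I."""
    v, u = c, b * c                      # membrane potential (mV), recovery variable
    spikes = []
    for step in range(int(T / dt)):
        if v >= 30.0:                    # spike: record, then reset
            spikes.append(step * dt)
            v, u = c, u + d
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        v += dt * dv
        u += dt * du
    return spikes

if __name__ == "__main__":
    print("spike times (ms):", simulate())
```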

Re: [agi] Relevance of SE in AGI

2008-12-22 Thread Richard Loosemore
Valentina Poletti wrote: I have a question for you AGIers.. from your experience as well as from your background, how relevant do you think software engineering is in developing AI software and, in particular, AGI software? Just wondering.. does software verification as well as correctness

Re: [agi] Relevance of SE in AGI

2008-12-22 Thread Ben Goertzel
Well, we have attempted to use sound software engineering principles to architect the OpenCog framework, with a view toward making it usable for prototyping speculative AI ideas and ultimately building scalable, robust, mature AGI systems as well. But we are fairly confident of our overall
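Ben's point about framework design can be made concrete. The sketch below is purely illustrative (Python rather than OpenCog's actual C++, and with invented names, not OpenCog's real API): the engineering investment goes into a small, stable core, so speculative components can be prototyped and swapped without destabilizing the rest of the system.

```python
# Hypothetical plugin-style framework boundary; names are invented for
# illustration and are not OpenCog's actual API.
from abc import ABC, abstractmethod

class CognitiveProcess(ABC):
    """A speculative AI idea packaged behind a stable interface."""

    @abstractmethod
    def step(self, memory: dict) -> None:
        """Run one cycle against a shared memory store."""

class Scheduler:
    """Well-engineered core: experimental processes register here and are
    driven uniformly, so replacing one never touches the others."""

    def __init__(self) -> None:
        self.processes: list[CognitiveProcess] = []

    def register(self, proc: CognitiveProcess) -> None:
        self.processes.append(proc)

    def run(self, memory: dict, cycles: int) -> None:
        for _ in range(cycles):
            for proc in self.processes:
                proc.step(memory)
```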

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
Hi, So if the researchers on this project have been learning some of your ideas, and some of the better speculative thinking and neural simulation work that has been done in brain science --- either directly or indirectly --- it might be incorrect to say that there is no 'design for a thinking

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ed Porter
Richard, Please describe some of the counterexamples that you can easily come up with that make a mockery of Tononi's conclusion. Ed Porter

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter ewpor...@msn.com wrote: Ben, Thanks for the reply. It is a shame the brain science people aren't more interested in AGI. It seems to me there is a lot of potential for cross-fertilization. I don't think many of these folks have a

RE: [agi] Relevance of SE in AGI

2008-12-22 Thread John G. Rose
I've been experimenting with extending OOP to potentially implement functionality that could make a particular AGI design easier to build. The problem with SE is that it brings along much baggage that can totally obscure AGI thinking. Many AGI people and AI people are automatic top of the
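Rose's message is cut off, but one concrete reading of "extending OOP" is making every object in a design carry AGI-relevant machinery automatically. A hypothetical Python sketch (the metaclass and the activation field are invented for illustration, not taken from his design):

```python
# Hypothetical 'extended OOP': a metaclass that stamps every instance
# with an activation level, so framework code can treat arbitrary
# domain objects as nodes in a spreading-activation network.
class Cognitive(type):
    def __call__(cls, *args, **kwargs):
        obj = super().__call__(*args, **kwargs)
        obj.activation = 0.0             # added to every instance, invisibly
        return obj

class Concept(metaclass=Cognitive):
    def __init__(self, name: str):
        self.name = name

c = Concept("cat")
c.activation += 0.5                      # an ordinary object, AGI-ready field
print(c.name, c.activation)
```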

Re: [agi] Relevance of SE in AGI

2008-12-22 Thread Richard Loosemore
Ben Goertzel wrote: Well, we have attempted to use sound software engineering principles to architect the OpenCog framework, with a view toward making it usable for prototyping speculative AI ideas and ultimately building scalable, robust, mature AGI systems as well. But we are fairly

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ed Porter
Colin, From a quick read, the gist of what you are saying seems to be that AGI is just engineering, i.e., the study of what man can make and the properties thereof, whereas science relates to the eternal verities of reality. But the brain is not part of an eternal verity. It is the

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Colin Hales
Ed, I wasn't trying to justify or promote a 'divide'. The two worlds must be better off in collaboration, surely? I merely point out that there are fundamental limits on how computer science (CS) can inform/validate basic/physical science (in an AGI context, brain science). Take the

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
To add to this discussion, I'd like to point out that many AI systems have been used and scientifically evaluated as *psychological* models, e.g. cognitive models. For instance, SOAR and ACT-R are among the many systems that have been used and evaluated this way. The goal of that sort of
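The evaluation methodology Ben refers to is, in outline, a goodness-of-fit comparison between model predictions and human behavioral data. A generic Python sketch (the reaction-time numbers are invented; real SOAR and ACT-R studies fit published experimental data):

```python
# Generic cognitive-model evaluation: compare predicted reaction times
# against human data using RMSE and R^2. All numbers are illustrative.
import math

human_rt = [420.0, 510.0, 600.0, 690.0]   # ms, per task condition (invented)
model_rt = [400.0, 505.0, 615.0, 700.0]   # ms, model predictions (invented)

ss_res = sum((h - m) ** 2 for h, m in zip(human_rt, model_rt))
mean_h = sum(human_rt) / len(human_rt)
ss_tot = sum((h - mean_h) ** 2 for h in human_rt)

rmse = math.sqrt(ss_res / len(human_rt))
r_squared = 1.0 - ss_res / ss_tot        # variance in human data explained
print(f"RMSE = {rmse:.1f} ms, R^2 = {r_squared:.3f}")
```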