Ben Goertzel wrote:
I know Dharmendra Modha a bit, and I've corresponded with Eugene
Izhikevich who is Edelman's collaborator on large-scale brain
simulations. I've read Tononi's stuff too. I think these are all smart
people with deep understandings, and all in all this will be research
Valentina Poletti wrote:
I have a question for you AGIers.. from your experience as well as from
your background, how relevant do you think software engineering is in
developing AI software and, in particular, AGI software? Just wondering..
does software verification as well as correctness
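[Editorially inserted sketch: to give the "verification as well as correctness" question something concrete, here is a generic, minimal example of lightweight correctness checking via executable postconditions. It is purely illustrative and not drawn from any AGI codebase discussed in this thread; the routine and its invariants are hypothetical.]

```python
# A minimal sketch of correctness checking: run a routine, then assert
# its postconditions hold. Illustrative only; not from any AGI project.

def merge_sorted(a, b):
    """Merge two already-sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def check_merge(a, b):
    """Postconditions: result is sorted and is a permutation of the inputs."""
    r = merge_sorted(a, b)
    assert all(r[k] <= r[k + 1] for k in range(len(r) - 1)), "not sorted"
    assert sorted(a + b) == r, "elements lost or duplicated"
    return r

print(check_merge([1, 4, 9], [2, 3, 10]))  # [1, 2, 3, 4, 9, 10]
```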
Well, we have attempted to use sound software engineering principles to
architect the OpenCog framework, with a view toward making it usable for
prototyping speculative AI ideas and ultimately building scalable, robust,
mature AGI systems as well
But, we are fairly confident of our overall
Hi,
So if the researchers on this project have been learning some of your ideas,
and some of the better speculative thinking and neural simulations that have
been done in brain science --- either directly or indirectly --- it might
be incorrect to say that there is no 'design for a thinking
Richard,
Please describe some of the counterexamples that you can easily come up
with that make a mockery of Tononi's conclusion.
Ed Porter
-----Original Message-----
From: Richard Loosemore [mailto:r...@lightlink.com]
Sent: Monday, December 22, 2008 8:54 AM
To: agi@v2.listbox.com
Subject:
On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter ewpor...@msn.com wrote:
Ben,
Thanks for the reply.
It is a shame the brain science people aren't more interested in AGI. It
seems to me there is a lot of potential for cross-fertilization.
I don't think many of these folks have a
I've been experimenting with extending OOP to potentially implement
functionality that could make a particular AGI design easier to build.
The problem with SE is that it brings along much baggage that can totally
obscure AGI thinking.
Many AGI people and AI people are automatic top of the
Colin,
From a quick read, the gist of what you are saying seems to be that AGI is
just engineering, i.e., the study of what man can make and the properties
thereof, whereas science relates to the eternal verities of reality.
But the brain is not part of an eternal verity. It is the
Ed,
I wasn't trying to justify or promote a 'divide'. The two worlds must be
better off in collaboration, surely? I merely point out that there are
fundamental limits as to how computer science (CS) can inform/validate
basic/physical science - (in an AGI context, brain science). Take the
To add to this discussion, I'd like to point out that many AI systems have
been used and scientifically evaluated as *psychological* models, e.g.
cognitive models.
For instance, SOAR and ACT-R are among the many systems that have been used
and evaluated this way.
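[Editorially inserted sketch: evaluating an AI system as a psychological model, in the spirit of the SOAR/ACT-R work mentioned above, typically means comparing model predictions against human data. The numbers and measure below are illustrative assumptions, not taken from any published experiment.]

```python
# Hypothetical sketch: scoring a cognitive model against human data.
# A common fit measure is the root-mean-square error between model
# predictions and observed human measurements. All data here is made up.

def rmse(predicted, observed):
    """Root-mean-square error between model and human measurements."""
    assert len(predicted) == len(observed)
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed))
            / len(predicted)) ** 0.5

# Illustrative reaction times (ms) across four task conditions.
model_rt = [510.0, 640.0, 700.0, 820.0]
human_rt = [495.0, 655.0, 690.0, 840.0]

fit = rmse(model_rt, human_rt)
print(f"model-human RMSE: {fit:.1f} ms")  # model-human RMSE: 15.4 ms
```

A lower RMSE means the model's behavior more closely tracks the human data, which is the sense in which such systems are "scientifically evaluated" as cognitive models.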
The goal of that sort of