Re: [agi] Books

2007-06-15 Thread YKY (Yan King Yin)
On 6/11/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: I'll try to answer this and Mike Tintner's question at the same time. The typical GOFAI engine over the past decades has had a layer structure something like this: Problem-specific assertions Inference engine/database Lisp on top of the
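The layered structure described above (problem-specific assertions sitting on an inference engine/database, itself built on Lisp) can be sketched as a toy forward-chaining rule engine. This is a hypothetical illustration in Python, not the poster's actual system; all names and rules here are invented for the example:

```python
# Minimal sketch of the layered GOFAI structure: a generic inference
# engine (forward chaining to a fixpoint) with a problem-specific
# layer of assertions and rules on top.

def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Problem-specific layer: assertions and rules about one tiny domain.
facts = {"bird(tweety)"}
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)"}, "can_fly(tweety)"),
]

print(sorted(forward_chain(facts, rules)))
```

The engine layer is domain-independent; only the `facts` and `rules` change per problem, which is the separation the post attributes to typical GOFAI designs.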

Re: [agi] Another attempt to define General Intelligence, and some AGI design thoughts.

2007-06-15 Thread Robert Wensman
Yes, what language to use when expressing memes is definitely one of the key points in the construction of such a system. I think such a language needs to fulfil the following criteria: - Enough expressive power - Algorithms for consistency checks, entailment, etc. - Robustness to random
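The "algorithms for consistency checks, entailment" criterion can be made concrete with the simplest decidable case: propositional entailment by truth-table enumeration. This is a minimal sketch of the idea only (exponential in the number of variables, and not any formalism from the thread); formulas are represented as Python predicates over a variable assignment:

```python
from itertools import product

def entails(premises, conclusion, variables):
    """KB |= conclusion iff conclusion holds in every model of the premises."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # found a countermodel
    return True

# Example: {A, A -> B} entails B (modus ponens).
premises = [lambda m: m["A"], lambda m: (not m["A"]) or m["B"]]
print(entails(premises, lambda m: m["B"], ["A", "B"]))  # True
```

Consistency checking reduces to the same machinery: a set of premises is consistent iff it does not entail a contradiction, i.e. `entails(premises, lambda m: False, variables)` is `False`.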

Re: [agi] Another attempt to define General Intelligence, and some AGI design thoughts.

2007-06-15 Thread Robert Wensman
For an intelligence to know what actions are possible, it must model those and think about them internally, and model that behavior, and I would argue that all intelligence is about the behavior that the internal cognition brings. You can't really have an intelligence, I don't believe, without

Re: [agi] AGI Consortium

2007-06-15 Thread Mark Waser
Well, if one of us becomes extremely successful biz-wise, but the other has made some deep AI success, the one can always buy the other's company ;-) Hey! If I become both extremely successful biz-wise *and* make some deep AI success, can I give you the company and just make you pay me some

RE: [agi] AGI Consortium

2007-06-15 Thread John G. Rose
Certainly there are many ways to slay the beast. And the beast has many definitions. For an open-source AGI you'd have to not throw in the kitchen sink, come up with a very basic design, and maybe not tout how the thing is going to trigger a singularity? Maybe not try to replicate human brain

RE: [agi] Another attempt to define General Intelligence, and some AGI design thoughts.

2007-06-15 Thread Derek Zahn
Robert Wensman writes: Has there been any work done previously in statistical, example-driven deduction? Yes. In this AGI community, Pei Wang's NARS system is exactly that: http://nars.wang.googlepages.com/ Also, Ben Goertzel (et al.) is building a system called Novamente

Re: [agi] Pure reason is a disease.

2007-06-15 Thread Matt Mahoney
--- Lukasz Stafiniak [EMAIL PROTECTED] wrote: http://www.goertzel.org/books/spirit/uni3.htm -- VIRTUAL ETHICS The book chapter describes the need for ethics and cooperation in virtual worlds, but does not address the question of whether machines can feel pain. If you feel pain, you will insist

Re: [agi] Pure reason is a disease.

2007-06-15 Thread Jiri Jelinek
Eric, Right. IMO roughly the same problem when processed by a computer.. Why should you expect running a pain program on a computer to make you feel pain any more than when I feel pain? I don't. The thought was: If we don't feel pain when processing software in our pain-enabled minds, why

Re: [agi] Pure reason is a disease.

2007-06-15 Thread Eric Baum
Jiri Eric, Right. IMO roughly the same problem when processed by a computer.. Why should you expect running a pain program on a computer to make you feel pain any more than when I feel pain? Jiri I don't. The thought was: If we don't feel pain when processing Jiri software in our

Re: [agi] Another attempt to define General Intelligence, and some AGI design thoughts.

2007-06-15 Thread James Ratcliff
Robert Wensman [EMAIL PROTECTED] wrote: For an intelligence to know what actions are possible, it must model those and think about them internally, and model that behavior, and I would argue that all intelligence is about the behavior that the internal cognition brings. You can't really