RE: Re[2]: [agi] Early AGI apps

2002-11-11 Thread Ben Goertzel
BG Optimizing the optimizer is what we've called supercompiling the supercompiler; it's a natural idea, but we're not there yet. I didn't mean supercompiling the supercompiler, but rather evolving the supercompiler through known techniques such as GA. I have no idea how feasible
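
[A minimal genetic-algorithm sketch to make the "evolving the supercompiler through GA" idea concrete. Everything here is an assumption for illustration: the supercompiler's behavior is stood in for by a numeric parameter vector, and the fitness function (closeness to a target vector) is a placeholder for something like "speed of the compiled output". None of these names come from an actual supercompiler.]

    import random

    # Hypothetical: a parameter vector that might tune a supercompiler's
    # transformation choices. TARGET and fitness() are stand-ins only.
    TARGET = [0.2, 0.8, 0.5, 0.1]

    def fitness(genome):
        # Higher is better; closeness to TARGET stands in for "faster output code".
        return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, rate) for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def evolve(pop_size=30, generations=50):
        population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]   # truncation selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    if __name__ == "__main__":
        best = evolve()
        print("best genome:", [round(g, 3) for g in best])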

Re: [agi] A point of philosophy, rather than engineering

2002-11-11 Thread Charles Hixson
The problem with a truly general intelligence is that the search spaces are too large. So one uses specializing heuristics to cut down the search space. This does, however, inevitably remove a piece of the generality. The benefit is that you can answer more complicated questions
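
[A toy illustration of this trade-off, under assumed conditions: searching a small grid with an uninformed search versus a greedy search guided by a Manhattan-distance heuristic. The heuristic examines far fewer states, but only because it encodes an assumption about the problem; the uninformed search stays general at a much higher cost. The grid, sizes, and function names are made up for the sketch.]

    from collections import deque
    import heapq

    GRID_SIZE = 20
    START, GOAL = (0, 0), (19, 19)

    def neighbors(pos):
        x, y = pos
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < GRID_SIZE and 0 <= ny < GRID_SIZE:
                yield (nx, ny)

    def bfs(start, goal):
        # Uninformed (general) search: expands states in all directions.
        seen, queue, expanded = {start}, deque([start]), 0
        while queue:
            pos = queue.popleft()
            expanded += 1
            if pos == goal:
                return expanded
            for nxt in neighbors(pos):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return expanded

    def greedy(start, goal):
        # Heuristic (specialized) search: Manhattan distance biases expansion toward the goal.
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        seen, frontier, expanded = {start}, [(h(start), start)], 0
        while frontier:
            _, pos = heapq.heappop(frontier)
            expanded += 1
            if pos == goal:
                return expanded
            for nxt in neighbors(pos):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (h(nxt), nxt))
        return expanded

    if __name__ == "__main__":
        print("states expanded without heuristic:", bfs(START, GOAL))
        print("states expanded with heuristic:   ", greedy(START, GOAL))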

Re: [agi] A point of philosophy, rather than engineering

2002-11-11 Thread James Rogers
On Mon, 2002-11-11 at 14:11, Charles Hixson wrote: Personally, I believe that the most effective AI will have a core general intelligence, which may be rather primitive, and a huge number of specialized intelligence modules. The tricky part of this architecture is designing the various
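
[A minimal sketch of the "core general intelligence plus specialized modules" layout being discussed. The dispatch rule, the can_handle interface, and the example modules are illustrative assumptions, not anyone's actual design; the "tricky part" the thread points at is precisely this interface.]

    class ArithmeticModule:
        # Specialized module: only claims simple "a + b + c" sums.
        def can_handle(self, task):
            return task.replace(" ", "").replace("+", "").isdigit()
        def solve(self, task):
            return sum(int(x) for x in task.split("+"))

    class EchoModule:
        # Specialized module: only claims "say ..." tasks.
        def can_handle(self, task):
            return task.startswith("say ")
        def solve(self, task):
            return task[4:]

    class GeneralCore:
        # Primitive fallback core: here it just admits it has no specialized answer.
        def solve(self, task):
            return "no specialized module matched: " + repr(task)

    class Agent:
        def __init__(self, modules, core):
            self.modules, self.core = modules, core
        def handle(self, task):
            for module in self.modules:
                if module.can_handle(task):
                    return module.solve(task)
            return self.core.solve(task)

    if __name__ == "__main__":
        agent = Agent([ArithmeticModule(), EchoModule()], GeneralCore())
        for task in ["2 + 3 + 4", "say hello", "prove Fermat's last theorem"]:
            print(task, "->", agent.handle(task))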