Re: [agi] Re: Merging - or: Multiplicity

2008-05-29 Thread Steve Richfield
Mike, On 5/28/08, Mike Tintner [EMAIL PROTECTED] wrote: Steve: I have been advocating fixing the brain shorts that lead to problems, rather than jerking the entire world around to make brain shorted people happy. Which brain shorts? IMO the brain's capacity for shorts in one situation is

Re: [agi] Re: Merging - or: Multiplicity

2008-05-28 Thread William Pearson
2008/5/27 Mike Tintner [EMAIL PROTECTED]: Will: And you are part of the problem insisting that an AGI should be tested by its ability to learn on its own and not get instruction/help from other agents be they human or other artificial intelligences. I insist[ed] that an AGI should be tested on

Re: [agi] Re: Merging - or: Multiplicity

2008-05-28 Thread Mike Tintner
Steve: I have been advocating fixing the brain shorts that lead to problems, rather than jerking the entire world around to make brain shorted people happy. Which brain shorts? IMO the brain's capacity for shorts in one situation is almost always a capacity for short-cuts in another - and

[agi] Re: Merging - or: Multiplicity

2008-05-27 Thread Mike Tintner
Steve: Presuming that you do NOT want to store all of history and repeatedly analyze all of it as your future AGI operates, you must accept MULTIPLE potentially-useful paradigms, adding new ones and trashing old ones as more information comes in. Our own very personal ideas of learning and

Re: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread William Pearson
2008/5/27 Mike Tintner [EMAIL PROTECTED]: Actually, that's an absurdity. The whole story of evolution tells us that the problems of living in this world for any species of creature/intelligence at any level can only be solved by a SOCIETY of individuals. This whole dimension seems to be

Re: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread Mike Tintner
Will: And you are part of the problem insisting that an AGI should be tested by its ability to learn on its own and not get instruction/help from other agents be they human or other artificial intelligences. I insist[ed] that an AGI should be tested on its ability to solve some *problems* on its

Re: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread Steve Richfield
Mike, On 5/27/08, Mike Tintner [EMAIL PROTECTED] wrote: Steve: Presuming that you do NOT want to store all of history and repeatedly analyze all of it as your future AGI operates, you must accept MULTIPLE potentially-useful paradigms, adding new ones and trashing old ones as more information

RE: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread Derek Zahn
Steve Richfield: It is sure nice that this is a VIRTUAL forum, for if we were all in one room together, my posting above would probably get me thrashed by the true AGI believers here. Does anyone here want to throw a virtual stone? Sure. *plonk*