RE: [agi] Friendliness toward humans

2003-01-10 Thread Gary Miller
EGHeflin said: The reason is that the approach is essentially 'Asimovian' in nature and, therefore, wouldn't result in anything more than perhaps a servile pet, call it iRobot, which is always 'less-than-equal' to you and therefore always short of your goal to achieve the so-called

Re: [agi] AGI's sense of self

2003-01-10 Thread maitri
Consider an analogy. In human culture, there is a rigid distinction between man and woman. This makes sense, because there are very few intermediate cases. True hermaphroditism is about one in a million, and ambiguous genitalia are seen in maybe one in 10,000-100,000. On the other

RE: [agi] The Next Wave

2003-01-10 Thread Ben Goertzel
Kevin Copple wrote: It seems clear that AGI will be obtained in the foreseeable future. It also seems that it will be done with adequate safeguards against a runaway entity that would exterminate us humans. Likely it will also remain under our control. HOWEVER, this brings up another

Re: [agi] Friendliness toward humans

2003-01-10 Thread Alan Grimes
Ben Goertzel wrote: Since I'm too busy studying neuroscience, I simply don't have any time for learning operating systems. I will therefore either use the systems I know or the systems that require the least amount of effort to learn, regardless of their features. Alan, that sounds like a

Re: [agi] The Next Wave

2003-01-10 Thread C. David Noziglia
ULTIMATE KNOWLEDGE Our AGI will come to know everything. Every single flap of every butterfly wing in all of history. If it has emotions like ours, it may become rather depressed and realize that it is all pointless. Maybe we will understand and agree with the AGI's explanation. What

RE: [agi] Friendliness toward humans

2003-01-10 Thread Ben Goertzel
Ben Goertzel wrote: Since I'm too busy studying neuroscience, I simply don't have any time for learning operating systems. I will therefore either use the systems I know or the systems that require the least amount of effort to learn, regardless of their features. Alan, that sounds

Re: [agi] Friendliness toward humans

2003-01-10 Thread Alan Grimes
I say this as someone who just burned half a week setting up a Linux network in his study. Ditto... The Windows 3.11 machine took 10 minutes. The Leenooks machine took 3 days... Yeah, that stuff is a pain. But compared to designing, programming, and testing a thinking machine, it's cake,

Re: [agi] OS and AGI

2003-01-10 Thread James Rogers
On Fri, 2003-01-10 at 16:44, Damien Sullivan wrote: While I'm equally horrified by the idea of someone using DOS as a benchmark, there is a difference between 'stump' as in 'I can't figure this out' and 'stump' as in 'I haven't learned much about this.' Aye, I think the reaction is more to an apparent

RE: [agi] The Next Wave

2003-01-10 Thread Ben Goertzel
Eliezer wrote: James Rogers wrote: Your intuition is correct, depending on how strict you are about knowledge. The intrinsic algorithmic information content of any machine is greater (sometimes much greater) than the algorithmic information content of its static state. The intrinsic
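Rogers's claim can be put loosely in Kolmogorov-complexity terms. A minimal sketch, assuming a deterministic machine M whose static state at step t is s_t (my formalization, not from the thread): given a description of M and the index t, one can run M forward to recover s_t, so

    K(s_t) \le K(M) + K(t) + O(1)

No comparable bound runs the other way, since M's description must also encode its transition dynamics; K(M) can therefore greatly exceed K(s_t) for any single snapshot, which is the sense in which a machine carries more intrinsic information than its static state.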

Re[2]: [agi] The Next Wave

2003-01-10 Thread Cliff Stabbert
Friday, January 10, 2003, 10:36:35 PM, Kevin Copple wrote: KC Well, my The Next Wave post was intended to be humorous. I'm not that much KC of a comedian, so I may have weighed in too heavily on the apparently serious side. KC Let me apologize to the extent it was a feebly frivolous failure. The line

RE: [agi] The Next Wave

2003-01-10 Thread Ben Goertzel
Kevin Copple wrote: Perhaps I am wrong, but my impression is that the talk here about AGI sense of self, AGI friendliness, and so on is quite premature. Attitudes on that vary, I think... I know that many AGI researchers agree with you, and think such issues are best deferred till after some