Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore
Harshad RJ wrote: On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Harshad RJ wrote: I read the conversation from the start and believe that Matt's argument is correct. Did you mean to send this only to me? It looks as though

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Matt Mahoney
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote: My argument was (at the beginning of the debate with Matt, I believe) that, for a variety of reasons, the first AGI will be built with peaceful motivations. Seems hard to believe, but for various technical reasons I think we

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Vladimir Nesov
On Feb 18, 2008 7:41 PM, Richard Loosemore [EMAIL PROTECTED] wrote: In other words you cannot have your cake and eat it too: you cannot assume that this hypothetical AGI is (a) completely able to build its own understanding of the world, right up to the human level and beyond, while also

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore
Matt Mahoney wrote: On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote: My argument was (at the beginning of the debate with Matt, I believe) that, for a variety of reasons, the first AGI will be built with peaceful motivations. Seems hard to believe, but for various technical

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Harshad RJ
On Feb 18, 2008 10:11 PM, Richard Loosemore [EMAIL PROTECTED] wrote: You assume that the system does not go through a learning phase (childhood) during which it acquires its knowledge by itself. Why do you assume this? Because an AGI that was motivated only to seek electricity and

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Bob Mottram
On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: ... might be true. Yes, a motivation of some form could be coded into the system, but the paucity of expression at the level at which it is coded may still allow unintended motivations to emerge. It seems that in the AGI

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore
Bob Mottram wrote: On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: ... might be true. Yes, a motivation of some form could be coded into the system, but the paucity of expression at the level at which it is coded may still allow unintended motivations to emerge. It seems

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote: My argument was (at the beginning of the debate with Matt, I believe) that, for a variety of reasons, the first AGI will be built with peaceful

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote: My argument was (at the beginning of the debate with Matt, I believe) that, for a variety of reasons, the first AGI will be built with

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore
Harshad RJ wrote: On Feb 18, 2008 10:11 PM, Richard Loosemore [EMAIL PROTECTED] wrote: You assume that the system does not go through a learning phase (childhood) during which it acquires its knowledge by itself. Why do you assume this? Because an AGI

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Perhaps worm is the wrong word. Unlike today's computer worms, it would be intelligent, it would evolve, and it would not necessarily be controlled by or serve the interests of its creator. Whether or not it is

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-01-28 Thread Randall Randall
On Jan 28, 2008, at 12:03 PM, Richard Loosemore wrote: Your comments below are unfounded, and all the worse for being so poisonously phrased. If you read the conversation from the beginning you will discover why: Matt initially suggested the idea that an AGI might be asked to develop a

[agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-01-28 Thread Richard Loosemore
Randall, Your comments below are unfounded, and all the worse for being so poisonously phrased. If you read the conversation from the beginning you will discover why: Matt initially suggested the idea that an AGI might be asked to develop a virus of maximum potential, for purposes of