Re: Re[6]: [agi] Danger of getting what we want [from AGI]

2007-11-28 Thread Matt Mahoney
--- "J. Andrew Rogers" <[EMAIL PROTECTED]> wrote: > > On Nov 27, 2007, at 7:21 PM, Matt Mahoney wrote: > > As a counterexample, evolution is already smarter than > > the human brain. It just takes more computing power. Evolution has > > figured > > out how to make humans out of simple chemic

Re: Re[6]: [agi] Danger of getting what we want [from AGI]

2007-11-27 Thread J. Andrew Rogers
On Nov 27, 2007, at 7:21 PM, Matt Mahoney wrote:
> As a counterexample, evolution is already smarter than the human brain.
> It just takes more computing power. Evolution has figured out how to
> make humans out of simple chemicals.

"figured out"? So if we implemented a planet kill, this "evo…
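Mahoney's point that evolution "just takes more computing power" can be made concrete with a toy evolutionary search: the sketch below knows nothing about why one candidate is better than another, yet it reliably reconstructs a target string through blind mutation and selection, paying for its lack of insight with a large number of fitness evaluations. This is a minimal illustrative sketch, not anything posted in the thread; the target string, alphabet, population size, and mutation rate are arbitrary assumed choices.

    import random
    import string

    # Toy "blind watchmaker" search: evolve a random string toward a target
    # using nothing but mutation and selection. All parameters are
    # illustrative assumptions.
    TARGET = "evolution just takes more computing power"
    ALPHABET = string.ascii_lowercase + " "
    POP_SIZE = 200        # children produced per generation
    MUTATION_RATE = 0.02  # per-character chance of a random replacement

    def fitness(candidate):
        # Count characters already matching the target.
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(parent):
        return "".join(
            random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
            for c in parent
        )

    def evolve():
        parent = "".join(random.choice(ALPHABET) for _ in TARGET)
        generations = 0
        while fitness(parent) < len(TARGET):
            generations += 1
            children = [mutate(parent) for _ in range(POP_SIZE)]
            # Keep the parent in the pool so fitness never regresses.
            parent = max(children + [parent], key=fitness)
        return generations

    if __name__ == "__main__":
        print("matched target after", evolve(), "generations")

Each generation is essentially unintelligent; the search succeeds only because it evaluates many candidates, which is the compute-for-intelligence trade the counterexample points at.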

Re: Re[6]: [agi] Danger of getting what we want [from AGI]

2007-11-27 Thread Matt Mahoney
--- Dennis Gorelik <[EMAIL PROTECTED]> wrote:
> Matt,
>
> >> > As for the analogies, my point is that AGI will quickly evolve to
> >> invisibility from a human-level intelligence.
> >>
> >> I think you underestimate how quickly performance deteriorates with the
> >> growth of complexity.
> >> A…

Re[6]: [agi] Danger of getting what we want [from AGI]

2007-11-27 Thread Dennis Gorelik
Mike,

>> I think you underestimate how quickly performance deteriorates with
>> the growth of complexity.

> Dennis, you are stating what could be potentially an extremely important
> principle.

This principle has already been very important for [hundreds of] years. Take a look at business. You can n…

Re[6]: [agi] Danger of getting what we want [from AGI]

2007-11-27 Thread Dennis Gorelik
Matt,

>> > As for the analogies, my point is that AGI will quickly evolve to
>> invisibility from a human-level intelligence.
>>
>> I think you underestimate how quickly performance deteriorates with the
>> growth of complexity.
>> AGI systems would have lots of performance problems in spite of f…
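Gorelik's "performance deteriorates with the growth of complexity" claim is usually cashed out as combinatorial growth: adding parts to a system grows the number of interactions and joint configurations much faster than linearly, so coordination and verification costs blow up. Below is a minimal sketch of that arithmetic; the cost model (pairwise interactions, binary on/off configurations) is an illustrative assumption, not something stated in the thread.

    # Illustrative cost model for "complexity grows faster than size":
    # n interacting modules have n*(n-1)/2 pairwise interactions to keep
    # consistent and 2**n joint on/off configurations to check exhaustively.
    def pairwise_interactions(n):
        return n * (n - 1) // 2

    def joint_configurations(n):
        return 2 ** n

    for n in (10, 20, 40, 80):
        print(f"{n:3d} modules: {pairwise_interactions(n):5d} interactions, "
              f"{joint_configurations(n):.3g} configurations")

Doubling n doubles the parts but roughly quadruples the interactions and squares the configuration count, which is the kind of deterioration Gorelik seems to have in mind; faster hardware shifts the curve without changing its shape.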