[agi] Re: Solomonoff Machines – up close and personal

2007-11-11 Thread Shane Legg
Hi Ed, So is the real significance of the universal prior not its probability value in a given probability space (which seems relatively unimportant, provided it is not one or close to zero), but rather the fact that it can model almost any kind of probability space? It just takes a
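For readers skimming the archive, the universal prior Shane is discussing is Solomonoff's prior; the standard formulation (a sketch, following the usual conventions rather than anything stated in the thread) is:

```latex
% Solomonoff's universal prior over finite binary strings x:
% sum over all (prefix-free) programs p on which the universal
% monotone machine U outputs a string beginning with x,
% where \ell(p) is the length of p in bits.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

The point about the exact value being "relatively unimportant" matches the dominance property: for every enumerable semimeasure \mu there is a constant c_\mu > 0 with M(x) \ge c_\mu \, \mu(x), which is what lets M stand in for almost any probability space.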

Re: [agi] Re: What best evidence for fast AI?

2007-11-11 Thread Bryan Bishop
Excellent post, and I hope that I may find enough time to give it a more thorough reading. Is it possible that at the moment our work with 'intelligence' is just like flapping in an attempt to fly? It seems like the concept of intelligence is a good way to preserve the nonabsurdity

RE: [agi] What best evidence for fast AI?

2007-11-11 Thread Edward W. Porter
Ben said "the possibility of dramatic, rapid, shocking success in robotics is LOWER than in cognition." That's why I tell people the value of manual labor will not be impacted as soon by the AGI revolution as the value of mind labor. Ed Porter

Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Jef Allbright
On 11/11/07, Edward W. Porter [EMAIL PROTECTED] wrote: Ben said "the possibility of dramatic, rapid, shocking success in robotics is LOWER than in cognition." That's why I tell people the value of manual labor will not be impacted as soon by the AGI revolution as the value of mind labor.

Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Benjamin Goertzel
But we do not yet have a complete, verifiable theory, let alone a practical design. - Jef To be more accurate, we don't have a practical design that is commonly accepted in the AGI research community. I believe that I *do* have a practical design for AGI and I am working hard toward

Re: [agi] question about algorithmic search

2007-11-11 Thread Charles D Hixson
YKY (Yan King Yin) wrote: I have the intuition that Levin search may not be the most efficient way to search programs, because it operates very differently from human programming. I guess better ways to generate programs can be achieved by imitating human programming -- using techniques
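YKY's mention of Levin search can be made concrete with a toy sketch. Everything below is invented for illustration (the bit-string "language", the counter semantics, the example target), not taken from the thread; only the scheduling scheme, running each program p for time proportional to 2^(-len(p)), is Levin's actual idea.

```python
from itertools import product

def interpret(bits, budget):
    """Toy program semantics (invented for illustration): start a
    counter at 0; bit 0 means 'add 1', bit 1 means 'double'.  Each
    bit costs one step; give up if the program exceeds its budget."""
    if len(bits) > budget:
        return None
    x = 0
    for b in bits:
        x = x + 1 if b == 0 else x * 2
    return x

def levin_search(is_solution, max_phase=20):
    """Levin search: in phase k, every program p of length l <= k is
    run for 2**(k - l) steps, so total work per phase is O(2**k) and
    a program's effective cost is roughly time(p) * 2**len(p) --
    short, fast programs are tried first."""
    for phase in range(1, max_phase + 1):
        for length in range(1, phase + 1):
            budget = 2 ** (phase - length)
            for bits in product((0, 1), repeat=length):
                result = interpret(bits, budget)
                if result is not None and is_solution(result):
                    return bits
    return None
```

For example, `levin_search(lambda v: v == 6)` returns `(0, 0, 0, 1)`, i.e. ((0+1+1+1)*2), the first shortest program computing 6 in this toy language. YKY's intuition corresponds to the `2**len(bits)` blowup visible here: Levin search is blind enumeration, whereas human programmers prune the space with structural knowledge.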

Re: [agi] Connecting Compatible Mindsets

2007-11-11 Thread Charles D Hixson
Bryan Bishop wrote: On Saturday 10 November 2007 14:10, Charles D Hixson wrote: Bryan Bishop wrote: On Saturday 10 November 2007 13:40, Charles D Hixson wrote: OTOH, to make a go of this would require several people willing to dedicate a lot of time consistently over a long

[agi] Re: What best evidence for fast AI?

2007-11-11 Thread Robin Hanson
At 05:48 PM 11/10/2007, Eliezer S. Yudkowsky wrote: The anchor that I start with is my rough estimate of how long whole brain emulation will take, and so I'm most interested in comparing AGI to that anchor. The fact that people are prone to take these estimate questions as attitude surveys is

Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Benjamin Goertzel
Richard, Even Ben Goertzel, in a recent comment, said something to the effect that the only good reason to believe that his model is going to function as advertised is that *when* it is working we will be able to see that it really does work: The above paragraph is a distortion of what I

Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Richard Loosemore
Benjamin Goertzel wrote: Richard, Even Ben Goertzel, in a recent comment, said something to the effect that the only good reason to believe that his model is going to function as advertised is that *when* it is working we will be able to see that it really does work:

Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Richard Loosemore
Edward W. Porter wrote: Richard, Goertzel claims his planning indicates it is roughly 6 years x 15 excellent, hard-working programmers, or 90 man-years, to get his architecture up and running. I assume that will involve a lot of “hard” mental work. By “hard problem” I mean a problem for

Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Benjamin Goertzel
Richard, Thus: if someone wanted volunteers to fly in their brand-new aircraft design, but all they could do to reassure people that it was going to work were the intuitions of suitably trained individuals, then most rational people would refuse to fly - they would want more than