Hi Ed,
So is the real significance of the universal prior not its probability
value in a given probability space (which seems relatively
unimportant, provided it is not one or close to zero), but rather the fact
that it can model almost any kind of probability space?
It just takes a
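For context on the question above: the universal prior under discussion is Solomonoff's. One standard formulation (added here for reference, not quoted from the thread) assigns to a finite string x

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

where U is a universal prefix machine, \ell(p) is the length of program p, and the sum runs over programs whose output begins with x. The property the question gestures at is dominance: M multiplicatively dominates every lower-semicomputable semimeasure, which is the precise sense in which it can "model almost any kind of probability space."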
Excellent post, and I hope that I can find enough time to give it
a more thorough reading.
Is it possible that at the moment our working with 'intelligence' is
just like flapping in an attempt to fly? It seems like the concept of
intelligence is a good way to preserve the nonabsurdity
Ben said -- the possibility of dramatic, rapid, shocking success in
robotics is LOWER than in cognition
That's why I tell people the value of manual labor will not be impacted as
soon by the AGI revolution as the value of mind labor.
Ed Porter
-----Original Message-----
From: Benjamin
On 11/11/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Ben said -- the possibility of dramatic, rapid, shocking success in
robotics is LOWER than in cognition
That's why I tell people the value of manual labor will not be impacted as
soon by the AGI revolution as the value of mind labor.
But we do not yet have a complete, verifiable theory, let alone a
practical design.
- Jef
To be more accurate, we don't have a practical design that is commonly
accepted in the AGI research community.
I believe that I *do* have a practical design for AGI and I am working hard
toward
YKY (Yan King Yin) wrote:
I have the intuition that Levin search may not be the most efficient
way to search programs, because it operates very differently from
human programming. I guess better ways to generate programs can be
achieved by imitating human programming -- using techniques
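Since Levin search comes up here, a toy sketch may make the contrast concrete. This is an illustrative simplification, not anything proposed in the thread: the program space (binary strings), the interpreter, and the phase budgets are all invented for the example. The defining idea is that in phase i, every program p with len(p) <= i is run for 2**(i - len(p)) steps, so shorter programs get exponentially more time:

```python
from itertools import product

def levin_search(run, is_solution, max_phase=20):
    """Toy Levin search over binary-string programs.

    `run(prog, steps)` executes program `prog` for at most `steps`
    steps and returns its output, or None if it did not finish.
    In phase i, a program of length L gets a budget of 2**(i - L)
    steps, so total work per phase is roughly 2**i per length class.
    """
    for phase in range(1, max_phase + 1):
        for length in range(1, phase + 1):
            budget = 2 ** (phase - length)
            for bits in product("01", repeat=length):
                prog = "".join(bits)
                out = run(prog, budget)
                if out is not None and is_solution(out):
                    return prog
    return None  # search budget exhausted

# Toy interpreter: a program "computes" its binary value, and needs
# that many steps to finish (purely for illustration).
def run(prog, steps):
    value = int(prog, 2)
    return value if steps >= value else None

print(levin_search(run, lambda out: out == 5))  # finds "101"
```

The point YKY raises is visible even in this toy: the search order is dictated entirely by program length and runtime, with no use of the structural regularities a human programmer exploits.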
Bryan Bishop wrote:
On Saturday 10 November 2007 14:10, Charles D Hixson wrote:
Bryan Bishop wrote:
On Saturday 10 November 2007 13:40, Charles D Hixson wrote:
OTOH, to make a go of this would require several people willing to
dedicate a lot of time consistently over a long
At 05:48 PM 11/10/2007, Eliezer S. Yudkowsky wrote:
The anchor that I start with is
my rough estimate of how long whole brain emulation will take, and so I'm
most interested in comparing AGI to that anchor. The fact that
people are prone to take these estimate questions as attitude surveys is
Richard,
Even Ben Goertzel, in a recent comment, said something to the effect
that the only good reason to believe that his model is going to function
as advertised is that *when* it is working we will be able to see that
it really does work:
The above paragraph is a distortion of what I
Benjamin Goertzel wrote:
Richard,
Even Ben Goertzel, in a recent comment, said something to the effect
that the only good reason to believe that his model is going to function
as advertised is that *when* it is working we will be able to see that
it really does work:
Edward W. Porter wrote:
Richard,
Goertzel claims his planning indicates roughly 6 years x 15
excellent, hard-working programmers, or 90 man-years, to get his
architecture up and running. I assume that will involve a lot of “hard”
mental work.
By “hard problem” I mean a problem for
Richard,
Thus: if someone wanted volunteers to fly in their brand-new aircraft
design, but all they could do to reassure people that it was going to
work were the intuitions of suitably trained individuals, then most
rational people would refuse to fly - they would want more than