Ben has confidently stated that he believes Novamente will work
(http://www.kurzweilai.net/meme/frame.html?m=3 and others).
AGI builders, what evidence do you have that your design will work?
This is an oft-repeated question, but I'd like to focus on two possible
bases for saying that an
--- Joshua Fox [EMAIL PROTECTED] wrote:
AGI builders, what evidence do you have that your design will work?
None, because we have not defined what AGI is.
One definition of AGI is passing the Turing test. That will not happen. A
machine can just as easily fail by being too smart, too fast,
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Tue, Apr 24, 2007 at 01:35:31PM -0700, Matt Mahoney wrote:
None, because we have not defined what AGI is.
AGI is like porn. I'll know it when I see it.
Not really. You recognize porn because you have seen examples of porn and
not-porn. If
I also don't think you will recognize AGI. You have never seen examples of
it. Earlier I posted examples of Google passing the Turing test, but nobody
believes that is AGI. If nothing is ever labeled AGI, then nothing ever will
be.
Google does not pass the Turing test. Giving human-like
Hi,
I strongly disagree - there is a need to provide a definition of AGI - not
necessarily the right or optimal definition, but one that poses concrete
challenges and focusses the mind - even if it's only a starting-point. The
reason the Turing Test has been such a successful/popular idea is
Well, in my 1993 book The Structure of Intelligence I defined intelligence as
"The ability to achieve complex goals in complex environments."
I followed this up with a mathematical definition of complexity grounded in
algorithmic information theory (roughly: the complexity of X is the amount of
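[An editorial gloss, not part of the thread: the algorithmic-information-theoretic notion Goertzel alludes to is Kolmogorov complexity, the length of the shortest program that outputs X. That quantity is uncomputable, but compressed size gives a crude, computable upper bound. The sketch below illustrates the intuition; `complexity_proxy` is an invented name, not anything from Goertzel's book.]

```python
import hashlib
import zlib

def complexity_proxy(data: bytes) -> int:
    """Crude, computable stand-in for algorithmic complexity:
    the zlib-compressed length of `data` upper-bounds the shortest
    description of it, up to the fixed size of the decompressor."""
    return len(zlib.compress(data, 9))

# A highly regular string scores far lower than incompressible-looking
# bytes, matching the intuition that it has low algorithmic complexity.
regular = b"ab" * 500                                  # 1000 bytes, one pattern
irregular = b"".join(hashlib.sha256(bytes([i])).digest()
                     for i in range(32))               # 1024 hash-derived bytes
assert complexity_proxy(regular) < complexity_proxy(irregular)
```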
But there is a difference that I think is crucial re the goals being set for AGI.
There is a difference between your version: achieving goals which can be
done, if I understand you, by algorithms - and my goal-SEEKING, which is done
by all animals, and can't be done by algorithms alone. It
You seem to be mixing two things up...
1) the definition of the goal of human level AGI
2) the right incremental path to get there
I consider these rather different, separate issues...
In my prior reply to you I was discussing only Point 1, not Point 2
I don't really accept your
--- Mike Tintner [EMAIL PROTECTED] wrote:
There is a difference between your version: achieving goals which can be
done, if I understand you, by algorithms - and my goal-SEEKING, which is
done by all animals, and can't be done by algorithms alone. It involves
finding your way as distinct