On Feb 8, 2008 7:12 AM, Benjamin Johnston [EMAIL PROTECTED] wrote:
4. If you're trying to develop your own argument, then I'd recommend
taking a look at some of the more philosophical works in the research
literature - not just in AGI but also in areas like embodied robotics,
commonsense
Richard: Consider yourself corrected: many people realize the importance
of
generalization (and related processes).
People go about it in very different ways, so some are more specific and
up-front about it than others, but even the conventional-AI people (with
whom I have many
Mike Tintner wrote:
Benjamin: When I read your
post, claiming that generalization is important, I think to myself
yeah, that is what everybody else is saying and attempting to solve --
I even gave you several examples of how generalization could work, so I
then find myself surprised that you claim that nobody is
a single one,
that is actually applied to an end-problem, to a true test of its
AGI domain-crossing potential.
I thought I had already provided evidence that many approaches could
succeed on an end-problem. Particularly in the sections on logic
and hybrid systems.
On 05/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
William P: I can't think of any external test that can't be fooled by a
giant look-up table (Ned Block thought of this argument first).
A by-definition requirement of a general test is that the system builder
doesn't set it, and can't prepare for it as you indicate. He can't know
whether the
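William's lookup-table point can be made concrete with a toy sketch. Everything below (the questions, answers, and function names) is invented purely for illustration: for any fixed, finite battery of test questions, a big enough table of canned answers matches an intelligent responder while generalizing to nothing.

```python
# Toy version of Ned Block's lookup-table objection: a FIXED, finite
# external test can be "passed" by pure table lookup, with no reasoning
# and no generality. The questions/answers here are invented examples.

FIXED_TEST = {
    "What is 2 + 2?": "4",
    "Name a primary color.": "red",
    "Is water wet?": "yes",
}

def lookup_table_agent(question: str) -> str:
    """Answers only by table lookup -- no reasoning, no generalization."""
    return FIXED_TEST.get(question, "I don't know")

# The table matches the "intelligent" answer on every question the
# test-setter wrote down in advance...
assert all(lookup_table_agent(q) == a for q, a in FIXED_TEST.items())

# ...but fails on anything outside the table, which is why a general
# test must pose problems the system builder could not prepare for.
print(lookup_table_agent("What is 3 + 5?"))
```

The table-agent prints "I don't know" for the unseen question, which is the crux of the objection: only tests the builder cannot anticipate distinguish generality from memorization.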
Fine. Which idea of anyone's do you believe will directly produce
general intelligence - i.e. will enable an AGI to solve problems in
new unfamiliar domains, and pass the general test I outlined? (And
everyone surely agrees, regardless of the test, that an AGI must have
general intelligence). Please note very carefully - I am only asking
for an idea that will play
A. T. Murray wrote:
Mike Tintner wrote in the message archived at
http://www.mail-archive.com/agi@v2.listbox.com/msg09744.html
[...]
The first thing is that you need a definition
of the problem, and therefore a test of AGI.
And there is nothing even agreed about that -
although I think most people know
On Feb 4, 2008 11:42 PM, Mike Tintner [EMAIL PROTECTED] wrote:
The test, I suggest, is essentially not the Turing Test or anything like
that, but The General Test. If your system is an AGI, or has AGI potential,
then it must first of all have a skill and be able to solve problems in a
given
On Feb 5, 2008 11:36 PM, Benjamin Johnston [EMAIL PROTECTED] wrote:
Well, as I said before, I don't know which will directly produce general
intelligence and which of them will fail.
[snip]
My point, again, is that we don't know how the first successful AGI will
work - but we can see many
Benjamin Johnston wrote, among other things:
I like to think about Deep Blue a lot. Prior to Deep Blue, I'm sure
that there were people who, like you, complained that nobody has
offered a crux idea that could make a truly intelligent computer chess
system. In the end Deep Blue appeared to win
Mike Tintner wrote:
I believe we are
thinking machines and not in any way magical. I just believe that our
thinking works on different mechanistic/computational principles to
those of programs - which someone apart from me surely should at
Richard: Mike,
When you say I just believe that our thinking works on different
mechanistic/ computational principles to those of programs ... What you
are really trying to say is that intelligence is not captured by a certain
type of rigid, pure symbol-processing AI. The key phrase is
suggest you look again at any of the approaches you mention, as formally
outlined, and I suggest you will not find a single one, that is actually
applied to an end-problem, to a true test of its AGI domain-crossing
potential. And I think if you go through the archives here you also won't
find
Very briefly, my focus a while back in attacking programs was not on
the sign/ semiotic - and more particularly, symbolic - form of
programs, although that is v. important too.
My focus was on the *structure* of programs - that's what they are:
structured and usually sequenced sets of
On 04/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
(And it's a fairly safe bet, Joseph, that no one will now do the obvious
thing and say.. well, one idea I have had is..., but many will say, the
reason why we can't do that is...)
And maybe they would have a reason for doing so. I would like
Er, you don't ask that in AGI. The general culture here is not to
recognize the crux, or the test of AGI. You are the first person
here to express the basic requirement of any creative project. You
should only embark on a true creative project - in the sense of
committing to it - if you have
For a universal test of AI, I would of course suggest universal
intelligence as defined in this report:
http://www.idsia.ch/idsiareport/IDSIA-10-06.pdf
Shane
On Fri, 02 Jun 2006 09:15:26 -500, [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote: What is the universal test for the ability of any given AI SYSTEM
What is the universal test for the ability of any given AI SYSTEM
to Perceive Reason and Act?
Is there such a test?
What is the closest test known to date?
Dan Goe
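For readers who don't open the link: the IDSIA report Shane points to (Legg and Hutter's work on universal intelligence) boils the test down to a single weighted-reward measure. The formula below is sketched from memory, so treat the notation as approximate and check the report for the exact definitions:

```latex
% Universal intelligence of an agent \pi: the expected total reward
% V_\mu^\pi it earns, summed over all computable environments \mu in
% the class E, each environment weighted by its simplicity 2^{-K(\mu)},
% where K is Kolmogorov complexity. Simpler environments count for more.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

A high score requires doing well across many environments at once, which is essentially the domain-crossing demand behind Mike's General Test.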
From : William Pearson [EMAIL PROTECTED]
To : agi@v2.listbox.com
What is the largest test to date of Novamente on a distributed network of
machines?
Is Novamente designing itself?
Dan Goe
From : Ben Goertzel [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] Data there vs data not there, Limits
To answer your questions:
Right now the most machines we have used for a single NM installation is 4.
However, scaling up to many machines is NOT our biggest issue by any means...
In 2000, we ran our Webmind AI Engine system (with a conceptually
similar distributed processing infrastructure) on