Benjamin:
> I believe that you're misrepresenting the situation. I would guess that
> most people on this list have an idea that they are pursuing because they
> believe it has a chance at creating general intelligence.

Fine. Which idea of anyone's do you believe will directly produce general
intelligence?
Mike Tintner wrote in the message archived at
http://www.mail-archive.com/agi@v2.listbox.com/msg09744.html
> [...]
> The first thing is that you need a definition of the problem, and
> therefore a test of AGI. And there is nothing even agreed about that -
> although I think most people know

Er, you don't ask that in AGI. The general culture here is not to
recognize the crux, or the "test", of AGI. You are the first person
here to express the basic requirement of any creative project. You
should only embark on a true creative project - in the sense of
committing to it - if you hav
William P: I can't think of any external test that can't be fooled by a
giant lookup table (Ned Block thought of this argument first).

By definition, a requirement of a "general test" is that the system builder
doesn't set it, and can't prepare for it, as you indicate. He can't know
whether the
Kaj Sotala wrote:
Richard,
[Where's your blog? Oh, and this is a very useful discussion, as it's
given me material for a possible essay of my own as well. :-)]

It is in the process of being set up: I am currently wrestling with
getting to know the newest version (just released
On 04/02/2008, Mike Tintner <[EMAIL PROTECTED]> wrote:
> (And it's a fairly safe bet, Joseph, that no one will now do the obvious
> thing and say, "well, one idea I have had is...", but many will say, "the
> reason why we can't do that is...")

And maybe they would have a reason for doing so. I wo
Joseph Gentle:
> Eventually, you will have to write something which allows for emergent
> behaviour and complex communication. To me, that stage of your project
> is the interesting crux of AGI. It should have some very interesting
> emergent behaviour with inputs other than the information SLAM outpu
On Feb 4, 2008 7:38 PM, Bob Mottram <[EMAIL PROTECTED]> wrote:
> Well if you take something like the "talking heads" experiment
> (http://www.isrl.uiuc.edu/~amag/langev/cited2/steelsthetalkingheadsexperiment.html)
> and ask what it would take to scale this up to human-like language
> abilities inev
On 04/02/2008, Joseph Gentle <[EMAIL PROTECTED]> wrote:
> I haven't read any of Steels' stuff lately, either. I'm not sure if any
> of the language he's generating is higher order, but I wouldn't be so
> quick to dismiss emergent language generation as a trick for just
> 5-minute demos.

Well if you