On 4/16/2010 3:16 AM, Skeletori wrote:
> Hi, I'm trying to move this to the intelligence thread.
>
> On Apr 15, 11:21 pm, Brent Meeker<meeke...@dslextreme.com> wrote:
>> I agree with the above and pushing the idea further has led me to the
>> conclusion that intelligence is only relative to an environment. If you
>> consider Hume's argument that induction cannot be justified - yet it is
>> the basis of all our beliefs - you are led to wonder whether humans have
>> "general intelligence". Don't we really just have intelligence in this
>> particular world with its regularities and "natural kinds"? Our
>> "general intelligence" allows us to see and manipulate objects - but not
>> quantum fields or space-time.
> Yeah, I think some no-free-lunch theorems in AI also point to this. I
> was thinking about the simple goal problem - what if we gave an AI all
> the books in the world and told it to compress them? That could yield
> some very complex internal models... but how would it relate them to
> the real world? When humans are taught language they learn to "ground"
> the concepts at the same time.
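The compression-builds-models idea can be seen even in a toy setting. Below is a minimal sketch using zlib as a stand-in compressor (the texts are made up for illustration): text with regularities compresses far better than random bytes, because compression succeeds exactly where the compressor's internal model has captured structure in the data.

```python
import os
import zlib

# Illustrative data: repetitive English-like text vs. uniformly
# random bytes of the same length.
structured = b"the cat sat on the mat. " * 100
random_bytes = os.urandom(len(structured))

# Compression ratio = compressed size / original size.
ratio_structured = len(zlib.compress(structured)) / len(structured)
ratio_random = len(zlib.compress(random_bytes)) / len(random_bytes)

print(ratio_structured)  # well below 1: regularities were modeled
print(ratio_random)      # about 1: nothing to model
```

The gap between the two ratios is the "internal model" at work - though, as noted above, nothing in the compressed books ties that model to the world outside the text.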
I think intelligence in the context of a particular world requires
acting within that world. Humans learn language starting with ostensive
definition: (pointing) "There that's a chair. Sit in it. That's what
it's for. Move it where you want to sit." An AI given all the books in
the world might well learn something, but it would have a different
kind of intelligence than a human's, because it developed and functions
in a different context. For an AI to develop human-like intelligence, I
think it would need to be in something like a robot - something capable
of acting in the world.
> That leads me to believe that AIs will in practice need special
> training programs where they proceed from simple problems to more
> complex ones (this is called shaping), much like humans, while staying
> "grounded" from the start. It's a really interesting race: which will
> arrive first, brain digitization or strong AI? My money's on the
> former right now because I believe the engineering of the training
> programs is a big task.
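The shaping point can be sketched as a toy model (everything here is illustrative - a made-up learner, not any real training framework): suppose the learner can only master a task at most one step above its current skill. Then the order of presentation decides whether it ever reaches the hard tasks.

```python
# Toy model of shaping (curriculum learning). Assumption: a task of
# difficulty d is solvable only when d <= current skill + 1, and
# solving it raises skill to d.

def train(skill, tasks):
    """Attempt tasks in the given order; skill rises only on solvable tasks."""
    for difficulty in tasks:
        if difficulty <= skill + 1:
            skill = max(skill, difficulty)
    return skill

shaped = [1, 2, 3, 4, 5]    # simple to complex
unshaped = [5, 4, 3, 2, 1]  # hardest first

print(train(0, shaped))    # 5: full mastery via shaping
print(train(0, unshaped))  # 1: stalls on the hard tasks
```

The shaped ordering reaches difficulty 5; the unshaped one never gets past the first solvable task. Engineering such curricula for a real AI is exactly the "big task" mentioned above.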
>
> Anybody think strong AI is inherently much easier? I'd very much like
> to be proven wrong because I think early brain digitization will
> likely lead to digital exploitation.
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.