Hi, I'm trying to move this to the intelligence thread.

On Apr 15, 11:21 pm, Brent Meeker <meeke...@dslextreme.com> wrote:
> I agree with the above and pushing the idea further has led me to the
> conclusion that intelligence is only relative to an environment. If you
> consider Hume's argument that induction cannot be justified - yet it is
> the basis of all our beliefs - you are led to wonder whether humans have
> "general intelligence".  Don't we really just have intelligence in this
> particular world with its regularities and "natural kinds"?  Our
> "general intelligence" allows us to see and manipulate objects - but not
> quantum fields or space-time.

Yeah, I think some no-free-lunch theorems in AI also point to this. I
was thinking about the simple goal problem - what if we gave an AI all
the books in the world and told it to compress them? That could yield
some very complex internal models... but how would it relate them to
the real world? When humans are taught language, they learn to "ground"
the concepts at the same time.
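
Just to make the compression goal concrete, here's a toy Python sketch
(nothing serious - the "models" here are only zlib's compression levels,
standing in for the idea that a model capturing more of the text's
regularities yields a shorter code):

import zlib

# Repetitive toy "corpus"; a better model of its regularities means a
# shorter encoding.
corpus = ("the cat sat on the mat. the dog sat on the mat. "
          "the cat chased the dog. ") * 200
raw = corpus.encode("utf-8")

for level in (1, 6, 9):
    compressed = zlib.compress(raw, level)
    print(f"level {level}: {len(raw)} -> {len(compressed)} bytes")

Note that nothing in there ties "cat" or "mat" to anything outside the
text itself - that's the grounding gap I mean.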

That leads me to believe that AIs will in practice need special
training programs in which they proceed from simple problems to more
complex ones (this is called shaping), much like humans do, while
staying "grounded" from the start. It's a really interesting race:
which will arrive first, brain digitization or strong AI? My money's
on the former right now, because I believe engineering those training
programs is a big task.
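
By "shaping" I mean something like this toy curriculum loop (the
"learner" is just a stub - the point is the easy-to-hard ordering of
tasks, not the learning algorithm):

def accuracy(skill, difficulty):
    # Hypothetical stand-in: the learner solves a task once its skill
    # exceeds the task's difficulty.
    return 1.0 if skill >= difficulty else 0.0

def train_with_shaping(stages, passes_per_stage=10, threshold=0.9):
    skill = 0.0
    for difficulty in sorted(stages):      # easy problems first
        for _ in range(passes_per_stage):
            if accuracy(skill, difficulty) >= threshold:
                break
            skill += 0.1                   # pretend the learner improves
        print(f"stage {difficulty:.1f}: skill now {skill:.1f}")
    return skill

train_with_shaping(stages=[0.2, 0.5, 0.9])

Replacing that stub with a real learner, and designing real task
generators for every stage, is where I expect most of the engineering
effort to go.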

Anybody think strong AI is inherently much easier? I'd very much like
to be proven wrong because I think early brain digitization will
likely lead to digital exploitation.
