--- Mark Waser <[EMAIL PROTECTED]> wrote:
> > The terms "meaning" and "understanding" are not well defined for machines.
> 
> Then rigorously define them for your purposes and stop complaining.  If you 
> have an effective, coherent world model and if you can ground an input in 
> this model then you "understand" that input (i.e. that input has "meaning" 
> relative to your world model).

OK, how about Legg's definition of universal intelligence as a measure of how
well a system "understands" its environment?
http://www.vetta.org/documents/ui_benelearn.pdf
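
For reference, the measure defined there is (roughly, from memory; check the
paper for the precise statement):

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V^{\pi}_{\mu}

where E is the set of computable environments, K(\mu) is the Kolmogorov
complexity of environment \mu, and V^{\pi}_{\mu} is the expected total reward
that agent \pi earns in \mu.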

Of course it is rather impractical to test a system over a Solomonoff-Levin
distribution on an infinite set of environments.  So we are back to defining
"understanding" as something a human does, e.g. the Turing test, the University
of Phoenix test, and so on.
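
To make the impracticality concrete, here is a toy sketch (my own
illustration, not anything from Legg's paper): a Monte Carlo estimate that
stands in for "all computable environments" with trivially simple
bit-prediction patterns, weighted by 2^-length as a crude surrogate for the
Solomonoff-Levin prior.  Every name and the agent below are hypothetical.

import random

def make_environment(pattern):
    """Toy environment: reward 1 when the action matches the pattern at t."""
    def step(t, action):
        return 1.0 if action == pattern[t % len(pattern)] else 0.0
    return step

def trivial_agent(history):
    """Predict the majority bit seen so far (0 on an empty history)."""
    return int(sum(history) * 2 > len(history)) if history else 0

def total_reward(env, steps=32):
    history, total = [], 0.0
    for t in range(steps):
        action = trivial_agent(history)
        reward = env(t, action)
        total += reward
        # the true bit equals the action iff the reward was 1
        history.append(action if reward > 0 else 1 - action)
    return total

def intelligence_estimate(samples=2000, max_len=8):
    """Average reward over sampled environments, weighted by 2^-length
    as a crude stand-in for 2^-K(mu)."""
    num, den = 0.0, 0.0
    for _ in range(samples):
        length = random.randint(1, max_len)
        weight = 2.0 ** -length
        pattern = [random.randint(0, 1) for _ in range(length)]
        num += weight * total_reward(make_environment(pattern))
        den += weight
    return num / den

print(intelligence_estimate())

Even this drastically simplified version scores one fixed agent against one
toy environment class; the real measure ranges over all computable
environments, which is exactly why we fall back on human-relative tests.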

Until you start putting your AGI in the skulls of babies, machines will always
have a world model that differs from a human's, better in some ways but
inferior in others.  In the narrow domain of arithmetic, a calculator's world
model and understanding of numbers are superior to yours.  If you dismiss a
calculator as unintelligent because it doesn't know how many fingers you are
holding up, then we will never have AGI no matter how smart computers get.


-- Matt Mahoney, [EMAIL PROTECTED]
