> OK, how about Legg's definition of universal intelligence as a measure of
> how a system "understands" its environment?
OK. What purpose do you wish to use Legg's definition for? You immediately
discard it below . . . .
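For reference, the measure in that paper is (roughly, in my own notation):

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi

i.e. sum the agent \pi's expected cumulative reward V_\mu^\pi over every
computable environment \mu in E, weighting each environment by 2^{-K(\mu)},
where K is Kolmogorov complexity. That weighting is the Solomonoff-Levin
distribution you mention next, and it is exactly what makes the test
impractical: K is uncomputable and E is infinite.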
> Of course it is rather impractical to test a system in a Solomonoff-Levin
> distribution over an infinite set of environments. So we are back to
> defining "understanding" as something a human does, e.g. the Turing test,
> University of Phoenix test, and so on.
I completely disagree. Why does a failure of one test method -- which you
suggested -- mean that we are immediately forced into your pet definition
scheme?
> Until you start putting your AGI in the skulls of babies, machines will
> always have a world model that differs from a human's, better in some ways
> but inferior in others.
True. And 100% irrelevant.
> In the narrow domain of arithmetic, a calculator's world model and
> understanding of numbers is superior to yours.
No. As I said in the last e-mail, "Understanding also means that you can
build upon your model. A calculator can't grow."
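To make that concrete, here is a toy sketch (hypothetical classes, not a
proposal for any real system) of what I mean by a model that can grow
versus one that can't:

# Toy sketch only; "Calculator" and "GrowingModel" are made-up names.

class Calculator:
    """Fixed world model: ships with its operations and never gains more."""
    def apply(self, op, a, b):
        ops = {"+": a + b, "-": a - b, "*": a * b}
        return ops[op]

class GrowingModel:
    """Can build on its model: new concepts are defined via existing ones."""
    def __init__(self):
        self.ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

    def apply(self, op, a, b):
        return self.ops[op](a, b)

    def learn(self, name, definition):
        # A learned concept is just another operation, composed from old ones.
        self.ops[name] = definition

model = GrowingModel()
# "square" is built on top of the multiplication the model already had.
model.learn("square", lambda a, b: model.apply("*", a, a))
print(model.apply("square", 7, None))  # 49

The calculator's arithmetic is as good as it will ever be; the second
system's model of numbers can keep deepening. That is the difference I am
calling "understanding."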
> If you dismiss a calculator as unintelligent because it doesn't know how
> many fingers you are holding up, then we will never have AGI no matter how
> smart computers get.
Huh? Why? How does your "if" clause possibly lead to your "then" clause?
----- Original Message -----
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Wednesday, May 02, 2007 10:54 AM
Subject: Re: [agi] rule-based NL system
--- Mark Waser <[EMAIL PROTECTED]> wrote:
> The terms "meaning" and "understanding" are not well defined for
> machines.
Then rigorously define them for your purposes and stop complaining. If you
have an effective, coherent world model and if you can ground an input in
this model, then you "understand" that input (i.e. that input has "meaning"
relative to your world model).
OK, how about Legg's definition of universal intelligence as a measure of
how a system "understands" its environment?
http://www.vetta.org/documents/ui_benelearn.pdf
Of course it is rather impractical to test a system in a Solomonoff-Levin
distribution over an infinite set of environments. So we are back to
defining "understanding" as something a human does, e.g. the Turing test,
University of Phoenix test, and so on.
Until you start putting your AGI in the skulls of babies, machines will
always have a world model that differs from a human's, better in some ways
but inferior in others. In the narrow domain of arithmetic, a calculator's
world model and understanding of numbers is superior to yours. If you
dismiss a calculator as unintelligent because it doesn't know how many
fingers you are holding up, then we will never have AGI no matter how smart
computers get.
-- Matt Mahoney, [EMAIL PROTECTED]
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936