Mark Waser writes:
Intelligence is only as good as your model of the world and what it allows
you to do (which is pretty much a paraphrase of Legg's definition, as far
as I'm concerned).
Since Legg's definition is quite explicitly careful not to say anything
at all about the internal structure of an agent, this is an interesting
statement, and I'm curious how you derive this equivalence.
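For reference (and as I recall it, so check the original paper), Legg and Hutter's measure is defined entirely in terms of an agent's expected performance across environments, with no reference to how the agent works inside:

```latex
% Universal intelligence of agent \pi (Legg & Hutter):
% a complexity-weighted sum of expected reward over all
% computable environments \mu in the class E.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Here K(mu) is the Kolmogorov complexity of the environment and V is the agent's expected cumulative reward in it. Nothing in that sum mentions a "model of the world" — the agent is a black box scored on behavior — which is exactly why the claimed equivalence needs an argument.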
I assume you have something in mind for "model of the world" that isn't
so trivial that every possible approach to AGI has one basically by
definition. If it is that trivial, it's not really worth talking about;
if it isn't, how can you tell whether an agent has a model of the world?
-----
This list is sponsored by AGIRI: http://www.agiri.org/email