J. Storrs Hall, PhD. wrote:
On Wednesday 02 May 2007 15:08, Charles D Hixson wrote:
Mark Waser wrote:
... Machines will know
the meaning of text (i.e. understand it) when they have a coherent
world model that they ground their usage of text in.
...
But note that in this case "world model" is not a model of the same
world that you have a model of.
After reading the foregoing discussions of subjects such as intelligence,
language, meaning, etc., it is quite clear to me that the various members of
this list do not have models of the same world. This is entirely appropriate:
consider each of us as a unit in a giant GA search for useful ways of
thinking about reality...
Josh
Well, that's true. E.g., when I was 3 I had one eye patched for 3 months
in a vain attempt to cure amblyopia. This left me relatively
detached from visual imagery, and more attached to kinesthetic imagery.
But still, all normal people have a world model in which, when their eyes
are covered, they can't see, and in which the eyes cannot be removed and
then replaced. So the differences between the world models of normal
humans are relatively small compared with the differences we should expect
in the world models that AGIs will learn. That will hold even for AGIs
raised with the intention that they undergo an approximately normal maturation.
The attempt is essentially futile. Humans will come to resemble AGIs
before AGIs come to resemble people. (Admittedly, though, the AGIs that
people eventually come to resemble won't bear much resemblance to the
early-model AGIs.)