Thursday, December 26, 2002, 4:44:25 PM, Alan Grimes wrote:

AG> A human level intelligence requires arbitrary access to
AG> visual/phonetic/other "faculties" in order to be intelligent.

In order to communicate intelligently and intelligibly with us, yes.
In order to _be_ intelligent, no.

AG> A system that either lacks these or tries to fake them in some way is
AG> noticeably weaker than a general intelligence.

Noticeably weaker at communication with us, sure.  That's pretty
obvious.  And obviously, having a full AGI with all our faculties and
more would be nice.  So would $10 million and a pony.

However, in designing apps for an early-stage AGI system -- one that
is good, for instance, at optimizing code, or designing aircraft, or
ferreting out desired connections between data; but not (yet) good at
human level use of language -- using some shortcuts to ease
communication is IMO perfectly legitimate.  Think of it as a UI issue.
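
To make the "UI issue" point concrete, here is a minimal sketch of the kind of
shortcut I have in mind: a pattern-matching front end that accepts a small,
constrained command language and hands structured task requests to whatever
engine sits underneath.  The names here (COMMAND_PATTERNS, parse_command,
submit_task) and the backend stub are purely hypothetical, just to show the
shape of the scaffolding, not any real system's API.

# A "trick NLP" UI shim: map constrained English-like commands onto
# structured task requests for a (hypothetical) AGI backend.
import re
from typing import Optional

# Each pattern captures the parameters the hypothetical engine would need.
COMMAND_PATTERNS = [
    (re.compile(r"optimi[sz]e (?P<target>\S+) for (?P<goal>.+)", re.I),
     "optimize_code"),
    (re.compile(r"find connections between (?P<a>.+) and (?P<b>.+)", re.I),
     "relate_data"),
]

def parse_command(text: str) -> Optional[dict]:
    """Translate one constrained-English command into a task dict."""
    for pattern, task_name in COMMAND_PATTERNS:
        match = pattern.match(text.strip())
        if match:
            return {"task": task_name, "args": match.groupdict()}
    return None  # outside the scaffolding's tiny grammar

def submit_task(task: dict) -> str:
    """Stub standing in for the engine; a real system does the actual work."""
    return f"queued {task['task']} with {task['args']}"

if __name__ == "__main__":
    cmd = "optimize matmul.c for cache locality"
    task = parse_command(cmd)
    print(submit_task(task) if task else "not understood")

The point is that the front end is trivially replaceable: once the system
acquires real language ability, you throw the pattern table away and nothing
downstream changes.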

One way of looking at this is to acknowledge that any method by which
one would "teach" or "evolve" or "grow" an AGI to the point where it
_does_ have human level language capability will inevitably involve
_some_ form of communication.  Whether it's presenting examples to a
neural net, making entries in a knowledge base, or writing code, we
_must_ of necessity go through a process of communicating with the
computer at a less-than-human level until then.  I understand your desire
for a certain purity, but it is IMO misguided.  If we choose to use
some "trick" NLP methods in AGI development, that is scaffolding that
can be dumped when "true," innate NLP is in place.

--
Cliff
