2008/5/5 Richard Loosemore <[EMAIL PROTECTED]>:
>  I was pointing out that the 'interpreter' (i.e. the programmer) could build
> mechanisms that are only meaningful if the symbols conform to their
> interpretation of what the symbols mean.
>
>  But if the system itself then builds symbols and uses them in such a way
> that those symbols conform to the *system's* idea of what the symbols mean,
> everything gets seriously fubared if the two sets of interpretations don't
> agree (which, in general, they will not).


This seems like a natural outcome.  The types of representation
developed by a whale and a human are likely to be quite different, and
likewise an AGI may not interpret the world in exactly the same way
that a human would, unless its cognitive architecture were identical to
a human's.  Differing representations may have consequences for the
prospect of "friendly AI", but I expect that the goal of successive
generations of AI researchers will be to architect systems that
closely resemble humans in all respects deemed important, rather than
truly transcendent minds.
