On 24/03/2008, Jim Bromer <[EMAIL PROTECTED]> wrote:
>
>
>
> To try to understand what I am talking about, start by imagining a
> simulation of some physical operation, like part of a complex factory in a
> Sim City kind of game.  In this kind of high-level model no one would ever
> imagine that all of the objects should interact in one stereotypical way;
> different objects would interact with other objects in different kinds of
> ways.  And no one would imagine that the machines that operated on other
> objects in the simulation were not also objects in their own right.  For
> instance, the machines used in production might require the use of other
> machines to fix or enhance them.  And the machines might produce or
> operate on objects that were themselves machines.  When you think about a
> simulation of a complicated physical system it becomes very obvious that
> different kinds of objects can have different effects on other objects.  And
> yet, when it comes to AI, people go on and on about systems that totally
> disregard this seemingly obvious divergence of effect that is so typical of
> nature.  Instead, most theories treat insight as if it could be funneled
> through some narrow rational system, or through other less rational field
> operations, in which the objects of the operations are seen only as the
> passive targets of the program's pre-defined operations.
>


How would this differ from the sorts of computational systems I have been
muttering about, where you have an architecture in which an active bit of
code or program is equivalent to an object in the paragraph above? Also have
a look at Eurisko by Doug Lenat.
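
A minimal sketch of that kind of architecture, in which a machine is just
another simulation object, so machines can build, repair, and wear down
other machines, and the same operation has different effects on different
kinds of objects. The names (SimObject, Machine, operate, produce) are
illustrative assumptions only; they are not taken from Bromer's description
or from Eurisko.

# Sketch: machines are ordinary simulation objects, so other machines
# can build, repair, or enhance them, and one operation can have
# different effects on different kinds of objects.

class SimObject:
    """Anything in the simulation: a part, a product, or a machine."""
    def __init__(self, name, wear=0.0):
        self.name = name
        self.wear = wear


class Machine(SimObject):
    """A machine is itself a SimObject, so machines can act on machines."""
    def __init__(self, name, builds=None):
        super().__init__(name)
        self.builds = builds  # class of object this machine produces

    def operate(self, target):
        # Divergence of effect: the same call does different things
        # depending on what kind of object it is applied to.
        if isinstance(target, Machine):
            target.wear = max(0.0, target.wear - 0.5)  # maintain another machine
        else:
            target.wear += 0.1                          # processing wears a plain part
        self.wear += 0.05                               # operating wears the operator too

    def produce(self):
        # A machine's output can itself be a machine.
        cls = self.builds or SimObject
        return cls(self.name + "-output")


# A lathe that builds repair robots, which in turn maintain the lathe.
lathe = Machine("lathe", builds=Machine)
robot = lathe.produce()            # the product is itself a machine
lathe.wear = 0.8
robot.operate(lathe)               # machine acting on machine: wear drops
robot.operate(SimObject("blank"))  # same operation, different effect on a plain part

The point of the sketch is only that the operators are themselves ordinary
objects in the simulation, so they can be produced, repaired, or operated on
like anything else, rather than sitting outside the system as fixed,
pre-defined operations.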

   Will Pearson
