----- Original Message ----
From: Richard Loosemore <[EMAIL PROTECTED]>
To: [email protected]
Sent: Friday, April 11, 2008 3:06:21 PM
Subject: Re: [agi] Blog essay on the complex systems problem

Richard Loosemore wrote:

[snip]
I would not say that your hierarchical control structure is doomed to 
failure because I do not yet know how close it is to what we understand 
of the human cognitive system.  The reason for saying that is that, in 
the end, the conclusion of my argument is that we must stay quite close 
to the human system, and adopt a methodology that supports a certain 
kind of "agnostic" exploration of different types of system.  In that 
context it may be that your HCS is quite close to the system that works in 
human cognition, and in that case there would be nothing wrong with your 
choice.


Whew...

The only thing I would say is to watch out for dependencies between your 
HCS and other aspects of your system.  If the HCS requires a strictly 
serial evaluation of goals that are explicitly represented using the 
same knowledge representation scheme as is used for regular declarative 
knowledge, for example, I would counsel caution, because I believe that 
this design runs into trouble.


Hmm, I'm not sure whether Texai will perform strictly serial evaluation of 
goals: an HCS naturally lends itself to a hierarchical task network in which 
higher-level tasks can be planned symbolically, and in which lower-level 
tasks are simply performed reactively (e.g. a subsumption architecture).  There 
are many opportunities for parallelism in an HCS.  With the proviso that 
Texai goal-achieving tasks will have associated utilities derived from Bayesian 
inference, they will indeed be explicitly represented using the same knowledge 
representation scheme as is used for regular declarative knowledge.  The Texai 
KR scheme is based upon OpenCyc and will be elaborated to represent skills as 
procedures.
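To make the idea concrete, here is a minimal sketch of a hierarchical task 
network in which compound tasks are decomposed symbolically, leaf tasks run 
reactively, and competing decompositions are ranked by expected utility (in 
Texai these utilities would come from Bayesian inference; here they are fixed 
numbers).  All names are hypothetical illustrations, not actual Texai code:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Method:
    """One way to decompose a compound task into subtasks."""
    subtasks: List[str]
    expected_utility: float  # assumed derived from Bayesian inference

@dataclass
class HTNPlanner:
    methods: Dict[str, List[Method]] = field(default_factory=dict)
    primitives: Dict[str, Callable[[], str]] = field(default_factory=dict)

    def plan(self, task: str) -> List[str]:
        """Recursively expand a task into a sequence of primitive actions."""
        if task in self.primitives:
            return [task]
        # Choose the applicable decomposition with the highest expected utility.
        best = max(self.methods[task], key=lambda m: m.expected_utility)
        plan: List[str] = []
        for sub in best.subtasks:
            plan.extend(self.plan(sub))
        return plan

    def execute(self, task: str) -> List[str]:
        """Plan symbolically at the top, then perform each leaf reactively."""
        return [self.primitives[step]() for step in self.plan(task)]

planner = HTNPlanner(
    methods={
        "fetch coffee": [
            Method(["go to kitchen", "pour coffee", "return"], 0.9),
            Method(["order delivery"], 0.4),
        ],
    },
    primitives={
        "go to kitchen": lambda: "moved to kitchen",
        "pour coffee": lambda: "poured coffee",
        "return": lambda: "returned",
        "order delivery": lambda: "ordered delivery",
    },
)
print(planner.execute("fetch coffee"))
# → ['moved to kitchen', 'poured coffee', 'returned']
```

Note that each compound task's expansion is independent of its siblings', 
which is one of the opportunities for parallelism mentioned above.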

As for the organizational perspective, I am not quite sure which point 
you were addressing with that.


Perhaps my point is clarified if you can imagine that a multitude of 
unorganized human beings is a complex system, as you define it.  However, when 
organized, these same humans can perform in a scalable, understandable, 
justifiable, and predictable manner.  The Texai architecture aspires not only 
to be cognitively plausible with respect to a single human mind, but also to be 
organizationally plausible with respect to a vast number of Texai instances 
acting in concert.

And your question about the driverless cars architecture... you seem to 
be suggesting that this might be a "partitioning and scaling solution 
to AGI complexity".  That choice of words has got me worried about a 
possible misunderstanding, because you might have been implying that the 
complex systems problem I have described was all about partitioning the 
AGI problem to reduce its "complicatedness" ... and that interpretation 
would be not where I was going with it at all!


Sorry, I considered your graph illustration of the problem, and I attempted to 
provide evidence that my solution has been field-tested in a robotics 
application on the path to AGI.

-Steve
 
Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860
