I gave the first Baum article a quick read-through. His claims are extraordinary, but they seem valid.


Some unformed thoughts:

One of the first things that struck me was a concern that this method is so thoroughly grounded in rationality. A real AGI will surely need to tolerate pockets of irrationality, at least on some time scales, to find truly optimal and creative solutions.

This economically based system is so good at finding and exploiting loopholes that, if implemented as a subsystem of an AGI, it might end up exploiting loopholes and pockets of irrationality in the rest of the AGI instead of finding a good solution.
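
For concreteness, here is a minimal sketch of the kind of agent economy I take Baum to be describing. The class names, the bidding rule, and the numbers are my own simplification rather than his actual system; the point is just that money is conserved, so an agent can only profit by improving what it resells.

import random

class Agent:
    """A Baum-style agent: it holds wealth, bids for control of the current
    state, and (if it wins) transforms that state before reselling it."""

    def __init__(self, name):
        self.name = name
        self.wealth = 10.0

    def bid(self, state):
        # Placeholder valuation: a real agent would estimate what the state
        # will resell for after it applies its own transformation.
        return random.uniform(0.0, min(self.wealth, 1.0))

    def act(self, state):
        # Placeholder transformation: a real agent would actually change the state.
        return state

def auction_round(agents, state, previous_owner):
    """One round: the highest bidder buys the state from the previous owner.

    Every payment comes out of some agent's wealth, so money is conserved;
    an agent can only profit in the long run by reselling the state for more
    than it paid, i.e. by genuinely improving it."""
    bids = {agent: agent.bid(state) for agent in agents}
    winner = max(bids, key=bids.get)
    price = bids[winner]
    winner.wealth -= price
    if previous_owner is not None:
        previous_owner.wealth += price
    return winner, winner.act(state)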

On further thought, this could be a valuable tool for long-term maintenance of the AGI. Baum-style narrow AIs could be used as troubleshooters that comb the AGI's other subsystems and identify pockets of leakage in its decision making (cognitive dissonance?). Higher-level agents could then monitor the activity of the Baum modules and bring focus to the loopholes they uncover (some of which might be useful to the AGI, and others not).
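
Reusing the Agent and auction_round sketch above, the troubleshooting idea might look roughly like this. reward_fn stands in for whatever signal the audited subsystem already emits, and audit_subsystem and the probe list are hypothetical names of mine, not anything from Baum's papers.

def audit_subsystem(reward_fn, probes, start_state, rounds=1000):
    """Let Baum-style probe agents trade against one subsystem's reward signal.

    A probe whose wealth keeps growing has found a way to get paid; if the
    subsystem's real output isn't improving at the same time, that is a pocket
    of leakage for a higher-level monitor to look at."""
    owner, state = None, start_state
    for _ in range(rounds):
        owner, state = auction_round(probes, state, owner)
        owner.wealth += reward_fn(state)   # the subsystem pays for the result
    # Richest probes first: the ones most worth a closer look.
    return sorted(probes, key=lambda p: p.wealth, reverse=True)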

-Brad



