On 6/30/2003, Wei Dai wrote:
> A perfect optimizer who behaves according to decision theory (or some
> bounded-rationality version of it) is very vulnerable to small changes in
> its utility function definition or the module responsible for interpreting
> the meaning of terms in the utility function definition. Such a change,
> say a bit flip caused by cosmic radiation, or the introduction of a new
> philosophical idea, could cause the agent to behave completely counter to
> the designer's intentions.
>
> In the rule-based agent, on the other hand, the utility function
> definition and its interpretation are effectively dispersed throughout the
> set of rules. If the rules are designed with appropriate redundancy, it
> should be much less likely for a catastrophic change in behavior to occur.
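
[Editorial aside, not part of the original message: the bit-flip scenario is quite literal if the utility function's weights are stored as IEEE 754 doubles, since flipping the sign bit alone negates a weight, turning a term the agent maximizes into one it minimizes. A minimal Python sketch; the flip_bit helper and the weight are illustrative, not from the thread:

    import struct

    def flip_bit(x: float, bit: int) -> float:
        # Reinterpret the float's 64-bit pattern, flip one bit, convert back.
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        (flipped,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
        return flipped

    weight = 1.0
    corrupted = flip_bit(weight, 63)  # bit 63 is the IEEE 754 sign bit
    print(corrupted)                  # -1.0: this term is now minimized, not maximized
]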

This seems to me to confuse the decision with how the decision is represented and implemented. There are presumably many ways to disperse a decision process and make it robust to random errors, and some of those ways may be compatible with pretty optimal behavior.
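
[Editorial aside, one way to make that point concrete, my illustration rather than Hanson's: classic triple modular redundancy keeps several copies of the utility parameters and takes a majority (median) vote, so corruption of any single copy cannot change which action the argmax selects. The decision process is dispersed and error-robust, yet the behavior remains exactly that of the original optimizer. A sketch, with all names and weights hypothetical:

    from statistics import median

    # Hypothetical utility parameters, standing in for the "utility
    # function definition" under discussion.
    WEIGHTS = {"resources": 1.0, "risk": -2.0}

    def utility(weights, state):
        # Simple linear utility over named state features.
        return sum(weights[k] * state[k] for k in weights)

    class RedundantAgent:
        """Triple modular redundancy over stored utility parameters:
        a bit flip in any single copy moves at most one vote, the
        median vote is unchanged, so the chosen action is unchanged."""

        def __init__(self, weights, n_copies=3):
            # Independent copies, as if kept in separate memory regions.
            self.copies = [dict(weights) for _ in range(n_copies)]

        def evaluate(self, state):
            return median(utility(w, state) for w in self.copies)

        def choose(self, states):
            # Behaves like a plain optimizer: argmax over evaluations.
            return max(states, key=self.evaluate)

    agent = RedundantAgent(WEIGHTS)
    a1 = {"resources": 5.0, "risk": 1.0}  # intact utility: 3.0
    a2 = {"resources": 2.0, "risk": 0.0}  # intact utility: 2.0
    agent.copies[0]["risk"] = 2.0         # simulate a bit flip in one copy
    assert agent.choose([a1, a2]) == a1   # the median vote masks the corruption
]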



Robin Hanson [EMAIL PROTECTED] http://hanson.gmu.edu
Assistant Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323