On 5/18/07, Rémi Coulom <[EMAIL PROTECTED]> wrote:
My idea was very similar to what you describe. The program built a
collection of rules of the kind "if condition then move". The condition
could be anything from a "tree-search rule" of the kind "in this
particular position, play x" to a general rule such as "in atari,
extend". It could also be anything in between, such as a miai specific
to the current position. The strengths of moves were updated with an
incremental Elo-rating algorithm, from the outcomes of random
simulations.
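If I follow, each rule might be represented roughly like this (my own
guess in Python, with made-up names, not your actual code):

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    # "if condition then move": condition tests the position,
    # move gives the suggested play when the condition matches.
    condition: Callable   # e.g. "this exact position", "stone in atari", a miai shape
    move: Callable        # returns the move to play in a matching position
    weight: float = 1.0   # Elo-like strength, updated from playout outcomes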
The obvious way to update weights is to reward all the rules that
fired for the winning side and penalize all the rules that fired for
the losing side, with rewards and penalties decaying toward the end
of the playout. But this is not quite Elo-like, since it doesn't
treat rules as competing against each other. So one could make the
reward depend on the relative weight of the chosen rule versus all
its alternatives, increasing the reward when the alternatives carried
a lot of weight.
Is that how your ratings worked?
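Something like this sketch, in Python, is what I have in mind (just an
illustration with made-up names and constants, not a claim about your
implementation):

def update_rule_weights(weights, playout_choices, winner,
                        alpha=0.01, decay=0.95):
    # weights: dict mapping rule -> current weight
    # playout_choices: list of (side, chosen_rule, alternative_rules),
    #                  in the order the moves were played in the simulation
    # winner: the side that won the random simulation
    scale = 1.0
    for side, chosen, alternatives in playout_choices:
        # share of the weight the chosen rule held against its alternatives
        total = weights[chosen] + sum(weights[a] for a in alternatives)
        expected = weights[chosen] / total
        actual = 1.0 if side == winner else 0.0
        # (actual - expected) is larger when heavy alternatives were beaten
        weights[chosen] += alpha * scale * (actual - expected)
        scale *= decay  # rewards and penalties decay toward the end of the playout
    return weights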
I'm not sure how that compares with TD learning. Maybe someone more
familiar with the latter can point out the differences.
TD learning (with linear function approximation) uses a gradient
descent rule to update weights. The simplest gradient descent rule,
LMS or Widrow-Hoff, does something like what you describe: rules that are
followed by positive reward (win) are increased in weight, and rules
that are followed by negative reward (loss) are decreased. The exact
update depends on the set of rules firing, and is proportional to the
error between the estimated reward (based on all rules) and the
actual reward. In other words, each weight is updated a little
towards the value which would have made a correct overall prediction.
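As a concrete sketch (Python, my own notation): with one binary
feature per rule, the Widrow-Hoff update is just

def lms_update(w, x, reward, alpha=0.01):
    # w: weights, x: features (1.0 if the rule fired, else 0.0),
    # reward: 1 for a win, 0 for a loss
    prediction = sum(wi * xi for wi, xi in zip(w, x))
    error = reward - prediction   # one shared error term drives every weight
    return [wi + alpha * error * xi for wi, xi in zip(w, x)]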
TD learning is similar, except that it updates weights towards a
subsequent prediction of the reward (e.g. on the next move), instead
of the actual reward. Rich Sutton gives a much better explanation
than I can: http://www.cs.ualberta.ca/%7Esutton/book/ebook/the-book.html
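For comparison, the corresponding TD(0) sketch with the same linear
form (again only illustrative, my parameter names):

def td0_update(w, x_now, x_next, reward,
               alpha=0.01, gamma=1.0, terminal=False):
    # x_now / x_next: feature vectors of the current and next position
    # reward: 0 until the game ends, then 1 for a win, 0 for a loss
    v_now = sum(wi * xi for wi, xi in zip(w, x_now))
    v_next = 0.0 if terminal else sum(wi * xi for wi, xi in zip(w, x_next))
    # the target is the next prediction, not the final outcome
    td_error = reward + gamma * v_next - v_now
    return [wi + alpha * td_error * xi for wi, xi in zip(w, x_now)]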