Thanks, I'll check out that text.

Chris

On 1/8/07, belinda thom <[EMAIL PROTECTED]> wrote:
I'm doing something similar right now (although I'm not at present
using conx).

I used the algorithm Tom Mitchell suggests at the end of his 1st
chapter in Machine Learning (a textbook).

Mitchell assumes a linear value function there, but I don't
believe the method is any different for non-linear cases.

Updates are easily done for _each_ play in the game as follows: your
current training estimate of the value of a state is compared against
the value the function estimates for the "best" (in a minimax sense)
next state (i.e., the state from which the player will next move), and
the weights are nudged toward that target. Check out his text; it's
pretty clear.
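In case it helps, here's a minimal sketch (plain Python, not conx) of the
scheme as I understand it from Mitchell's chapter 1: a linear value
function over board features, trained with the LMS rule, using the
estimated value of the best successor state as the training target. The
feature vector and learning rate here are hypothetical placeholders.

```python
def v_hat(weights, features):
    """Linear value estimate: w0 + w1*x1 + ... + wn*xn (w0 is a bias)."""
    return weights[0] + sum(w * x for w, x in zip(weights[1:], features))

def lms_update(weights, features, target, eta=0.1):
    """LMS rule: w_i <- w_i + eta * (target - v_hat(state)) * x_i.

    During self-play, `target` would be v_hat of the best (minimax)
    next state, per the bootstrapping scheme described above.
    """
    error = target - v_hat(weights, features)
    new = [weights[0] + eta * error]  # bias term uses x0 = 1
    new += [w + eta * error * x for w, x in zip(weights[1:], features)]
    return new
```

So after each play you'd call `lms_update` with the current state's
features and the estimated value of the chosen successor as `target`.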

HTH,
--b
_______________________________________________
Pyro-users mailing list
[email protected]
http://emergent.brynmawr.edu/mailman/listinfo/pyro-users
