On Fri, Jan 23, 2009 at 7:29 PM, Graham Toal <[email protected]> wrote:
> On Fri, Jan 23, 2009 at 6:06 PM, Eugene Deon <[email protected]> wrote:
>> I've been trying to tune my leave estimation strategy by solving for the
>> exact values of my commonly considered estimation variables so as to
>> minimize the sum of squared errors between my strategy's estimates and
>> all the leaves in Quackle's "superleaves" file.
>>
>> So far my variables include only:
>>
>> -single letter values
>>
>> -double/triple/quad-letter penalties
>>
>> -vowel/consonant imbalance penalties
>>
>> -bonuses for number of tiles in CANISTER
>>
>> -a few letter combination values (QU, YY, IY, FF, ING)
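[For readers following along: the tuning Eugene describes is an ordinary linear
least-squares fit of feature weights to the superleaves values. A minimal
sketch, where the toy two-feature model (vowel and consonant counts) and the
function names are assumptions, not Eugene's actual feature set:]

```python
from collections import Counter

VOWELS = set("AEIOU")

def features(leave):
    # Hypothetical two-feature model for a leave (a string of tiles):
    # number of vowels and number of consonants held.
    counts = Counter(leave)
    v = sum(n for tile, n in counts.items() if tile in VOWELS)
    return (v, len(leave) - v)

def fit_weights(leaves, values):
    # Solve the 2x2 normal equations (X^T X) w = X^T y exactly,
    # minimizing the sum of squared errors between w . features(leave)
    # and the target leave values (e.g. superleaves entries).
    a = b = c = e = f = 0.0
    for leave, y in zip(leaves, values):
        x0, x1 = features(leave)
        a += x0 * x0
        b += x0 * x1
        c += x1 * x1
        e += x0 * y
        f += x1 * y
    det = a * c - b * b
    return ((e * c - f * b) / det, (a * f - b * e) / det)
```

[A real tuner would use many more features, but the fit itself stays linear as
long as each variable enters the estimate additively.]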
>
> Why all the special cases? The value of a leave is how much it
> contributes to the next play.
>
> So... enumerate every possible play that can be made next with the
> unseen tiles and the leave. (This sounds like a lot of computation
> but it isn't)
>
> For each word, calculate the probability of drawing the tiles you need
> to make the play given the tiles you are holding (ie the leave). (I
> have the code that enumerates that probability function)
>
> Do this for other leaves, and compare. Special cases like letter pair
> values etc fall out in the wash; in fact, it works for larger leaves
> the same way (and faster, as the play choices are more limited)
>
> Graham
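[Graham says he has code for this probability function; the sketch below is
not it, just a hypothetical brute-force version to make the idea concrete.
The function name and the toy pools in the usage note are assumptions:]

```python
from itertools import combinations
from collections import Counter

def draw_probability(needed, unseen, draw_size):
    # Exact probability that a random draw of draw_size tiles from the
    # unseen pool contains the multiset of tiles in `needed` (the tiles
    # required, beyond the leave, to make a given play).
    # Brute force over all draws: fine for toy pools, but a real engine
    # would use a counting formula instead, since the full Scrabble pool
    # gives on the order of C(86, 7) draws, far too many to enumerate.
    need = Counter(needed)
    total = hits = 0
    for draw in combinations(unseen, draw_size):
        total += 1
        have = Counter(draw)
        if all(have[t] >= n for t, n in need.items()):
            hits += 1
    return hits / total
```

[For example, with an unseen pool of "AABBC" and a two-tile draw, needing a
single A gives 7 of the 10 possible draws, i.e. probability 0.7.]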

Graham,

This approach would effectively turn an N ply simulation into an N+1
ply simulation, except that the last ply would be exhaustive instead
of random.  I cannot believe that could be nearly as efficient as a
static rack evaluation at the leave nodes of a simulation.

Steve
