On 2/12/07, Jacques Basaldúa <[EMAIL PROTECTED]> wrote:
> Erik van der Werf wrote:
> > And then we get another small question with a
> > dangerous answer...
> 1. What makes a question big or small is not a personal
> preference, but the number of millions of times it happens
> during a game.
Can't take a joke? IMO informed hash key generation is interesting
from an academic viewpoint, and a fun exercise. However, in my
experience anything to do with this subject tends to lead only to
relatively small overall improvements (unless you start with a crappy
implementation), and when the domain-specific optimizations are not
applied with great care and a good understanding of their
consequences, they easily weaken performance. To give an example, if
you optimize your keys for global tree search (using, e.g., the BCH
construction), I would not be surprised if they perform sub-par on
things like opening book construction or shape hashing.
> 1a. Database lookup. Happens very few times (unfortunately,
> I must say, because I like the knowledge-integration
> part and I think it is one of my strong points).
And what distinguishes database lookup from things like transposition
table lookup? Why wouldn't one use database lookup during tree search?
> 1b. Local search + legal move identification (see superko).
> This happens billions of times per hour of go computing.
Forget about using hashes for superko/legal move detection; it's a
non-issue. For this you should only use hashes as a last resort.
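To illustrate the point that superko needs no hashing at all, a minimal sketch: positional superko is just a membership test of the candidate whole-board position against the history of the current variation. The board encoding and function name here are my own illustrative assumptions.

```python
def violates_superko(new_position, history):
    """Positional superko: a move is illegal if it recreates any earlier
    whole-board position. Plain tuple comparison against a list does an
    element-wise equality check -- no hash keys involved."""
    return new_position in history

# Tiny illustration on a 1x3 "board": 0 = empty, 1 = black, 2 = white.
history = [(0, 0, 0), (1, 0, 0), (1, 0, 2)]
assert violates_superko((1, 0, 0), history)       # repeats an old position
assert not violates_superko((1, 1, 0), history)   # genuinely new position
```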
> Therefore, finding optimal solutions for 1a is a small question
> and for 1b it is not a small question at all. Of course, if your
> program is hyper-knowledge driven or does not search, this may
> not apply.
> 2. Using an unnecessarily big hash key slows down the program.
Hardly. IMO, if the hash calculation is a major factor in the
performance of your program, you should seriously consider adding some
knowledge.
In Migos I had 96-bit hashes and the update time was never an
important factor in the performance. Moreover, I could recompute all
relevant symmetrical hashes from scratch whenever needed without
severely hurting the performance.
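The incremental cost being discussed here is indeed tiny: a Zobrist update is one XOR per changed (point, colour) pair, regardless of key width. A minimal sketch with plain random 64-bit keys (the board size and names are my own choices for illustration):

```python
import random

POINTS = 81                     # 9x9 board, chosen only for the example
BLACK, WHITE = 0, 1

rng = random.Random(2007)
# One random 64-bit key per (point, colour).
KEYS = [[rng.getrandbits(64) for _ in range(2)] for _ in range(POINTS)]

def toggle(h, point, colour):
    """Add or remove a stone: XOR is its own inverse, so placement,
    capture, and undo all use the same single operation."""
    return h ^ KEYS[point][colour]

h = 0
h = toggle(h, 40, BLACK)        # place a black stone
h = toggle(h, 41, WHITE)        # place a white stone
h = toggle(h, 41, WHITE)        # capture it again
assert h == toggle(0, 40, BLACK)
```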
> On a 32 bit platform, for using a 64 bit key you have to choose
> between bad (2x32) and worse (FPU 64 bit integers). Due to this
> slowdown, you will fail to complete searches you could have
> done with a smaller hash. Therefore, the program may suffer
> more from code obesity than from collisions. Don't take this
> as if I didn't use 64 bit keys. I do. But only when it is a
And what fraction of your searches is affected by your new
superior-strength hash keys?
> 0 is smaller than 2^-64.
Thanks for sharing that :-)
> 4. The analyses based on uniform distributions are
> fundamentally flawed because move sequences are not
> random in go; in particular, the BWBWBW sequence can only
> be avoided by pass moves, which do not occur in real games
> before the global end of the game.
> And finally a question: Why do you admit that having two
> identical hashes is a flaw (which is obvious), but do not
> admit that having 32 hashes that do not form a basis is one too?
I have no idea what you're talking about here.
> When you have 31 independent hashes, the probability
> that a new key forms a basis is 1/2. If it does, you can
> use this set with error probability = 0; if it doesn't,
> you will get collisions because the new key is not
> independent. It's the same as having two identical keys.
> If the key is dependent, reject it and get another random
> key. That does not change any other statistical properties.
> It could have been a random set; many sets pass as they are,
> others get some correction.
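The accept/reject scheme described above amounts to Gaussian elimination over GF(2): keep the accepted keys in row-echelon form and reject any candidate that reduces to zero. A sketch under that reading (the dict representation and names are my own):

```python
import random

def try_insert(key, basis):
    """basis maps leading-bit position -> accepted key, kept in
    row-echelon form over GF(2). Returns True if the candidate was
    linearly independent of the basis and was added."""
    while key:
        msb = key.bit_length() - 1
        if msb not in basis:
            basis[msb] = key
            return True
        key ^= basis[msb]        # eliminate the leading bit
    return False                 # reduced to zero: dependent, reject

rng = random.Random(0)
basis, rejected = {}, 0
while len(basis) < 32:           # 32 independent keys span all of GF(2)^32
    if not try_insert(rng.getrandbits(32), basis):
        rejected += 1

# Once 31 independent keys are accepted, a fresh random key is dependent
# with probability exactly 2**31 / 2**32 = 1/2, matching the claim above.
assert len(basis) == 32
```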
I'm speculating here because I seem to be missing some context, but
AFAICS almost any Go-specific application will have more legal moves /
hash keys than bits for the hash. Consequently for all practical
purposes you will never be able to set up a complete independent set
of hash keys, and in the rare case where you could do it (e.g., on a
small board) you might just as well store the complete state as a
key.
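The counting argument here is easy to make concrete for 19x19: a Zobrist scheme needs one key per (point, colour), which already exceeds the rank any 64-bit key space can offer.

```python
points, colours, key_bits = 19 * 19, 2, 64
keys_needed = points * colours          # 722 keys for a 19x19 board
# A set of 64-bit keys has rank at most 64 over GF(2), so at most 64 of
# the 722 keys can be mutually independent; dependencies are unavoidable.
assert keys_needed == 722 and keys_needed > key_bits
```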