Erik van der Werf wrote:
> And then we get another small questions with a
1. What makes a question big or small is not a matter of personal
preference, but the number of millions of times it happens
during a game.
1a. Database lookup. Happens very few times. (Unfortunately,
I must say, because I like the knowledge integration
part and I think it is one of my strong points.)
1b. Local search + legal move identification (see superko).
This happens billions of times per hour of go computing.
Therefore, finding optimal solutions for 1a is a small question,
and for 1b it is not a small question at all. Of course, if your
program is hyper-knowledge-driven or does not search, this may
not apply.
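To make 1b concrete, here is a minimal sketch of the inner-loop operation at stake: a Zobrist-style position hash, updated incrementally with a single XOR per stone, as used for superko detection. The board size, seed, and 32-bit key width are illustrative choices, not anything specified in the post.

```python
import random

random.seed(1)

CELLS = 19 * 19          # one entry per board point (illustrative size)
COLORS = 2               # 0 = black, 1 = white

# One random key per (point, color) pair; 32-bit, per the discussion.
ZOBRIST = [[random.getrandbits(32) for _ in range(COLORS)]
           for _ in range(CELLS)]

def toggle(h, cell, color):
    """Place or remove a stone: either way, the update is one XOR."""
    return h ^ ZOBRIST[cell][color]

# Superko detection: remember the hash of every position seen so far.
history = {0}            # empty board
h = toggle(0, 72, 0)     # a black stone appears on point 72
is_repetition = h in history
history.add(h)
```

Because XOR is its own inverse and is order-independent, capturing a stone undoes its key, and the same position reaches the same hash regardless of move order. This one-XOR update is what runs billions of times during search.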
2. Using an unnecessarily big hash key slows down the program.
On a 32-bit platform, to use a 64-bit key you have to choose
between bad (2x32) and worse (64-bit integers in the FPU). Due to
this slowdown, you will fail to complete searches you could have
done with a smaller hash. Therefore, the program may suffer
more from code obesity than from collisions. Don't take this
as if I didn't use 64-bit keys. I do, but only when it is
necessary.
3. In the cases where a 32-bit key is enough, such as a local
search with up to 32 stones or detecting superko, it works with
error = 0, and is therefore better than any hash size: 0 is smaller
than any nonzero collision probability.
4. Analyses based on uniform distributions are
fundamentally flawed, because move sequences in go are not
random; in particular, the BWBWBW sequence can only
be avoided by pass moves, which do not occur in real games
globally before the end of the game.
And finally a question: Why do you admit that having two
identical hashes is a flaw (which is obvious), but do not
admit that having 32 hashes which do not form a basis is one too?
When you have 31 independent hashes, the probability
that a new random key completes a basis is 1/2. If it does, you can
use this set with error probability = 0; if it doesn't,
you will get collisions, because the new key is not
independent. It's the same as having two identical keys.
If the key is dependent, reject it and draw another random
key. That does not change any other statistical properties.
It could have been a random set: many sets pass as they are,
others get some correction.
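The rejection step above can be sketched with a standard GF(2) Gaussian-elimination basis (the key width, seed, and trial count are illustrative assumptions). It also checks the 1/2 figure empirically: once 31 independent 32-bit keys are stored, their span covers 2^31 of the 2^32 values, so a fresh random key is dependent about half the time.

```python
import random

NBITS = 32
random.seed(3)

# basis[b] holds an accepted key whose highest set bit is b, or 0.
basis = [0] * NBITS

def reduce(k):
    """XOR k against stored keys; a zero result means k is dependent."""
    for b in reversed(range(NBITS)):
        if (k >> b) & 1 and basis[b]:
            k ^= basis[b]
    return k

# Accept 31 independent random keys, rejecting dependent draws.
accepted = 0
while accepted < 31:
    r = reduce(random.getrandbits(NBITS))
    if r:                               # independent: keep it
        basis[r.bit_length() - 1] = r
        accepted += 1

# Fraction of fresh random keys that would have to be rejected now.
trials = 100_000
dependent = sum(reduce(random.getrandbits(NBITS)) == 0
                for _ in range(trials))
```

The ratio `dependent / trials` comes out close to 1/2, matching the claim above; since rejected draws are simply discarded and redrawn, the accepted keys remain uniformly distributed over all full-rank sets.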
computer-go mailing list