Joern RENNECKE wrote:
In http://gcc.gnu.org/ml/gcc/2006-07/msg00156.html, you wrote:
We could use a cheap hash and base a compare / branch tree in
every hash bin.
Say we have a 16 k range, 200 nodes, and want to keep the hash
bin/node ratio between 2 and 4.
Let i be the switch argument. Then we can calculate a 9-bit hash as
(i ^ (i << n)) & 0x1ff, where n is a value between 5 and 9. We can
pick the n which produces the flattest average search trees.
Note that we no longer need to check that i is in range, as we would
have to for an ordinary switch dispatch table.
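
For illustration, here is a minimal sketch of that selection; the helper
names, the case labels and the selection criterion (smallest maximum bin
load as a stand-in for the flattest average tree) are all made up for
this example, not GCC code:

#include <stdio.h>

#define HASH_BITS 9
#define NBINS (1 << HASH_BITS)

/* The cheap hash from the quoted proposal: mix i with a shifted copy
   of itself and keep the low 9 bits.  */
static unsigned
hash_case (unsigned i, unsigned n)
{
  return (i ^ (i << n)) & (NBINS - 1);
}

/* Return the shift n in [5, 9] whose 512 bins are most evenly filled,
   using the maximum bin load as a rough proxy for tree flatness.  */
static unsigned
pick_shift (const unsigned *labels, int count)
{
  unsigned best_n = 5, best_max = (unsigned) count + 1;
  unsigned n;

  for (n = 5; n <= 9; n++)
    {
      unsigned bins[NBINS] = { 0 };
      unsigned max = 0;
      int k;

      for (k = 0; k < count; k++)
        {
          unsigned b = hash_case (labels[k], n);
          if (++bins[b] > max)
            max = bins[b];
        }
      if (max < best_max)
        {
          best_max = max;
          best_n = n;
        }
    }
  return best_n;
}

int
main (void)
{
  unsigned labels[200];
  int k;

  /* 200 made-up case labels spread over a 16 K range.  */
  for (k = 0; k < 200; k++)
    labels[k] = (k * 81u + 7u) & 0x3fff;

  printf ("best shift n = %u\n", pick_shift (labels, 200));
  return 0;
}
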
What I suggest is to implement a cost-estimation based decision. This
should include architecture-dependent modelling that can deliver an
exact cost (in terms of size) when optimizing for size, as well as a
good approximation when optimizing for speed. For example, in my case
of 1500 labels inside the 16K range I would prefer a branch table to a
binary compare tree. The size of the table is 16K entries * 4 or 8
bytes (64-128 KB), which is really acceptable for a huge program. The
frequently used entries are likely to be in the cache, so I expect the
whole thing to perform well.
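
To make that size trade-off concrete, here is a rough, purely
illustrative estimate; the 8 bytes per table entry and 12 bytes per
compare/branch node are assumptions that a real target-dependent cost
hook would have to supply:

#include <math.h>
#include <stdio.h>

int
main (void)
{
  const double range = 16 * 1024;   /* span of the case values          */
  const double labels = 1500;       /* populated case labels            */
  const double entry_size = 8;      /* 4 bytes on 32-bit, 8 on 64-bit   */
  const double node_size = 12;      /* assumed bytes per compare+branch */

  double table_bytes = range * entry_size;
  /* A balanced tree needs roughly one compare/branch node per label
     and log2(labels) compares per dispatch.  */
  double tree_bytes = labels * node_size;
  double tree_depth = ceil (log2 (labels));

  printf ("jump table  : %.0f bytes, one indirect jump per dispatch\n",
          table_bytes);
  printf ("compare tree: %.0f bytes, about %.0f compares per dispatch\n",
          tree_bytes, tree_depth);
  return 0;
}

(Compile with -lm.) With these assumed numbers the table costs about
128 KB against roughly 18 KB for the tree, but dispatches in constant
time instead of around 11 compares.
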
However, optimizing for my case alone may not help in general, so all
modifications should be verified with a common standard benchmark like
SPEC. Is there someone on the list who would like to support me with
these tests, or who could provide cost models for common architectures?
Christian