On Sat, 11 Mar 2006, Levi Pearson wrote:
> Dude. WTF. When they said 'Premature optimization is the root of all evil',
> they must have just recovered from a vision of this particular code.
>
> If I were hiring someone, and got this as a serious submission, the resume
> would be immediately dropped in the circular file.
I dunno; I consider it a reasonable abuse of the rules, since he just
specified fastest runtime. It's actually a straightforward solution to the
problem of finding a maximally fast hash lookup -- gperf is the traditional
tool for such things, and it was simple to modify its output to keep an
accumulator for each entry. The fact that it took many hours to generate a
perfect hash and resulted in a 64MB .c file is just a weakness of the tool
when applied to a dictionary with 100,000 entries.
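For anyone curious, the tweak amounts to something like the sketch below. The
names here are placeholders for what gperf actually emits (a hash() function
plus a static lookup table; the generated code also range-checks the key
first), and the only real addition is the counter bump on a match:

    /* Sketch only -- hash() and wordlist[] stand in for what gperf
     * generates; the real output also checks the key against
     * MAX_HASH_VALUE and marks empty slots with "" rather than NULL. */
    #include <string.h>

    struct entry {
        const char   *name;   /* the keyword, or NULL for an empty slot */
        unsigned long count;  /* the added accumulator */
    };

    extern unsigned int hash(const char *str, unsigned int len);
    extern struct entry  wordlist[];  /* sparse perfect-hash table */

    void count_word(const char *str, unsigned int len)
    {
        unsigned int key = hash(str, len);

        /* A perfect hash means at most one probe and one strcmp. */
        if (wordlist[key].name && strcmp(str, wordlist[key].name) == 0)
            wordlist[key].count++;
    }

That one-probe, one-strcmp lookup per word is the whole point of the exercise.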
The .c file could actually be much smaller, but I allowed gperf to create a
table up to 10x the number of input keys, since that seemed to make it run
fastest for my smaller test runs.
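If you want to try it yourself, the invocation would look something like this
(flags from memory, so check them against your gperf manual; -s /
--size-multiple is what lets the generated table grow relative to the number
of keys):

    gperf -s 10 /usr/share/dict/words > words_hash.c

The dictionary file is already one keyword per line, which is the format gperf
expects, though your version may want a %% header in front of it.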
> P.S. It might help (once you get past the fact that it's too big to compile)
> if your array of strings actually had more words than "thoroughbred" in it.
Not true. "wristband", "bagged" and "cresting" are there, along with
everything else in /usr/share/dict/words. It's just a rather sparse table.
-J