On Monday, 30 May 2016 at 18:57:15 UTC, Chris wrote:
On Monday, 30 May 2016 at 18:20:39 UTC, Andrei Alexandrescu wrote:
Please throw this hat into the ring as well: it should improve
average search time on a large vocabulary dramatically.
https://dpaste.dzfl.pl/dc8dc6e1eb53
It uses a BM-inspired trick: once the last characters have
matched, if the match subsequently fails, it needn't restart
from the very next character in the haystack. The "skip" is
computed lazily and in a separate function so as to keep the
loop tight. All in all a routine worth a look. I wanted to
write this for a long time. -- Andrei
Cool. So far, my experience with separate functions has been
that they slow down the loop. But this might do the trick.
Congratulations!
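For reference, here is a rough, untested sketch of the skip idea Andrei
describes above. It is not the code from the dpaste; findSketch and
computeSkip are just illustrative names, and the "return haystack.length
when not found" convention is mine. The skip helper is kept out of the
search loop on purpose, which is the "separate function to keep the loop
tight" point, and it runs at most once per search.

// Sketch only: match the needle's last character first; when it matches
// but the rest of the needle does not, advance by a BM-style "skip"
// instead of by one position.
size_t findSketch(const(char)[] haystack, const(char)[] needle)
{
    if (needle.length == 0) return 0;
    if (haystack.length < needle.length) return haystack.length;

    // Computed on the first mismatch after a last-character match;
    // 0 means "not computed yet".
    size_t skip = 0;

    size_t i = needle.length - 1; // haystack index aligned with needle's end
    while (i < haystack.length)
    {
        if (haystack[i] != needle[$ - 1])
        {
            ++i;                  // last character differs: slide by one
            continue;
        }
        const start = i + 1 - needle.length;
        if (haystack[start .. i] == needle[0 .. $ - 1])
            return start;         // full match
        // The last character matched, so the needle can only realign where
        // that character occurs again inside the needle: skip that far.
        if (skip == 0) skip = computeSkip(needle);
        i += skip;
    }
    return haystack.length;       // not found (sketch convention)
}

// Distance from the needle's end back to the previous occurrence of its
// last character (needle.length if there is none). Deliberately a separate
// function so the hot loop above stays small.
private size_t computeSkip(const(char)[] needle)
{
    size_t s = 1;
    while (s < needle.length && needle[$ - 1 - s] != needle[$ - 1])
        ++s;
    return s;
}

unittest
{
    assert(findSketch("hello world", "world") == 6);
    assert(findSketch("hello world", "word") == "hello world".length);
    assert(findSketch("aaaab", "ab") == 3);
}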
DMD:
./benchmark.dmd
std:    178 ±31  +36 (4475)  -29 (5344)
manual: 167 ±46  +82 (2883)  -32 (7054)
qznc:   114  ±7  +18 (1990)   -5 (7288)
Chris:  228 ±49  +82 (3050)  -35 (6780)
Andrei: 103  ±5  +47 ( 658)   -2 (9295)
(avg slowdown vs fastest; absolute deviation)
CPU ID: GenuineIntel Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz
LDC:
std:    184 ±19   +28 (3420)  -14 (6523)
manual: 205 ±59  +120 (2463)  -39 (7443)
qznc:   151 ±25   +44 (2983)  -17 (6911)
Chris:  194 ±57   +78 (3702)  -46 (6251)
Andrei: 101  ±2   +42 ( 435)   -1 (9542)
(avg slowdown vs fastest; absolute deviation)
CPU ID: GenuineIntel Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz