Re: Hotspot predictability

2013-09-18 Thread Jamie Brandon
I was worrying about https://github.com/hugoduncan/criterium/issues/19, but actually these tests are not nearly fast enough to cause problems. I reran the last benchmark without dotimes and the results look pretty much the same: https://gist.github.com/jamii/6607031. I'm still a little worried...
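(For concreteness, a minimal sketch of the two setups being compared, using criterium's real quick-bench but a made-up match-row function and row as the expression under test:)

(require '[criterium.core :as crit])

;; hypothetical stand-ins for the benchmarked code
(defn match-row [row] (= 1 (:a row)))
(def row {:a 1 :b 2})

;; with dotimes: criterium times the whole loop, which is the setup
;; that issue #19 is concerned with for very fast expressions
(crit/quick-bench (dotimes [_ 1000] (match-row row)))

;; without dotimes: criterium itself repeats the expression enough
;; times to get a stable per-call estimate
(crit/quick-bench (match-row row))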

Hotspot predictability

2013-09-17 Thread Jamie Brandon
I ran a little benchmark comparing different implementations of a toy pattern-matching protocol: https://gist.github.com/jamii/6595850. I'm surprised to find that the staged version is not significantly slower than the compiled version. Now I'm wondering whether this is feasible on a larger...
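The gist has the actual implementations; as a rough guess at what "staged" versus "compiled" might mean here (closures composed once per pattern at definition time, versus equivalent hand-written code), something like the following, where compile-pattern, staged-match and compiled-match are all made-up names:

;; staged: compile the pattern into nested closures, once, up front
(defn compile-pattern [pattern]
  (cond
    (= pattern '_) (fn [_] true)
    (map? pattern) (let [fs (mapv (fn [[k p]]
                                    (let [f (compile-pattern p)]
                                      (fn [v] (and (map? v) (f (get v k))))))
                                  pattern)]
                     (fn [v] (every? #(% v) fs)))
    :else          (fn [v] (= pattern v))))

(def staged-match (compile-pattern '{:a 1 :b _}))

;; compiled: the equivalent hand-written (or macro-generated) code for
;; that one pattern, with no closure indirection
(defn compiled-match [v]
  (and (map? v) (= 1 (get v :a))))

(staged-match {:a 1 :b 2})   ;=> true
(compiled-match {:a 1 :b 2}) ;=> true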

Re: Hotspot predictability

2013-09-17 Thread Jamie Brandon
For larger patterns, the staged version is actually significantly faster: https://gist.github.com/jamii/6597235

Re: Hotspot predictability

2013-09-17 Thread Jason Wolfe
I've found similar things with compiling schema validators. Be careful with doseq benchmarks like this, though: if you aren't careful, HotSpot can completely eliminate parts of the computation as dead code if it can infer the results aren't used. Criterium has a neat trick for making sure this...
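Criterium's exact mechanism aside, the general trick is to keep every result observable so the JIT can't prove the work is unused. A hand-rolled illustration (bench-naive and bench-with-sink are made-up names, not criterium API):

;; result discarded: HotSpot is free to eliminate (f) as dead code
(defn bench-naive [f n]
  (time (dotimes [_ n] (f))))

;; results folded into a volatile "sink": the computation now has an
;; observable effect, so it can't be eliminated
(defn bench-with-sink [f n]
  (let [sink (volatile! 0)]
    (time (dotimes [_ n]
            (vswap! sink unchecked-add (hash (f)))))
    @sink))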