On Sun, Aug 26, 2018 at 01:37:41PM +0200, René Scharfe wrote:

> Am 14.08.2018 um 03:58 schrieb Jeff King:
> > Your suggestion can be implemented using khash (my patch below).
> > 
> >> Before:
> >> Benchmark #1: ./git-cat-file --batch-all-objects --buffer --unordered --batch-check='%(objectname)'
> >>
> >>   Time (mean ± σ):     269.5 ms ±  26.7 ms    [User: 247.7 ms, System: 21.4 ms]
> >>
> >>   Range (min … max):   240.3 ms … 339.3 ms
> >>
> >> After:
> >> Benchmark #1: ./git-cat-file --batch-all-objects --buffer --unordered --batch-check='%(objectname)'
> >>
> >>   Time (mean ± σ):     224.2 ms ±  18.2 ms    [User: 201.7 ms, System: 22.1 ms]
> >>
> >>   Range (min … max):   205.0 ms … 259.0 ms
> > 
> > Yeah. My best-of-five dropped from 300ms to 247ms. That 300 was using
> > the memory pool, though khash's deletion strategy isn't all that
> > different (the wasted memory hangs around until the next hash resize,
> > but if you're evenly dropping and adding, you likely won't need to
> > resize).
> 
> With your khash patch:
> 
> Benchmark #1: ./git-cat-file --batch-all-objects --buffer --unordered --batch-check='%(objectname)'
> 
>   Time (mean ± σ):     159.1 ms ±  20.5 ms    [User: 140.3 ms, System: 18.5 ms]
> 
>   Range (min … max):   140.0 ms … 214.0 ms
> 
> So it seems worth it.

Hmm, it really does seem worth it. Which is a shame, because I had hoped
that one day we could get rid of the nasty macro-infestation that is
khash.h. But it really is a lot faster than the alternatives.

> [...]

I agree with all of your comments here. What I posted was just trying to
do the least amount of work to get something we could time.

Do you want to pick up this topic and carry it forward?

-Peff
