On Thu, May 25, 2017 at 02:04:07PM -0400, Jeff King wrote:

> > ...that might be something worth thinking about---my gut feeling
> > tells me something but we should go by a measurement, not by gut
> > feeling of a random somebody.
> 
> Yeah, I'd agree. I had the impression the original change was motivated
> by gut feeling.

Hmph. On Linux, at least, I do not see that using "git merge" to
fast-forward is appreciably faster:

Here are timings for:

  git reset --hard HEAD~10 && git pull --rebase

in a git.git repo, using builds of git from various commits (all
best-of-five; the timings include the reset for simplicity, but
presumably it costs the same in each case):

  - 33b842a1e^ (just prior to the switch to git-merge)
    real  0m0.256s
    user  0m0.096s
    sys   0m0.020s

  - 33b842a1e  (using git-merge)
    real  0m0.227s
    user  0m0.092s
    sys   0m0.020s

So a little faster, but there seems to be 20-30ms of noise in my timings
anyway (the average for the "prior" case did seem to be higher, though).
It's possible that the difference would be more stark on Windows, where
the cost of the shell script would be higher.
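(For the curious, a best-of-five timing like the above can be taken with a harness sketched roughly like this; this is an illustration, not the exact commands used for the numbers above, and the `best_of_five` helper plus its reliance on GNU `date +%s%N` are my stand-ins:)

```shell
# best_of_five CMD — run CMD five times and print the fastest wall-clock
# time in milliseconds. Sketch only; assumes GNU date with %N support.
best_of_five() {
	best=
	for i in 1 2 3 4 5; do
		start=$(date +%s%N)              # nanoseconds since epoch
		sh -c "$1" >/dev/null 2>&1
		end=$(date +%s%N)
		elapsed=$(( (end - start) / 1000000 ))
		if [ -z "$best" ] || [ "$elapsed" -lt "$best" ]; then
			best=$elapsed
		fi
	done
	echo "${best}ms"
}

best_of_five 'git reset --hard HEAD~10 && git pull --rebase'
```

Taking the minimum rather than the mean is the usual trick for this kind of micro-timing, since it filters out cache-cold runs and scheduler noise.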

The same test with the current master performs the same as 33b842a1e.
But if I then remove the optimization, as Tyler's patch did at the start
of this thread, the timings are similar to 33b842a1e^.

So I dunno. It does not seem appreciably faster, but what little speedup
it does provide is the same even with a more modern rebase. That is
probably because this rebase isn't actually doing much in the first
place, so the optimized bits from Dscho's rebase--helper are not kicking
in yet.

Anyway. All this has shown me is that it's probably pointless to do this
timing at all on Linux. Somebody on Windows might get better results.

But regardless, we need to do something. Correctness must trump
optimizations, and the question is whether we can throw out the whole
conditional, or if we should just restrict when it kicks in.
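(For context, the conditional under discussion is git-pull's fast-forward shortcut. A rough paraphrase of the logic, not the literal code, with `ff_or_rebase` and `merge_head` as illustrative stand-ins for the fetched tip handling:)

```shell
# Paraphrased sketch of the fast-forward shortcut in "git pull --rebase";
# merge_head stands in for the tip fetched from the remote.
ff_or_rebase() {
	merge_head=$1
	if test "$(git rev-parse -q --verify HEAD)" = \
		"$(git merge-base HEAD "$merge_head")"
	then
		# HEAD is an ancestor of the fetched tip: a plain
		# fast-forward merge gives the same result as a rebase.
		git merge --ff-only "$merge_head"
	else
		git rebase "$merge_head"
	fi
}
```

The correctness question is exactly about when the `then` branch really is equivalent to a rebase, which is why restricting the condition (rather than deleting it outright) is on the table.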

-Peff
