2012/11/11 Jeff King <p...@peff.net>:
> On Sun, Nov 11, 2012 at 05:46:32PM +0100, René Scharfe wrote:
> Ultimately, I have some ideas for doing this in a breadth-first way,
> which would make it more naturally iterative. It would involve having N
> bits of storage per commit to check N tags, but it would mean that we
> could get accurate answers in the face of clock skew (like the
> merge-base calculation, it would merely get slower in the face of skew).
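The N-bits-per-commit idea quoted above might look roughly like the following sketch: a multi-source breadth-first propagation where tag tip i owns bit i, and bits flow from each commit to its parents until a fixpoint. All names (`tags_containing`, `parents`, etc.) are made up for illustration; this ignores the clock-skew handling the quote alludes to.

```python
from collections import deque

def tags_containing(target, tag_tips, parents):
    """Return the indices of tag tips from which 'target' is reachable.

    parents: dict mapping commit id -> list of parent commit ids.
    bits[c] is a bitmask recording which tag tips can reach commit c.
    """
    bits = {}
    queue = deque()
    for i, tip in enumerate(tag_tips):
        bits[tip] = bits.get(tip, 0) | (1 << i)
        queue.append(tip)
    while queue:
        c = queue.popleft()
        for p in parents.get(c, []):
            merged = bits.get(p, 0) | bits[c]
            if merged != bits.get(p, 0):
                # Parent gained new bits; re-propagate from it.
                bits[p] = merged
                queue.append(p)
    mask = bits.get(target, 0)
    return [i for i in range(len(tag_tips)) if mask & (1 << i)]
```

Since each commit's bitmask only ever grows and is bounded by N bits, the propagation terminates, and the whole check for all N tags is a single walk of the reachable part of the graph.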

I guess the optimal algorithm may also depend on the general shape of
the commit graph, but intuitively I'd say the critical factor is the
number and distribution of tags. As soon as you have a significant
number of tags (say 1% of the commits are tagged, evenly distributed),
you'll quickly end up with every commit marked as either containing
the target commit or not, so that each additional tag check is cheap.

This suggests a complexity of O(number of commits) more often than
not, however you choose to traverse the graph.
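That marking effect can be sketched as a memoized ancestry check: a table shared across all the tag checks for one target, so that once a commit is decided, no later tag walk revisits it. This is only an illustration with invented names, not what git actually does internally.

```python
def contains(tip, target, parents, memo):
    """True if 'target' is an ancestor of (or equal to) 'tip'.

    parents: dict mapping commit id -> list of parent commit ids.
    memo: dict mapping commit id -> bool, shared across all tag
    checks for the same target, so each commit is decided once.
    """
    stack = [tip]
    while stack:
        c = stack[-1]
        if c in memo:
            stack.pop()
            continue
        if c == target:
            memo[c] = True
            stack.pop()
            continue
        unseen = [p for p in parents.get(c, []) if p not in memo]
        if unseen:
            # Decide all parents first (iterative post-order DFS).
            stack.extend(unseen)
        else:
            memo[c] = any(memo[p] for p in parents.get(c, []))
            stack.pop()
    return memo[tip]
```

Checking every tag against one target then amounts to `[t for t, tip in tags.items() if contains(tip, target, parents, memo)]` with a single shared `memo`: the first few walks fill the table, and the remaining checks mostly hit it.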

At least on my almost-real-life repo*, with ~140,000 commits and
~2,000 tags, tag --contains takes a few seconds: more, of course, than
the worst-case test on a 40,000-commit / 1-tag repo, but still within
the same order of magnitude.

* : meaning we're in the process of migrating from ClearCase (deep
sighs allowed!), and for now I've kept almost everything in a single
git repo, which may not be the eventual choice.
