On Fri, May 3, 2013 at 10:38 AM, Jeff King <p...@peff.net> wrote:
> I found another race related to the packed-refs code. Consider for a
> moment what happens when we are looking at refs and another process does
> a simultaneous "git pack-refs --all --prune", updating packed-refs and
> deleting the loose refs.
>
> [...]
>
> We could fix this by making sure our packed-refs file is up to date
> before using it. E.g., resolving a ref with the following sequence:
>
>   1. Look for loose ref. If found, OK.
>
>   2. Compare inode/size/mtime/etc of on-disk packed-refs to their values
>      from the last time we loaded it. If they're not the same, reload
>      packed-refs from disk. Otherwise, continue.
>
>   3. Look for ref in in-memory packed-refs.
>
> Not too bad. We introduce one extra stat() for a ref that has been
> packed, and the scheme isn't very complicated.
>
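
For the stat-clean check in step 2, I picture something roughly like
this (only a sketch; the cache struct and helper name are invented,
not actual git code):

  #include <sys/stat.h>

  /* Metadata recorded the last time we read and parsed packed-refs. */
  struct packed_refs_cache {
          int valid;              /* have we parsed packed-refs at all? */
          struct stat st;         /* stat data taken at load time */
          /* ... parsed ref entries would live here ... */
  };

  /* Return 1 if the on-disk packed-refs still matches what we cached. */
  static int packed_refs_up_to_date(const struct packed_refs_cache *cache,
                                    const char *path)
  {
          struct stat st;

          if (!cache->valid)
                  return 0;
          if (stat(path, &st) < 0)
                  return 0;       /* gone or unreadable; force a reload */
          return st.st_ino == cache->st.st_ino &&
                 st.st_size == cache->st.st_size &&
                 st.st_mtime == cache->st.st_mtime;
  }

If that returns 0, we re-read packed-refs (recording the fresh stat
data) before consulting it in step 3.
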
> But what about enumerating refs via for_each_ref? It's possible to have
> the same problem there, and the packed-refs file may be moved into place
> midway through the process of enumerating the loose refs. So we may see
> refs/heads/master, but when we get to refs/remotes/origin/master, it has
> now been packed and pruned. I _think_ we can get by with:
>
>   1. Generate the complete sorted list of loose refs.
>
>   2. Check that packed-refs is stat-clean, and reload if necessary, as
>      above.
>
>   3. Merge the sorted loose and packed lists, letting loose override
>      packed (so even if we get repacked halfway through our loose
>      traversal and get half of the refs there, it's OK to see duplicates
>      in packed-refs, which we will ignore).
>
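
The merge in step 3 would then just be a walk over the two sorted
lists where a loose entry shadows a packed entry of the same name,
along these lines (again only a sketch, with made-up types):

  #include <string.h>

  struct ref_entry {
          const char *name;
          unsigned char sha1[20];
  };

  typedef int (*each_ref_fn)(const struct ref_entry *ref);

  /* Emit refs from both sorted lists in name order; on a duplicate
   * name, the loose entry wins and the packed one is skipped. */
  static void merge_loose_and_packed(const struct ref_entry *loose,
                                     int nr_loose,
                                     const struct ref_entry *packed,
                                     int nr_packed, each_ref_fn fn)
  {
          int i = 0, j = 0;

          while (i < nr_loose || j < nr_packed) {
                  int cmp;

                  if (i >= nr_loose)
                          cmp = 1;        /* only packed entries left */
                  else if (j >= nr_packed)
                          cmp = -1;       /* only loose entries left */
                  else
                          cmp = strcmp(loose[i].name, packed[j].name);

                  if (cmp <= 0)
                          fn(&loose[i++]);
                  else
                          fn(&packed[j++]);
                  if (cmp == 0)
                          j++;    /* drop the shadowed packed entry */
          }
  }

Any duplicates introduced by a pack-refs racing with our loose
traversal get filtered out right here.
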
> This is not very far off of what we do now. Obviously we don't do the
> stat-clean check in step 2. But we also don't generate the complete list
> of loose refs before hitting the packed-refs file. Instead we lazily
> load the loose directories as needed. And of course we cache that
> information in memory, even though it may go out of date. I think the
> best we could do is keep a stat() for each individual directory we see,
> and check it before using the in-memory contents. That may be a lot of
> stats, but it's still better than actually opening each loose ref
> separately.
>
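
For the per-directory caching you describe, the same freshness trick
could be applied to each directory we have already read (again a
hypothetical struct, purely to illustrate):

  #include <limits.h>
  #include <sys/stat.h>

  /* One cached loose-ref directory, e.g. "refs/heads". */
  struct ref_dir_cache {
          char path[PATH_MAX];
          int loaded;             /* have we read this directory yet? */
          struct stat st;         /* stat data taken when it was read */
          /* ... cached entries for this directory ... */
  };

  /* Re-check the directory before trusting its cached listing; a
   * pack-refs --prune that deletes loose refs updates the mtime. */
  static int ref_dir_is_fresh(const struct ref_dir_cache *dir)
  {
          struct stat st;

          if (!dir->loaded || stat(dir->path, &st) < 0)
                  return 0;
          return st.st_ino == dir->st.st_ino &&
                 st.st_mtime == dir->st.st_mtime;
  }
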
> So I think it's possible to fix, but I thought you might have some
> insight on the simplest way to fit it into the current ref code.
>
> Did I explain the problem well enough to understand? Can you think of
> any simpler or better solutions (or is there a case where my proposed
> solutions don't work)?

You don't really need to be sure that packed-refs is up-to-date. You
only need to make sure that you don't rely on lazily loading loose refs
_after_ you have loaded packed-refs.

The following solution might work in both the resolve-a-single-ref and
the enumerate-refs cases:

0. Look for ref already cached in memory. If found, OK.

1. Look for loose ref. If found, OK.

2. If not found, load all loose refs and packed-refs from disk (in
that order), and store them in memory for the remainder of this
process. Never reload packed-refs from disk (unless you also reload
all loose refs first).
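
In code, the flow I have in mind looks roughly like this (the helpers
are placeholders that just name the steps, not existing git
functions):

  /* Hypothetical helpers; the lookups return nonzero on a hit. */
  extern int lookup_cached_ref(const char *refname, unsigned char *sha1);
  extern int read_loose_ref(const char *refname, unsigned char *sha1);
  extern void load_all_loose_refs(void);  /* snapshot loose refs...   */
  extern void load_packed_refs(void);     /* ...then read packed-refs */

  static int resolve_ref(const char *refname, unsigned char *sha1)
  {
          if (lookup_cached_ref(refname, sha1))   /* 0. in-memory cache */
                  return 0;
          if (read_loose_ref(refname, sha1))      /* 1. loose ref on disk */
                  return 0;
          load_all_loose_refs();                  /* 2. loose first, */
          load_packed_refs();                     /*    packed second */
          return lookup_cached_ref(refname, sha1) ? 0 : -1;
  }

The important property is that by the time we read packed-refs, every
loose ref it could have pruned is already in our snapshot, so a racing
pack-refs cannot make a ref vanish from our view.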

My rationale for this approach is that if you have a packed-refs file,
you will likely have fewer loose refs, so loading all of them in
addition to the packed-refs file won't be that expensive. (Conversely,
if you do have a lot of loose refs, you're more likely to hit #1, and
not have to load all refs.)

That said, my intuition on the number of loose vs. packed refs, or on
the relative cost of reading all loose refs, might be off here...


...Johan

-- 
Johan Herland, <jo...@herland.net>
www.herland.net