On 05/03/2013 10:38 AM, Jeff King wrote:
> I found another race related to the packed-refs code. Consider for a
> moment what happens when we are looking at refs and another process does
> a simultaneous "git pack-refs --all --prune", updating packed-refs and
> deleting the loose refs.
> If we are resolving a single ref, then we will either find its loose
> form or not. If we do, we're done. If not, we can fall back on what was
> in the packed-refs file. If we read the packed-refs file at that point,
> we're OK. If the loose ref existed before but was pruned before we could
> read it, then we know the packed-refs file is already in place, because
> pack-refs would not have deleted the loose ref until it had finished
> writing the new file. But imagine that we already loaded the packed-refs
> file into memory earlier. We may be looking at an _old_ version of it
> that has an irrelevant sha1 from the older packed-refs file. Or it might
> not even exist in the packed-refs file at all, and we think the ref does
> not resolve.
> We could fix this by making sure our packed-refs file is up to date
> before using it. E.g., resolving a ref with the following sequence:
> 1. Look for loose ref. If found, OK.
> 2. Compare inode/size/mtime/etc of on-disk packed-refs to their values
> from the last time we loaded it. If they're not the same, reload
> packed-refs from disk. Otherwise, continue.
> 3. Look for ref in in-memory packed-refs.
> Not too bad. We introduce one extra stat() for a ref that has been
> packed, and the scheme isn't very complicated.
Let me think out loud alongside your analysis...
By this mechanism the reader can ensure that it never uses a version of
the packed-refs file that is older than its information that the
corresponding loose ref is absent from the filesystem.
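Concretely, the stat-clean check could compare the packed-refs file's current stat data against the values remembered at load time. Here is a sketch (the cache structure and function names are illustrative, not git's actual code; the stat fields are standard POSIX):

```c
#include <sys/stat.h>

/*
 * Illustrative cache of the packed-refs contents plus the stat data
 * recorded when the file was last read.
 */
struct packed_refs_cache {
	struct stat validity;	/* stat data at load time */
	int loaded;		/* have we read the file at all? */
};

static int stat_changed(const struct stat *a, const struct stat *b)
{
	return a->st_ino != b->st_ino ||
	       a->st_dev != b->st_dev ||
	       a->st_size != b->st_size ||
	       a->st_mtime != b->st_mtime;
}

/* Return 1 if the in-memory packed-refs copy must be reloaded. */
int packed_refs_stale(const struct packed_refs_cache *cache, const char *path)
{
	struct stat st;

	if (!cache->loaded)
		return 1;
	if (stat(path, &st) < 0)
		return 1;	/* file vanished or unreadable: reload */
	return stat_changed(&st, &cache->validity);
}
```

This is the one extra stat() per packed-ref lookup mentioned above; everything else is an in-memory comparison.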
This is all assuming that the filesystem accesses have a defined order;
how is that guaranteed? pack_refs() and commit_ref() both rely on
commit_lock_file(), which close()s the lockfile and then rename()s it
into place.
prune_ref() locks the ref, verifies that its SHA-1 is unchanged, then
calls unlink(), then rollback_lock_file().
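For concreteness, committing via a lockfile amounts to the classic write-then-rename pattern. A simplified sketch (not git's actual lockfile API, which also handles signals, error reporting, and stale-lock cleanup):

```c
#include <stdio.h>
#include <unistd.h>

/*
 * Simplified sketch of committing new file contents atomically:
 * write to "<path>.lock", close it, then rename() it into place.
 * Readers therefore see either the old contents or the new ones,
 * never a partial write.
 */
int commit_file(const char *path, const char *contents)
{
	char lock[4096];
	FILE *f;

	if (snprintf(lock, sizeof(lock), "%s.lock", path) >= (int)sizeof(lock))
		return -1;
	f = fopen(lock, "w");
	if (!f)
		return -1;
	if (fputs(contents, f) == EOF || fclose(f) == EOF) {
		unlink(lock);
		return -1;
	}
	if (rename(lock, path) < 0) {
		unlink(lock);
		return -1;
	}
	return 0;
}
```

Note that this only guarantees atomicity for the single file being renamed; it says nothing about ordering relative to operations on *other* files, which is exactly the question below.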
The atomicity of rename() guarantees that a reader sees either the old
or the new version of the file in question. But what guarantees are
there about accesses across two files? Suppose we start with ref "foo"
that exists only as a loose ref, and we have a pack-refs process doing
    write packed-refs with "foo"
    commit_lock_file() for packed-refs
    read loose ref "foo" and verify that its SHA-1 is unchanged
    unlink() loose ref "foo"

while another process is trying to read the reference:

    look for loose ref "foo"
    read packed-refs
Is there any guarantee that the second process can't see the loose ref
"foo" as being missing but nevertheless read the old version of
packed-refs? I'm not strong enough on filesystem semantics to answer
that question.
> But what about enumerating refs via for_each_ref? It's possible to have
> the same problem there, and the packed-refs file may be moved into place
> midway through the process of enumerating the loose refs. So we may see
> refs/heads/master, but when we get to refs/remotes/origin/master, it has
> now been packed and pruned.
> I _think_ we can get by with:
> 1. Generate the complete sorted list of loose refs.
> 2. Check that packed-refs is stat-clean, and reload if necessary, as
>    above.
> 3. Merge the sorted loose and packed lists, letting loose override
> packed (so even if we get repacked halfway through our loose
> traversal and get half of the refs there, it's OK to see duplicates
> in packed-refs, which we will ignore).
> This is not very far off of what we do now. Obviously we don't do the
> stat-clean check in step 2. But we also don't generate the complete list
> of loose refs before hitting the packed-refs file. Instead we lazily
> load the loose directories as needed. And of course we cache that
> information in memory, even though it may go out of date. I think the
> best we could do is keep a stat() for each individual directory we see,
> and check it before using the in-memory contents. That may be a lot of
stats, but it's still better than actually opening each loose ref.
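The merge in your step 3 can be sketched as a standard two-pointer merge of the two sorted refname lists, with loose entries shadowing packed ones on a name collision. (Array-based here for simplicity; git's real iteration is callback-based, and the names are illustrative.)

```c
#include <string.h>

/*
 * Merge two sorted arrays of refnames into `out`, which must have
 * room for nloose + npacked entries.  On a duplicate name the loose
 * entry wins and the stale packed entry is skipped, so a ref packed
 * and pruned midway through the loose traversal is still emitted
 * exactly once.  Returns the number of refs emitted.
 */
int merge_refs(const char **loose, int nloose,
	       const char **packed, int npacked, const char **out)
{
	int i = 0, j = 0, n = 0;

	while (i < nloose || j < npacked) {
		int cmp;

		if (i >= nloose)
			cmp = 1;	/* loose exhausted: take packed */
		else if (j >= npacked)
			cmp = -1;	/* packed exhausted: take loose */
		else
			cmp = strcmp(loose[i], packed[j]);

		if (cmp <= 0) {
			if (cmp == 0)
				j++;	/* duplicate: skip packed entry */
			out[n++] = loose[i++];
		} else {
			out[n++] = packed[j++];
		}
	}
	return n;
}
```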
The loose refs cache is only used by the for_each_ref() functions, not
for looking up individual references. Another approach would be to
change the top-level for_each_ref() functions to re-stat() all of the
loose references within the namespace that interests them, *then* verify
that the packed-ref cache is not stale, *then* start the iteration.
Then there would be no need to re-stat() files during the iteration.
(This would mean that we have to prohibit a second reference iteration
from being started while one is already in progress.)
Of course, clearing (part of) the loose reference cache invalidates any
pointers that other callers might have retained to refnames in the old
version of the cache. I've never really investigated which callers might
hold onto such pointers under the assumption that they will live to the
end of the process.
Given all of this trouble, there is an obvious question: why do we have
a loose reference cache in the first place? I think there are a few
reasons:
1. In case one git process has to iterate through the same part of the
reference namespace more than once. (Does this frequently happen?)
2. Reading a bunch of loose references at the same time is more
efficient than reading them one by one interleaved with other file
reads. (I think this is a significant win.)
3. Keeping references in a cache means that their refnames have a longer
life, which callers can take advantage of to avoid making their own
copies. I haven't checked which callers might make this assumption, and
nowhere is the lifetime of such a refname documented so it is not even
clear what callers are *allowed* to assume. (In my changes I've tried
to stay on the safe side by not reducing any lifetimes.)