This is a very interesting idea. "It's turtles all the way down."
On 05/20/2013 12:28 PM, Johan Herland wrote:
> (Sorry for going slightly off-topic and returning to the general
> discussion on how to resolve the race conditions...)
> For server-class installations we need ref storage that can be read
> (and updated?) atomically, and the current system of loose + packed
> files won't work since reading (and updating) more than a single file
> is not an atomic operation. Trivially, one could resolve this by
> dropping loose refs, and always using a single packed-refs file, but
> that would make it prohibitively expensive to update refs (the entire
> packed-refs file must be rewritten for every update).
Correct, or the "packed-refs" file would have to be updated in place
using some database-style approach for locking/transactions/whatever.
> Now, observe that we don't have these race conditions in the object
> database, because it is an add-only immutable data store.
Except for prune, of course, which can cause race conditions with respect to writers.
> What if we stored the refs as a tree object in the object database,
> referenced by a single (loose) ref? There would be a _single_ (albeit
> highly contentious) file outside the object database that represents
> the current state of the refs, but hopefully we can guarantee
> atomicity when reading (and updating?) that one file. Transactions can
> be done by:
> 1. Recording the tree id holding the refs before starting manipulation.
> 2. Creating a new tree object holding the manipulated state.
> 3. Re-checking the tree id before replacing the loose ref. If
> unchanged: commit, else: rollback/error out.
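The three steps above amount to a compare-and-swap loop on the single root pointer. Here is a minimal sketch of that transaction in Python, assuming an append-only object store and using an in-memory variable to stand in for the one loose ref file (all names here are illustrative, not any actual git API):

```python
# Sketch of the proposed transaction: record the old tree id, build a
# new immutable snapshot, then re-check the root pointer before
# committing (compare-and-swap).
import hashlib

object_store = {}          # id -> ref-tree snapshot (append-only)
root_pointer = [None]      # stands in for the single loose ref file

def store(snapshot):
    """Write an immutable ref-tree snapshot; return its id."""
    sha = hashlib.sha1(repr(sorted(snapshot.items())).encode()).hexdigest()
    object_store[sha] = dict(snapshot)
    return sha

def update_ref(name, target):
    """Steps 1-3 from the proposal."""
    old_id = root_pointer[0]                      # 1. record the tree id
    refs = dict(object_store[old_id]) if old_id else {}
    refs[name] = target
    new_id = store(refs)                          # 2. create the new tree
    if root_pointer[0] != old_id:                 # 3. re-check before replacing
        raise RuntimeError("concurrent modification; rollback")
    root_pointer[0] = new_id                      # commit
    return new_id
```

Note that step 3 and the commit are only atomic here because this sketch is single-threaded; a real implementation would need the loose-ref update itself to be an atomic compare-and-swap.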
There are two closely related possibilities, and I'm not sure which one
you have in mind:
* Effectively treat all of the refs as loose refs, but stored not in the
filesystem but rather in a hierarchical tree structure in the object
database. E.g., all of the refs directly under "refs/heads" would be in
one tree object, those in refs/remotes/foo in a second, those in
refs/remotes/bar in another, etc., all of them linked together under a
tree object representing "refs".
* Effectively treat all of the refs as packed refs, but store the single
"packed-refs" file as a single object in the object database.
(The first alternative sounds more practical to me. I also guess that's
what you mean, since down below you say that each change would require
producing "a few objects".)
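The write-cost difference between the two alternatives can be made concrete: with a hierarchical ref tree, updating one ref only rewrites the tree objects along that ref's path, while the single-object alternative rewrites everything on every update. A small illustrative sketch (the function name is mine, not anything in git):

```python
# With a hierarchical ref tree, one update rewrites only the tree
# objects on the path to the ref -- one per directory level -- plus one
# new object for the ref itself.
def trees_rewritten(refname):
    """Tree objects that change when `refname` is updated, e.g.
    refs -> refs/remotes -> refs/remotes/foo."""
    parts = refname.split("/")[:-1]      # directories only, not the leaf
    return ["/".join(parts[:i + 1]) for i in range(len(parts))]
```

So updating refs/remotes/foo/master touches three tree objects, regardless of how many thousands of refs live elsewhere in the hierarchy; this is presumably the "few objects" per update mentioned below.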
Of course in either case we couldn't use a tree object directly, because
these new "reference tree" objects would refer not only to blobs and
other trees but also to commits and tags.
> All readers would trivially have access to a consistent refs view,
> since the state of the entire refs hierarchy is held in the tree id
> read from that single loose ref.
> It seems to me this should be somewhat less prohibitively expensive
> than maintaining all refs in a single packed-refs file. That said, we
> do end up producing a few new objects for every single ref update,
> most of which would be thrown away by a future "gc". This might bog
> things down, but I'm not sure how much.
> I'm sure someone must have had this idea before (although I don't
> remember this alternative being raised at the Git Merge conference),
> so please enlighten me as to why this won't work... ;)
[I know this is not what you are suggesting, but I am reminded of
Subversion, which stores trunk, branches, and tags in the same "tree"
space as the contents of the working trees. A Subversion commit
references a gigantic tree encompassing all branches of development and
all files on all of those branches (with cheap copies to reduce the
storage cost).
A Subversion commit thus describes the state of *every* branch and tag
at that moment in time. The model is conceptually very simple (in fact,
too simple, and I believe the Subversion developers regret not having
distinguished between the branch namespace and the file namespace).]
The main difficulty with this idea will be the extreme contention on
that "last loose reference file" pointing at the root of the reference
tree. Essentially *every* change to the repository will have to create
a new reference tree and point this file at the new version. I doubt
that would be a problem for short-lived operations, but I fear that a
long-lived operation would *never* get done. By the time it had
finished constructing its new reference tree, some other short-lived
operation will have changed it, and the long-lived process will have to
do one of the following:
* Restart from the beginning.
* Die with a kind of "concurrent modification error".
* Resolve the difference between the reference tree at the start of its
operation and the reference tree as it exists when it is done with the
changes that it wants to make. In some cases this might be done
automatically as a kind of "reference tree merge", but the logic
might have to vary from case to case.
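For the third option, a "reference tree merge" could look like a three-way merge over flat ref-to-sha maps: base is the tree the long-lived operation started from, ours is its result, theirs is the tree as concurrently updated. A minimal sketch, with the conflict policy entirely my assumption:

```python
# Three-way merge of ref trees represented as {refname: sha} maps.
# A ref changed on only one side takes that side's value; a ref
# changed differently on both sides is a conflict.
def merge_ref_trees(base, ours, theirs):
    merged, conflicts = {}, []
    for ref in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(ref), ours.get(ref), theirs.get(ref)
        if o == t:                 # both sides agree (incl. both deleted)
            if o is not None:
                merged[ref] = o
        elif o == b:               # only the other side changed it
            if t is not None:
                merged[ref] = t
        elif t == b:               # only we changed it
            if o is not None:
                merged[ref] = o
        else:                      # both changed it differently
            conflicts.append(ref)
    return merged, conflicts
```

As noted, real cases would need per-case logic: e.g. two fast-forwards of the same branch might be mergeable by taking the descendant, which this sha-equality sketch cannot see.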
> PS: Keeping reflogs is just a matter of wrapping the ref tree in a
> commit object using the previous state of the ref tree as its parent.
Yes, there are a lot of nice aspects to this idea in that it reuses
concepts with which we are already familiar. For example, fetching from
a remote would approximately hook the remote's entire reference tree
into a subtree of the local "refs/remotes" reference subtree. But with
things like reflogs we would have to be careful not to keep obsolete
objects around *forever*--there would have to be some mechanism to prune
the old reference history.
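The reflog idea in the PS, together with the pruning concern, can be sketched as a chain of commit-like records, each pointing at a ref-tree state and at its parent; walking parents yields the reflog, and pruning truncates the chain so old states become eligible for gc. All names below are assumptions for illustration:

```python
# Each ref-tree state is wrapped in a commit-like record whose parent
# is the previous state; the reflog is the walk back through parents.
class RefTreeCommit:
    def __init__(self, tree, parent=None):
        self.tree, self.parent = tree, parent

def reflog(head):
    """Return ref-tree states, newest first."""
    out = []
    while head is not None:
        out.append(head.tree)
        head = head.parent
    return out

def prune(head, keep):
    """Cut the chain after `keep` entries so older ref trees are no
    longer reachable and a future gc can discard them."""
    node, n = head, 1
    while node is not None and n < keep:
        node, n = node.parent, n + 1
    if node is not None:
        node.parent = None
    return head
```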
Altogether a very interesting idea.