On Sat, Dec 03, 2016 at 07:47:51PM +0100, Lars Schneider wrote:

> >  - "<command> have": the command should output the sha1, size and
> > type of all the objects the external ODB contains, one object per
> > line.
> 
> This looks impractical. If a repo has 10k external files with
> 100 versions each then you need to read/transfer 1m hashes (this is
> not made up - I am working with Git repos that contain >>10k files
> in GitLFS).

Are you worried about the client-to-server communication, or the pipe
between git and the helper? I had imagined that the client-to-server
communication would happen infrequently and be cached.

But 1m hashes is 20MB, which is still a lot to dump over the pipe.
Another option is that Git defines a simple on-disk data structure
(e.g., a flat file of sorted 20-byte binary sha1s), and occasionally
asks the filter "please update your on-disk index".
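
Something like this (a rough sketch in Python; the class and file names
are made up, not anything the series specifies) is what I have in mind
for the lookup side:

  # Flat file of sorted 20-byte binary sha1s; git (or the helper) can
  # binary-search it without parsing anything.
  class Sha1Index:
      RECORD = 20  # width of one binary sha1

      def __init__(self, path):
          with open(path, "rb") as f:
              self.data = f.read()
          self.nr = len(self.data) // self.RECORD

      def _at(self, i):
          return self.data[i * self.RECORD:(i + 1) * self.RECORD]

      def have(self, sha1_hex):
          # Lower-bound binary search over the sorted records.
          want = bytes.fromhex(sha1_hex)
          lo, hi = 0, self.nr
          while lo < hi:
              mid = (lo + hi) // 2
              if self._at(mid) < want:
                  lo = mid + 1
              else:
                  hi = mid
          return lo < self.nr and self._at(lo) == want

Lookups are then O(log n) with no per-query traffic over the pipe, which
is the point of keeping the index on disk rather than streaming the
whole list each time.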

That still leaves open the question of how the external odb script
efficiently gets updates from the server. It can use an ETag or similar
to avoid downloading an identical copy, but if one hash is added, we'd
want to know that efficiently. That is technically outside the scope of
the git<->external-odb interface, but obviously it's related. The design
of the on-disk format might make that problem easier or harder on the
external-odb script.
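
For illustration, the ETag dance could be as simple as this (a sketch
only; the URL and file names are invented, and the script<->server
protocol is out of scope here anyway):

  import urllib.request, urllib.error

  def refresh_index(url, index_path, etag_path):
      # Ask the server for the index, but only if it changed since the
      # ETag we cached from the last successful download.
      req = urllib.request.Request(url)
      try:
          with open(etag_path) as f:
              req.add_header("If-None-Match", f.read().strip())
      except FileNotFoundError:
          pass
      try:
          with urllib.request.urlopen(req) as resp:
              with open(index_path, "wb") as out:
                  out.write(resp.read())
              etag = resp.headers.get("ETag")
              if etag:
                  with open(etag_path, "w") as f:
                      f.write(etag)
      except urllib.error.HTTPError as e:
          if e.code != 304:   # 304 Not Modified: keep the cached index
              raise

But as noted, that only avoids re-downloading an identical copy; it does
not give you an efficient "one hash was added" delta.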

> Wouldn't it be better if Git collected all the hashes that it
> currently needs and then asked the external ODBs if they have them?

I think you're going to run into latency problems when git wants to ask
"do we have this object" and expects the answer to be no. You wouldn't
want a network request for each.

And I think it would be quite complex to teach all operations to work on
a promise-like system where the answer to "do we have it" might be
"maybe; check back after you've figured out the whole batch of hashes
you're interested in".

> >  - "<command> get <sha1>": the command should then read from the
> > external ODB the content of the object corresponding to <sha1> and
> > output it on stdout.
> > 
> >  - "<command> put <sha1> <size> <type>": the command should then read
> > from stdin an object and store it in the external ODB.
> 
> Based on my experience with Git clean/smudge filters I think this kind 
> of single-shot protocol will be a performance bottleneck as soon as 
> people store more than 1000 files in the external ODB.
> Maybe you can reuse my "filter process protocol" (edcc858) here?

Yeah. This interface comes from my original proposal, which used the
rationale "well, the files are big, so process startup shouldn't be a
big deal". And I don't think I wrote it down, but an implicit rationale
was "it seems to work for LFS, so it should work here too". But of
course LFS hit scaling problems, and so would this. It was one of the
reasons I was interested in making sure your filter protocol could be
used as a generic template, and I think we would want to do something
similar here.
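
To give an idea of the shape, a long-running odb helper could reuse the
same pkt-line framing. Everything below is hypothetical (the
"git-odb-client"/"git-odb-server" strings and the capability names are
invented); only the framing matches your filter protocol:

  import sys

  def write_pkt(stream, text):
      # pkt-line: 4 hex digits of total length, then the payload.
      payload = text.encode() + b"\n"
      stream.write(("%04x" % (len(payload) + 4)).encode() + payload)

  def write_flush(stream):
      stream.write(b"0000")
      stream.flush()

  def read_pkt(stream):
      size = stream.read(4)
      if size == b"0000":
          return None          # flush packet
      n = int(size, 16) - 4
      return stream.read(n).rstrip(b"\n").decode()

  def handshake(stdin=sys.stdin.buffer, stdout=sys.stdout.buffer):
      # Read git's greeting up to the flush packet, then announce
      # ourselves and the commands we can serve from this one process.
      assert read_pkt(stdin) == "git-odb-client"
      assert read_pkt(stdin) == "version=1"
      while read_pkt(stdin) is not None:
          pass
      write_pkt(stdout, "git-odb-server")
      write_pkt(stdout, "version=1")
      write_flush(stdout)
      for cap in ("capability=have", "capability=get", "capability=put"):
          write_pkt(stdout, cap)
      write_flush(stdout)

After the handshake, "have"/"get"/"put" requests would flow over the
same two pipes, amortizing the process startup across the whole batch.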

> > * Transfer
> > 
> > To transfer information about the blobs stored in an external ODB,
> > some special refs, called "odb refs", similar to replace refs, are
> > used.
> > 
> > For now there should be one odb ref per blob. Each ref name should be
> > refs/odbs/<odbname>/<sha1> where <sha1> is the sha1 of the blob stored
> > in the external odb named <odbname>.
> > 
> > These odb refs should all point to a blob that should be stored in the
> > Git repository and contain information about the blob stored in the
> > external odb. This information can be specific to the external odb.
> > The repos can then share this information using commands like:
> > 
> > `git fetch origin "refs/odbs/<odbname>/*:refs/odbs/<odbname>/*"`
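
For concreteness, the helper side of that scheme needs only plain
plumbing; roughly something like this (a sketch, with a made-up
metadata format):

  import subprocess

  def publish_odb_ref(odbname, blob_sha1, metadata):
      # Store the odb-specific information as a blob in the repository...
      info = subprocess.run(
          ["git", "hash-object", "-w", "--stdin"],
          input=metadata.encode(), capture_output=True, check=True
      ).stdout.decode().strip()
      # ...and point refs/odbs/<odbname>/<sha1> at it.
      subprocess.run(
          ["git", "update-ref",
           "refs/odbs/%s/%s" % (odbname, blob_sha1), info],
          check=True)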

I'd worry about scaling this part. Traditionally our refs storage does
not scale very well.

-Peff
