Jeff King <p...@peff.net> writes:

> On Mon, Jul 24, 2017 at 02:58:38PM +1000, Andrew Ardill wrote:
>
>> On 24 July 2017 at 13:45, Farshid Zavareh <fhzava...@gmail.com> wrote:
>> > I'll probably test this myself, but would modifying and committing a 4GB
>> > text file actually add 4GB to the repository's size? I anticipate that it
>> > won't, since Git keeps track of the changes only, instead of storing a copy
>> > of the whole file (whereas this is not the case with binary files,
>> > hence the need for LFS).
>> 
>> I decided to do a little test myself. I added three versions of the
>> same data set (sometimes slightly different cuts of the parent data
>> set, which I don't have), each between 2 and 4GB in size.
>> Each time I added a new version it added ~500MB to the repository, and
>> operations on the repository took 35-45 seconds to complete.
>> Running `git gc` compressed the objects fairly well, saving ~400MB of
>> space. I would imagine that even more space would be saved
>> (proportionally) if there were a lot more similar files in the repo.
>
> Did you tweak core.bigfilethreshold? Git won't actually try to find
> deltas on files larger than that (500MB by default). So you might be
> seeing just the effects of zlib compression, and not deltas.
>
> You can always check the delta status after a gc by running:
>
>   git rev-list --objects --all |
>   git cat-file --batch-check='%(objectsize:disk) %(objectsize) %(deltabase) %(rest)'
>
> That should give you a sense of how much you're saving due to zlib (by
> comparing the first two numbers for a copy that isn't a delta; i.e.,
> with an all-zeros delta base) and how much due to deltas (how much
> smaller the first number is for an entry that _is_ a delta).
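
(A side note for anyone reproducing Andrew's test: a minimal way to
redo the repack with the threshold raised past the file sizes, where
the 5g value is only an illustrative choice, would be

  git -c core.bigFileThreshold=5g gc --aggressive

after which the pipeline above can be re-run to see which copies
became deltas.)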

In addition to that, people need to take into account that "binary
vs text" is only a secondary criterion when considering how
effectively our deltifying algorithm works on their data.

We use the same xdelta algorithm for both, and it is oblivious to
line breaks.  Take two pairs of input files (T1, T2) and (B1, B2),
where T1 and B1 are of comparable sizes, T2 and B2 are of comparable
sizes, and the change that turns T1 into T2 (e.g. copy byte range
X-Y of T1 to the byte range starting at offset O of T2, insert this
literal byte string of length L, etc.) is comparable in size to the
change that turns B1 into B2 (i.e. the X-Ys, Os and Ls are similar).
Then, even when the T's are text and the B's are binary, you should
get a similarly sized delta representing T2 against T1 as the one
representing B2 against B1.
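
A quick way to see this in action (a rough sketch; the file names
and sizes are arbitrary) is to commit a text payload and a binary
payload of similar size, apply the same small edit to each, and
compare the resulting on-disk sizes:

  git init delta-test && cd delta-test
  head -c 1000000 /dev/urandom >payload.bin          # "binary"
  head -c 1000000 /dev/urandom | base64 >payload.txt # "text"
  git add . && git commit -m v1
  printf 'the same small edit\n' | tee -a payload.bin >>payload.txt
  git add . && git commit -m v2
  git gc
  git rev-list --objects --all |
  git cat-file --batch-check='%(objectsize:disk) %(objectsize) %(deltabase) %(rest)'

Both second versions should show up as small deltas against their
respective first versions, text or not.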

The reason why a typical "binary" file does not delta well is not
inherent in its "binary"-ness but lies elsewhere.  Tools that
produce "binary" files tend not to care about preserving the
original bytes and confining their changes to a limited part of the
file; that is what makes their output delta poorly across versions.

Exceptions include editing exif data in jpeg files without changing
the actual image bits, or editing id3 data in mp3 files without
changing the actual sound bits.  Binary files delta very well with
Git across these kinds of operations, as the "edit" does not rewrite
everything wholesale but is confined to a small area of the file.
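
For instance (a hypothetical check, assuming exiftool is installed
and photo.jpg is any jpeg you have lying around):

  git init photo-test && cd photo-test
  cp /path/to/photo.jpg . && git add . && git commit -m original
  exiftool -overwrite_original -Comment='metadata-only edit' photo.jpg
  git add . && git commit -m 'edit exif only'
  git gc
  git rev-list --objects --all |
  git cat-file --batch-check='%(objectsize:disk) %(objectsize) %(deltabase) %(rest)'

The second blob should appear as a tiny delta against the first,
even though the file is thoroughly "binary".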
