Re: [PATCH/RFC 1/1] gc --auto: exclude the largest giant pack in low-memory config

2018-03-05 Thread Duy Nguyen
On Mon, Mar 5, 2018 at 9:00 PM, Ævar Arnfjörð Bjarmason wrote:
>
> On Thu, Mar 01 2018, Nguyễn Thái Ngọc Duy jotted:
>
>> pack-objects can be a big memory hog, especially on large repos;
>> everybody knows that. The suggestion to stick a .keep file on the
>> largest pack to avoid this problem has also been known for a long
>> time.
>>
>> Let's apply the suggestion automatically instead of waiting for
>> people to come to the Git mailing list and get the advice. When a
>> certain condition is met, gc --auto creates a .keep file temporarily
>> before repack is run, then removes it afterward.
>>
>> gc --auto does this based on an estimate of pack-objects memory
>> usage and whether that fits in one third of system memory (the
>> assumption here is a desktop environment where many other
>> applications are running).
>>
>> Since the estimate may be inaccurate and that 1/3 threshold is
>> arbitrary, give the user finer control over this mechanism as well:
>> if the largest pack is larger than gc.bigPackThreshold, it's kept.
>
> This is very promising. Saves lots of memory in my ad-hoc testing
> of adding a *.keep file to an in-house repo.

The good news for you is that when we run an external rev-list on
top of this, memory consumption looks even better (and I think peak
memory should be a bit lower too, but I'll need to verify that).
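
(By "external" I mean driving the traversal as a separate process,
roughly the classic pipeline

    git rev-list --objects --all | git pack-objects <pack-prefix>

modulo the exact options, so the rev-list bookkeeping lives and dies
in its own address space instead of inside pack-objects.)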

>> + if (big_pack_threshold)
>> + return pack->pack_size >= big_pack_threshold;
>> +
>> + /* First we have to scan through at least one pack */
>> + mem_want = pack->pack_size + pack->index_size;
>> + /* then pack-objects needs lots more for book keeping */
>> + mem_want += sizeof(struct object_entry) * nr_objects;
>> + /*
>> +  * internal rev-list --all --objects takes up some memory too,
>> +  * let's say half of it is for blobs
>> +  */
>> + mem_want += sizeof(struct blob) * nr_objects / 2;
>> + /*
>> +  * and the other half is for trees (commits and tags are
>> +  * usually insignificant)
>> +  */
>> + mem_want += sizeof(struct tree) * nr_objects / 2;
>> + /* and then obj_hash[], underestimated in fact */
>> + mem_want += sizeof(struct object *) * nr_objects;
>> + /*
>> +  * read_sha1_file() (either at delta calculation phase, or
>> +  * writing phase) also fills up the delta base cache
>> +  */
>> + mem_want += delta_base_cache_limit;
>> + /* and of course pack-objects has its own delta cache */
>> + mem_want += max_delta_cache_size;
>
> I'm not familiar enough with this part to say, but isn't this
> assuming a lot about the distribution of objects, in a way that
> will cause us not to repack in some pathological cases?

Yeah, this assumes a "normal" case. When the estimate is really off,
we either exclude the base pack or repack everything unnecessarily,
but we always repack. A wrong decision here can only affect
performance, not correctness.
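
To give a rough sense of scale (ballpark 64-bit struct sizes, not
exact figures): with a 10M-object pack, the per-object terms alone
come to something like

    10M * (~100 bytes of object_entry
           + ~50 bytes of blob/tree struct on average
           + 8 bytes of obj_hash slot)  ~=  1.5GB

on top of the mapped pack itself and the two delta caches. The exact
constants don't matter much; the estimate scales linearly with the
object count, so an unusual object mix only shifts the constant.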

There's one case I probably should address, though. This "exclude
the base pack" approach will leave two packs in the end, one big and
one small. But once the second pack is getting as big as the first
one, it's time we merged the two into one.
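
Something like the check below is what I have in mind (a sketch
only; the helper name is made up and the cutoff is arbitrary):

    static int should_merge_base_pack(struct packed_git *base,
                                      struct packed_git *second)
    {
            /*
             * "Getting as big as the first one" needs a concrete
             * cutoff; half the base pack's size is one option.
             */
            return second && second->pack_size >= base->pack_size / 2;
    }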

> Probably worth documenting...
>
>> + /* Only allow 1/3 of memory for pack-objects */
>> + mem_have = total_ram() / 3;
>
> Would be great to have this be a configurable variable, so you
> could set it to e.g. 33% (as here), 50%, etc.

Hmm, isn't gc.bigPackThreshold enough? I mean, in a controlled
environment you probably already know how much RAM is available, and
much of this estimation is based on pack size (well, the number of
objects in the pack) anyway, so you could avoid all these heuristics
by saying "when the base pack is larger than 1GB, always exclude it
from repack". The estimation should only be needed when people do
not configure anything (and still expect reasonable defaults). Or
when you plan multiple 'gc' runs on the same machine?
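
E.g. (illustrative value only):

    git config gc.bigPackThreshold 1g

and the "return pack->pack_size >= big_pack_threshold" early-out in
the quoted hunk short-circuits the whole estimation.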
-- 
Duy


Re: [PATCH/RFC 1/1] gc --auto: exclude the largest giant pack in low-memory config

2018-03-05 Thread Ævar Arnfjörð Bjarmason

On Thu, Mar 01 2018, Nguyễn Thái Ngọc Duy jotted:

> pack-objects can be a big memory hog, especially on large repos;
> everybody knows that. The suggestion to stick a .keep file on the
> largest pack to avoid this problem has also been known for a long
> time.
>
> Let's apply the suggestion automatically instead of waiting for
> people to come to the Git mailing list and get the advice. When a
> certain condition is met, gc --auto creates a .keep file temporarily
> before repack is run, then removes it afterward.
>
> gc --auto does this based on an estimate of pack-objects memory
> usage and whether that fits in one third of system memory (the
> assumption here is a desktop environment where many other
> applications are running).
>
> Since the estimate may be inaccurate and that 1/3 threshold is
> arbitrary, give the user finer control over this mechanism as well:
> if the largest pack is larger than gc.bigPackThreshold, it's kept.

This is very promising. Saves lots of memory in my ad-hoc testing
of adding a *.keep file to an in-house repo.

> + if (big_pack_threshold)
> + return pack->pack_size >= big_pack_threshold;
> +
> + /* First we have to scan through at least one pack */
> + mem_want = pack->pack_size + pack->index_size;
> + /* then pack-objects needs lots more for book keeping */
> + mem_want += sizeof(struct object_entry) * nr_objects;
> + /*
> +  * internal rev-list --all --objects takes up some memory too,
> +  * let's say half of it is for blobs
> +  */
> + mem_want += sizeof(struct blob) * nr_objects / 2;
> + /*
> +  * and the other half is for trees (commits and tags are
> +  * usually insignificant)
> +  */
> + mem_want += sizeof(struct tree) * nr_objects / 2;
> + /* and then obj_hash[], underestimated in fact */
> + mem_want += sizeof(struct object *) * nr_objects;
> + /*
> +  * read_sha1_file() (either at delta calculation phase, or
> +  * writing phase) also fills up the delta base cache
> +  */
> + mem_want += delta_base_cache_limit;
> + /* and of course pack-objects has its own delta cache */
> + mem_want += max_delta_cache_size;

I'm not familiar enough with this part to say, but isn't this
assuming a lot about the distribution of objects, in a way that
will cause us not to repack in some pathological cases?

Probably worth documenting...

> + /* Only allow 1/3 of memory for pack-objects */
> + mem_have = total_ram() / 3;

Would be great to have this be a configurable variable, so you
could set it to e.g. 33% (as here), 50%, etc.
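
E.g. something like this (a hypothetical knob, not in the patch;
the name is up for bikeshedding):

    /* hypothetical: gc.bigPackMemoryPercent, default matches the 1/3 above */
    static unsigned long big_pack_mem_percent = 33;

    /* then, instead of the hard-coded division: */
    mem_have = total_ram() * big_pack_mem_percent / 100;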


Re: [PATCH/RFC 1/1] gc --auto: exclude the largest giant pack in low-memory config

2018-03-01 Thread Duy Nguyen
On Fri, Mar 2, 2018 at 1:14 AM, Junio C Hamano wrote:
> Nguyễn Thái Ngọc Duy writes:
>
>> pack-objects can be a big memory hog, especially on large repos;
>> everybody knows that. The suggestion to stick a .keep file on the
>> largest pack to avoid this problem has also been known for a long
>> time.
>
> Yup, but note that it is not "largest" per se.  The thing being
> large is a mere consequence of its being the base pack that holds
> the bulk of the older parts of the history (e.g. the one that you
> obtained via the initial clone).

Thanks, "base pack" sounds much better. I was having trouble with
wording because I didn't nail this one down.

>> Let's apply the suggestion automatically instead of waiting for
>> people to come to the Git mailing list and get the advice. When a
>> certain condition is met, gc --auto creates a .keep file temporarily
>> before repack is run, then removes it afterward.
>>
>> gc --auto does this based on an estimate of pack-objects memory
>> usage and whether that fits in one third of system memory (the
>> assumption here is a desktop environment where many other
>> applications are running).
>>
>> Since the estimate may be inaccurate and that 1/3 threshold is
>> arbitrary, give the user finer control over this mechanism as well:
>> if the largest pack is larger than gc.bigPackThreshold, it's kept.
>
> If this is a transient mechanism during a single gc session, it
> would be far more preferable if we could find a way to do this
> without actually having a .keep file on the filesystem.

That was my first attempt: manipulating packed_git::pack_keep inside
pack-objects. Then my whole git.git was gone. I was scared off, so I
did this instead.

I've learned my lesson, though (never test dangerous operations on
your own worktree!) and will try the pack_keep approach again _if_
this gc --auto still sounds like a good direction to go.
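
For the record, the in-core version would be roughly this (a sketch
only, against the code as it was before this patch; error handling
omitted):

    /* in pack-objects: mark the base pack kept without touching disk */
    static void keep_base_pack_in_core(void)
    {
            struct packed_git *p, *largest = NULL;

            prepare_packed_git();
            for (p = packed_git; p; p = p->next)
                    if (!largest || p->pack_size > largest->pack_size)
                            largest = p;
            if (largest)
                    largest->pack_keep = 1; /* as if a .keep file existed */
    }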
-- 
Duy


Re: [PATCH/RFC 1/1] gc --auto: exclude the largest giant pack in low-memory config

2018-03-01 Thread Junio C Hamano
Nguyễn Thái Ngọc Duy writes:

> pack-objects can be a big memory hog, especially on large repos;
> everybody knows that. The suggestion to stick a .keep file on the
> largest pack to avoid this problem has also been known for a long
> time.

Yup, but note that it is not "largest" per se.  The thing being
large is a mere consequence of its being the base pack that holds
the bulk of the older parts of the history (e.g. the one that you
obtained via the initial clone).

> Let's apply the suggestion automatically instead of waiting for
> people to come to the Git mailing list and get the advice. When a
> certain condition is met, gc --auto creates a .keep file temporarily
> before repack is run, then removes it afterward.
>
> gc --auto does this based on an estimate of pack-objects memory
> usage and whether that fits in one third of system memory (the
> assumption here is a desktop environment where many other
> applications are running).
>
> Since the estimate may be inaccurate and that 1/3 threshold is
> arbitrary, give the user finer control over this mechanism as well:
> if the largest pack is larger than gc.bigPackThreshold, it's kept.

If this is a transient mechanism during a single gc session, it
would be far more preferable if we could find a way to do this
without actually having a .keep file on the filesystem.
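
E.g. by plumbing the decision through the command line rather than
the filesystem, roughly (the option name and the helpers here are
illustrative, not existing code):

    /* gc would spawn: git repack ... --keep-pack=<base-pack-name> */
    /* pack-objects, while scanning packs, would then do: */
    if (string_list_has_string(&keep_pack_list, pack_basename(p)))
            p->pack_keep = 1; /* in-core only; nothing written to disk */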