On Wed, Oct 10 2018, SZEDER Gábor wrote:

> On Thu, Oct 04, 2018 at 11:09:58PM -0700, Junio C Hamano wrote:
>> SZEDER Gábor <szeder....@gmail.com> writes:
>>
>> >>     git-gc - Cleanup unnecessary files and optimize the local repository
>> >>
>> >> Creating these indexes like the commit-graph falls under "optimize the
>> >> local repository",
>> >
>> > But it doesn't fall under "cleanup unnecessary files"; if
>> > anything, the commit-graph file is itself such an unnecessary
>> > file, since, strictly speaking, it's purely an optimization.
>>
>> I won't be actively engaged in this discussion soon, but I must say
>> that "git gc" doing "garbage collection" is merely an implementation
>> detail of optimizing the repository for further use.  And from that
>> point of view, what needs to be updated is the synopsis of the
>> git-gc doc.  It states "X and Y" above, but it actually is "Y by
>> doing X and other things".
>
> Well, then perhaps the name of the command should be updated, too, to
> better reflect what it actually does...

I don't disagree, but between "git gc" being a longstanding thing called
not just by us, but also by third parties expecting it to be a general
swiss army knife for "make the repo better" (so a new tool would mean
they'd need to update their code), and the general name bikeshedding
that would follow, I think it's best if we just proceed for the sake of
argument with the assumption that none of us find the name confusing in
this context.

At least that's the discussion I'm interested in, i.e. whether it makes
conceptual / structural sense for this stuff to live in the same place /
function in the code. We can always argue about the name of the function
as a separate topic.

>> I understand your "by definition there is no garbage immediately
>> after clone" position, and also I would understand if you find it
>> (perhaps philosophically) disturbing that "git clone" may give users
>> a suboptimal repository that immediately needs optimizing [*1*].
>>
>> But that bridge was crossed a long time ago, ever since pack transfer
>> was invented.  The data source sends only the pack data stream, and
>> the receiving end is responsible for spending cycles to build .idx
>> file.  Theoretically, .pack should be all that is needed---you
>> should be able to locate any necessary object by parsing the .pack
>> file every time you open it, and .idx is mere optimization.  You can
>> think of the .midx and graph files the same way.
>
> I don't think this is a valid comparison, because, practically, Git
> just didn't work after I deleted all pack index files.  So while they
> can be easily (re)generated, they are essential to make pack files
> usable.  The commit-graph and .midx files, however, can be safely
> deleted, and everything keeps working as before.

For "things that would run in 20ms now run in 30 seconds" (actual
numbers on a repo I have) values of "keeps working".

So I think this line gets blurred somewhat. In practice, if you're
expecting the graph to be there to run the sort of commands that most
benefit from it, then it's essential that it be generated, not some nice
optional extra.
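
To make that concrete, here's roughly how to see the difference for
yourself (a sketch; the commands are real but the timings above are
from my repo, yours will vary):

    # enable reading the graph, then generate it
    git config core.commitGraph true
    git commit-graph write --reachable
    time git log --topo-order --oneline >/dev/null  # fast with the graph

    # delete it again; everything still works, just much slower
    rm .git/objects/info/commit-graph
    time git log --topo-order --oneline >/dev/null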

> OTOH, this is an excellent comparison, and I do think of the .midx and
> graph files the same way as the pack index files.  During a clone, the
> pack index file isn't generated by running a separate 'git gc
> (--auto)', but by clone (or fetch-pack?) running 'git index-pack'.
> The way I see it that should be the case for these other files as well.
>
> And it is much simpler, shorter, and cleaner to either run 'git
> commit-graph ...' or even to call write_commit_graph_reachable()
> directly from cmd_clone(), than to bolt another option and config
> variable onto 'git gc' [1] to coax it into some kind of "after clone"
> mode that it shouldn't be doing in the first place.  At least for
> now, so that when we eventually get as far ...
>
>> I would not be surprised by a future in which the initial index-pack
>> that is responsible for receiving the incoming pack stream and
>> storing that in .pack file(s) while creating corresponding .idx
>> file(s) becomes also responsible for building .midx and graph files
>> in the same pass, or at least smaller number of passes.  Once we
>> gain experience and confidence with these new auxiliary files, that
>> ought to happen naturally.  And at that point, we won't be having
>> this discussion---we'd all happily run index-pack to receive the
>> pack data, because that is pretty much the fundamental requirement
>> to make use of the data.
>
> ... that what you wrote here becomes a reality (and I fully agree that
> this is what we should ultimately aim for), then we won't have that
> option and config variable still lying around and requiring
> maintenance because of backwards compatibility.

We'll still have the use case of wanting to turn on
gc.writeCommitGraph=true or equivalent and wanting previously-cloned
repositories to "catch up" and get a commit graph ASAP (but not do a
full repack).

This is why my patch tries to unify those two codepaths, i.e. so I can
turn this on and know that on the next "git fetch" we'll have the graph
(without also triggering a full repack when one isn't needed).
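
In other words the workflow I want to support, with the config knob
this series proposes, is just:

    # opt in once, in an existing clone
    git config gc.writeCommitGraph true

    # the "gc --auto" run at the end of the next fetch then writes
    # the graph, without requiring a full repack
    git fetch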

> 1 - https://public-inbox.org/git/87in2hgzin....@evledraar.gmail.com/
>
>> [Footnote]
>>
>> *1* Even without considering this recent invention of auxiliary
>>     files, cloning from a sloppily packed server whose primary focus
>>     is to avoid spending cycles by not computing better deltas will
>>     give the cloner a suboptimal repository.  If we truly want to
>>     have an optimized repository ready to be used after cloning, we
>>     should run an equivalent of "repack -a -d -f" immediately after
>>     "git clone".
>
> I noticed a few times that I got surprisingly large packs from GitHub,
> e.g. there is over 70% size difference between --single-branch cloning
> v2.19.0 from GitHub and from my local clone or from kernel.org (~95MB
> vs. ~55MB vs. ~52MB).  After running 'git repack -a -d -f' they all end
> up at ~65MB, which is a nice size reduction for the clone from GitHub,
> but the others just gained 10-13 more MB.
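
For concreteness, that comparison is roughly the following (sizes as
quoted above; they'll vary with how the server happened to pack its
objects):

    git clone --single-branch --branch v2.19.0 \
        https://github.com/git/git.git   # ~95MB of pack data
    du -sh git/.git/objects/pack
    git -C git repack -a -d -f           # recompute deltas locally
    du -sh git/.git/objects/pack         # ~65MB afterwards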

To me this sounds like even more reason why we shouldn't be splitting
up the entry point for the "does the repo look shitty? Fix it!" function
(currently called git-gc).

We might get such crappy packs from a clone, or from a fetch, or maybe
in the future when e.g. "git add" or "git commit" generates a pack
instead of a bunch of loose objects; in all of those cases we'd want to
immediately kick off something in the background to optimize /
consolidate them.

So instead of having clone/commit/add/whatever call some custom
function(s) to just do the things they *think* they want, we just call
"gc --auto", because that entry point needs to eventually know how to
"recover" from any of those states without any prior knowledge or hints
about what just happened.

This is why I'm calling "gc --auto" from clone in this WIP series, with
the only special sauce being "if you have stuff to do, don't background
yourself", because not having a commit graph at all after a clone is
just one special case of many where we have no commit graph and want to
have one made ASAP (e.g. someone rm'd it, or the "I want a commit graph"
config variable just got turned on).
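
As it happens the "don't background yourself" part already has a config
equivalent, so what clone ends up doing in this series amounts to
roughly this, expressed as a command rather than the actual in-process
call from cmd_clone():

    # roughly the post-clone step in this WIP series
    git -c gc.autoDetach=false gc --auto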

So since we need all those smarts in some function anyway, let's just
have one entry point to that logic. Running the checks it doesn't need
(e.g. too_many_loose_objects() just after a clone) is a trivial cost,
and if we're willing to pay it we have less complexity to worry about,
because we avoid a proliferation of entry points into what are now
subsets of the GC code.
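
And paying that trivial cost looks like e.g.:

    # on a fresh clone the auto checks are cheap no-ops; we pay a few
    # stat() calls and a loose-object count estimate, then move on
    git gc --auto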
