On 4/10/2018 10:51 PM, Junio C Hamano wrote:
Derrick Stolee <dsto...@microsoft.com> writes:

+               if ((*list)->generation != GENERATION_NUMBER_INFINITY) {
+                       if ((*list)->generation > GENERATION_NUMBER_MAX)
+                               die("generation number %u is too large to store in commit-graph",
+                                   (*list)->generation);
+                       packedDate[0] |= htonl((*list)->generation << 2);
+               }

How serious do we want this feature to be?  On one extreme, we could
be irresponsible and say it will be a problem for our descendants in
the future if their repositories have more than a billion pearls on
a single strand, and the above certainly is a reasonable way to punt.
Those who actually encounter the problem will notice by Git dying
somewhere rather deep in the callchain.

Or we could say Git actually does support a history that is
arbitrarily long, even though such a deep portion of history will
not benefit from having generation numbers in commit-graph.

I've been assuming that our stance is the latter and that is why I
made noises about overflowing 30-bit generation field in my review
of the previous step.

In case we want to do the "we know this is very large, but we do not
know the exact value", we may actually want a mode where we can
pretend that GENERATION_NUMBER_MAX is set to quite low (say 256) and
make sure that the code to handle overflow behaves sensibly.

I agree. I wonder how we can effectively expose this value to a test; it's probably not sufficient to test manually with compiler flags ("-DGENERATION_NUMBER_MAX=8").

+       for (i = 0; i < nr_commits; i++) {
+               if (commits[i]->generation != GENERATION_NUMBER_INFINITY &&
+                   commits[i]->generation != GENERATION_NUMBER_ZERO)
+                       continue;
+               commit_list_insert(commits[i], &list);
+               while (list) {
+                       /* ... (loop body trimmed in quoting) ... */
+               }
+       }
So we go over the list of commits just _once_ and make sure each of
them gets the generation assigned correctly by (conceptually
recursively, but iteratively in the implementation, using a commit
list) making sure that all its parents have generation assigned and
computing the generation for the commit, before moving to the next
one.  Which sounds correct.

Yes, we compute the generation number of a commit exactly once. We use the list as a stack so we do not have recursion limits during our depth-first search (DFS). We rely on the object cache to ensure we store the computed generation numbers, and computed generation numbers provide termination conditions to the DFS.
