On Mon, Apr 29 2019, Derrick Stolee wrote:
> On 4/27/2019 9:06 AM, Ævar Arnfjörð Bjarmason wrote:
>>
>> There's still cases left where we'll exit early, e.g. if you do:
>>
>> $ git diff -U1
>> diff --git a/commit-graph.c b/commit-graph.c
>> index 66865acbd7..63773764ce 100644
>> --- a/commit-graph.c
>> +++ b/commit-graph.c
>> @@ -1074,3 +1074,3 @@ void write_commit_graph(const char *obj_dir,
>> chunk_offsets[1] = chunk_offsets[0] + GRAPH_FANOUT_SIZE;
>> - chunk_offsets[2] = chunk_offsets[1] + hashsz * commits.nr;
>> + chunk_offsets[2] = chunk_offsets[0] + hashsz * commits.nr;
>> chunk_offsets[3] = chunk_offsets[2] + (hashsz + 16) * commits.nr;
>>
>> Which is obviously bad, but something I encountered while hacking up
>> [1]. Even with this patch we'll still die hard, just as before, on:
>>
>> $ git status
>> fatal: invalid parent position 1734910766
>> $
>
> I really appreciate you digging in deep into these kinds of issues. You
> seem to be hitting corrupted commit-graph files more often than we are
> (in VFS for Git world).

FWIW I've never encountered any of these in the wild. I just started
poking at this in 2ac138d568 ("commit-graph: fix segfault on e.g. "git
status"", 2019-03-25) because I was looking at the commit graph, running
its tests with -d, and we'd segfault previously on e.g. "git status" in
our own graph corruption tests.

> However, we should be _very careful_ when turning some of these errors
> to warnings. At the very least, we should do some high-level planning
> for how to handle this case.

Indeed. I should have been explicit: I don't think it's sane to do
anything except return NULL up the stack and say "the graph is screwed,
we can't use it" when initially parsing the graph and its headers, but
reading on...

> The biggest issue is that we have some logic that is run after a call to
> generation_numbers_enabled(), such as the `git rev-list --topo-order`
> logic, that relies on the commit-graph for correctness. If we output a
> warning and then stop using the commit-graph, then we will start having
> commits with finite generation pointing to commits with infinite generation.
>
> Perhaps, with some care, we can alert the algorithm to change the "minimum
> generation" that limits how far we dequeue the priority-queue. Changing it
> to zero will cause the algorithm to behave like the old algorithm.
>
> But, having an algorithm that usually takes 0.1 seconds suddenly take 10+
> seconds also violates some expectations.
>
> Q: How should we handle a detectably-invalid commit-graph?

...I don't think we need to do any paranoid algorithm fallback in
general. As you point out, that's going to be a PITA in the cases where
we read the actual data in the graph.

> I think most of your patches have done a good job so far of detecting
> an invalid header, and responding by ignoring the commit-graph. This case
> of a detectable error in the chunk data itself is not something we can
> check on the first load without serious performance issues.
>
> I hope we can decide on a good solution.

...OK, so this is one of those PITA cases. I think it's fine to just
leave it.

Although maybe we'd still want to be more paranoid in O(n) cases like
"--contains" or "--topo-order" than in cases that surely just "look up
my immediate commit data", like "status" dying on this particular
error.

But I think it's fine to just decide to do nothing about this. I mainly
just wanted to make a note to myself & CC the list in case there was
interest...