> On 15 Nov 2016, at 19:03, Junio C Hamano <gits...@pobox.com> wrote:
> 
> Lars Schneider <larsxschnei...@gmail.com> writes:
> 
>>> The filter itself would need to be aware of parallelism
>>> if it lives for multiple objects, right?
>> 
>> Correct. This way Git doesn't need to deal with threading...
> 
> I think you need to be careful about three things (at least; there
> may be more):
> 
> ...
> 
> * Done naively, it will lead to unmaintainable code, like this:
> 
>  ...

Hi Junio,

I started to work on the "delayed responses to Git clean/smudge filter
requests" implementation and I am looking for a recommendation regarding 
code maintainability:

Deep down in convert.c:636 `apply_multi_file_filter()` [1], Git learns
from the external filter process that the filter response is not yet
available. I need to transport this information quite a few levels up
the call stack.


# Option 1
I could do this by explicitly passing a pointer such as "int *is_delayed"
down to the function. This would mean updating the signatures of all
functions on the way through the call stack (see the sketch after this list):

int apply_multi_file_filter()
int apply_filter()
int convert_to_working_tree_internal()
int convert_to_working_tree()
...
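
A toy sketch of what I have in mind (prototypes and logic heavily
simplified; the real functions take different parameters, and the
"not ready" detection would of course come from the filter protocol,
not from the path name):

#include <stdio.h>

/*
 * Toy stand-in for "the external process said the result is not ready
 * yet"; in the real code this information would come from the filter
 * protocol.
 */
static int response_not_ready(const char *path)
{
	return path[0] == 'd';
}

static int apply_multi_file_filter(const char *path, int *is_delayed)
{
	/* ... run the external filter process ... */
	if (response_not_ready(path))
		*is_delayed = 1;
	return 1;	/* 1 = the filter did not fail (it may be delayed) */
}

static int apply_filter(const char *path, int *is_delayed)
{
	return apply_multi_file_filter(path, is_delayed);
}

static int convert_to_working_tree_internal(const char *path, int *is_delayed)
{
	return apply_filter(path, is_delayed);
}

static int convert_to_working_tree(const char *path, int *is_delayed)
{
	return convert_to_working_tree_internal(path, is_delayed);
}

int main(void)
{
	int is_delayed = 0;
	int ok = convert_to_working_tree("delayed.bin", &is_delayed);

	printf("ok=%d is_delayed=%d\n", ok, is_delayed);
	return 0;
}
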

# Option 2
All these functions pass through an "int" return value that communicates
whether the filter succeeded or failed. I could define a special return
value to communicate the third state: delayed.
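
A toy sketch with a made-up FILTER_DELAYED constant (name and value are
just for illustration) would keep all the signatures untouched:

#include <stdio.h>

/*
 * The existing functions return an int that says whether the filter
 * succeeded (1) or failed (0). FILTER_DELAYED is a made-up third value
 * for illustration; it does not exist in convert.c today.
 */
#define FILTER_DELAYED 2

static int apply_multi_file_filter(const char *path)
{
	/* ... run the external filter process ... */
	if (path[0] == 'd')	/* toy stand-in for "result not ready yet" */
		return FILTER_DELAYED;
	return 1;	/* success */
}

static int apply_filter(const char *path)
{
	return apply_multi_file_filter(path);
}

static int convert_to_working_tree(const char *path)
{
	/* callers can pass the value through unchanged */
	return apply_filter(path);
}

int main(void)
{
	int ret = convert_to_working_tree("delayed.bin");

	if (ret == FILTER_DELAYED)
		printf("filter response is delayed\n");
	else
		printf("filter %s\n", ret ? "succeeded" : "failed");
	return 0;
}
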


Which way do you think is better from a maintenance point of view?
I prefer option 2, but I fear that such "special" values could confuse
future readers of the code.

Thanks,
Lars


[1] https://github.com/git/git/blob/e2b2d6a172b76d44cb7b1ddb12ea5bfac9613a44/convert.c#L673-L777
