Brandon Williams <bmw...@google.com> writes:

> Per some of the discussion online and off, I locally broke up the question
> and answer and I wasn't very thrilled with the outcome for a number of
> reasons.
>
> 1. The API is more complex....
> 2. Performance hit....
> ...
> Given the above, v3 is a reroll of the same design as in v2.  This is a good
> milestone in improving the attribute system as it achieves the goal of making
> the attribute subsystem thread-safe (ie multiple callers can be executing
> inside the attribute system at the same time) and will enable a future series
> to allow pathspec code to call into the attribute system.
>
> Most of the changes in this revision are cosmetic (variable renames, code
> movement, etc) but there was a memory leak that was also fixed.

I am OK with the patches presented in this round, but let me explain
why I still expect that we would eventually end up splitting the
question & answer into separate data structures before we can truly
go multi-threaded.

A typical application would do

        for path taken from some set:
                do_something(path)

and "do something with path" would be another helper function, which
may do

        do_something(path):
                ask 'text' attribute for the path
                switch on the attribute and do different things

With the original API, the latter would statically allocate an array
of <question, answer> pairs, with an optimization to populate the
<question> part only once, because it is immutable (the codepath is
always and only interested in the 'text' attribute, and interning
the string "text" requires a hash lookup that costs cycles), and
then call the git_check_attr() function with the path.  This
obviously will not work when two threads are calling this helper
function, as both threads want their git_check_attr() to return its
answers into the array, but the <answer> part is shared between the
threads.
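
For concreteness, a caller written against the original API looks
roughly like the sketch below (not a quote of any existing caller);
the static check[] array, which holds both the question and the
answer, is exactly the state the two threads would stomp on:

        #include "cache.h"
        #include "attr.h"

        static void do_something(const char *path)
        {
                static struct git_attr_check check[1];

                if (!check[0].attr)
                        check[0].attr = git_attr("text"); /* intern once */
                if (git_check_attr(path, 1, check))
                        return;
                if (ATTR_TRUE(check[0].value))
                        ; /* treat the contents as text */
                else if (ATTR_FALSE(check[0].value))
                        ; /* treat the contents as binary */
        }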

A naive and inefficient way to split questions and answers is to
have two arrays, allocating the former statically (protected by a
mutex, of course) to avoid paying the interning cost repeatedly,
while allocating the latter (together with some per-invocation
working area, like check_all_attr[]) dynamically on the stack or
heap.  Because do_something() will be called many times, the
per-invocation cost of allocating and initializing the <answer>
part will of course become very high.
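
Continuing the earlier sketch, that naive split might look something
like this (the attr_question and attr_answer types are made up
purely for illustration, but they show where the per-path allocation
cost creeps in):

        /* illustration only; none of these names exist in git.git */
        struct attr_question { const struct git_attr **attr; int nr; };
        struct attr_answer   { const char **value; };

        static struct attr_question question; /* set up once, under a mutex */

        static void do_something(const char *path)
        {
                struct attr_answer answer;

                /* paid again for every single path we look at: */
                answer.value = xcalloc(question.nr, sizeof(*answer.value));

                /* ... have the attribute system fill answer.value ... */

                free(answer.value);
        }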

We could in theory keep the original arrangement of having an
array of <question, answer> pairs and restructure the code to do:

        prepare the <question, answer> array
        for path taken from some set:
                do_something(the array, path)

That way, do_something() does not have to keep allocating,
initializing and destroying the array.

But after looking at the current set of codepaths, before coming to
the conclusion that we need to split the static part that is
specific to the callsite of git_check_attr() from the dynamic part
that is specific to the <callsite, thread> pair, I noticed that
typically the callers that could prepare the array before going
into the loop (which will eventually be spread across multiple
threads) are many levels away in the callchain, and they are not
even aware of which attributes are going to be requested in the
leaf-level helper functions.  In other words, the approach of
hoisting "the <question, answer> array" up the callchain would not
scale.  A caller that loops over paths in the index and checks them
out does not want to know (and we do not want to tell it) exactly
which attributes are involved in the decision
convert_to_working_tree() makes for each path, for example.

So how would we split questions and answers in a way that is not
naive and inefficient?  

I envision that we would allow the attribute subsystem to keep track
of the dynamic part, which receives the answers, holds working
areas like check_all_attr[], and has the equivalent of the "attr
stack", indexed by the <thread-id, callsite> pair (the "callsite"
can be identified by the address of the static part, i.e. the array
of questions that we initialize just once when interning the list
of attribute names for the first time).

The API to prepare and ask for attributes may look like:

        static struct attr_static_part Q;
        struct attr_dynamic_part *D;

        attr_check_init(&Q, "text", ...);
        D = git_attr_check(&Q, path);

where Q contains an array of interned attributes (i.e. questions)
and other immutable things that are unique to this callsite, but can
be shared across multiple threads asking the same question from
here.  As an internal implementation detail, it probably will have a
mutex to make sure that init runs only once.
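
The answers would then come back in D; with an invented D->value[]
field to hold them, the caller might continue along these lines:

        if (ATTR_TRUE(D->value[0]))
                ; /* 'text' is set for this path */
        else if (ATTR_UNSET(D->value[0]))
                ; /* no attribute file said anything about 'text' */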

Then the implementation of git_attr_check(&Q, path) would be:

    - see if there is already a "dynamic part" allocated for the
      current thread asking the question Q.  If there is not,
      allocate one and remember it, so that it can be reused in
      later calls by the same thread; if there is, use that existing
      one.

    - reinitialize the "dynamic part" as needed, e.g. clear the
      equivalent of check_all_attr[], adjust the equivalent of
      attr_stack for the current path, etc.  Just like the current
      code optimizes for the case where the entire program (a single
      thread) asks the same question for paths in traversal order
      (i.e. falling in the same directory), this will optimize for
      the access pattern where each thread asks the same question
      for paths in its own traversal order.

    - do what the current collect_some_attrs() thing does.

And this hopefully won't be as costly as the naive and inefficient
approach.
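
To make the first of those steps concrete, the <thread, callsite>
lookup could be structured along these lines, reusing the Q/D names
from above (everything else is invented for illustration; a real
version would live inside attr.c and probably use a hashmap rather
than a list):

        struct attr_static_part;           /* the Q in the example above */

        struct attr_dynamic_part {
                const struct attr_static_part *origin; /* the Q we answer */
                const char **value;        /* one answer slot per question */
                struct attr_dynamic_part *next;
        };

        /* one list per thread, so the lookup itself needs no locking */
        static __thread struct attr_dynamic_part *dynamic_parts;

        static struct attr_dynamic_part *
        dynamic_part_for(const struct attr_static_part *Q, int nr_questions)
        {
                struct attr_dynamic_part *D;

                for (D = dynamic_parts; D; D = D->next)
                        if (D->origin == Q)
                                return D;  /* same thread, same callsite */

                D = xcalloc(1, sizeof(*D));
                D->origin = Q;
                D->value = xcalloc(nr_questions, sizeof(*D->value));
                D->next = dynamic_parts;
                dynamic_parts = D;
                return D;
        }

git_attr_check() would then reinitialize D's working area for the
current path and fill D->value, much as collect_some_attrs() does
today.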

The reason why I was pushing hard to split the static part and the
dynamic part in our redesign of the API is primarily that I didn't
want to update the API callers twice.  But I'd imagine that your v3
(and your earlier "do not discard the attr stack, but keep it
around, holding its tips in a hashmap for quick reuse") would at
least lay the foundation for the eventual shape of the API, so
let's bite the bullet and accept that we will need to update the
callers again anyway.

Thanks.
