On 9/14/16 1:03 PM, Daniel Gryniewicz wrote:
> Bill has some ideas and some outstanding (although outdated) work on
> consolidating and cleaning up the fridge.  Bill, can you post a short
> description of your ideas on that?
>
Over a year ago, I developed a partial replacement for the thread
fridge, written at the same time that Dan was working on the cache.
This was my part of "napalming the forest".

After discussion, this group decided that the cache was more
important for immediate performance improvements, that the thread
fridge re-write was too disruptive, and that we'd do the threads in
the following release.  At the time, we thought that release would
be December.

Instead, I was tasked with limiting my effort to RDMA, which was
integrated separately with its own thread pool in ntirpc.

A small amount of my patch for removing some excess locking in the
dispatch loop was integrated, too.

I've re-written my code a second time to handle only UDP, moving the
UDP-related threads into ntirpc using the same pool as RDMA.  The two
use very similar control structures.

Then I was directed to discard that effort and started a third time,
covering both UDP and TCP in parallel.  But I have never finished....

I still believe that the fridge and its locking and control
structures are a significant performance barrier.  But let's
continue getting a stable release out.  Then I'll re-write yet
again.  That will take 2-3 weeks of intensive effort.

My plan was that I could have something ready to demonstrate at
Fall Bake-a-thon.  That was premised on the release having been
done by now.

I'm no longer in the Storage group, spending most of my time on
Open Standards efforts, Red Hat infrastructure, and related budgets.
(Sadly, lots and lots of my time on budgets.)


> On 09/14/2016 12:42 PM, Frank Filz wrote:
>> We have a number of uses of a "worker thread" model where a pool of threads
>> looks on a queue for work to do. One question is if every one of these uses
>> needs its own queue. The nfs_worker_thread threads probably do benefit from
>> being their own pool and queue, but do all the other uses?
>>
This is the part that I've redone.


>> Then there are some threads that wake up periodically to do work that is not
>> exactly queued (the reaper thread for example). It's not clear if these
>> threads avoid any wakeup at all if there is not any work to be done.
>>
Not currently.  A better design would be a task that gets placed on
a general-purpose thread only when there is work to do.  That's a
major redesign of Ganesha itself, and I've not touched that code.


>> Then there are some threads that live to sit blocked on some system call to
>> wait for work to do. These threads particularly don't make sense to be in a
>> thread pool unless they block intermittently (for example the proposed
>> threads to actually block on locks).

This is also a current problem for TCP.  Each connection has its own
output thread in ntirpc that sits blocked waiting on the system call.
My design consolidates all those threads into a single TCP output
thread, so that all the system calls (and the memory cleanup that
follows them) have less contention.
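
Roughly what that looks like (a sketch with invented names, not the
actual ntirpc code): connections queue their reply buffers on one
list, and a single thread does all of the send() calls and the frees.

    /* Sketch of a single consolidated TCP output thread: connections queue
     * their reply buffers here instead of each owning a blocked writer.
     * Hypothetical structures, not the ntirpc internals. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdlib.h>
    #include <sys/socket.h>

    struct out_buf {
        struct out_buf *next;
        int fd;                 /* connection socket */
        void *data;
        size_t len;
    };

    static struct out_buf *out_head, *out_tail;
    static pthread_mutex_t out_mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t out_cv = PTHREAD_COND_INITIALIZER;
    static bool out_shutdown;   /* set at shutdown, then broadcast out_cv */

    /* Called by any worker that has a reply ready to go out. */
    void tcp_output_enqueue(struct out_buf *b)
    {
        b->next = NULL;
        pthread_mutex_lock(&out_mtx);
        if (out_tail)
            out_tail->next = b;
        else
            out_head = b;
        out_tail = b;
        pthread_cond_signal(&out_cv);
        pthread_mutex_unlock(&out_mtx);
    }

    /* The one output thread: every send() and the buffer frees that follow
     * it happen here, so there is no per-connection writer to contend. */
    void *tcp_output_thread(void *unused)
    {
        (void)unused;
        pthread_mutex_lock(&out_mtx);
        while (!out_shutdown || out_head != NULL) {
            while (out_head == NULL && !out_shutdown)
                pthread_cond_wait(&out_cv, &out_mtx);
            while (out_head != NULL) {
                struct out_buf *b = out_head;

                out_head = b->next;
                if (out_head == NULL)
                    out_tail = NULL;
                pthread_mutex_unlock(&out_mtx);
                (void)send(b->fd, b->data, b->len, MSG_NOSIGNAL);
                free(b->data);
                free(b);
                pthread_mutex_lock(&out_mtx);
            }
        }
        pthread_mutex_unlock(&out_mtx);
        return NULL;
    }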

I'm of the opinion that this contention causes considerable
performance problems, but have no measurements to support that opinion
(yet).  We did put in some hooks for before and after measurements.

The ideal design would be to have a thread per physical interface.
This would also assist thread and processor locality, as some
system designs have processors that are "closer" to a particular
interface.

But until we integrate something like DPDK, we have no method of
determining how many (non-RDMA) interfaces exist, and which TCP or
UDP connections are associated with which interface.  (RDMA knows
about its interfaces, as we talk to them directly.)


>> For shutdown purposes, we need to
>> examine if any of these threads is not able to be cancelled, and also find
>> the best way to cancel each one (for example, a thread blocking on an fcntl
>> F_SETLKW can be interrupted with a signal).
>>
For RDMA, my design is that each system call is its own tasklet, so that
there are no threads waiting for interfaces.  I'd hoped to split up the
TCP (and UDP) threads into separate sub-tasks in the same fashion, but
have never gotten that far.
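
By "tasklet" I mean roughly this shape, shown here on a socket since
that's what the TCP/UDP split would look like (invented names, not
the RDMA code that was merged): the fd is armed one-shot in the event
loop, each readiness event becomes exactly one non-blocking system
call run on a pool thread, and the tasklet re-arms itself afterward.

    /* Sketch of the tasklet idea on a socket: each readiness event turns
     * into exactly one non-blocking system call run on a pool thread, and
     * the tasklet re-arms itself; nothing sits blocked on an interface. */
    #include <errno.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    struct tasklet {
        int fd;
        int epfd;
        void (*consume)(struct tasklet *t, const char *buf, ssize_t len);
    };

    /* Re-arm with a one-shot event: the next readiness produces exactly one
     * dispatch.  Assumes the fd was added with EPOLL_CTL_ADD at setup. */
    int tasklet_arm(struct tasklet *t)
    {
        struct epoll_event ev = {
            .events = EPOLLIN | EPOLLONESHOT,
            .data.ptr = t,
        };

        return epoll_ctl(t->epfd, EPOLL_CTL_MOD, t->fd, &ev);
    }

    /* Run on a pool thread when epoll reports the fd ready: one recv(),
     * hand off the data, then re-arm.  No blocking wait anywhere. */
    void tasklet_run(struct tasklet *t)
    {
        char buf[4096];
        ssize_t n = recv(t->fd, buf, sizeof(buf), MSG_DONTWAIT);

        if (n > 0)
            t->consume(t, buf, n);
        else if (n == 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
            return;     /* peer closed or hard error: tear down, not shown */

        tasklet_arm(t); /* wait for the next readiness event */
    }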

I agree that we need to think about shutdown, and have not put enough
effort into it.
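
On the F_SETLKW case Frank mentions: a thread blocked in fcntl() does
come back with EINTR if it takes a signal whose handler was installed
without SA_RESTART, so shutdown can poke it with pthread_kill().  A
minimal sketch (not Ganesha code; the names are illustrative):

    /* Sketch: interrupting a thread blocked in fcntl(F_SETLKW) at shutdown.
     * The handler is installed without SA_RESTART, so the blocked call
     * returns -1/EINTR instead of being transparently restarted. */
    #include <errno.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <signal.h>
    #include <string.h>

    #define SHUTDOWN_SIG SIGUSR1    /* any otherwise unused signal */

    static void shutdown_handler(int sig)
    {
        (void)sig;      /* nothing to do; delivery alone breaks the wait */
    }

    void install_shutdown_signal(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = shutdown_handler;       /* note: no SA_RESTART */
        sigemptyset(&sa.sa_mask);
        sigaction(SHUTDOWN_SIG, &sa, NULL);
    }

    /* The lock-waiting thread: a blocking lock that can be cancelled. */
    int blocking_lock(int fd, volatile sig_atomic_t *stopping)
    {
        struct flock fl = {
            .l_type = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start = 0,
            .l_len = 0,             /* whole file */
        };

        while (fcntl(fd, F_SETLKW, &fl) < 0) {
            if (errno != EINTR)
                return -1;          /* real error */
            if (*stopping)
                return -1;          /* shutdown requested while waiting */
            /* stray signal: go back to waiting */
        }
        return 0;
    }

    /* Shutdown side: set the flag, then poke the blocked thread. */
    void cancel_lock_wait(pthread_t waiter, volatile sig_atomic_t *stopping)
    {
        *stopping = 1;
        pthread_kill(waiter, SHUTDOWN_SIG);
    }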


>> I also wonder if the multi-purpose threads have too many locks. It looks
>> like fridge has two separate mutexes.
>>
This was one of the things that I'd found, too.


>> Typically, when I implement a producer/consumer queue from scratch, I
>> protect the queue with the same mutex as is paired with the condition
>> variable used to signal the worker thread(s). Cancelling the thread can
>> either be accomplished with a separate "cancel" flag (also protected by the
>> mutex) or a special work item (perhaps put at the head of the queue instead
>> of the tail, depending on if you want to drain the queue before shutdown or
>> not).
>>
I did implement it from scratch in ntirpc, and it uses a single mutex.

There isn't any cancelling of an individual thread.  Instead, the
whole pool is cancelled during shutdown.  Do we need to cancel
individual threads?
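
For reference, what I did is essentially the pattern Frank describes,
with shutdown handled at pool scope rather than per thread.  A
simplified sketch, not the ntirpc code verbatim:

    /* Simplified sketch of the pool: one mutex pairs with the condition
     * variable and also protects the queue; shutdown is a flag for the
     * whole pool rather than a cancel of individual threads. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct work {
        struct work *next;
        void (*fn)(void *arg);
        void *arg;
    };

    struct pool {
        pthread_mutex_t mtx;        /* the single mutex */
        pthread_cond_t cv;
        struct work *head, *tail;
        bool shutting_down;
    };

    void pool_submit(struct pool *p, struct work *w)
    {
        w->next = NULL;
        pthread_mutex_lock(&p->mtx);
        if (p->tail)
            p->tail->next = w;
        else
            p->head = w;
        p->tail = w;
        pthread_cond_signal(&p->cv);
        pthread_mutex_unlock(&p->mtx);
    }

    void *pool_worker(void *arg)
    {
        struct pool *p = arg;

        pthread_mutex_lock(&p->mtx);
        for (;;) {
            while (p->head == NULL && !p->shutting_down)
                pthread_cond_wait(&p->cv, &p->mtx);
            if (p->head == NULL)
                break;              /* shutting down and queue drained */
            struct work *w = p->head;
            p->head = w->next;
            if (p->head == NULL)
                p->tail = NULL;
            pthread_mutex_unlock(&p->mtx);
            w->fn(w->arg);          /* run the item outside the lock */
            pthread_mutex_lock(&p->mtx);
        }
        pthread_mutex_unlock(&p->mtx);
        return NULL;
    }

    /* Shutdown of the whole pool: workers drain the queue, then exit. */
    void pool_shutdown(struct pool *p)
    {
        pthread_mutex_lock(&p->mtx);
        p->shutting_down = true;
        pthread_cond_broadcast(&p->cv);
        pthread_mutex_unlock(&p->mtx);
    }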


>> Any major changes here are 2.5 items, but it's worth starting a discussion.
>>
We actually had a lot of this discussion over a year ago (almost
18 months ago now).

There are also Malahal's lanes, which I'd like to implement.  But
there's only so much that we can do in one fell swoop.


