Hi William,

Thanks for pointing out so many of these issues.

On 2015/9/25 2:06, William Allen Simpson wrote:
> I'm confused.  You're using macros or something that aren't
> included in any header file visible here.
> 
> CPU_COUNT ?
> 
> sched_setaffinity ?

These are included in sched.h on Linux.
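For example, a minimal sketch of where they come from (Linux/glibc only;
_GNU_SOURCE has to be defined before including sched.h):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        /* query the current affinity mask of the calling thread */
        if (sched_getaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_getaffinity");
            return 1;
        }
        printf("usable CPUs: %d\n", CPU_COUNT(&set));

        /* pin the calling thread to CPU 0 */
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        return 0;
    }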

> Do any of these make a system call?  Then they are not useful.
> This thread pool has been designed to have only one system call
> per invocation -- or better yet continue processing the next
> task without a system call.

Yes, sched_setaffinity does make a system call. But I don't understand
why a system call is unacceptable here. Do you mean that it may lower
the performance of the threads? I think that depends on how much work
each thread is going to do.
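
If the concern is adding a system call on the hot path, one option
(just a sketch, using the glibc-specific pthread_setaffinity_np) would
be to set the affinity once when a worker thread starts, so no per-task
system call is added:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* hypothetical worker entry point: the affinity system call
     * happens once at thread start, not once per dispatched task */
    static void *worker_start(void *arg)
    {
        int cpu = *(int *)arg;  /* CPU chosen by the pool at creation */
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        (void)pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        /* ... then loop on the pool's task queue with no further
         * affinity-related system calls ... */
        return NULL;
    }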

> Are some of these Linux-only?  Then it has to be conditional.
> This has to compile on FreeBSD and Windows.

I'm sorry, it is Linux-only, so compiling on FreeBSD or Windows may
fail. My bad :(
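
I could guard the Linux-specific call so the other platforms still
build, roughly like this (HAVE_SCHED_SETAFFINITY here is a hypothetical
configure/cmake check, not something in the tree yet):

    /* sketch: fall back to a no-op where the Linux API is missing */
    #ifdef HAVE_SCHED_SETAFFINITY
    #include <sched.h>      /* needs _GNU_SOURCE defined by the build */
    #endif

    static int pin_to_cpu(int cpu)
    {
    #ifdef HAVE_SCHED_SETAFFINITY
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        return sched_setaffinity(0, sizeof(set), &set);
    #else
        (void)cpu;
        return 0;   /* no-op on platforms without the Linux API */
    #endif
    }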

> Please don't send untested patches that cannot compile -- or at
> least explain how to evaluate the patch!

I did test it on Linux, and saw some performance benefit.

> Sender threads?  Whatever are you talking about?  Right now,
> there are 5 uses of this worker thread pool:
>  (1) per connection synchronous TCP write, which hopefully will
>      become TCP write completion someday.
>  (2) RDMA connection.
>  (3) RDMA read completion.
>  (4) RDMA write completion.
>  (5) RDMA send completion.

Do those users share the same worker pool simultaneously? I thought
every user had its own pool, and that each pool could set its own CPU
affinity separately.

> Eventually (Ganesha 2.4), the dispatch threads will also be
> merged here.
> 
> Moreover, threads are allocated dynamically.  There is a
> minimum and a maximum.  There is one, and then that one
> starts another, which starts another.
> 
> As they conclude, they all wait for awhile, and after being
> unused and being greater than the minimum, they exit.
> 
> Your current code has them oddly counting themselves.  No idea
> how this would work.

I count the threads created in the pool and use that count to split the
threads across different CPUs. There are problems with this, which I
mentioned in the TODO list in my patch, and I should do more work on it
later.
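
What I had in mind is roughly this kind of round-robin split (only a
sketch of the intent, not the exact code in the patch):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <unistd.h>

    /* the Nth thread created by the pool is pinned to
     * CPU (N % number-of-online-CPUs) */
    static void assign_cpu(unsigned int thread_index)
    {
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        cpu_set_t set;

        if (ncpus <= 0)
            return;     /* cannot determine CPU count; skip pinning */

        CPU_ZERO(&set);
        CPU_SET(thread_index % ncpus, &set);
        (void)sched_setaffinity(0, sizeof(set), &set);
    }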

thanks,
Wei

> Furthermore, there's simply no indication how an RDMA task
> completion would choose "affinity" with a worker that had
> been previously used for the same connection -- or in the case
> none was available, back off or choose some other worker.
> 
> Or vice versa, a worker would choose a task associated with a
> connection that it had previously processed.
> 
> Overall, the idea of CPU affinity across a cluster is known to
> be a good idea.  But this is a naive proposal.
> 
> 
> 

