On Sun, Jul 23, 2017 at 11:26 AM, William Allen Simpson
<william.allen.simp...@gmail.com> wrote:
> On 7/21/17 11:17 AM, Matt Benjamin wrote:
>>
>> As we discussed Wed., I'd like to see something like a msg counter and
>> byte counter that induced switching to the next handle.  This seems
>> consistent w/the front or back queuing idea Dan proposed.
>
>
> I don't understand this comment.  We already have the number of
> bytes in a message.  But I don't know what that has to do with
> switching to another transport handle.

With a msg counter, it could feed various strategies for letting a
handle execute another request when one is available, giving better
pipelining of requests.
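For concreteness, here is a rough sketch of the kind of per-handle
counters I have in mind (all names -- xprt_budget, xprt_consume,
xprt_should_yield -- and limits are illustrative, not real
Ganesha/ntirpc symbols):

#include <stdbool.h>
#include <stdint.h>

/* Per-handle budget for one "turn" on a task thread; hypothetical. */
struct xprt_budget {
        uint32_t msg_count;   /* requests decoded this turn */
        uint64_t byte_count;  /* bytes read this turn */
        uint32_t msg_limit;   /* e.g. 8 requests before yielding */
        uint64_t byte_limit;  /* e.g. 1 MiB before yielding */
};

/* Charge one decoded request against the handle's budget. */
void xprt_consume(struct xprt_budget *b, uint64_t nbytes)
{
        b->msg_count++;
        b->byte_count += nbytes;
}

/* true  => requeue the handle and let another handle run;
 * false => let this handle pipeline another request immediately. */
bool xprt_should_yield(const struct xprt_budget *b)
{
        return b->msg_count >= b->msg_limit ||
               b->byte_count >= b->byte_limit;
}

The counters would be zeroed whenever the handle is (re)queued, so a
single turn is bounded in both messages and bytes; how that interacts
with the front/back queuing Dan proposed is exactly the open question.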

>
> Moreover, that information isn't really available at the time of
> deciding what worker to assign.  The worker is assigned, then the
> data is read.

Presumably it takes effect before requeuing the handle, not before assigning it.

>
>> The
>> existing lookahead logic knows when we have reads or writes, but
>> doesn't know how much we read, which would count towards the byte
>> counter.
>>
> Since we are now reading the entire incoming request before
> dispatching and processing the request, and holding onto the input
> data until after it is passed to the FSAL for possible zero-copy,
> I'm not sure what you mean by lookahead.

I'm referring to the "lookahead" construct maintained by our decoders.
It holds its information after the decoders have executed.
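Roughly, I mean something along these lines (field and flag names are
illustrative, not the exact ntirpc/Ganesha definitions):

#include <stdint.h>

#define LKHD_READ   0x0001    /* request contains READ ops */
#define LKHD_WRITE  0x0002    /* request contains WRITE ops */

/* Decoder-maintained lookahead record, filled in as the request is
 * decoded. */
struct req_lookahead {
        uint16_t flags;   /* LKHD_READ | LKHD_WRITE, per what was seen */
        uint16_t reads;   /* number of READ ops decoded */
        uint16_t writes;  /* number of WRITE ops decoded */
        uint64_t bytes;   /* NOT tracked today: payload bytes read;
                           * adding it would let the byte counter be
                           * charged from data the decoder already has */
};

That last field is the gap: we know whether we have reads or writes,
but not how much we read.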

>
>
>> On Fri, Jul 21, 2017 at 11:06 AM, William Allen Simpson
>> <william.allen.simp...@gmail.com> wrote:
>>>
>>> My current Napalm code essentially gives the following priorities:
>>>
>>> New UDP, TCP, RDMA, or 9P connections are the "same" priority, as
>>> they each have their own channel, and they each have a dedicated
>>> epoll thread.
>>>
>>> ...
>>>
>>> Right now, they're all treated as first tier, and it handles them
>>> expeditiously.  After all, missing an incoming connection is far
>>> worse (as viewed by the client) than slowing receipt of data.
>>>
> After discussion, I've simplified my 2-year-old code that was
> written more like a device driver.  It counted the incoming events,
> and one worker task thread per connection handled those events.
>
> Now, there's only 1 tier.  And it depends more on the epoll-only
> rearm to delay incoming requests.  (But not as badly as the existing code.)
>
> This gets rid of the problem that Matt identified, where a piggy
> client could send a lot of requests and they'd all be handled
> sequentially (via the counter) before any other client's.
>
> Now every request has its own task thread and is always added to the
> tail of the worker queue.  Complete fairness.  More system resources.

I do not currently understand all of the components of this design.
I'm confused by the notion of requests having task threads.  My
impression was that xprt handles are what get queued, and that handles
wait in line for task threads; also that a handle we are willing to
prioritize can be queued at the head rather than the tail of that
queue.
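To make sure we're talking about the same model, here is a minimal
sketch of it (names are illustrative, not the actual Ganesha
work-queue API, and locking is omitted):

#include <stdbool.h>
#include <stddef.h>
#include <sys/queue.h>

struct xprt_handle {
        TAILQ_ENTRY(xprt_handle) q_link;
        /* ... transport state ... */
};

TAILQ_HEAD(xprt_queue, xprt_handle);

/* Requeue a handle: one we're willing to prioritize goes to the head
 * of the line, everything else to the tail. */
void xprt_requeue(struct xprt_queue *q, struct xprt_handle *xh,
                  bool prioritize)
{
        if (prioritize)
                TAILQ_INSERT_HEAD(q, xh, q_link);
        else
                TAILQ_INSERT_TAIL(q, xh, q_link);
}

/* A task thread takes the next waiting handle, if any. */
struct xprt_handle *xprt_dequeue(struct xprt_queue *q)
{
        struct xprt_handle *xh = TAILQ_FIRST(q);

        if (xh != NULL)
                TAILQ_REMOVE(q, xh, q_link);
        return xh;
}

In that picture the handles are the queued objects and the task
threads are the consumers, which is why a task thread per request
doesn't fit my mental model.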

Matt
