My current Napalm code essentially gives the following priorities:

New UDP, TCP, RDMA, or 9P connections are the "same" priority, as
they each have their own channel, and they each have a dedicated
epoll thread.

The only limit is that the OS runs out of file descriptors and rejects
the connection attempt, hopefully before it even notifies us.

We probably want a new configurable limit on the number of connections
per type.  Currently, there's only one global configuration value.
Temporarily, it could serve as the limit for every type.

But what should we do?  Accept the TCP connection and then close it?
Receive the UDP data to get it out of the OS buffers, but then
discard it?

Right now, they're all treated as first tier, and Napalm handles them
expeditiously.  After all, missing an incoming connection is far
worse (as viewed by the client) than slowing receipt of data.

TCP or RDMA service requests are the second tier, vying with each
other for worker threads.  I'm not entirely sure what 9P is doing.

Service requests stay on the same thread as long as possible.  Each
new request will be assigned a new worker thread.  That gives the
greatest client equality.

An alternative that DanG and I discussed this morning would be to
add some feedback from each FSAL that tells whether the request
was fast or slow.  We'd need yet another configurable parameter for
how many fast requests are allowed before the next request is
assigned a worker at the tail of the queue.  (Once we get async
FSALs going, slow requests always incur a task switch anyway, so
they'd reset the counter.)

------------------------------------------------------------------------------
_______________________________________________
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
