While fixing the problem with ntirpc *_cleanup(), I've
discovered that it all passes through ntirpc svc_rqst.[ch].

Trying to grok that code: it's all rbtrees.

But I noticed that there's a fixed cache size of 8192 -- and it
logs a message telling me it should be a small prime.

Then I noticed there are 7 partitions, each of which has
this cache size of 8192.

Then I noticed that svc_rqst_new_evchan(), which sets all this
up, is called in only 2 places:

nfs_rpc_dispatcher_thread.c, nfs_Init_svc()

for (ix = 0; ix < N_EVENT_CHAN; ++ix) {
         rpc_evchan[ix].chan_id = 0;
         code = svc_rqst_new_evchan(&rpc_evchan[ix].chan_id,
                                    NULL /* u_data */,
                                    SVC_RQST_FLAG_NONE);

svc_rqst.c, xprt_register()

/* Create a legacy/global event channel */
if (!(__svc_params->ev_u.evchan.id)) {
         code =
             svc_rqst_new_evchan(&(__svc_params->ev_u.evchan.id),
                                 NULL /* u_data */ ,
                                 SVC_RQST_FLAG_CHAN_AFFINITY);

===

Conclusion: 8 event channels don't need 7*8192 cached slots,
nor do they need rbtrees for "fast" lookup.

Moreover, these entries never seem to be looked up, since the
SVCXPRT (xp_ev) points directly at the channel.

Rather, the list is only scanned for old transports to clean up,
in __svc_clean_idle2().

Transports are sorted by memory address.

I'm converting to linked lists, which will be very easy to
clean up.  Speak now, or forever hold your peace.


------------------------------------------------------------------------------
_______________________________________________
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
