Just to add to Corey's comments ...

IIRC, the higher the counters get toward the right of that line relative to the 
left, the more you should increase the thread count.

Now that's just the server.  There are many client considerations.  Here's one 
... RPC slots.


In a normal configuration, where you have a server with many clients, the 
client defaults are typically fine.  However, if you have:  

- Few, or just one (1) client, and/or
- A few clients doing more meta-data calls than data movement (e.g., dentry, 
lstat, etc.)

You'll want to increase the number of RPC slots, which defaults to only sixteen 
(16).  This is the number of RPC requests that can be outstanding at any one time.

- /proc/sys/sunrpc/tcp_slot_table_entries (sunrpc.tcp_slot_table_entries)

- /proc/sys/sunrpc/udp_slot_table_entries (sunrpc.udp_slot_table_entries)
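For example, a sketch of checking and raising the TCP slot count on a RHEL 5 
client (the value 64 is arbitrary, and the module option only takes effect for 
mounts made after the sunrpc module loads with it):

```shell
# Check the current slot counts
sysctl sunrpc.tcp_slot_table_entries
sysctl sunrpc.udp_slot_table_entries

# Raise TCP slots for this boot (affects mounts made after the change)
sysctl -w sunrpc.tcp_slot_table_entries=64

# Persist it across reboots, e.g. in /etc/modprobe.d/sunrpc.conf:
#   options sunrpc tcp_slot_table_entries=64
```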

So if you're bumping your nfsd threads to 16, 32 or even 64, but testing with 
only one client and leaving the slots at only 16, you're likely to hit that 
limit and increasing server threads will not help much.  This is especially the 
case when you're doing "meta-data" operations like traversing the file system 
(although one should _avoid_ doing this over NFS, and do it on the NFS file 
server itself, and build file indexes instead).

Again, normally 16 slots are fine on a typical NFS client, because multiple NFS 
clients are hitting the NFS server and the load balances out.  But with fewer 
clients, or clients doing heavier meta-data workloads, more slots help.  
Conversely, you can even decrease the number of slots on "troublesome" clients 
whose users treat the mount too much like a "local file system."  Their 
operations will be starved longer waiting on RPC to return, but they will load 
your server less.


And, again, if you have clients with lots of meta-data operations, really 
consider moving those operations to the server itself.  Traversing the file 
system is costly over NFS, with all of the dentry/lstat calls.  Running such 
programs on the NFS server itself usually doesn't incur as much load, because 
much of the meta-data can be cached in memory on the server (since it is the 
authority, there is no coherency issue; all clients must always verify with it 
via the NFS service).


Just wanted to point out those details from first-hand experience.

-- Bryan

P.S.  Some users forget they are running on a shared, distributed file system, 
not a local one.  If they are coming from Windows, remind them that doing 
"find" on an NFS mount is like doing a "search" on a Windows Server share in 
Explorer.  Bumping the RPC slots will only worsen the issues in those cases, 
not relieve them.



-- 
Bryan J  Smith       Professional, Technical Annoyance 
Linked Profile:     http://www.linkedin.com/in/bjsmith

________________________________
From: Corey Kovacs <corey.kov...@gmail.com>
Sent: Monday, October 10, 2011 8:59 AM

One thing that is fairly obvious if you know where to look, is that your server 
needs to have more threads running. 

The line...

th 8 0 3828.727 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000

Indicates this.  The "th" means "threads" and the "8" is how many are running.  
The "3828.727" tells you a significant number of instances have occurred in 
which I/O had to wait for a thread before it could be serviced.
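As a sketch of pulling those fields out of the stats line (on a live server 
you would read the real line with `grep ^th /proc/net/rpc/nfsd` instead of the 
echo; the sample line below is just the one quoted above):

```shell
# Sample "th" line from /proc/net/rpc/nfsd (field 2 = running nfsd threads)
line="th 8 0 3828.727 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000"

threads=$(echo "$line" | awk '{print $2}')
echo "nfsd threads: $threads"
```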

First order of business should be to increase this.  A basic rule of thumb is 
one thread for each client mount: 10 clients on one export, 10 or more threads; 
10 clients on two exports on the server, 20 or more threads; and so on.  Autofs 
will help since there is an upper limit (256, I believe), and the automatic 
un-mounting of unused exports will free up threads for you.
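On RHEL 5, a sketch of how to raise the thread count (RPCNFSDCOUNT is the 
standard knob in /etc/sysconfig/nfs; the value 32 here is just an example):

```shell
# Raise the nfsd thread count immediately (not persistent across restarts)
rpc.nfsd 32

# Make it persistent by setting the count in /etc/sysconfig/nfs:
#   RPCNFSDCOUNT=32
# then restart the nfs service to pick it up.
```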

Anyway, that will be one thing to move out of your way before you do any real 
testing.

Unless something has changed, you'll have to reboot to clear those numbers out.

_______________________________________________
rhelv5-list mailing list
rhelv5-list@redhat.com
https://www.redhat.com/mailman/listinfo/rhelv5-list
