Richard "Doc" Kinne wrote:
%iowait is the percentage of time the CPU sits idle while at least one I/O request is outstanding. IIRC "top", "uptime", etc. (i.e. all the things you use to 'measure' system load) report threads in iowait state as active.

Throughput and TPS are two *different* measures of performance, and they're maximized by fairly opposite operations. The TPS I see from your sdb is on the order of magnitude I'd expect from a single, older-generation 7200 RPM SATA drive. See http://storageadvisors.adaptec.com/2007/04/17/yet-another-raid-10-vs-raid-5-question/ for some discussion of RAID5 IOPS performance. The short answer is: I'd expect an 8-disk RAID5's write performance to be about 2x that of a single disk. (Ugly, eh?) So you're still a little short.

Now, RAID5 behaves radically differently on sequential (or large-block) writes, and NFS writes are fairly small. If you're doing file copies, upping your NFS write size or read size might help you out (which effectively requires going to jumbo frames on your ethernet). This didn't do anything for me, as my NFS server was the base of a VMware system and VMware appeared to read/write in storage-sized blocks anyway.

(Aside: if you want a small managed switch, you might try the Netgear GS108T. I'm running that at home. I think there's a 16-port cousin.)

Battery-backed write cache should help a great deal. I know Adaptec makes an 8xSATA controller for <$500; I've no experience with that controller directly. At work I use HP, and at home I'm just going with Linux software RAID.

Were you looking at retrans on the client? That's the important one. Retrans on the server will likely always be zero.

I misled you somewhat in my last email (sorry about that -- was going from memory and didn't check). The critical part is the "th" line in /proc/net/rpc/nfsd.
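If you want to watch that iowait figure without top, the cumulative iowait time is the sixth field of the "cpu" line in /proc/stat; a minimal sketch (iostat from the sysstat package gives you the same thing as a live percentage):

```shell
# Cumulative iowait jiffies since boot: fields of the "cpu" line are
# user nice system idle iowait irq softirq ...
awk '/^cpu /{print "iowait jiffies:", $6}' /proc/stat
```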
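That 2x figure falls out of the classic RAID5 small-write penalty of 4 (read old data, read old parity, write new data, write new parity). A back-of-envelope sketch, assuming roughly 100 random IOPS per 7200 RPM SATA spindle -- the per-disk number is a guess, not a measurement:

```shell
DISKS=8
IOPS_PER_DISK=100     # rough guess for one 7200 RPM SATA drive
WRITE_PENALTY=4       # RAID5 small write: 2 reads + 2 writes
echo "raid5 write IOPS: $(( DISKS * IOPS_PER_DISK / WRITE_PENALTY ))"
echo "single disk:      $IOPS_PER_DISK"
```

So ~200 IOPS from eight spindles -- about 2x one disk, which is the "ugly" result above.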
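For the transfer-size tuning, the knobs are the rsize/wsize mount options on the client plus the MTU on both NICs and the switch. A sketch only -- the server name, export path, and sizes here are illustrative, and the server will negotiate the sizes down if it wants to:

```shell
# Illustrative client-side mount with larger NFS transfer sizes:
mount -t nfs -o rsize=32768,wsize=32768 server:/export /mnt/export

# Jumbo frames have to be enabled end to end
# (client NIC, every switch in the path, server NIC):
ip link set dev eth0 mtu 9000
```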
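On the client, nfsstat -rc (or /proc/net/rpc/nfs directly) is where the retrans counter lives; the fields after "rpc" are calls, retrans, authrefrsh. A sketch of pulling it out, with made-up sample numbers:

```shell
# On a real client:  grep '^rpc' /proc/net/rpc/nfs
line="rpc 123456 17 9876"    # sample line; numbers are made up
set -- $line
echo "calls=$2 retrans=$3"
```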
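That "th" line can be pulled apart like this -- a minimal sketch with made-up numbers; the first field is the nfsd thread count, the second is the all-threads-busy counter, and the rest are histogram buckets:

```shell
# Format: th <nthreads> <times all threads were busy> <10 buckets...>
line="th 8 421 0.000 0.100 0.250 0.000 0.000 0.000 0.000 0.000 0.000 0.000"
set -- $line
echo "threads=$2 all-busy=$3"
```

If that second counter keeps climbing, bump the thread count (e.g. rpc.nfsd 32 at runtime, or RPCNFSDCOUNT in your distro's nfs config) and watch it again.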
Here's the "th" line from my home server:

r...@nottingham:~# grep th /proc/net/rpc/nfsd

See http://kamilkisiel.blogspot.com/2007/11/understanding-linux-nfsd-statistics.html for the details, but in short: the 2nd number there (the 11815) is the number of times since boot that all NFS threads were in use at once. This example shows I need to tune my home server, which explains why my mail server performance sucks right now :-)

This feels like the same situation I had at work on a Red Hat Enterprise 5.3 server serving VMware storage to 6 pretty beefy hosts. The issue was that when the RAID5 performance went to pieces, it clogged up the NFS threads and the clients started experiencing timeouts.

I'm out of time on this topic, but I just had an additional thought: what is the write performance of a local copy on that machine? How does it compare?

-- Dewey
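P.S. For that local-vs-remote comparison, a quick dd sketch (the paths are illustrative; conv=fdatasync forces the data to disk so the page cache doesn't flatter the local number):

```shell
# Local write on the server:
dd if=/dev/zero of=/srv/export/ddtest bs=1M count=256 conv=fdatasync
# Same-sized write from a client, through the NFS mount:
dd if=/dev/zero of=/mnt/export/ddtest bs=1M count=256 conv=fdatasync
rm -f /srv/export/ddtest /mnt/export/ddtest
```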
_______________________________________________
bblisa mailing list
[email protected]
http://www.bblisa.org/mailman/listinfo/bblisa
