Matthias Bethke wrote:
The slowness is the same on SuSE and Gentoo based clients. The previous
installation handled the same thing without any problems, which I'd
certainly expect from a dual Xeon @3 GHz with 4 GB RAM, a Compaq
SmartArray 642 U320 host adapter and some 200 GB in a RAID5, connected
to the clients via GBit ethernet.
RAID-5? Ouch.
RAID-10 offers much better raw performance: since the individual mirrors
are striped, a 4-disk RAID-10 gives you at least 4/3 the seek
performance of a 4-disk RAID-5 - and depending on the controller, it
could even be double that figure (if the controller cannot do perfect
alignment in RAID-5).
Also, the sequential (linear read/write) throughput is far superior to
RAID-5's, typically a >20% improvement.
Definitely not good for GBit, but not so bad either considering it will
have taken half a minute just to open that file. The file is complete
despite the I/O error but the error is definitely related to the server
load, it never happens normally (and I get 9-11s for the 100 MB).
LoadAvg of over 10 for I/O only ? That is a serious problem.
I repeat, that is a *problem*, not bad performance.
Since you say the box has 4GB of RAM, what happens when you do a linear
read of 2 or 3 GB of data, first uncached and then cached ?
That should not be affected by the I/O subsystem at all.
Also, test your network speed by running netperf or iperf between
client and server.
Get some baseline values for maximum performance first!
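For the disk side of the baseline, something like this rough sketch works (Python, timing a sequential read; the 16 MB scratch file is just for the demo - on the real server you'd point it at a 2-3 GB file instead):

```python
import os
import tempfile
import time

def read_throughput(path, block=1 << 20):
    """Sequentially read `path` in 1 MB blocks, return MB/s."""
    start = time.monotonic()
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / (1 << 20) / max(elapsed, 1e-9)

# Demo on a scratch file. On the server, run this twice against a large
# file: once right after `echo 3 > /proc/sys/vm/drop_caches` (as root)
# for the uncached figure, then again immediately for the cached one.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(16 * (1 << 20)))  # 16 MB scratch file
    path = f.name
print(f"{read_throughput(path):.0f} MB/s (cached)")
os.unlink(path)
```

If the cached figure is also bad, the problem isn't the I/O subsystem at all.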
I have 16 nfsd processes running but the problem is there even if only a
single client is active. nfsstat on the server shows a huge number of
On the client, however, I get some retransmissions and very strange
read/write values compared to the server's. I thought of 32-bit overflow
but the value is obviously longer, I can drive it beyond 2^32 on the
I noticed a few things about the setup: the SA 642 adapter still has a
stoneage firmware, V1.30, but we never saw a need to upgrade as it
worked nicely with the kernel 2.4.21 cciss driver. Any known issues with
the 2.6 kernel and this one? I just flashed the latest version and will
And more bla I don't understand about NFS - what about the basics ?
Which versions are the server and client running ?
Since both could run either v2 or v3 and in-kernel or userspace, that's
4 x 4 = 16 possible combinations right there - and that is assuming they
both run the *same* minor versions of the NFS software.
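The combination count is just 2 x 2 options per machine, squared - trivially enumerable:

```python
from itertools import product

# Each side can run NFS v2 or v3, in-kernel or userspace:
# 2 x 2 = 4 configurations per machine, 4 x 4 = 16 server/client pairs.
per_side = list(product(["v2", "v3"], ["kernel", "userspace"]))
combos = list(product(per_side, per_side))
print(len(per_side), len(combos))  # 4 16
```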
And one parameter I haven't tried to tweak is the IO scheduler. I seem
to remember a recommendation to use noop for RAID5 as the cylinder
numbers are completely virtual anyway so the actual head scheduling
should be left to the controller. Any opinions on this?
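For what it's worth, on a 2.6 kernel the scheduler can be inspected (and changed, as root) per block device through sysfs - e.g. `echo noop > /sys/block/cciss!c0d0/queue/scheduler` (cciss devices use `!` in place of `/` in sysfs; the exact device name is an assumption here). A small sketch that lists what's active:

```python
import glob

def active_scheduler(line):
    """Parse a /sys/block/*/queue/scheduler line.

    The file lists all available schedulers; the active one is the
    word in square brackets, e.g. "noop anticipatory deadline [cfq]".
    """
    for word in line.split():
        if word.startswith("[") and word.endswith("]"):
            return word[1:-1]
    return None

# Print the active scheduler for every block device on this machine.
for path in glob.glob("/sys/block/*/queue/scheduler"):
    with open(path) as f:
        print(path, "->", active_scheduler(f.read()))
```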
I have never heard of the I/O scheduler being able to influence or get
data directly from disks.
In fact, as far as I know that is not even possible with IDE or SCSI,
which both have their own abstraction layers.
What you probably mean is the way the scheduler is allowed to interface
with the disk subsystem - which is solely determined by the disk
subsystem itself.
"Heads" and "cylinders" play no part in this - it's *all* virtual as far
as the scheduler is concerned.
I'd recommend reading the specs for the raid controller - twice.
Also dive into the module source if you're up for it - it can reveal a
lot more than just plugging it in and adding disks.
J
--
[email protected] mailing list