Without any more information than this email (was there an earlier one I missed?), my best guess is that the different sides of the array are going through different controllers (either within the array or in the attached host), and possibly that the SAS drives are causing some form of congestion or bottleneck on their controller that slows all of the SATA drives; or rather, the array may simply not be good at mixing both drive types on the same controller while keeping up the expected speeds.

All conjecture, of course, without quite a few more specifics (array brand and model, whether these are JBOD with software RAID or hardware RAID, etc.).
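If this is Linux with the disks visible to the host, one way to test the different-controllers theory is to check which HBA each disk actually hangs off of. A rough sketch (disk names are whatever the kernel assigned, e.g. /dev/sde through /dev/sdp here):

```shell
# For each disk, resolve its sysfs device path; the PCI address near the
# start of the path identifies the controller (HBA) the disk sits behind.
for d in /sys/block/sd*; do
    echo "$(basename "$d") -> $(readlink -f "$d/device")"
done

# lsscsi (if installed) shows the same host:channel:target:lun mapping
# more readably:
# lsscsi
```

If the slow pair (i+j) resolves to a different controller or expander than the fast pairs, that would support the congestion theory.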

-Steve


Nicholas Leippe wrote:
Regarding the 12-drive bay we have, I'm seeing some puzzling performance.
The drives are physically arranged in the bay like this:

e h k n
f i l o
g j m p

I configured mirrors across e+f and g+h, and both synced at the same quick, expected speed. These are my four SAS drives; the rest are SATA.

If I configure a mirror with o+p or m+n, both sync at between 50 and 80 MB/s--quick and within expectations.

However, if I configure a mirror with drives i+j, it syncs at <25 MB/s.
If I configure a mirror with either i+p or j+p, it syncs at between 50 and 80 MB/s--so it doesn't appear to be either drive i or drive j at fault, but their combination...
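Assuming these are Linux md software-RAID mirrors (the thread doesn't say), a test like the above could be reproduced roughly as follows; device names are placeholders, and the commands need root:

```shell
# Create a two-disk mirror from the pair under test (hypothetical names).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdi /dev/sdj

# The kernel reports the current resync rate in /proc/mdstat,
# e.g. "... finish=42.0min speed=24000K/sec".
grep -A 2 '^md0' /proc/mdstat

# Tear the test mirror down afterwards and wipe the md metadata.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdi /dev/sdj
```

Comparing the speed= figure across pairings is enough to reproduce the i+j vs. i+p difference without waiting for a full sync.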

This doesn't make sense to me, especially since they are all point-to-point devices.

Is there some weird way that SAS/SATA addressing can affect this?


/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
