Thanks for the replies. In the beginning, I only had SAS drives
installed when I observed the behavior; the SATA drives were added later
for comparison and troubleshooting.

The slow behavior is observed only after 10-15 minutes of running dd,
once the file size reaches about 15GB; the throughput then drops
suddenly from 70 to 50 to 20 to <10MB/s in a matter of seconds and never
recovers. This can't be right no matter how I look at it.
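For reference, the kind of run described above looks roughly like this
(pool and file names here are illustrative, not the exact ones used):

    dd if=/dev/zero of=/tank/ddtest bs=1M count=15360 &
    iostat -xn 5    # watch per-disk throughput while dd runs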
On 10/27/2011 9:59 PM, Brian Wilson wrote:
On 10/27/11 07:03 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of weiliam.hong
3. All 4 drives are connected to a single HBA, so I assume the mpt_sas
driver is used. Are SAS and SATA drives handled differently ?

If they're all on the same HBA, they may be all on the same bus. It may
be slow *because* you're mixing SATA and SAS disks on the same bus. I'd
suggest separating the tests, don't run them concurrently, and see if
there's any difference.
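A sketch of what separating the tests might look like (pool and file
names are hypothetical):

    # run each pool's test by itself, never concurrently
    dd if=/dev/zero of=/sas_pool/testfile bs=1M count=15360
    sync
    # only after the first run finishes:
    dd if=/dev/zero of=/sata_pool/testfile bs=1M count=15360
    sync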
Also, the HBA might have different defaults for SAS vs SATA; look in
the HBA configuration utility to see if the write back / write through
settings are the same...
I don't know if the HBA gives you some way to enable/disable the on-disk
cache, but take a look and see.
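On Solaris/OI, one place to look is format's expert mode, which exposes
a cache submenu on many SCSI/SAS disks (SATA disks behind some HBAs may
not expose it):

    format -e
    # ... select the disk, then:
    format> cache
    cache> write_cache
    write_cache> display    # or enable / disable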
Also, maybe the SAS disks are only doing SATA. If the HBA is only able
to do SATA, then SAS disks will work, but might not work as optimally
as they would if they were connected to a real SAS HBA.
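If it's an LSI HBA behind mpt_sas, the sas2ircu utility (where
installed) can show which protocol each drive actually negotiated; the
controller number 0 below is an assumption:

    sas2ircu LIST                          # enumerate controllers
    sas2ircu 0 DISPLAY | grep -i protocol  # each device reports SAS or SATA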
And one final thing - If you're planning to run ZFS (as I suspect you
are, since you're posting on this list running OI) ... It actually
works *better* without the HBA; see the footnote below.
*Footnote: ZFS works the worst if you have ZIL enabled, no log device,
and no writeback HBA. It's a significant improvement if you add a
battery backed or nonvolatile HBA with writeback. It's a significant
improvement again if you get rid of the HBA and add a log device. It's
a significant improvement yet again if you get rid of the HBA and log
device, and run with ZIL disabled (if your work load is compatible with
a disabled ZIL.)
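For illustration, the last two configurations would be set up roughly
like this (pool and device names are hypothetical, and sync=disabled
should only be used if the workload tolerates losing the last few
seconds of acknowledged writes):

    zpool add tank log c4t1d0     # dedicated log (slog) device
    zfs set sync=disabled tank    # effectively disables the ZIL for this pool's datasets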
First, ditto everything Edward says above. I'd add that your "dd" test
creates a lot of straight sequential IO, not anything that's likely to
be random IO. I can't speak to why your SAS drives might not be
performing, any better than Edward already did, but your SATAs are
probably screaming on straight sequential IO, whereas on something more
random I would bet they won't perform as well as they do in this test.
The tool I've seen used for that sort of testing is iozone - I'm sure
there are others as well, and I can't attest to what's better or worse;
a sample invocation is sketched below.
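A minimal iozone run that mixes random IO into the test might look like
this (file path and sizes are arbitrary):

    # -i 0 = write/rewrite (required first), -i 2 = random read/write
    iozone -i 0 -i 2 -r 8k -s 4g -f /tank/iozone.tmp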