> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of weiliam.hong
> 3. All 4 drives are connected to a single HBA, so I assume the mpt_sas
> is used. Are SAS and SATA drives handled differently ?
If they're all on the same HBA, they may all be on the same bus. It may be
*because* you're mixing SATA and SAS disks on the same bus. I'd suggest
separating the tests (don't run them concurrently) and seeing whether there's
any difference.
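A rough sketch of what "separate the tests" could look like: a sequential raw
throughput test on one disk at a time. The device names are placeholders
(check yours with format or iostat -En first), and note this writes to the
raw device, so only run it on disks with no data you care about:

```shell
# WARNING: this overwrites the start of each disk listed.
# Device names are made up; substitute your own.
for disk in c2t0d0 c2t1d0 c2t2d0 c2t3d0; do
    echo "=== sequential write test: $disk ==="
    # 1 GB of sequential writes to the raw device, bypassing any filesystem
    dd if=/dev/zero of=/dev/rdsk/${disk}p0 bs=1024k count=1024
done
```

Running the same loop concurrently (backgrounding each dd) versus one at a
time should show whether the mixed-bus contention theory holds.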
Also, the HBA might have different defaults for SAS vs SATA; look in the HBA
configuration to see if the write back / write through settings are the same...
I don't know if the HBA gives you some way to enable/disable the on-disk
cache, but take a look and see.
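Independent of the HBA, on illumos/OI you can usually inspect and toggle the
drive's own write cache from format's expert mode. Roughly (exact menu
entries can vary by drive):

```shell
# Interactive session sketch: format's expert mode exposes the
# on-disk cache settings.
format -e
# (select the disk from the list)
# format> cache
# cache> write_cache
# write_cache> display     <- show current state
# write_cache> enable      <- or "disable"
```

Comparing that setting between the SAS and SATA drives would rule the on-disk
cache in or out as the cause.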
Also, maybe the SAS disks are only doing SATA. If the HBA is only able to
do SATA, then SAS disks will work, but might not perform as well as they
would if they were connected to a real SAS HBA.
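If it's an LSI SAS2 controller, the sas2ircu utility (if you have it
installed) can show what each attached drive actually negotiated. Something
like the following, where the controller index 0 is an assumption; run the
list command first to see what's there:

```shell
# List controllers, then dump attached-device info for controller 0.
# The display output includes a protocol field per device, so you can
# confirm whether the SAS drives really attached as SAS or fell back
# to SATA.
sas2ircu list
sas2ircu 0 display | grep -i -e 'Device is' -e protocol
```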
And one final thing - if you're planning to run ZFS (as I suspect you are,
posting on this list running OI) ... it actually works *better* without any
HBA at all.*
*Footnote: ZFS works the worst if you have the ZIL enabled, no log device,
and no HBA. It's a significant improvement if you add a battery backed or
nonvolatile HBA with writeback. It's a significant improvement again if
you get rid of the HBA and add a log device. And it's a significant
improvement yet again if you get rid of both the HBA and the log device,
and run with the ZIL disabled (if your workload is compatible with a
disabled ZIL).
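To make the footnote concrete, the last two configurations look roughly like
this (the pool name "tank" and the device name are made up):

```shell
# Add a dedicated log (slog) device to an existing pool -- ideally a
# fast SSD with power-loss protection.
zpool add tank log c4t1d0

# Or, if your workload can tolerate losing the last few seconds of
# synchronous writes after a crash, disable the ZIL by setting
# sync=disabled (can also be set per-dataset):
zfs set sync=disabled tank
```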