On Jul 29, 2010, at 9:57 AM, Carol wrote:

> Yes, I noticed that thread a while back and have been doing a great deal of 
> testing with various scsi_vhci options.  
> I am disappointed that the thread hasn't moved further, since I also suspect 
> that it is mpt-sas, multipath, or expander related.

The thread is in the ZFS forum, but the problem is not a ZFS problem.

> I was able to get aggregate writes up to 500MB/s out to the disks, but reads 
> have not improved beyond an aggregate average of about 50-70MB/s for the pool.

I find "zpool iostat" to be only marginally useful.  You need to look at the
output of "iostat -zxCn" which will show the latency of the I/Os.  Check to
see if the latency (asvc_t) is similar to the previous thread.
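
For example, something like this (5-second intervals; skip the first report,
which is a since-boot summary, and note the numbers below are illustrative,
not from your system):

  # iostat -zxCn 5
                      extended device statistics
      r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    150.0    0.0 19200.0   0.0  0.0  1.5    0.0   10.0   0  90 c7t2d0

If asvc_t sits in the tens of milliseconds while the disks are only doing
modest sequential reads, the latency is being added below ZFS, somewhere in
the HBA, expander, or drive path.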

> I did not look much at read speeds during a lot of my previous testing because 
> I thought write speeds were my issue... And I've since realized that my 
> userland write speed problem from zpool <-> zpool was actually read-limited.

Writes are cached in RAM and flushed to disk in transaction groups, so looking
at iostat or zpool iostat doesn't offer the observation point you'd expect:
the disks see periodic bursts, not your application's pacing.
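
You can see the effect by watching the pool at 1-second intervals during a
steady write: several intervals of near-zero write bandwidth, then a burst
when a transaction group syncs (pool name and numbers illustrative):

  # zpool iostat tank 1
                 capacity     operations    bandwidth
  pool        alloc   free   read  write   read  write
  ----------  -----  -----  -----  -----  -----  -----
  tank        1.2T   2.4T      0      0      0      0
  tank        1.2T   2.4T      0  1.2K      0   150M
  tank        1.2T   2.4T      0      0      0      0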

> Since then I've tried mirrors, stripes, raidz, checked my drive caches, 
> tested recordsizes, volblocksizes, clustersizes, combinations thereof, tried 
> vol-backed LUNs, file-backed LUNs, wcd=false, etc.
> 
> Reads from disk are slow no matter what.  Of course, once the ARC is 
> populated, the userland experience is blazing, because the disks are not 
> being read.

Yep, classic case of slow disk I/O.
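
One way to confirm that: compare the ARC hit/miss counters before and after a
read test (kstat names as on current OpenSolaris builds; treat this as a
sketch):

  # kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
  zfs:0:arcstats:hits     123456
  zfs:0:arcstats:misses   98765

If the miss counter climbs during the slow reads, the ARC isn't serving them
and you're measuring the disks.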

> Seeing write speeds so much faster than reads strikes me as quite strange from 
> a hardware perspective, though, since writes also invoke a read operation - 
> do they not?

In many cases, writes do not invoke a read.  ZFS is copy-on-write: a
full-record write goes to a freshly allocated block, so there is nothing to
read first.  A read-modify-write only happens when you overwrite part of a
record whose old contents aren't already in the ARC.
 -- richard
