On Mon, 9 Jun 2008, Grant Lowe wrote:

> Here's my hardware:
>
> Sun E4500 with Solaris 10, 08/07 release. SAN attached through a
> Brocade switch to EMC CX700. There is one LUN per file system.
What do you mean by "one LUN per file system"?  Do you mean that the
entire pool is mapped to one huge EMC LUN?  File systems are just
logical space allocations in a ZFS pool, with a logical blocksize and
other options.  How many spindles are hidden under the big EMC LUN?

> # time dd if=/dev/zero of=test.dbf bs=8k count=1048576
> 1048576+0 records in
> 1048576+0 records out
>
> real     6:29.9
> user        6.2
> sys     3:26.1

2691 blocks per second.  With 12 drives, this would be 224 IOPS per
disk.  With 24 drives, this would be 112 IOPS per disk.  However,
since the I/O is sequential and synchronization is not requested, ZFS
will likely coalesce the I/Os into larger requests so that fewer IOPS
are required.

> # time dd if=test.dbf of=/dev/null bs=8k
> 1048576+0 records in
> 1048576+0 records out
>
> real     3:06.4
> user        5.5
> sys     1:26.9

5625 blocks per second.  With 12 drives, this would be 468 IOPS per
disk.  With 24 drives, it would be 234 IOPS per disk.  Probably
caching in the ARC is making these rates seem possible.

Even though you used 8K blocks, in my experience this sort of
sequential test tells you essentially nothing about random
(database-style) I/O performance.  Even with 128K filesystem blocks,
ZFS does sequential I/O quite efficiently when the I/O requests are
8K.

> I thought that I would see better performance than this.  I've read
> a lot of the blogs, tried tuning this, and still no performance
> gains.  Are these speeds normal?  Did I miss something (or
> somethings)?  Thanks for any help!

While I don't claim any particular experience in this area, it seems
logical to me that if you map one pool to one huge LUN, you reduce
ZFS's available transaction rate, since ZFS can't schedule the
parallel I/Os itself and therefore becomes subject to more of the
latency associated with getting data to the array.  Databases
normally request that their writes be synced to disk, so the latency
until the RAID array responds that the data is safe is a major
factor.
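For the record, the per-disk estimates above fall straight out of the
dd timings.  A quick sketch of the arithmetic (the 12- and 24-spindle
counts are the same guesses used above, since the actual spindle count
behind the EMC LUN is unknown):

```python
# Back-of-the-envelope math from the dd timings quoted above.
# 8 KiB blocks and 12/24 spindles are assumptions from the thread;
# real per-disk load depends on how ZFS and the array coalesce I/O.

def iops_per_disk(records, seconds, spindles):
    """Naive IOPS per spindle if every 8K block were one disk I/O."""
    return records / seconds / spindles

RECORDS = 1048576                 # dd count=1048576, bs=8k
WRITE_SECS = 6 * 60 + 29.9        # write: real 6:29.9
READ_SECS = 3 * 60 + 6.4          # read:  real 3:06.4

print(RECORDS / WRITE_SECS)                    # ~2690 blocks/sec written
print(iops_per_disk(RECORDS, WRITE_SECS, 12))  # ~224 IOPS/disk, 12 spindles
print(RECORDS / READ_SECS)                     # ~5625 blocks/sec read
print(iops_per_disk(RECORDS, READ_SECS, 24))   # ~234 IOPS/disk, 24 spindles
```

In bandwidth terms that is only about 21 MiB/s written and 44 MiB/s
read, which is why coalescing and ARC caching dominate the numbers.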
Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org