I'm running on s10s_u4wos_12b and doing the following test: create a pool striped across 4 physical disks from a storage array, write a 100GB file to the filesystem (dd from /dev/zero out to the file), then run I/O against that file doing 100% random writes with an 8K block size.
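For reference, the setup is essentially the following (device names and file path are placeholders, not the actual ones used):

zpool create testpl c2t0d0 c2t1d0 c2t2d0 c2t3d0
dd if=/dev/zero of=/testpl/bigfile bs=1024k count=102400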
zpool iostat shows the following:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testpl       305G  92.8G     10     83  1.25M  8.24M
testpl       305G  92.8G    573  1.16K  69.4M  59.6M
testpl       305G  92.8G    573  1.24K  70.2M  74.8M
testpl       305G  92.8G    600    729  72.0M  31.3M
testpl       305G  92.8G    448  1.23K  54.3M  70.1M
testpl       305G  92.8G    576  1.39K  70.1M  76.3M

The I/O stats from the array show the same. Running truss against the I/O tool shows that it is only doing writes:

/18:  pwrite64(6, "12 4 V x\003E9BD ODB7F F".., 8192, 0x00000006FB4FC000) = 8192
/9:   pwrite64(6, "12 4 V x\003DCD9EB9F8CCA".., 8192, 0x0000000D3AFC6000) = 8192
/10:  pwrite64(6, "12 4 V x\003DFC502 :AD ^".., 8192, 0x000000075ABF0000) = 8192
/12:  pwrite64(6, "12 4 V x\003E09D\bC5\0E6".., 8192, 0x0000000CF8A9A000) = 8192
/11:  pwrite64(6, "12 4 V x\003DFFD03ECA006".., 8192, 0xDD1C8000) = 8192
/5:   pwrite64(6, "12 4 V x\003D7 eC19CA5 >".., 8192, 0x49E92000) = 8192
/8:   pwrite64(6, "12 4 V x\003DB +DEB0 *BA".., 8192, 0x000000074FCB2000) = 8192
/4:   pwrite64(6, "12 4 V x\003D6\rB7 K92B6".., 8192, 0x0000000295E1C000) = 8192
/3:   pwrite64(6, "12 4 V x\003D5 }B2FB1486".., 8192, 0x000000118B862000) = 8192
/14:  pwrite64(6, "12 4 V x\003E599 /84FB\n".., 8192, 0x00000003DFCD4000) = 8192
/6:   pwrite64(6, "12 4 V x\003DA9DDA a9EE6".., 8192, 0x000000105DA36000) = 8192
/17:  pwrite64(6, "12 4 V x\003E7F5 AEDF8 n".., 8192, 0x160CC000) = 8192

Is this normal?

Thanks
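For reference, the load generator is essentially doing what this minimal C sketch does: several threads issuing 8K pwrite()s at random block-aligned offsets within one file. The real tool is unspecified, so the thread count, the rand_r() offset generation, and the file handling below are assumptions for illustration only.

/*
 * Sketch of a multithreaded random 8K write load generator.
 * Usage: ./randwrite /testpl/bigfile   (file pre-created with dd)
 */
#define _FILE_OFFSET_BITS 64    /* make pwrite() the 64-bit pwrite64() */
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BS       8192                   /* 8K block size */
#define FSIZE    (100ULL << 30)         /* 100GB file */
#define NTHREADS 16                     /* assumption: real count unknown */

static int fd;

static void *writer(void *arg)
{
    unsigned seed = (unsigned)(uintptr_t)arg;   /* per-thread RNG seed */
    unsigned long long nblocks = FSIZE / BS;
    char buf[BS];

    memset(buf, 'x', sizeof (buf));
    for (;;) {
        /* pick a random 8K block; offsets stay 8K-aligned */
        unsigned long long blk =
            (((unsigned long long)rand_r(&seed) << 16) ^ rand_r(&seed))
            % nblocks;
        if (pwrite(fd, buf, BS, (off_t)(blk * BS)) != BS) {
            perror("pwrite");
            break;
        }
    }
    return NULL;
}

int main(int argc, char **argv)
{
    pthread_t tid[NTHREADS];
    int i;

    if (argc != 2 || (fd = open(argv[1], O_WRONLY)) < 0) {
        perror("open");
        return 1;
    }
    for (i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, writer, (void *)(uintptr_t)(i + 1));
    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

With _FILE_OFFSET_BITS=64, pwrite() is the pwrite64() seen in the truss output, and multiplying a random block number by 8192 is consistent with the 8K-aligned 0x...000 offsets shown above.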