Thank you, Ian. You are getting ZFS over a 2-disk mirror to read twice as fast as a raw dd read from one disk, which sounds more encouraging. But there is something odd with dd from the raw drive - it is only 28 MB/s or so, if I divided that correctly? I would expect it to be around 100 MB/s on 10K drives, or at least that should be roughly the potential throughput rate, compared to the ZFS 2-disk read, which is showing 57 MB/s. Any idea why the raw dd read is so slow?
Also, I wonder if everyone is using a different dd command than I am - I get a summary line that shows elapsed time and MB/s.

On 5/14/07, Ian Collins <[EMAIL PROTECTED]> wrote:
Marko Milisavljevic wrote:
> To reply to my own message.... this article offers lots of insight into
> why dd access directly through raw disk is fast, while accessing a file
> through the file system may be slow.
>
> http://www.informit.com/articles/printerfriendly.asp?p=606585&rl=1
>
> So, I guess what I'm wondering now is, does it happen to everyone that
> ZFS is under half the speed of raw disk access? What speeds are other
> people getting trying to dd a file through zfs file system? Something like
>
> dd if=/pool/mount/file of=/dev/null bs=128k (assuming you are using
> default ZFS block size)
>
> how does that compare to:
>
> dd if=/dev/dsk/diskinzpool of=/dev/null bs=128k count=10000
>
Testing on an old Athlon MP box, two U160 10K SCSI drives.

bash-3.00# time dd if=/dev/dsk/c2t0d0 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out

real    0m44.470s
user    0m0.018s
sys     0m8.290s

time dd if=/test/play/sol-nv-b62-x86-dvd.iso of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out

real    0m22.714s
user    0m0.020s
sys     0m3.228s

zpool status
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        test        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0

Ian
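For what it's worth, the throughput figures being discussed can be checked by hand - a minimal sketch, assuming Solaris /usr/bin/dd (which only prints the records in/out lines, unlike GNU dd's summary with elapsed time and MB/s) and taking the byte count and "real" times straight from the run quoted above:

```shell
# Derive MB/s from dd's record count and time(1)'s "real" elapsed time,
# since Solaris dd prints no throughput summary of its own.
bytes=$((10000 * 128 * 1024))        # 10000 records x 128 KiB = 1310720000 bytes
awk -v b="$bytes" 'BEGIN {
    printf "raw disk: %.1f MB/s\n", b / 44.470 / 1e6   # dd from /dev/dsk/c2t0d0
    printf "zfs file: %.1f MB/s\n", b / 22.714 / 1e6   # dd from the .iso on ZFS
}'
# raw disk: 29.5 MB/s
# zfs file: 57.7 MB/s
```

So the raw single-disk read works out to roughly 29 MB/s and the ZFS read to roughly 58 MB/s, which matches the 28 vs. 57 MB/s figures in this thread.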
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss