> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Anatoly
> 
> I've just made a clean test of sequential data read. The system has 45
> mirror vdevs.

90 disks in the system...  I bet you have a lot of RAM?


> 2. Read file normally:
> # time dd if=./big_file bs=128k of=/dev/null
> 161118683136 bytes (161 GB) copied, 103.455 seconds, 1.6 GB/s

I wonder how much of that is being read back from cache.  Would it be
possible to reboot, or otherwise invalidate the cache, before reading the
file back?
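
One way to do that without a full reboot, assuming the pool really is named
"volume" as in your later commands, would be to export and re-import it,
which should drop that pool's data from the ARC before you repeat the read:

# zpool export volume
# zpool import volume
# time dd if=./big_file bs=128k of=/dev/null

(Temporarily setting primarycache=none on the dataset would be another way
to take the ARC out of the picture for the read test.)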

With 90 disks, in theory, you should be able to read something like
90 Gbit/s = ~11 GB/s.  But of course various bus speed bottlenecks come into
play, so I don't think the 1.6 GB/s is unrealistically high in any way.
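
For what it's worth, that estimate assumes each spindle can sustain on the
order of 1 Gbit/s (~125 MB/s) of sequential reads, which is just a ballpark
guess on my part:

90 disks * ~125 MB/s  ~=  11,250 MB/s  ~=  11 GB/s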


> 3. Snapshot & send:
> # zfs snapshot volume/test@A
> # time zfs send volume/test@A > /dev/null
> real    7m20.635s
> user    0m0.004s
> sys     0m52.760s

This doesn't surprise me; based on gut feel, I don't think zfs send performs
optimally in general.
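
For comparison, if I assume the send stream is roughly the same 161 GB as
the file (a reasonable guess for a full send of that snapshot), the implied
throughput works out to something like:

161118683136 bytes / 440.6 s  ~=  365 MB/s  ~=  0.37 GB/s

i.e. somewhere around a quarter of what the plain dd read managed.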

I think your results are probably correct.  Even if you revisit all this,
doing the reboots (or cache invalidation) and/or using a newly created pool
as anyone here might suggest, I think you'll still see the same results,
just somewhat unpredictably.

Even so, I always find zfs send performance still beats the pants off any
alternative... rsync and whatnot.
