Good day,

I've just run a clean test of sequential data read. The system has 45 mirror vdevs.

1. Create a 160 GB random file.
2. Read it to /dev/null.
3. Take a snapshot and send it to /dev/null.
4. Compare the results.

1. Write speed is slow due to 'urandom':
# dd if=/dev/urandom bs=128k | pv > big_file
161118683136 bytes (161 GB) copied, 3962.15 seconds, 40.7 MB/s

2. Read the file normally:
# time dd if=./big_file bs=128k of=/dev/null
161118683136 bytes (161 GB) copied, 103.455 seconds, 1.6 GB/s
real    1m43.459s
user    0m0.899s
sys     1m25.078s

3. Snapshot & send:
# zfs snapshot volume/test@A
# time zfs send volume/test@A > /dev/null
real    7m20.635s
user    0m0.004s
sys     0m52.760s

4. As you can see, there is a 4x difference on a pure sequential read, under greenhouse conditions.
I repeated the tests a couple of times to check for ARC influence; it made little difference.
Real send speed on this system is around 60 MBytes/s, with occasional peaks near 100 MBytes/s.
A plain file read scales well across a large number of disks, but 'zfs send' is lame.
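
To watch the two read paths side by side while they run, here is a minimal sketch reusing the file, dataset and pv from above (the kstat line assumes a Solaris-style kstat(1M) and is only there to confirm the data is not being served from the ARC):

# dd if=./big_file bs=128k | pv > /dev/null
# zfs send volume/test@A | pv > /dev/null
# kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses

pv prints the instantaneous and average throughput of the pipe, so a slowdown in the send path shows up while it happens rather than only in the final timing.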

Under normal conditions, moving large amounts of data may take days to weeks. The send cannot fill
a 10G Ethernet connection, and sometimes not even a 1G one.
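
For reference, the numbers above are for a local send to /dev/null; a real replication run puts ssh and a receiving pool behind the same slow reader, roughly like this (the host name 'backuphost' and target dataset 'backup/test' are placeholders, not part of the test above):

# zfs send volume/test@A | ssh backuphost zfs receive backup/test

So the send side alone already caps the pipeline well below what the network link could carry.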

Best regards,
Anatoly Legkodymov.

On 16.11.2011 06:08, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Anatoly

The speed of send/recv is around 30-60 MBytes/s for initial send and
17-25 MBytes/s for incremental. I have seen lots of setups with 1 disk

I suggest watching zpool iostat before, during, and after the send to
/dev/null.  Actually, I take that back: zpool iostat seems to measure
virtual IOPS.  I just did this on my laptop a minute ago and saw 1.2k ops,
which is at least 5-6x more than my hard drive can handle; that can only
mean it's reading a lot of previously aggregated small blocks that are now
sequentially organized on disk.  How do you measure physical IOPS?  Is it
just regular iostat?  I have seriously put zero effort into answering this
question (sorry.)
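
One way to compare the two (a sketch, assuming a Solaris/illumos userland; the pool name and interval are just examples) is to run both counters in parallel while the send is going: zpool iostat shows pool-level, virtual operations, while iostat -xn shows per-device physical reads and writes.

# zpool iostat volume 5
# iostat -xn 5

If the virtual rate is far above what the disks report in the r/s column, many logical reads are being satisfied per physical read.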

I have certainly noticed a delay at the beginning, while the system thinks
about stuff for a little while before kicking off an incremental... And it's
acknowledged and normal that incrementals are likely fragmented all over the
place, so you could be IOPS-limited (hence watching the iostat).
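
For the incremental case specifically, the same measurement can be repeated against a second snapshot; a sketch reusing the dataset from the test above, where @B is just an arbitrary name for the new snapshot:

# zfs snapshot volume/test@B
# zfs send -i @A volume/test@B | pv > /dev/null

Running the iostat pair from above in another terminal during this send would show whether the incremental stream really is seek-bound.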

Also, whenever I sit and watch it for a long time, I see that it varies
enormously.  For 5 minutes it will be (some speed), and for the next 5
minutes it will be 5x higher...

Whatever it is, it's something we are all likely seeing, but probably just
ignoring.  If you can find it in your heart to just ignore it too, then
great, no problem.  ;-)  Otherwise, it's a matter of digging in and
characterizing it to learn more.

