On Wed, Nov 16, 2011 at 11:07 AM, Anatoly <legko...@fastmail.fm> wrote:

> I've just made clean test for sequential data read. System has 45 mirror
> vdevs.
> 1. Create 160GB random file.
> 2. Read it to /dev/null.
> 3. Do a snapshot and send it to /dev/null.
> 4. Compare results.
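
For reference, that test procedure amounts to roughly the following; the
dataset name is made up, and generating 160GB from /dev/urandom is slow,
so any pre-existing incompressible file would do as well:

    # Sketch of the test above; dataset "tank/test" is a placeholder.
    dd if=/dev/urandom of=/tank/test/bigfile bs=1024k count=163840  # 1. ~160GB random file
    dd if=/tank/test/bigfile of=/dev/null bs=1024k                  # 2. sequential read
    zfs snapshot tank/test@seqtest                                  # 3. snapshot the dataset...
    zfs send tank/test@seqtest > /dev/null                          #    ...and send it to /dev/null
    # 4. compare the elapsed times / throughput of steps 2 and 3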

What OS?

The following is under Solaris 10U9 with CPU_2010-10 + an IDR for a
SAS/SATA drive bug.

I just had to replicate over 20TB of small files using `zfs send -R
<zfs@snap> | zfs recv -e <zfs>`, and I got an AVERAGE throughput of
over 77 MB/sec (over 6 TB/day). The entire replication took just over
3 days.
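
Spelled out with placeholder names (pv is an optional third-party tool
I'm showing here only as a way to watch the pipe's throughput):

    # Recursive replication of a dataset tree and all of its snapshots.
    # "tank/data", "backup", and the snapshot name are placeholders.
    zfs snapshot -r tank/data@migrate
    zfs send -R tank/data@migrate | pv | zfs recv -e backup
    # -R sends the whole tree with its snapshots and properties; -e
    # receives it as backup/data, named after the last element of the
    # sent file system's path.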

The source zpool is on J4400 750GB SATA drives, 110 of them in a
RAIDz2 configuration (22 vdevs of 5 disks each). The target was a pair
of old h/w raid boxes (one without any NVRAM cache) with a zpool
configuration of 6 striped vdevs (a total of 72 drives behind the h/w
raid controllers doing raid5; this is temporary and only for moving
data physically around, so the lack of ZFS redundancy is not an
issue).
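
For concreteness, a layout like that source pool is built along these
lines; the pool and device names below are invented:

    # Sketch: 22 raidz2 vdevs of 5 disks each, 110 drives total.
    zpool create srcpool \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
        raidz2 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0
    # ...plus 20 more "raidz2 <5 disks>" groups to reach 110 drives.
    # Each 5-disk raidz2 vdev yields 3 disks of usable capacity, so
    # the pool has 66 data disks' worth of space.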

There are over 2300 snapshots on the source side and we were
replicating close to 2000 of them.
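
A quick way to get a snapshot count like that (the dataset name is a
placeholder):

    zfs list -H -r -t snapshot tank/data | wc -l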

Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players