On Fri, Jun 11, 2010 at 1:22 AM, valrh...@gmail.com <valrh...@gmail.com> wrote:
> I set up a new file server today using EON 0.600 (based on snv_130). I'm now
> copying files between mirrors, and the performance is slower than I had 
> hoped. I am trying to figure out what to do to make things a bit faster in 
> terms of performance. Thanks in advance for reading, and sharing any thoughts 
> you might have.
>
> System (brand new today): Dell PowerEdge T410. Intel Xeon E5504 2.0 GHz (Core
> i7-based) with 4 GB of RAM. I have one zpool of four 2-TB Hitachi Deskstar 
> SATA drives. I used the SATA mode on the motherboard (not the RAID mode, 
> because I don't want the motherboard's RAID controller to do something funny 
> to the drives). Everything gets recognized, and the EON storage "install" was 
> just fine.
>
> I then configured the drives into an array of two mirrors, made with zpool 
> create mirror (drives 1 and 2), then zpool add mirror (drives 3 and 4).
> The output from zpool status is:
>  state: ONLINE
>  scrub: none requested
> config:
>
>        NAME        STATE     READ WRITE CKSUM
>        hextb_data  ONLINE       0     0     0
>          mirror-0  ONLINE       0     0     0
>            c1d0    ONLINE       0     0     0
>            c1d1    ONLINE       0     0     0
>          mirror-1  ONLINE       0     0     0
>            c2d0    ONLINE       0     0     0
>            c2d1    ONLINE       0     0     0
>
> This is a 4TB array, initially empty, that I want to copy data TO.
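
(For reference, I assume that layout was created with something along these
lines, using the device names from the zpool status output above:

   zpool create hextb_data mirror c1d0 c1d1
   zpool add hextb_data mirror c2d0 c2d1

i.e. a stripe across two mirrored pairs.)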
>
> I then added two more 2 TB drives that were an existing pool on an older 
> machine. I want to move about 625 GB of deduped data from the old pool (the 
> simple mirror of two 2 TB drives that I physically moved over) to the new 
> pool. The case can accommodate all six drives.
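
(I assume the old pool was exported on the old machine and imported here after
the disks were moved, roughly:

   zpool export old_pool     # on the old machine, before pulling the disks
   zpool import old_pool     # on the new machine, once the disks are in

where old_pool is just a placeholder for whatever the old pool is really called.)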
>
> I snapshotted the old data on the 2 TB array, and made a new filesystem on 
> the 4 TB array. I then moved the data over with:
>
> zfs send -RD data_on_old_pool@snapshot | zfs recv -dF data_on_new_pool
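
(I take it the snapshot was created recursively first, since send -R expects it
to exist on every descendant dataset, i.e. something like:

   zfs snapshot -r data_on_old_pool@snapshot

with data_on_old_pool standing in for the real dataset name, as above.)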
>
> Here's the problem. When I run "iostat -xn", I get:
>
>                   extended device statistics
>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>   70.0    0.0 6859.4    0.3  0.2  0.2    2.1    2.4   5  10 c3d0
>   69.8    0.0 6867.0    0.3  0.2  0.2    2.2    2.4   5  10 c4d0
>   20.0   68.0  675.1 6490.6  0.9  0.6   10.0    6.6  22  32 c1d0
>   19.5   68.0  675.4 6490.6  0.9  0.6   10.1    6.7  22  33 c1d1
>   19.0   67.2  669.2 6492.5  1.2  0.7   13.8    7.8  28  36 c2d0
>   20.2   67.1  676.8 6492.5  1.2  0.7   13.9    7.8  28  37 c2d1
>
> The OLD pool is the mirror of c3d0 and c4d0. The NEW pool is the striped set 
> of mirrors involving c1d0, c1d1, c2d0 and c2d1.
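
(One thing to check on the numbers themselves: a plain "iostat -xn" with no
interval reports averages since boot, so an interval form gives a better view
of the current rate, and zpool can also show per-vdev throughput, e.g.:

   iostat -xn 5
   zpool iostat -v hextb_data 5

using the pool name from the zpool status output above.)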
>
> The transfer started out a few hours ago at about 3 MB/sec. Now it's nearly 7
> MB/sec. But why is this so low? Everything is deduped and compressed. And it's
> an internal transfer, within the same machine, from one set of hard drives to
> another, via the SATA controller. Yet the net effect is very slow. I'm trying
> to figure out what this is, since it's much slower than I would have hoped.


I don't think you can reach maximum throughput with a single send stream.

Your asvc_t does not look bad; it is under 8 ms.

You might get better throughput by writing the send stream to a flat file
instead of piping it into zfs recv:

   zfs send -RD data_on_old_pool@snapshot | cat > /newpool/data

But you will have to test it out.
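
To separate the read side from the write side, you could also time the send
with the stream thrown away (using the placeholder names from your command, so
substitute the real pool and snapshot):

   time zfs send -RD data_on_old_pool@snapshot > /dev/null

If that alone is slow, the bottleneck is on the read/send side rather than in
zfs recv or the new pool.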


>
> Any and all advice on what to do to troubleshoot and fix the problem would be 
> quite welcome. Thanks!
> --
> This message posted from opensolaris.org



-- 
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
