Re: [zfs-discuss] Help with slow zfs send | receive performance within the same box.

2010-06-11 Thread Brandon High
On Thu, Jun 10, 2010 at 10:22 PM, valrh...@gmail.com wrote:
 System (brand new today): Dell PowerEdge T410. Intel Xeon E5504 5.0 GHz (Core 
 i7-based) with 4 GB of RAM. I have one zpool of four 2-TB Hitachi Deskstar 
 SATA drives. I used the SATA mode on the motherboard (not the RAID mode, 
 because I don't want the motherboard's RAID controller to do something funny 
 to the drives). Everything gets recognized, and the EON storage install was 
 just fine.

Check that the system is using the AHCI driver. There is usually an
option in the BIOS for AHCI, SATA, or RAID.

You can check with 'prtconf -D'

If you're using the pci-ide driver, performance is going to be poor.
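
For example, something along these lines should show which driver has attached
(the exact device nodes and names will differ on your box):

   prtconf -D | egrep -i 'ahci|pci-ide'

If the disk controller is listed with driver name pci-ide rather than ahci,
you're on the legacy path.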

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Help with slow zfs send | receive performance within the same box.

2010-06-11 Thread valrh...@gmail.com
So I think you're right. With the ATA option, I can see the pci-ide driver.

However, there is no AHCI option; the only other two are off (obviously 
useless) and RAID. The RAID option hands the disks over to the RAID controller 
on the motherboard, and nothing I've tried there (formatting the disks, 
initializing them in various ways) works at all. That is, when I boot back into 
EON and run format, I don't see anything. It just says 

Searching for disks...done
No disks found!

Any ideas? Maybe I should just buy a SATA controller which is known to work 
with OpenSolaris?
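
If you want to double-check whether the OS sees the controller and disks at all
in that mode, a couple of stock commands are worth a look (just a sketch; the
output varies from box to box):

   cfgadm -al
   iostat -En

cfgadm -al lists the attachment points the system knows about, and iostat -En
lists every disk the kernel has enumerated along with vendor and model strings.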

The good part is that I can go back to ATA mode and my data is still there, so 
at least nothing has been lost yet. I don't trust these motherboard RAID 
controllers, because if something goes wrong, you have to have the same model 
of controller. It also means you can't easily move drives. So I want to avoid 
RAID mode if it does anything that ties the drives to that particular 
motherboard/controller.

Or is there another way?


Re: [zfs-discuss] Help with slow zfs send | receive performance within the same box.

2010-06-11 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of valrh...@gmail.com
 
 So I think you're right. With the ATA option, I can see the pci-ide
 driver.

Um, if you'd like to carry on a conversation, you'll have to be better at
quoting.  This response you posted is totally out of context, and a lot of
people (like me) won't know what you're talking about anymore, because your
previous thread of discussion isn't the only thing we're thinking about.

Suggestions are:

When replying, keep the original From line.  (As above.)

Use in-line quoting, as above. 



Re: [zfs-discuss] Help with slow zfs send | receive performance within the same box.

2010-06-11 Thread valrh...@gmail.com
  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of valrh...@gmail.com
  
  So I think you're right. With the ATA option, I can see the pci-ide
  driver.
 
 Um, if you'd like to carry on a conversation, you'll have to be better at
 quoting.  This response you posted is totally out of context, and a lot of
 people (like me) won't know what you're talking about anymore, because your
 previous thread of discussion isn't the only thing we're thinking about.
 
 Suggestions are:
 
 When replying, keep the original From line.  (As above.)
 
 Use in-line quoting, as above. 
Thanks. I just saw a rather heated exchange on zfs-discuss about how quoting is 
getting out of hand, so I tried to keep my message short. I'll do a better job 
in the future; thanks for the heads-up.


Re: [zfs-discuss] Help with slow zfs send | receive performance within the same box.

2010-06-11 Thread Asif Iqbal
On Fri, Jun 11, 2010 at 1:22 AM, valrh...@gmail.com wrote:
 Today I set up a new fileserver using EON 0.600 (based on SNV130). I'm now 
 copying files between mirrors, and the performance is slower than I had 
 hoped. I am trying to figure out what to do to make things a bit faster. 
 Thanks in advance for reading and sharing any thoughts you might have.

 System (brand new today): Dell PowerEdge T410. Intel Xeon E5504 5.0 GHz (Core 
 i7-based) with 4 GB of RAM. I have one zpool of four 2-TB Hitachi Deskstar 
 SATA drives. I used the SATA mode on the motherboard (not the RAID mode, 
 because I don't want the motherboard's RAID controller to do something funny 
 to the drives). Everything gets recognized, and the EON storage install was 
 just fine.

 I then configured the drives into an array of two mirrors, made with zpool 
 create mirror (drives 1 and 2), then zpool add mirror (drives 3 and 4).
 The output from zpool status is:
  state: ONLINE
  scrub: none requested
 config:

        NAME        STATE     READ WRITE CKSUM
        hextb_data  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
            c2d1    ONLINE       0     0     0

 This is a 4TB array, initially empty, that I want to copy data TO.

 I then added two more 2 TB drives that were an existing pool on an older 
 machine. I want to move about 625 GB of deduped data from the old pool (the 
 simple mirror of two 2 TB drives that I physically moved over) to the new 
 pool. The case can accommodate all six drives.

 I snapshotted the old data on the 2 TB array, and made a new filesystem on 
 the 4 TB array. I then moved the data over with:

 zfs send -RD data_on_old_p...@snapshot | zfs recv -dF data_on_new_pool

 Here's the problem. When I run iostat -xn, I get:

                   extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   70.0    0.0 6859.4    0.3  0.2  0.2    2.1    2.4   5  10 c3d0
   69.8    0.0 6867.0    0.3  0.2  0.2    2.2    2.4   5  10 c4d0
   20.0   68.0  675.1 6490.6  0.9  0.6   10.0    6.6  22  32 c1d0
   19.5   68.0  675.4 6490.6  0.9  0.6   10.1    6.7  22  33 c1d1
   19.0   67.2  669.2 6492.5  1.2  0.7   13.8    7.8  28  36 c2d0
   20.2   67.1  676.8 6492.5  1.2  0.7   13.9    7.8  28  37 c2d1

 The OLD pool is the mirror of c3d0 and c4d0. The NEW pool is the striped set 
 of mirrors involving c1d0, c1d1, c2d0 and c2d1.

 The transfer started out a few hours ago at about 3 MB/sec and is now nearly 7 
 MB/sec. But why is this so low? Everything is deduped and compressed, and it's 
 an internal transfer, within the same machine, from one set of hard drives to 
 another, via the SATA controller. Yet the net effect is very slow, and I'm 
 trying to figure out why, since it's much slower than I would have hoped.

 Any and all advice on what to do to troubleshoot and fix the problem would be 
 quite welcome. Thanks!

I don't think you can reach maximum throughput with a single stream.

Your asvc_t doesn't look bad; it's under 8 ms.

You might get better throughput with

   zfs send -RD data_on_old_p...@snapshot | cat > /newpool/data

but you'd have to test it out.
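
One quick way to narrow down which side is the limit is to time the send stream
by itself, with nothing on the receive end. A rough sketch (the pool and
snapshot names here are placeholders for the real ones):

   time zfs send -RD oldpool@snapshot > /dev/null

If that alone crawls along at a few MB/sec, the source pool and stream
generation are the bottleneck; if it runs much faster, look at the receive side.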




-- 
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


[zfs-discuss] Help with slow zfs send | receive performance within the same box.

2010-06-10 Thread valrh...@gmail.com
Today I set up a new fileserver using EON 0.600 (based on SNV130). I'm now 
copying files between mirrors, and the performance is slower than I had hoped. 
I am trying to figure out what to do to make things a bit faster. Thanks in 
advance for reading and sharing any thoughts you might have.

System (brand new today): Dell PowerEdge T410. Intel Xeon E5504 5.0 GHz (Core 
i7-based) with 4 GB of RAM. I have one zpool of four 2-TB Hitachi Deskstar SATA 
drives. I used the SATA mode on the motherboard (not the RAID mode, because I 
don't want the motherboard's RAID controller to do something funny to the 
drives). Everything gets recognized, and the EON storage install was just 
fine. 

I then configured the drives into an array of two mirrors, made with zpool 
create mirror (drives 1 and 2), then zpool add mirror (drives 3 and 4); a 
sketch of those commands appears after the status output below. 
The output from zpool status is:
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        hextb_data  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
            c2d1    ONLINE       0     0     0

This is a 4TB array, initially empty, that I want to copy data TO.
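
For reference, a sketch of the two commands described above, using the pool and
device names from the status output (option flags omitted; adjust as needed):

   zpool create hextb_data mirror c1d0 c1d1
   zpool add hextb_data mirror c2d0 c2d1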

I then added two more 2 TB drives that were an existing pool on an older 
machine. I want to move about 625 GB of deduped data from the old pool (the 
simple mirror of two 2 TB drives that I physically moved over) to the new pool. 
The case can accommodate all six drives. 

I snapshotted the old data on the 2 TB array, and made a new filesystem on the 
4 TB array. I then moved the data over with:

zfs send -RD data_on_old_p...@snapshot | zfs recv -dF data_on_new_pool

Here's the problem. When I run iostat -xn, I get:

                   extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   70.0    0.0 6859.4    0.3  0.2  0.2    2.1    2.4   5  10 c3d0
   69.8    0.0 6867.0    0.3  0.2  0.2    2.2    2.4   5  10 c4d0
   20.0   68.0  675.1 6490.6  0.9  0.6   10.0    6.6  22  32 c1d0
   19.5   68.0  675.4 6490.6  0.9  0.6   10.1    6.7  22  33 c1d1
   19.0   67.2  669.2 6492.5  1.2  0.7   13.8    7.8  28  36 c2d0
   20.2   67.1  676.8 6492.5  1.2  0.7   13.9    7.8  28  37 c2d1

The OLD pool is the mirror of c3d0 and c4d0. The NEW pool is the striped set of 
mirrors involving c1d0, c1d1, c2d0 and c2d1.
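
A per-pool view of the same transfer makes that mapping easier to watch;
something like the following prints per-vdev throughput every five seconds (the
interval is arbitrary):

   zpool iostat -v 5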

The transfer started out a few hours ago at about 3 MB/sec and is now nearly 7 
MB/sec. But why is this so low? Everything is deduped and compressed, and it's 
an internal transfer, within the same machine, from one set of hard drives to 
another, via the SATA controller. Yet the net effect is very slow, and I'm 
trying to figure out why, since it's much slower than I would have hoped.

Any and all advice on what to do to troubleshoot and fix the problem would be 
quite welcome. Thanks!