Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-09 Thread Thomas W
Okay... I found the solution to my problem.

And it had nothing to do with my hard drives at all: it was the Realtek NIC
driver. I read about known problems with it and installed a newer driver (a
tip I picked up from the forum thread). Now I get about 30MB/s read and
25MB/s write. That's enough for now.
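
In case anyone else runs into the same thing: after swapping the driver you
can sanity-check the link with dladm (the link name rge0 here is just an
example; yours may differ):

  $ dladm show-link
  $ dladm show-phys rge0

show-phys reports the negotiated speed and duplex; a gigabit link stuck at
100M half-duplex is one classic symptom of a misbehaving NIC driver.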

Thanks for all your input and support. 

Thomas


Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-08 Thread Thomas W
Hi, it's me again.

First of all: technically, slicing the drive worked as it should.

I started to experiment and found some issues I don't really understand.

My base playground setup:
- Intel D945GCLF2, 2GB ram, Opensolaris from EON
- 2 Sata Seagates 500GB

A normal zpool across the two drives, to get a TB of space.
Then I added a 1TB USB drive (sliced into two 500GB partitions) and attached
one slice to each SATA drive to mirror it.
Worked great...
But suddenly the throughput dropped from around 15MB/s to 300KB/s. After
detaching the USB slices it went back to 15MB/s.

My question:
Is mixing USB 2.0 external drives with SATA drives simply a bad idea, or is
the problem that I sliced the external drive?

After removing the USB drive I did a little benchmarking, as I was curious
how well the Intel system performs at all.
I wonder if this 'zpool iostat' output is okay (to me it doesn't look right):
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    178      0  22.2M      0
sumpf        804G   124G     78      0  9.85M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0

Why are there so many zeros in this chart? No wonder I only get 15MB/s max...

Thanks for helping a Solaris beginner. Your help is much appreciated.
Thomas


Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-08 Thread Erik Trimble

Thomas W wrote:

Hi, it's me again.

First of all: technically, slicing the drive worked as it should.

I started to experiment and found some issues I don't really understand.

My base playground setup:
- Intel D945GCLF2, 2GB ram, Opensolaris from EON
- 2 Sata Seagates 500GB

A normal zpool across the two drives, to get a TB of space.
Then I added a 1TB USB drive (sliced into two 500GB partitions) and attached
one slice to each SATA drive to mirror it.
Worked great...
But suddenly the throughput dropped from around 15MB/s to 300KB/s. After
detaching the USB slices it went back to 15MB/s.

My question:
Is mixing USB 2.0 external drives with SATA drives simply a bad idea, or is
the problem that I sliced the external drive?

After removing the USB drive I did a little benchmarking, as I was curious
how well the Intel system performs at all.
I wonder if this 'zpool iostat' output is okay (to me it doesn't look right):
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    178      0  22.2M      0
sumpf        804G   124G     78      0  9.85M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0

Why are there so many zeros in this chart? No wonder I only get 15MB/s max...

Thanks for helping a Solaris beginner. Your help is much appreciated.
Thomas
  


USB isn't great, but it's not responsible for your problem. Slicing the 
1TB disk into 2 partitions is. Think about this: the original zpool 
(with the 2 500GB drives) is configured as a stripe - (most) data is 
written across both drives simultaneously, so you get roughly 2x the 
performance of a single drive. You've now added half of a SINGLE disk 
as the mirror of each 500GB drive. When data is written to your zpool, 
it goes to each 500GB drive (which can happen independently), but it 
also has to be written to each half of the 1TB USB drive - so that 
drive is in serious I/O contention, because every write to the zpool 
queues 2 writes to the 1TB drive (1 for each 500GB partition). That 
causes both seek time and access time delays, which is going to thrash 
your 1TB disk but good.
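
To make that concrete: while the USB halves were attached, your layout was
effectively this (device names invented for illustration):

   sumpf
     mirror-0
       c1d0          <- SATA disk 1
       c3t0d0s0      <- first half of the USB disk
     mirror-1
       c2d0          <- SATA disk 2
       c3t0d0s1      <- second half of the USB disk

Every write to the pool lands on both mirrors, and both mirrors share the
single USB spindle.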


To take a look at what's going on, use this command version of iostat:

% iostat -dnx

For instance, my current system shows:

$ iostat -dnx 10
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.3    1.0    1.6  0.0  0.0    5.3   10.1   0   0 c7d0
    0.0    0.3    0.9    1.6  0.0  0.0    5.2   10.5   0   0 c8d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c7t1d0
    2.6    3.2  156.1   34.3  0.1  0.1   13.8   23.6   1   2 c9t2d0
    2.4    3.2  142.9   34.3  0.1  0.1   17.4   26.1   1   2 c9t3d0
    2.5    3.1  152.9   34.3  0.1  0.2   23.1   38.0   1   3 c9t4d0
    2.7    3.1  164.9   34.3  0.1  0.2   24.1   36.1   1   3 c9t5d0
    2.5    3.2  152.4   34.3  0.1  0.2   22.4   39.3   1   3 c9t6d0
    2.7    3.1  166.3   34.4  0.1  0.2   23.6   38.5   1   3 c9t7d0


I'm running a raidz pool on this, with all drives on c9. As you can 
see, it's quite balanced, with all the c9 drives having roughly the same 
wait and wsvc_t, and the %w is very low. I suspect you'll see a 
radically different picture, with your 1TB drive showing much higher 
wsvc_t/asvc_t and %w numbers than your 500GB drives.



A drop from 15MB/s to 300KB/s seems a little radical, though (that's a 
50x reduction), so I'm also a little suspicious of your USB connection. 
To see what your USB throughput really is: remove the 1TB disk's mirror 
partitions, create a separate zpool with just 1 of the partitions, and 
then run iostat on it (under some load, of course). That will at least 
tell you the raw performance of the USB disk.
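
Something along these lines - the device name c3t0d0s0 is invented, use
whatever format(1M) shows for your USB slice, and -f may be needed if ZFS
still sees the old label:

   # zpool create -f usbtest c3t0d0s0
   # dd if=/dev/zero of=/usbtest/junk bs=1024k count=2000 &
   # iostat -dnx 10

That shows the raw sequential write rate of the USB path in isolation; a
healthy USB 2.0 disk should manage somewhere in the 20-30MB/s range.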





--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-08 Thread Richard Elling
On Mar 8, 2010, at 1:00 AM, Thomas W wrote:
 Hi, it's me again.
 
 First of all: technically, slicing the drive worked as it should.
 
 I started to experiment and found some issues I don't really understand.
 
 My base playground setup:
 - Intel D945GCLF2, 2GB ram, Opensolaris from EON
 - 2 Sata Seagates 500GB
 
 A normal zpool across the two drives, to get a TB of space.
 Then I added a 1TB USB drive (sliced into two 500GB partitions) and
 attached one slice to each SATA drive to mirror it.
 Worked great...
 But suddenly the throughput dropped from around 15MB/s to 300KB/s. After
 detaching the USB slices it went back to 15MB/s.
 
 My question:
 Is mixing USB 2.0 external drives with SATA drives simply a bad idea, or
 is the problem that I sliced the external drive?
 
 After removing the USB drive I did a little benchmarking, as I was
 curious how well the Intel system performs at all.
 I wonder if this 'zpool iostat' output is okay (to me it doesn't look
 right):
                capacity     operations    bandwidth
 pool        alloc   free   read  write   read  write
 ----------  -----  -----  -----  -----  -----  -----
 sumpf        804G   124G    257      0  32.0M      0
 sumpf        804G   124G      0      0      0      0
 sumpf        804G   124G    178      0  22.2M      0
 sumpf        804G   124G     78      0  9.85M      0
 sumpf        804G   124G      0      0      0      0
 sumpf        804G   124G    257      0  32.0M      0
 sumpf        804G   124G      0      0      0      0
 sumpf        804G   124G      0      0      0      0
 sumpf        804G   124G    257      0  32.0M      0
 sumpf        804G   124G      0      0      0      0
 sumpf        804G   124G    257      0  32.0M      0
 sumpf        804G   124G      0      0      0      0
 
 Why are there so many zeros in this chart? No wonder I only get 15MB/s max...

The Nyquist-Shannon sampling theorem applies here.  If a function x(t)
contains no frequencies higher than 0.033 Hz, it is completely determined
by giving its ordinates at a series of points spaced 15 seconds apart.  In
other words, if your iostat samples at a higher rate than the txg commit
interval (30 seconds), then you can see periods of time where the disk is idle.
If your sampling interval is > 30 seconds, then you will see a smoother
I/O rate.
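
For example (the interval is the trailing number):

   # zpool iostat sumpf 1     <- bursts of 32MB/s separated by zeros
   # zpool iostat sumpf 30    <- interval >= txg commit, smooth average

The zeros aren't lost bandwidth, just the idle gaps between txg commits.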
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)






Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-02 Thread Richard Elling
On Mar 2, 2010, at 11:58 AM, Thomas W wrote:
 Hi!
 
 I'm new to ZFS so this may be (or certainly is) a kind of newbie question.
 
 I started with a small server I built from parts I had left over.
 I only had 2 500GB drives and wanted to go for space. So I just created a
 zpool without any options. That now looks like this:
 
    NAME    STATE     READ WRITE CKSUM
    swamp   ONLINE       0     0     0
      c1d0  ONLINE       0     0     0
      c2d0  ONLINE       0     0     0
 
 So far so good. But as always, the provisional solution became a permanent
 one. Now I have an extra 1TB disk that I can add to the system, and I want
 to go for file security.
 
 How can I get the best out of this setup? Is there a way of mirroring the
 data automatically across those three drives?
 
 Any help is appreciated but please don't tell me I have to delete anything ;)

If the number of available blocks on the 1TB disk is 2x the 500GB disks,
then you could make 2 partitions (slices) on the 1TB disk and mirror one
to each of the other disks. If the slice sizes are close but slightly
smaller than the 500GB drives, then you might need to be on a later build
to attach the mirror.
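
A sketch of what that attach would look like - device names invented, check
format(1M) for the real ones; say the 1TB disk is c3d0 with two ~500GB
slices s0 and s1:

   # zpool attach swamp c1d0 c3d0s0
   # zpool attach swamp c2d0 c3d0s1

Each attach converts that top-level device into a mirror and starts a
resilver.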
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)






Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-02 Thread Thomas W
Thanks... works perfectly!

Currently it's resilvering. That was all too easy ;)

Thanks again,
  Thomas


Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-02 Thread Cindy Swearingen

Hi Thomas,

I see that Richard has suggested mirroring your existing pool by
attaching slices from your 1 TB disk if the sizing is right.

You mentioned file security and I think you mean protecting your data
from hardware failures. Another option is to get one more disk to
convert this non-redundant pool to a mirrored pool by attaching the 1 TB 
disk and another similarly sized disk. See the example below.


Another idea would be to create a new pool on the 1 TB disk and then
use zfs send/receive to copy the data over from swamp. But this wouldn't
work here: you couldn't reuse swamp's disks by attaching the 500GB
disks to the new pool, because they are smaller than the 1 TB disk.
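
(For reference, that pattern would be something like the following, with
the new pool name invented:

   # zfs snapshot -r swamp@move
   # zfs send -R swamp@move | zfs receive -d newpool

but as noted, you would then have nowhere to attach the 500GB disks.)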

Keep in mind that if you do recreate this pool as a mirrored
configuration:

mirror pool = 1 500GB + 1 500GB disk, total capacity is 500GB
mirror pool = 1 500GB + 1 1TB disk, total capacity is 500GB

Because of the unequal disk sizing, the mirrored pool capacity would
be equal to the smallest disk.

Thanks,

Cindy

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        tank      ONLINE       0     0     0
          c2t7d0  ONLINE       0     0     0
          c2t8d0  ONLINE       0     0     0

errors: No known data errors
# zpool attach tank c2t7d0 c2t9d0
# zpool attach tank c2t8d0 c2t10d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Tue Mar  2 14:32:21 2010

config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c2t7d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
          mirror-1   ONLINE       0     0     0
            c2t8d0   ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0  56.5K resilvered

errors: No known data errors

On 03/02/10 12:58, Thomas W wrote:

Hi!

I'm new to ZFS so this may be (or certainly is) a kind of newbie question.

I started with a small server I built from parts I had left over.
I only had 2 500GB drives and wanted to go for space. So I just created a
zpool without any options. That now looks like this:

    NAME    STATE     READ WRITE CKSUM
    swamp   ONLINE       0     0     0
      c1d0  ONLINE       0     0     0
      c2d0  ONLINE       0     0     0

So far so good. But as always, the provisional solution became a permanent
one. Now I have an extra 1TB disk that I can add to the system, and I want
to go for file security.

How can I get the best out of this setup? Is there a way of mirroring the
data automatically across those three drives?

Any help is appreciated but please don't tell me I have to delete anything ;)

Thanks a lot,
  Thomas



Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-02 Thread Thomas Wuerdemann
Hi Cindy,

thanks for your advice. I guess mirroring onto a separate physical drive
would be the better way, but Richard's suggestion fit my current situation
better: I didn't want to buy an extra disk or copy all the data back and
forth. I just happened to have a spare 1TB drive around and wondered how I
could use it to get some sort of protection for my small but trusty file
server.

I believe this is a good solution for now.

Of course I will fix it as soon as I have another 500GB-or-bigger drive
available. Till then, my current setup has to work ;)

Thanks a lot,
  Thomas


2010/3/2 Cindy Swearingen cindy.swearin...@sun.com

 Hi Thomas,

 I see that Richard has suggested mirroring your existing pool by
 attaching slices from your 1 TB disk if the sizing is right.

 You mentioned file security and I think you mean protecting your data
 from hardware failures. Another option is to get one more disk to
 convert this non-redundant pool to a mirrored pool by attaching the 1 TB
 disk and another similarly sized disk. See the example below.

 Another idea would be to create a new pool on the 1 TB disk and then
 use zfs send/receive to copy the data over from swamp. But this wouldn't
 work here: you couldn't reuse swamp's disks by attaching the 500GB
 disks to the new pool, because they are smaller than the 1 TB disk.

 Keep in mind that if you do recreate this pool as a mirrored
 configuration:

 mirror pool = 1 500GB + 1 500GB disk, total capacity is 500GB
 mirror pool = 1 500GB + 1 1TB disk, total capacity is 500GB

 Because of the unequal disk sizing, the mirrored pool capacity would
 be equal to the smallest disk.

 Thanks,

 Cindy

 # zpool status tank
  pool: tank
  state: ONLINE
  scrub: none requested
 config:


        NAME      STATE     READ WRITE CKSUM
        tank      ONLINE       0     0     0
          c2t7d0  ONLINE       0     0     0
          c2t8d0  ONLINE       0     0     0

 errors: No known data errors
 # zpool attach tank c2t7d0 c2t9d0
 # zpool attach tank c2t8d0 c2t10d0
 # zpool status tank
  pool: tank
  state: ONLINE
  scrub: resilver completed after 0h0m with 0 errors on Tue Mar  2 14:32:21 2010
 config:


        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c2t7d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
          mirror-1   ONLINE       0     0     0
            c2t8d0   ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0  56.5K resilvered

 errors: No known data errors


 On 03/02/10 12:58, Thomas W wrote:

 Hi!

 I'm new to ZFS so this may be (or certainly is) a kind of newbie question.

 I started with a small server I built from parts I had left over.
 I only had 2 500GB drives and wanted to go for space. So I just created a
 zpool without any options. That now looks like this:

    NAME    STATE     READ WRITE CKSUM
    swamp   ONLINE       0     0     0
      c1d0  ONLINE       0     0     0
      c2d0  ONLINE       0     0     0

 So far so good. But as always, the provisional solution became a permanent
 one. Now I have an extra 1TB disk that I can add to the system, and I want
 to go for file security.

 How can I get the best out of this setup? Is there a way of mirroring the
 data automatically across those three drives?

 Any help is appreciated but please don't tell me I have to delete anything
 ;)

 Thanks a lot,
  Thomas

