Softraid raid 5 throughput problem

2012-01-16 Thread keith
I built a storage server to run the Bacula storage daemon on.  My plan 
was to boot off a USB key and then use the four 2TB SATA disks that are in 
the server as a softraid RAID 5 volume. The server in question is a Dell 
PowerEdge R310, i3 CPU 540 @ 3.07GHz, running OpenBSD 5.0 amd64.


I put the OS onto the USB key, but the softraid RAID 5 volume seemed really 
slow. SFTPing files over the local network to the server's softraid 
volume was taking ages. As I was short of time I just rebuilt the 
server, installing OpenBSD onto one of the SATA disks (wd0).


Later I connected to the server and made a RAID 5 volume on the remaining 
three disks, but the speed was still really slow, so I tried a RAID 1 on two 
of the disks and that works fine speed-wise.


I've tried to get some stats to figure out what's going on:

RAID 5 (wd1, wd2, wd3): time for newfs command to complete = 1 min 14 secs
RAID 5 (wd1, wd2, wd3): time to copy a 2.3G file from wd0 onto the softraid 
RAID 5 disk = roughly 5 mins


RAID 1 (wd1, wd2) = 1.8TB: time for newfs command to complete = 4 secs
RAID 1 (wd1, wd2): time to copy the same 2.3G file from wd0 onto the 
softraid disk = 25 secs
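
In case it helps, this is roughly how the RAID 5 volume was created and 
tested - I'm reconstructing the commands from memory, so treat the exact 
flags and the test file path as approximate:

# bioctl -c 5 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0
(softraid attaches the volume as a new sd(4) device, sd0 in this case)
# fdisk -iy sd0
# disklabel -E sd0        (add an 'a' partition spanning the volume)
# time newfs sd0a
# mount /dev/sd0a /mnt
# time cp /tmp/2.3G-testfile /mnt/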


At this point I thought I'd try RAID 0, but the server hung for 
some reason.


# bioctl -d sd0
# bioctl -c 0 -l /dev/wd2a,/dev/wd3a softraid0   <- it hung on this 
command. I won't know what happened until I get to the datacenter.


Ideally I wanted one large disk, but if I can't get a quick RAID 5 working I 
will just use two softraid RAID 1 volumes and work around it. Does anyone 
have any suggestions?


Thanks
Keith



Re: Softraid raid 5 throughput problem

2012-01-16 Thread Tomas Bodzar
On Mon, Jan 16, 2012 at 1:12 PM, keith ke...@scott-land.net wrote:
> I built a storage server to run the Bacula storage daemon on. My plan was
> to boot off a USB key and then use the four 2TB SATA disks that are in the
> server as a softraid RAID 5 volume. The server in question is a Dell
> PowerEdge R310, i3 CPU 540 @ 3.07GHz, running OpenBSD 5.0 amd64.
>
> I put the OS onto the USB key, but the softraid RAID 5 volume seemed really
> slow. SFTPing files over the local network to the server's softraid volume
> was taking ages. As I was short of time I just rebuilt the server,
> installing OpenBSD onto one of the SATA disks (wd0).
>
> Later I connected to the server and made a RAID 5 volume on the remaining
> three disks, but the speed was still really slow, so I tried a RAID 1 on
> two of the disks and that works fine speed-wise.
>
> I've tried to get some stats to figure out what's going on:
>
> RAID 5 (wd1, wd2, wd3): time for newfs command to complete = 1 min 14 secs
> RAID 5 (wd1, wd2, wd3): time to copy a 2.3G file from wd0 onto the softraid
> RAID 5 disk = roughly 5 mins
>
> RAID 1 (wd1, wd2) = 1.8TB: time for newfs command to complete = 4 secs
> RAID 1 (wd1, wd2): time to copy the same 2.3G file from wd0 onto the
> softraid disk = 25 secs
>
> At this point I thought I'd try RAID 0, but the server hung for some
> reason.
>
> # bioctl -d sd0
> # bioctl -c 0 -l /dev/wd2a,/dev/wd3a softraid0   <- it hung on this
> command. I won't know what happened until I get to the datacenter.
>
> Ideally I wanted one large disk, but if I can't get a quick RAID 5 working
> I will just use two softraid RAID 1 volumes and work around it. Does anyone
> have any suggestions?

If you are concerned about speed, then RAID 5 (or similar) is not (and
will not be) a good choice with any filesystem:

http://constantin.glez.de/blog/2010/01/home-server-raid-greed-and-why-mirroring-still-best


> Thanks
> Keith



Re: Softraid raid 5 throughput problem

2012-01-16 Thread Joel Sing
On Monday 16 January 2012, keith wrote:
> I built a storage server to run the Bacula storage daemon on.  My plan
> was to boot off a USB key and then use the four 2TB SATA disks that are in
> the server as a softraid RAID 5 volume. The server in question is a Dell
> PowerEdge R310, i3 CPU 540 @ 3.07GHz, running OpenBSD 5.0 amd64.
>
> I put the OS onto the USB key, but the softraid RAID 5 volume seemed really
> slow. SFTPing files over the local network to the server's softraid
> volume was taking ages. As I was short of time I just rebuilt the server,
> installing OpenBSD onto one of the SATA disks (wd0).
>
> Later I connected to the server and made a RAID 5 volume on the remaining
> three disks, but the speed was still really slow, so I tried a RAID 1 on
> two of the disks and that works fine speed-wise.
>
> I've tried to get some stats to figure out what's going on:
>
> RAID 5 (wd1, wd2, wd3): time for newfs command to complete = 1 min 14 secs
> RAID 5 (wd1, wd2, wd3): time to copy a 2.3G file from wd0 onto the softraid
> RAID 5 disk = roughly 5 mins
>
> RAID 1 (wd1, wd2) = 1.8TB: time for newfs command to complete = 4 secs
> RAID 1 (wd1, wd2): time to copy the same 2.3G file from wd0 onto the
> softraid disk = 25 secs

RAID 5 with softraid(4) is not ready for primetime - in particular, it does not 
support scrub or rebuild. If you have a single disk failure you will get to 
keep your data; however, you will need to dump/rebuild/restore.

I'm not specifically aware of performance issues, but I'm not entirely 
surprised either - I'll try to take a look at some point. RAID 5 writes will 
be slower (each small write means reading back the old data and parity, then 
writing the new data and parity), but not that much slower...

> At this point I thought I'd try RAID 0, but the server hung for
> some reason.
>
> # bioctl -d sd0
> # bioctl -c 0 -l /dev/wd2a,/dev/wd3a softraid0   <- it hung on this
> command. I won't know what happened until I get to the datacenter.

I'm guessing that you did not clear the existing RAID 1 metadata first, in 
which case you'll probably have a divide by zero with a trace that ends in 
sr_raid1_assemble() - there is a bug there that I hit the other night.
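
If that's what happened here, detaching the volume and zeroing the start of 
each chunk before re-creating should clear the stale metadata. To be clear, 
this dd trick is the usual workaround rather than an official bioctl feature, 
and it will destroy anything on those partitions:

# bioctl -d sd0                                 (detach the old volume first)
# dd if=/dev/zero of=/dev/rwd2a bs=1m count=1   (zero the metadata area)
# dd if=/dev/zero of=/dev/rwd3a bs=1m count=1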

> Ideally I wanted one large disk, but if I can't get a quick RAID 5 working
> I will just use two softraid RAID 1 volumes and work around it. Does anyone
> have any suggestions?

I'd stick with RAID 1 - you can use more than two disks, which will give you 
increased redundancy and should improve read throughput. Obviously you'll 
have less capacity though.
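
For example, assuming each of the three remaining disks has a RAID-fstype 'a' 
partition (illustrative only - adjust the partition names to your layout):

# bioctl -c 1 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0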
-- 

Reason is not automatic. Those who deny it cannot be conquered by it.
 Do not count on them. Leave them alone. -- Ayn Rand



Re: Softraid raid 5 throughput problem

2012-01-16 Thread keith

On 16/01/2012 15:43, Joel Sing wrote:

> On Monday 16 January 2012, keith wrote:
>> I built a storage server to run the Bacula storage daemon on.  My plan
>> was to boot off a USB key and then use the four 2TB SATA disks that are in
>> the server as a softraid RAID 5 volume. The server in question is a Dell
>> PowerEdge R310, i3 CPU 540 @ 3.07GHz, running OpenBSD 5.0 amd64.
>>
>> I put the OS onto the USB key, but the softraid RAID 5 volume seemed
>> really slow. SFTPing files over the local network to the server's softraid
>> volume was taking ages. As I was short of time I just rebuilt the server,
>> installing OpenBSD onto one of the SATA disks (wd0).
>>
>> Later I connected to the server and made a RAID 5 volume on the remaining
>> three disks, but the speed was still really slow, so I tried a RAID 1 on
>> two of the disks and that works fine speed-wise.
>>
>> I've tried to get some stats to figure out what's going on:
>>
>> RAID 5 (wd1, wd2, wd3): time for newfs command to complete = 1 min 14 secs
>> RAID 5 (wd1, wd2, wd3): time to copy a 2.3G file from wd0 onto the
>> softraid RAID 5 disk = roughly 5 mins
>>
>> RAID 1 (wd1, wd2) = 1.8TB: time for newfs command to complete = 4 secs
>> RAID 1 (wd1, wd2): time to copy the same 2.3G file from wd0 onto the
>> softraid disk = 25 secs
>
> RAID 5 with softraid(4) is not ready for primetime - in particular, it does
> not support scrub or rebuild. If you have a single disk failure you will
> get to keep your data; however, you will need to dump/rebuild/restore.
>
> I'm not specifically aware of performance issues, but I'm not entirely
> surprised either - I'll try to take a look at some point. RAID 5 writes
> will be slower, but not that much slower...
>
>> At this point I thought I'd try RAID 0, but the server hung for
>> some reason.
>>
>> # bioctl -d sd0
>> # bioctl -c 0 -l /dev/wd2a,/dev/wd3a softraid0   <- it hung on this
>> command. I won't know what happened until I get to the datacenter.
>
> I'm guessing that you did not clear the existing RAID 1 metadata first, in
> which case you'll probably have a divide by zero with a trace that ends in
> sr_raid1_assemble() - there is a bug there that I hit the other night.
>
>> Ideally I wanted one large disk, but if I can't get a quick RAID 5 working
>> I will just use two softraid RAID 1 volumes and work around it. Does
>> anyone have any suggestions?
>
> I'd stick with RAID 1 - you can use more than two disks, which will give
> you increased redundancy and should improve read throughput. Obviously
> you'll have less capacity though.
Thanks for the quick answers. If I just create two RAID 1 sets on the 
server, could I then make a RAID 0 volume using both RAID 1s?
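
i.e. stacking one softraid volume on top of another, along these lines (disk 
names purely for illustration - I haven't tried this, and given the RAID 0 
hang above I'd test carefully first):

# bioctl -c 1 -l /dev/wd1a,/dev/wd2a softraid0   (attaches as, say, sd1)
# bioctl -c 1 -l /dev/wd3a,/dev/wd4a softraid0   (attaches as, say, sd2)
(give each mirror a RAID-fstype 'a' partition with disklabel, then)
# bioctl -c 0 -l /dev/sd1a,/dev/sd2a softraid0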


Thanks
Keith



Re: Softraid raid 5 throughput problem

2012-01-16 Thread Bentley, Dain
Drop the RAID 5 and go with RAID 10, as you were talking about, but add a hot
spare if you can. RAID 10 doesn't have the parity calculation that slows down
write times. But if a disk goes bad and isn't replaced you can have a bad day.
Hot spares have saved my butt more than once.

Regards,
Dain Bentley

-Original Message-
From: keith [ke...@scott-land.net]
Received: Monday, 16 Jan 2012, 12:14pm
To: Joel Sing [j...@sing.id.au]
CC: misc@openbsd.org [misc@openbsd.org]
Subject: Re: Softraid raid 5 throughput problem

> On 16/01/2012 15:43, Joel Sing wrote:
>> On Monday 16 January 2012, keith wrote:
>>> I built a storage server to run the Bacula storage daemon on.  My plan
>>> was to boot off a USB key and then use the four 2TB SATA disks that are
>>> in the server as a softraid RAID 5 volume. The server in question is a
>>> Dell PowerEdge R310, i3 CPU 540 @ 3.07GHz, running OpenBSD 5.0 amd64.
>>>
>>> I put the OS onto the USB key, but the softraid RAID 5 volume seemed
>>> really slow. SFTPing files over the local network to the server's
>>> softraid volume was taking ages. As I was short of time I just rebuilt
>>> the server, installing OpenBSD onto one of the SATA disks (wd0).
>>>
>>> Later I connected to the server and made a RAID 5 volume on the remaining
>>> three disks, but the speed was still really slow, so I tried a RAID 1 on
>>> two of the disks and that works fine speed-wise.
>>>
>>> I've tried to get some stats to figure out what's going on:
>>>
>>> RAID 5 (wd1, wd2, wd3): time for newfs command to complete = 1 min 14
>>> secs
>>> RAID 5 (wd1, wd2, wd3): time to copy a 2.3G file from wd0 onto the
>>> softraid RAID 5 disk = roughly 5 mins
>>>
>>> RAID 1 (wd1, wd2) = 1.8TB: time for newfs command to complete = 4 secs
>>> RAID 1 (wd1, wd2): time to copy the same 2.3G file from wd0 onto the
>>> softraid disk = 25 secs
>>
>> RAID 5 with softraid(4) is not ready for primetime - in particular, it
>> does not support scrub or rebuild. If you have a single disk failure you
>> will get to keep your data; however, you will need to dump/rebuild/restore.
>>
>> I'm not specifically aware of performance issues, but I'm not entirely
>> surprised either - I'll try to take a look at some point. RAID 5 writes
>> will be slower, but not that much slower...
>>
>>> At this point I thought I'd try RAID 0, but the server hung for
>>> some reason.
>>>
>>> # bioctl -d sd0
>>> # bioctl -c 0 -l /dev/wd2a,/dev/wd3a softraid0   <- it hung on this
>>> command. I won't know what happened until I get to the datacenter.
>>
>> I'm guessing that you did not clear the existing RAID 1 metadata first, in
>> which case you'll probably have a divide by zero with a trace that ends in
>> sr_raid1_assemble() - there is a bug there that I hit the other night.
>>
>>> Ideally I wanted one large disk, but if I can't get a quick RAID 5
>>> working I will just use two softraid RAID 1 volumes and work around it.
>>> Does anyone have any suggestions?
>>
>> I'd stick with RAID 1 - you can use more than two disks, which will give
>> you increased redundancy and should improve read throughput. Obviously
>> you'll have less capacity though.
>
> Thanks for the quick answers. If I just create two RAID 1 sets on the
> server, could I then make a RAID 0 volume using both RAID 1s?
>
> Thanks
> Keith