On 16/01/2012 15:43, Joel Sing wrote:
On Monday 16 January 2012, keith wrote:
I built a storage server to run the Bacula storage daemon on.  My plan
was to boot off a USB key and then use the four 2TB SATA disks that are
in the server as a softraid RAID 5 volume. The server in question is a
Dell PowerEdge R310, i3 CPU 540 @ 3.07GHz, running OpenBSD 5.0 amd64.
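For reference, the volume was created along the usual softraid lines - this
is a sketch rather than my exact history, assuming a single RAID 'a'
partition on each disk (the volume attaches as the next free sd device,
sd1 here since the USB key is sd0):

# fdisk -iy wd0                      (repeat for wd1, wd2 and wd3)
# disklabel -E wd0                   (add an 'a' partition with fstype RAID)
# bioctl -c 5 -l /dev/wd0a,/dev/wd1a,/dev/wd2a,/dev/wd3a softraid0
# disklabel -E sd1                   (add an 'a' partition, fstype 4.2BSD)
# newfs /dev/rsd1a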

I put the OS onto the USB key but the softraid RAID 5 volume seemed really
slow. SFTPing files over the local network to the server's softraid
volume was taking ages. So, as I was short of time, I just rebuilt the
server, installing OpenBSD onto one of the SATA disks, wd0.

Later I connected to the server and made a RAID 5 volume on the remaining
three disks, but the speed was really slow, so I tried a RAID 1 on two of
the disks and that worked fine speed-wise.

I've tried to get some stats to figure out what's going on

raid 5 (wd1, wd2, wd3): time for newfs to complete = 1 min 14 secs
raid 5 (wd1, wd2, wd3): time to copy a 2.3G file from wd0 onto the softraid RAID 5 disk = roughly 5 mins

raid 1 (wd1, wd2), 1.8TB: time for newfs to complete = 4 secs
raid 1 (wd1, wd2): time to copy the same 2.3G file from wd0 onto the softraid disk = 25 secs

RAID 5 with softraid(4) is not ready for primetime - in particular, it does not
support scrub or rebuild. If you have a single disk failure you will get to
keep your data, but you will need to dump/rebuild/restore.
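The recovery cycle is roughly this (a sketch only - device names and the
backup path are assumed, and the old metadata needs clearing before the
recreate, as below):

# dump -0af /backup/vol.dump /dev/rsd0a      (back up the surviving data)
# bioctl -d sd0                              (detach the degraded volume)
# bioctl -c 5 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0   (recreate with the replacement disk)
# newfs /dev/rsd0a && mount /dev/sd0a /mnt
# cd /mnt && restore -rf /backup/vol.dump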

I'm not specifically aware of performance issues, but I'm not entirely
surprised either - I'll try to take a look at some point. RAID 5 writes will
be slower, since each small write turns into a read-modify-write of both data
and parity, but not that much slower...
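If you get a chance, raw dd numbers would help separate softraid from the
filesystem - something like the following, where the device name, mount
point and sizes are just examples:

# dd if=/dev/rsd0c of=/dev/null bs=1m count=1024      (raw sequential read from the volume)
# dd if=/dev/zero of=/mnt/testfile bs=1m count=1024   (sequential write through the filesystem)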

At this point I thought I'd try RAID 0, but the server went and hung for
some reason.

# bioctl -d sd0
# bioctl -c 0 -l /dev/wd2a,/dev/wd3a softraid0    <- it hung on this command...
Won't know what happened until I get to the datacenter.
I'm guessing that you did not clear the existing RAID 1 metadata first, in
which case you'll probably have a divide by zero with a trace that ends in
sr_raid1_assemble() - there is a bug there that I hit the other night.
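Clearing the old metadata amounts to zeroing the start of each former chunk
partition - a sketch, assuming your device names; softraid keeps its metadata
at the beginning of the partition, so the first megabyte is plenty:

# bioctl -d sd0                                  (detach the old volume if still assembled)
# dd if=/dev/zero of=/dev/rwd2a bs=1m count=1
# dd if=/dev/zero of=/dev/rwd3a bs=1m count=1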

Ideally I wanted one large disk, but if I can't get a quick RAID 5 working I
will just use two softraid RAID 1 volumes and work around it. Does anyone
have any suggestions?
I'd stick with RAID 1 - you can use more than two disks, which will give you
increased redundancy and should improve read throughput. Obviously you'll
have less capacity though.
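For example, a three-way mirror across your three spare disks would just be:

# bioctl -c 1 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0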
Thanks for the quick answers. If I just create two RAID 1 sets on the server, could I then make a RAID 0 volume using both RAID 1s?
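Something like this is what I have in mind - just a sketch of the idea; I
don't know whether stacking softraid volumes this way is actually supported,
and it would need a fourth data disk, so wd4 here is hypothetical:

# bioctl -c 1 -l /dev/wd1a,/dev/wd2a softraid0    (first mirror, attaches as e.g. sd1)
# bioctl -c 1 -l /dev/wd3a,/dev/wd4a softraid0    (second mirror, e.g. sd2)
# bioctl -c 0 -l /dev/sd1a,/dev/sd2a softraid0    (stripe over the two mirrors)

Each mirror would need a RAID 'a' partition disklabelled on it before the
final stripe.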

Thanks
Keith
