below 10MB/s write on raid5

2007-06-11 Thread Dexter Filmore
I recently upgraded my file server, yet I'm still unsatisfied with the write speed. The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM. The four RAID disks are attached to the board's onboard SATA controller (a Sil3114 attached via PCI). The kernel is 2.6.21.1, custom-built, on Slackware
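A useful first step with numbers like these is to baseline the individual member disks before blaming the array. A minimal sketch only; the /dev/sd[a-d] device names are assumptions, not taken from the post:

    # read-only sequential throughput of one member disk
    hdparm -t /dev/sda
    # repeat for /dev/sdb, /dev/sdc and /dev/sdd and compare the results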

Re: below 10MB/s write on raid5

2007-06-11 Thread Justin Piszcz
On Mon, 11 Jun 2007, Dexter Filmore wrote: I recently upgraded my file server, yet I'm still unsatisfied with the write speed. The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM. The four RAID disks are attached to the board's onboard SATA controller (Sil3114 attached via

Re: LVM on raid10 - severe performance drop

2007-06-11 Thread Peter Rabbitson
Bernd Schubert wrote: Try to increase the read-ahead size of your LVM devices: blockdev --setra 8192 /dev/raid10/space, or increase it to at least the same size as that of your RAID (blockdev --getra /dev/mdX). This did the trick, although I am still lagging behind the raw md device by about 3 -
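For reference, the tuning Bernd describes boils down to reading the array's current read-ahead and raising the logical volume's to match or exceed it. Device names are the ones used in the thread, with mdX written as md0 for concreteness:

    # show current read-ahead, in 512-byte sectors
    blockdev --getra /dev/md0
    blockdev --getra /dev/raid10/space
    # raise the LV's read-ahead to 8192 sectors (4 MiB)
    blockdev --setra 8192 /dev/raid10/space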

Some RAID levels do not support bitmap

2007-06-11 Thread Jan Engelhardt
Hi, RAID levels 0 and 4 do not seem to like the -b internal. Is this intentional? Runs 2.6.20.2 on i586. (BTW, do you already have a PAGE_SIZE=8K fix?) 14:47 ichi:/dev # mdadm -C /dev/md0 -l 4 -e 1.0 -b internal -n 2 /dev/ram[01] mdadm: RUN_ARRAY failed: Input/output error mdadm: stopped
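For comparison, the same create command is normally accepted when the level has redundancy for a write-intent bitmap to track; a minimal sketch against RAID1, using the same ramdisk devices as above (behaviour may vary by kernel and mdadm version):

    mdadm -C /dev/md0 -l 1 -e 1.0 -b internal -n 2 /dev/ram[01]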

Re: below 10MB/s write on raid5

2007-06-11 Thread Dexter Filmore
On Monday 11 June 2007 14:47:50 Justin Piszcz wrote: On Mon, 11 Jun 2007, Dexter Filmore wrote: I recently upgraded my file server, yet I'm still unsatisfied with the write speed. The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM. The four RAID disks are attached to

Re: below 10MB/s write on raid5

2007-06-11 Thread Justin Piszcz
On Mon, 11 Jun 2007, Dexter Filmore wrote: On Monday 11 June 2007 14:47:50 Justin Piszcz wrote: On Mon, 11 Jun 2007, Dexter Filmore wrote: I recently upgraded my file server, yet I'm still unsatisfied with the write speed. The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of

Re: below 10MB/s write on raid5

2007-06-11 Thread Jon Nelson
On Mon, 11 Jun 2007, Justin Piszcz wrote: On Mon, 11 Jun 2007, Dexter Filmore wrote: On Monday 11 June 2007 14:47:50 Justin Piszcz wrote: On Mon, 11 Jun 2007, Dexter Filmore wrote: I recently upgraded my file server, yet I'm still unsatisfied with the write speed. Machine

Re: below 10MB/s write on raid5

2007-06-11 Thread Justin Piszcz
On Mon, 11 Jun 2007, Jon Nelson wrote: On Mon, 11 Jun 2007, Justin Piszcz wrote: On Mon, 11 Jun 2007, Dexter Filmore wrote: On Monday 11 June 2007 14:47:50 Justin Piszcz wrote: On Mon, 11 Jun 2007, Dexter Filmore wrote: I recently upgraded my file server, yet I'm still unsatisfied

Re: below 10MB/s write on raid5

2007-06-11 Thread Dexter Filmore
10GB read test: dd if=/dev/md0 bs=1M count=10240 of=/dev/null What is the result? 71.7MB/s - but that's reading to null. *Writing* real data, however, looks quite different. I've read that LVM can incur a 30-50% slowdown. Even then, the 8-10MB/s I get would be a little low. --
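For the write side of the comparison, the usual counterpart to that read test is a streaming write into the filesystem on the array, with a sync folded into the measurement so page-cache effects don't flatter the number. A sketch only; the mount point and size are assumptions, not from the thread:

    # ~2GB sequential write; timing includes the final fdatasync
    dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=2048 conv=fdatasync
    rm /mnt/array/ddtest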

Re: below 10MB/s write on raid5

2007-06-11 Thread Nix
On 11 Jun 2007, Justin Piszcz told this: You can do a read test. 10GB read test: dd if=/dev/md0 bs=1M count=10240 of=/dev/null What is the result? I've read that LVM can incur a 30-50% slowdown. FWIW I see a much smaller penalty than that. loki:~# lvs -o +devices LV VG
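One way to put a number on that penalty is to read the same amount of data from the raw md device and from a logical volume layered on top of it and compare the two rates. The LV path below is purely illustrative:

    # raw md device
    dd if=/dev/md0 of=/dev/null bs=1M count=2048
    # logical volume on top of it (path assumed)
    dd if=/dev/mapper/vg-somelv of=/dev/null bs=1M count=2048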

Re: Some RAID levels do not support bitmap

2007-06-11 Thread Bill Davidsen
Jan Engelhardt wrote: Hi, RAID levels 0 and 4 do not seem to like the -b internal. Is this intentional? Runs 2.6.20.2 on i586. (BTW, do you already have a PAGE_SIZE=8K fix?) 14:47 ichi:/dev # mdadm -C /dev/md0 -l 4 -e 1.0 -b internal -n 2 /dev/ram[01] mdadm: RUN_ARRAY failed: Input/output

Re: below 10MB/s write on raid5

2007-06-11 Thread Jon Nelson
On Mon, 11 Jun 2007, Nix wrote: On 11 Jun 2007, Justin Piszcz told this: You can do a read test. 10GB read test: dd if=/dev/md0 bs=1M count=10240 of=/dev/null What is the result? I've read that LVM can incur a 30-50% slowdown. FWIW I see a much smaller penalty than that.

Re: conflicting superblocks - Re: what is the best approach for fixing a degraded RAID5 (one drive failed) using mdadm?

2007-06-11 Thread Neil Brown
On Tuesday June 12, [EMAIL PROTECTED] wrote: Can anyone please advise which commands we should use to get the array back to at least a read only state? mdadm --assemble /dev/md0 /dev/sd[abcd]2 and let mdadm figure it out. It is good at that. If the above doesn't work, add --force, but be
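Spelled out as the two-step procedure Neil describes, with the device names from the message and --force only as the fallback:

    # first attempt: let mdadm sort out the conflicting superblocks itself
    mdadm --assemble /dev/md0 /dev/sd[abcd]2
    # only if that refuses to start the array
    mdadm --assemble --force /dev/md0 /dev/sd[abcd]2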

[PATCH] is_power_of_2-dm

2007-06-11 Thread vignesh babu
Replacing (n & (n-1)) in the context of power of 2 checks with is_power_of_2 Signed-off-by: vignesh babu [EMAIL PROTECTED] --- diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c index ef124b7..3e1817a 100644 --- a/drivers/md/dm-raid1.c +++ b/drivers/md/dm-raid1.c @@ -19,6 +19,7 @@