below 10MB/s write on raid5

2007-06-11 Thread Dexter Filmore
I recently upgraded my file server, yet I'm still unsatisfied with the write
speed.
The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
The four RAID disks are attached to the board's onboard SATA controller
(a Sil3114, attached via PCI).
The kernel is 2.6.21.1, custom-built on Slackware 11.0.
The RAID holds four Samsung SpinPoint disks, with LVM on top and three
logical volumes, each carrying XFS.

The machine does some other work, too, but I still would have expected to get
into the 20-30MB/s range. Is that too much to ask?

Dex

-- 
-BEGIN GEEK CODE BLOCK-
Version: 3.12
GCS d--(+)@ s-:+ a- C UL++ P+++ L+++ E-- W++ N o? K-
w--(---) !O M+ V- PS+ PE Y++ PGP t++(---)@ 5 X+(++) R+(++) tv--(+)@ 
b++(+++) DI+++ D- G++ e* h++ r* y?
--END GEEK CODE BLOCK--

http://www.stop1984.com
http://www.againsttcpa.com


Re: below 10MB/s write on raid5

2007-06-11 Thread Justin Piszcz



On Mon, 11 Jun 2007, Dexter Filmore wrote:


> I recently upgraded my file server, yet I'm still unsatisfied with the write
> speed.
> The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
> The four RAID disks are attached to the board's onboard SATA controller
> (a Sil3114, attached via PCI).
> The kernel is 2.6.21.1, custom-built on Slackware 11.0.
> The RAID holds four Samsung SpinPoint disks, with LVM on top and three
> logical volumes, each carrying XFS.
>
> The machine does some other work, too, but I still would have expected to
> get into the 20-30MB/s range. Is that too much to ask?
>
> Dex


What do you get without LVM?



Re: LVM on raid10 - severe performance drop

2007-06-11 Thread Peter Rabbitson

Bernd Schubert wrote:


> Try to increase the read-ahead size of your lvm devices:
>
> blockdev --setra 8192 /dev/raid10/space
>
> or increase it at least to the same size as that of your raid (blockdev
> --getra /dev/mdX).


This did the trick, although I am still lagging behind the raw md device
by about 3-4%. Thanks for pointing this out!
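
For anyone who wants to script Bernd's suggestion, here is a minimal sketch
(device names are the ones from this thread; adjust them to your setup):

  #!/bin/sh
  # Copy the md array's read-ahead setting onto the LV that sits on it,
  # so the LVM layer does not throttle sequential reads.  Values are in
  # 512-byte sectors.
  ra=$(blockdev --getra /dev/md0)
  blockdev --setra "$ra" /dev/raid10/space
  blockdev --getra /dev/raid10/space   # verify the new setting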



Some RAID levels do not support bitmap

2007-06-11 Thread Jan Engelhardt
Hi,


RAID levels 0 and 4 do not seem to like "-b internal". Is this
intentional? This is 2.6.20.2 running on i586.
(BTW, do you already have a PAGE_SIZE=8K fix?)

14:47 ichi:/dev # mdadm -C /dev/md0 -l 4 -e 1.0 -b internal -n 2 /dev/ram[01]
mdadm: RUN_ARRAY failed: Input/output error
mdadm: stopped /dev/md0
14:47 ichi:/dev # mdadm -C /dev/md0 -l 0 -e 1.0 -b internal -n 2 /dev/ram[01]
mdadm: RUN_ARRAY failed: Cannot allocate memory
mdadm: stopped /dev/md0

Right... "md: bitmaps not supported for this level."



Thanks,
Jan
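
For contrast, a level that carries redundancy accepts an internal bitmap
without complaint. A quick sketch on the same ramdisk devices (assuming
they are free to reuse):

  # RAID-1 keeps a mirror copy, so md has out-of-sync regions to track:
  mdadm -C /dev/md0 -l 1 -e 1.0 -b internal -n 2 /dev/ram[01]
  mdadm -X /dev/ram0   # examine the internal write-intent bitmap
  mdadm -S /dev/md0    # stop the test array again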


Re: below 10MB/s write on raid5

2007-06-11 Thread Dexter Filmore
On Monday 11 June 2007 14:47:50 Justin Piszcz wrote:
> On Mon, 11 Jun 2007, Dexter Filmore wrote:
> > I recently upgraded my file server, yet I'm still unsatisfied with the
> > write speed.
> > The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
> > The four RAID disks are attached to the board's onboard SATA controller
> > (a Sil3114, attached via PCI).
> > The kernel is 2.6.21.1, custom-built on Slackware 11.0.
> > The RAID holds four Samsung SpinPoint disks, with LVM on top and three
> > logical volumes, each carrying XFS.
> >
> > The machine does some other work, too, but I still would have expected
> > to get into the 20-30MB/s range. Is that too much to ask?
> >
> > Dex
>
> What do you get without LVM?

Hard to tell: the PV hogs all of the disk space, so I can't really do non-LVM
tests.





Re: below 10MB/s write on raid5

2007-06-11 Thread Justin Piszcz



On Mon, 11 Jun 2007, Dexter Filmore wrote:


> On Monday 11 June 2007 14:47:50 Justin Piszcz wrote:
> > On Mon, 11 Jun 2007, Dexter Filmore wrote:
> > > I recently upgraded my file server, yet I'm still unsatisfied with the
> > > write speed.
> > > The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
> > > The four RAID disks are attached to the board's onboard SATA controller
> > > (a Sil3114, attached via PCI).
> > > The kernel is 2.6.21.1, custom-built on Slackware 11.0.
> > > The RAID holds four Samsung SpinPoint disks, with LVM on top and three
> > > logical volumes, each carrying XFS.
> > >
> > > The machine does some other work, too, but I still would have expected
> > > to get into the 20-30MB/s range. Is that too much to ask?
> > >
> > > Dex
> >
> > What do you get without LVM?
>
> Hard to tell: the PV hogs all of the disk space, so I can't really do
> non-LVM tests.


You can do a read test.

10gb read test:

dd if=/dev/md0 bs=1M count=10240 of=/dev/null

What is the result?

I've read that LVM can incur a 30-50% slowdown.

Justin.


Re: below 10MB/s write on raid5

2007-06-11 Thread Jon Nelson
On Mon, 11 Jun 2007, Justin Piszcz wrote:

 
 
> On Mon, 11 Jun 2007, Dexter Filmore wrote:
> > On Monday 11 June 2007 14:47:50 Justin Piszcz wrote:
> > > On Mon, 11 Jun 2007, Dexter Filmore wrote:
> > > > I recently upgraded my file server, yet I'm still unsatisfied with the
> > > > write speed.
> > > > The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
> > > > The four RAID disks are attached to the board's onboard SATA controller
> > > > (a Sil3114, attached via PCI).
> > > > The kernel is 2.6.21.1, custom-built on Slackware 11.0.
> > > > The RAID holds four Samsung SpinPoint disks, with LVM on top and three
> > > > logical volumes, each carrying XFS.
> > > >
> > > > The machine does some other work, too, but I still would have expected
> > > > to get into the 20-30MB/s range. Is that too much to ask?
> > > >
> > > > Dex
> > >
> > > What do you get without LVM?
> >
> > Hard to tell: the PV hogs all of the disk space, so I can't really do
> > non-LVM tests.
>
> You can do a read test.
>
> 10gb read test:
>
> dd if=/dev/md0 bs=1M count=10240 of=/dev/null

Eek! Make sure to use iflag=direct with that; otherwise you'll get cached
reads, and that will throw the numbers off considerably.


--
Jon Nelson [EMAIL PROTECTED]


Re: below 10MB/s write on raid5

2007-06-11 Thread Justin Piszcz



On Mon, 11 Jun 2007, Jon Nelson wrote:


> On Mon, 11 Jun 2007, Justin Piszcz wrote:
> > On Mon, 11 Jun 2007, Dexter Filmore wrote:
> > > On Monday 11 June 2007 14:47:50 Justin Piszcz wrote:
> > > > On Mon, 11 Jun 2007, Dexter Filmore wrote:
> > > > > I recently upgraded my file server, yet I'm still unsatisfied with
> > > > > the write speed.
> > > > > The machine is now an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
> > > > > The four RAID disks are attached to the board's onboard SATA controller
> > > > > (a Sil3114, attached via PCI).
> > > > > The kernel is 2.6.21.1, custom-built on Slackware 11.0.
> > > > > The RAID holds four Samsung SpinPoint disks, with LVM on top and three
> > > > > logical volumes, each carrying XFS.
> > > > >
> > > > > The machine does some other work, too, but I still would have expected
> > > > > to get into the 20-30MB/s range. Is that too much to ask?
> > > > >
> > > > > Dex
> > > >
> > > > What do you get without LVM?
> > >
> > > Hard to tell: the PV hogs all of the disk space, so I can't really do
> > > non-LVM tests.
> >
> > You can do a read test.
> >
> > 10gb read test:
> >
> > dd if=/dev/md0 bs=1M count=10240 of=/dev/null
>
> Eek! Make sure to use iflag=direct with that; otherwise you'll get cached
> reads, and that will throw the numbers off considerably.


Wow, that makes a difference for faster devices. Does bonnie++ use
iflag=direct when benchmarking?


p34:~# dd if=/dev/md0 bs=1M count=1024 of=/dev/null iflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 15.0454 seconds, 71.4 MB/s
p34:~# dd if=/dev/md0 bs=1M count=1024 of=/dev/null
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 14.991 seconds, 71.6 MB/s
p34:~# dd if=/dev/md3 bs=1M count=1024 of=/dev/null iflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.08707 seconds, 348 MB/s
p34:~# dd if=/dev/md3 bs=1M count=1024 of=/dev/null
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.02948 seconds, 529 MB/s
p34:~#

p34:~# dd if=/dev/md3 bs=1M count=10024 of=/dev/null
10024+0 records in
10024+0 records out
10510925824 bytes (11 GB) copied, 17.7321 seconds, 593 MB/s
p34:~# sync
p34:~# dd if=/dev/md3 bs=1M count=10024 of=/dev/null iflag=direct
10024+0 records in
10024+0 records out
10510925824 bytes (11 GB) copied, 29.022 seconds, 362 MB/s
p34:~#




Re: below 10MB/s write on raid5

2007-06-11 Thread Dexter Filmore
> 10gb read test:
>
> dd if=/dev/md0 bs=1M count=10240 of=/dev/null
>
> What is the result?

71.7MB/s - but that's reading to null. *Writing* real data, however, looks
quite different.


> I've read that LVM can incur a 30-50% slowdown.

Even then the 8-10MB/s I get would be a little low. 
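
To put a number on writes with the page cache out of the picture, something
like the following should do (a sketch; the mount point of one of the XFS
volumes is assumed, substitute your own):

  # Direct-I/O write: every block reaches the array before dd's clock stops.
  dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=1024 oflag=direct
  # Alternative without O_DIRECT: include the final flush in the timing.
  dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=1024 conv=fdatasync
  rm /mnt/raid/ddtest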





Re: below 10MB/s write on raid5

2007-06-11 Thread Nix
On 11 Jun 2007, Justin Piszcz told this:
> You can do a read test.
>
> 10gb read test:
>
> dd if=/dev/md0 bs=1M count=10240 of=/dev/null
>
> What is the result?
>
> I've read that LVM can incur a 30-50% slowdown.

FWIW I see a much smaller penalty than that.

loki:~# lvs -o +devices
  LV   VG    Attr   LSize   Origin Snap%  Move Log Copy%  Devices
[...]
  usr  raid  -wi-ao   6.00G                               /dev/md1(50)

loki:~# time dd if=/dev/md1 bs=1000 count=502400 of=/dev/null
502400+0 records in
502400+0 records out
502400000 bytes (502 MB) copied, 16.2995 s, 30.8 MB/s

real    0m16.360s
user    0m0.310s
sys     0m11.780s

loki:~# time dd if=/dev/raid/usr bs=1000 count=502400 of=/dev/null
502400+0 records in
502400+0 records out
502400000 bytes (502 MB) copied, 18.6172 s, 27.0 MB/s

real    0m18.790s
user    0m0.380s
sys     0m14.750s


So there's a penalty, sure, accounted for mostly in sys time, but it's
only about 10%: small enough that I at least can ignore it in exchange
for the administrative convenience of LVM.

-- 
`... in the sense that dragons logically follow evolution so they would
 be able to wield metal.' --- Kenneth Eng's colourless green ideas sleep
 furiously


Re: Some RAID levels do not support bitmap

2007-06-11 Thread Bill Davidsen

Jan Engelhardt wrote:

> Hi,
>
> RAID levels 0 and 4 do not seem to like "-b internal". Is this
> intentional? This is 2.6.20.2 running on i586.
>
> (BTW, do you already have a PAGE_SIZE=8K fix?)
>
> 14:47 ichi:/dev # mdadm -C /dev/md0 -l 4 -e 1.0 -b internal -n 2 /dev/ram[01]
> mdadm: RUN_ARRAY failed: Input/output error
> mdadm: stopped /dev/md0
> 14:47 ichi:/dev # mdadm -C /dev/md0 -l 0 -e 1.0 -b internal -n 2 /dev/ram[01]
> mdadm: RUN_ARRAY failed: Cannot allocate memory
> mdadm: stopped /dev/md0
>
> Right... "md: bitmaps not supported for this level."


Bitmaps track which data has been modified but not yet written to all
devices. RAID-0 has no redundant copy, so there is nothing for a bitmap
to show as still needing an update. I would have thought that RAID-4
would support bitmaps, but maybe support was never added because use of
RAID-4 is pretty uncommon.


BTW: RAID-4 seems to work fine with an external bitmap. Were you trying 
to do internal?


--
bill davidsen [EMAIL PROTECTED]
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979
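
A sketch of the external-bitmap variant Bill mentions (the bitmap file path
is an assumption; it must live on a filesystem outside the array itself and
should not already exist):

  mdadm -C /dev/md0 -l 4 -e 1.0 -n 2 -b /var/tmp/md0-bitmap /dev/ram[01]
  mdadm -X /var/tmp/md0-bitmap   # inspect the write-intent bitmap
  mdadm -S /dev/md0              # stop the test array again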



Re: below 10MB/s write on raid5

2007-06-11 Thread Jon Nelson
On Mon, 11 Jun 2007, Nix wrote:

> On 11 Jun 2007, Justin Piszcz told this:
> > You can do a read test.
> >
> > 10gb read test:
> >
> > dd if=/dev/md0 bs=1M count=10240 of=/dev/null
> >
> > What is the result?
> >
> > I've read that LVM can incur a 30-50% slowdown.
>
> FWIW I see a much smaller penalty than that.
>
> loki:~# lvs -o +devices
>   LV   VG    Attr   LSize   Origin Snap%  Move Log Copy%  Devices
> [...]
>   usr  raid  -wi-ao   6.00G                               /dev/md1(50)
>
> loki:~# time dd if=/dev/md1 bs=1000 count=502400 of=/dev/null
> 502400+0 records in
> 502400+0 records out
> 502400000 bytes (502 MB) copied, 16.2995 s, 30.8 MB/s
>
> loki:~# time dd if=/dev/raid/usr bs=1000 count=502400 of=/dev/null
> 502400+0 records in
> 502400+0 records out
> 502400000 bytes (502 MB) copied, 18.6172 s, 27.0 MB/s

And what is it like with 'iflag=direct', which I really feel you have to
use? Otherwise you get caching.

--
Jon Nelson [EMAIL PROTECTED]
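
Rerunning Nix's comparison with the cache bypassed might look like this
(a sketch; note that O_DIRECT needs an aligned block size, so bs=1M stands
in for the bs=1000 used above):

  dd if=/dev/md1 bs=1M count=500 of=/dev/null iflag=direct       # raw md device
  dd if=/dev/raid/usr bs=1M count=500 of=/dev/null iflag=direct  # LV on top of it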


Re: conflicting superblocks - Re: what is the best approach for fixing a degraded RAID5 (one drive failed) using mdadm?

2007-06-11 Thread Neil Brown
On Tuesday June 12, [EMAIL PROTECTED] wrote:
 
 
> Can anyone please advise which commands we should use to get the array
> back to at least a read-only state?

mdadm --assemble /dev/md0 /dev/sd[abcd]2

and let mdadm figure it out. It is good at that.
If the above doesn't work, add --force, but be aware that there is
some possibility of hidden data corruption. At the least, an fsck would
be advised.

NeilBrown
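
Spelled out as a sequence, including the read-only state the original
question asked for (a sketch; it assumes the filesystem sits directly on
/dev/md0):

  # First let mdadm sort the superblocks out on its own:
  mdadm --assemble /dev/md0 /dev/sd[abcd]2
  # Only if that fails, force assembly, keep the array read-only, check it:
  mdadm --assemble --force /dev/md0 /dev/sd[abcd]2
  mdadm --readonly /dev/md0
  fsck -n /dev/md0   # read-only check; nothing is written to the array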


[PATCH] is_power_of_2 - dm

2007-06-11 Thread vignesh babu

Replacing (n & (n-1)) in the context of power-of-2 checks
with is_power_of_2.

Signed-off-by: vignesh babu [EMAIL PROTECTED]
--- 
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
index ef124b7..3e1817a 100644
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -19,6 +19,7 @@
 #include <linux/time.h>
 #include <linux/vmalloc.h>
 #include <linux/workqueue.h>
+#include <linux/log2.h>
 
 #define DM_MSG_PREFIX "raid1"
 #define DM_IO_PAGES 64
@@ -962,7 +963,7 @@ static void free_context(struct mirror_set *ms, struct dm_target *ti,
 
 static inline int _check_region_size(struct dm_target *ti, uint32_t size)
 {
-	return !(size % (PAGE_SIZE >> 9) || (size & (size - 1)) ||
+	return !(size % (PAGE_SIZE >> 9) || !is_power_of_2(size) ||
 		 size > ti->len);
 }
 
diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
index 0821a2b..f2d4b23 100644
--- a/drivers/md/dm-snap.c
+++ b/drivers/md/dm-snap.c
@@ -17,6 +17,7 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
+#include <linux/log2.h>
 
 #include "dm-snap.h"
 #include "dm-bio-list.h"
@@ -414,7 +415,7 @@ static int set_chunk_size(struct dm_snapshot *s, const char *chunk_size_arg,
 	chunk_size = round_up(chunk_size, PAGE_SIZE >> 9);
 
 	/* Check chunk_size is a power of 2 */
-	if (chunk_size & (chunk_size - 1)) {
+	if (!is_power_of_2(chunk_size)) {
 		*error = "Chunk size is not a power of 2";
 		return -EINVAL;
 	}
diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c
index 51f5e07..969944a 100644
--- a/drivers/md/dm-stripe.c
+++ b/drivers/md/dm-stripe.c
@@ -11,6 +11,7 @@
 #include <linux/blkdev.h>
 #include <linux/bio.h>
 #include <linux/slab.h>
+#include <linux/log2.h>
 
 #define DM_MSG_PREFIX "striped"
 
@@ -99,7 +100,7 @@ static int stripe_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	/*
 	 * chunk_size is a power of two
 	 */
-	if (!chunk_size || (chunk_size & (chunk_size - 1)) ||
+	if (!is_power_of_2(chunk_size) ||
 	    (chunk_size < (PAGE_SIZE >> SECTOR_SHIFT))) {
 		ti->error = "Invalid chunk size";
 		return -EINVAL;

-- 
Vignesh Babu BM 
_ 
Why is it that every time I'm with you, makes me believe in magic?
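
The identity behind both the open-coded test and is_power_of_2(): a power of
two has exactly one bit set, so clearing its lowest set bit with n & (n-1)
leaves zero. A small illustration (assuming a shell with POSIX arithmetic
expansion):

  is_pow2() {
      # true iff $1 is positive and has a single bit set
      [ "$1" -gt 0 ] && [ $(( $1 & ($1 - 1) )) -eq 0 ]
  }
  for n in 1 2 3 8 12 64; do
      if is_pow2 "$n"; then echo "$n: power of two"; else echo "$n: not"; fi
  done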
