Re: md array numbering is messed up

2006-10-30 Thread Brad Campbell
Michael Tokarev wrote: Neil Brown wrote: On Sunday October 29, [EMAIL PROTECTED] wrote: Hi, I have 2 arrays whose numbers get inverted, creating havoc, when booting under different kernels. I have md0 (raid1) made up of ide drives and md1 (raid5) made up of five sata drives, when booting

Re: md array numbering is messed up

2006-10-30 Thread dean gaudet
On Mon, 30 Oct 2006, Brad Campbell wrote: Michael Tokarev wrote: My guess is that it's using the mdrun shell script - the same as on Debian. It's a long story, the thing is quite ugly and messy and does messy things too, but they say it's compatibility stuff and continue shipping it. ...
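
(The usual cure for this kind of numbering flip, sketched here on the assumption that the arrays carry valid superblocks, is to stop relying on discovery order and pin each array to its UUID in mdadm.conf; mdadm can generate the ARRAY lines itself:

    # append UUID-pinned ARRAY lines to the config
    # (path is /etc/mdadm/mdadm.conf on Debian, /etc/mdadm.conf elsewhere)
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf

With explicit ARRAY lines in place, mdadm -As assembles each array by UUID rather than by whatever order the drives happen to be probed in.)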

Re: Propose of enhancement of raid1 driver

2006-10-30 Thread Al Boldi
Mario 'BitKoenig' Holbe wrote: Al Boldi [EMAIL PROTECTED] wrote: But what still isn't clear: why can't raid1 use something like the raid10 offset=2 mode? RAID1 has equal data on all mirrors, so sooner or later you have to seek somewhere - no matter how you lay out the data on each mirror.

Re: Propose of enhancement of raid1 driver

2006-10-30 Thread Mario 'BitKoenig' Holbe
Al Boldi [EMAIL PROTECTED] wrote: Don't underestimate the effects mere layout can have on multi-disk array performance, despite it being highly hw dependent. I can't see the difference between equal mirrors and somehow interleaved layout on RAID1. Since you have to seek anyways, there should

Re: Propose of enhancement of raid1 driver

2006-10-30 Thread Al Boldi
Mario 'BitKoenig' Holbe wrote: Al Boldi [EMAIL PROTECTED] wrote: Don't underestimate the effects mere layout can have on multi-disk array performance, despite it being highly hw dependent. I can't see the difference between equal mirrors and somehow interleaved layout on RAID1. Since you

Re: Propose of enhancement of raid1 driver

2006-10-30 Thread Jeff Breidenbach
If linux RAID-10 is still much slower than RAID-1 this discussion is kind of moot, right? Jeff
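
(For reference, the raid10 offset layout discussed above can be requested explicitly when an array is created; a minimal sketch, with /dev/sda1 and /dev/sdb1 standing in for whatever two devices are actually used:

    # two-device raid10, two copies, offset layout ("o2"):
    # both disks hold all the data, but the second copy of each
    # chunk is staggered onto the neighbouring disk one chunk later
    mdadm --create /dev/md0 --level=10 --layout=o2 --raid-devices=2 /dev/sda1 /dev/sdb1

Whether that layout actually beats a plain raid1 mirror for a given workload is exactly the question left open in this thread.)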

Kernel OOPS with partitioned software raid (+ further questions) [PATCH]

2006-10-30 Thread Christian P. Schmidt
Hi all, I'm running the following software-raid setup: two raid 0 arrays with two 250GB disks each (sdd1-sdg1), named md_d2 and md_d3; one raid 5 with three 500GB disks (sda2-sdc2) and the two raid 0 arrays as members, named md_d5; one raid 1 with 100MB of each of the 500GB disks (sda1-sdc1), named md_d1. The only
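
(For readers trying to reproduce the layout: a sketch, not taken from the original mail, of how such a nested partitionable setup might be created with mdadm, using --auto=part for the md_dN partitionable names; the exact pairing of sdd1-sdg1 and the use of the whole md_d2/md_d3 devices as raid5 members are assumptions:

    # two 2-disk raid0 arrays (pairing assumed)
    mdadm --create /dev/md_d2 --auto=part --level=0 --raid-devices=2 /dev/sdd1 /dev/sde1
    mdadm --create /dev/md_d3 --auto=part --level=0 --raid-devices=2 /dev/sdf1 /dev/sdg1
    # raid5 over the three 500GB partitions plus the two raid0 arrays
    mdadm --create /dev/md_d5 --auto=part --level=5 --raid-devices=5 \
          /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/md_d2 /dev/md_d3
    # small raid1 over the 100MB partitions
    mdadm --create /dev/md_d1 --auto=part --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

Stacking md devices like this is legal, which is part of what makes the oops report interesting.)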

Re: md array numbering is messed up

2006-10-30 Thread Peb
Michael Tokarev wrote: Neil Brown wrote: On Sunday October 29, [EMAIL PROTECTED] wrote: Hi, I have 2 arrays whose numbers get inverted, creating havoc, when booting under different kernels. I have md0 (raid1) made up of ide drives and md1 (raid5) made up of five sata drives, when booting

Re: Kernel OOPS with partitioned software raid (+ further questions)

2006-10-30 Thread Neil Brown
On Monday October 30, [EMAIL PROTECTED] wrote: Hi all, I'm running the following software-raid setup: two raid 0 arrays with two 250GB disks each (sdd1-sdg1), named md_d2 and md_d3; one raid 5 with three 500GB disks (sda2-sdc2) and the two raid 0 arrays as members, named md_d5; one raid 1 with 100MB of

Re: md array numbering is messed up

2006-10-30 Thread Neil Brown
On Tuesday October 31, [EMAIL PROTECTED] wrote: Well I have the following mdadm.conf: DEVICE /dev/hda /dev/hdc /dev/sd* ARRAY /dev/md1 level=raid5 num-devices=4 UID=8ed64073:04d21e1c:33660158:a5bc892f ARRAY /dev/md0 level=raid1 num-devices=2 UID=cab9de58:d20bffae:654d1910:6f440136 I
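
(Assuming the UID= in the quoted config is simply a typo for mdadm.conf's UUID= keyword, a cleaned-up version of the file, reusing the values quoted above, would look roughly like:

    DEVICE /dev/hda /dev/hdc /dev/sd*
    ARRAY /dev/md1 level=raid5 num-devices=4 UUID=8ed64073:04d21e1c:33660158:a5bc892f
    ARRAY /dev/md0 level=raid1 num-devices=2 UUID=cab9de58:d20bffae:654d1910:6f440136

With valid UUID= tags, mdadm -As binds each array to its preferred /dev/mdN name irrespective of the order in which the drives are probed.)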

[PATCH 002 of 6] md: Change lifetime rules for 'md' devices.

2006-10-30 Thread NeilBrown
Currently md devices are created when first opened and remain in existence until the module is unloaded. This isn't a major problem, but it is somewhat ugly. This patch changes the lifetime rules so that an md device will disappear on the last close if it has no state. Locking rules depend on

[PATCH 005 of 6] md: Allow reads that have bypassed the cache to be retried on failure.

2006-10-30 Thread NeilBrown
If a bypass-the-cache read fails, we simply try again through the cache. If it fails again it will trigger normal recovery procedures. cc: Raz Ben-Jehuda(caro) [EMAIL PROTECTED] Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/raid5.c | 150

[PATCH 003 of 6] md: Define raid5_mergeable_bvec

2006-10-30 Thread NeilBrown
This will encourage read requests to be on only one device, so we will often be able to bypass the cache for read requests. cc: Raz Ben-Jehuda(caro) [EMAIL PROTECTED] Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/raid5.c | 24 1 file

[PATCH 006 of 6] md: Enable bypassing cache for reads.

2006-10-30 Thread NeilBrown
Call the chunk_aligned_read where appropriate. cc: Raz Ben-Jehuda(caro) [EMAIL PROTECTED] Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/raid5.c |5 + 1 file changed, 5 insertions(+) diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c ---

[PATCH 004 of 6] md: Handle bypassing the read cache (assuming nothing fails).

2006-10-30 Thread NeilBrown
cc: Raz Ben-Jehuda(caro) [EMAIL PROTECTED] Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/raid5.c | 78 +++ 1 file changed, 78 insertions(+) diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c ---

Re: [PATCH] Check bio address after mapping through partitions.

2006-10-30 Thread Jens Axboe
On Tue, Oct 31 2006, NeilBrown wrote: This would be good for 2.6.19 and even 18.2, if it seems acceptable. raid0 at least (possibly others) can be made to Oops with a bad partition table, and the best fix seems to be to not let out-of-range requests get down to the device. ### Comments for