On Tue, Oct 31 2006, NeilBrown wrote:
> This would be good for 2.6.19 and even 2.6.18.2, if it is seen as acceptable.
> raid0 at least (possibly others) can be made to Oops with a bad partition
> table, and the best fix seems to be to not let out-of-range requests get down
> to the device.
>
### Comments for Changeset
cc: "Raz Ben-Jehuda(caro)" <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid5.c | 78 +++
1 file changed, 78 insertions(+)
diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drivers
Call chunk_aligned_read() where appropriate.
cc: "Raz Ben-Jehuda(caro)" <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid5.c |5 +
1 file changed, 5 insertions(+)
diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drive
This will encourage read requests to be on only one device,
so we will often be able to bypass the cache for read
requests.
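
A rough sketch of the test this implies (does a read fall entirely inside
one chunk, and therefore on one device?), assuming 2.6.19-era bio fields
and md's power-of-two chunk size; the names here are illustrative, not
taken from the patch:

    /*
     * A read can bypass the stripe cache when it fits entirely
     * inside a single chunk.  chunk_sectors is a power of two,
     * as md requires.
     */
    static int read_fits_in_one_chunk(struct bio *bio, unsigned chunk_sectors)
    {
            unsigned offset = (unsigned)(bio->bi_sector & (chunk_sectors - 1));

            return (bio->bi_size >> 9) <= chunk_sectors - offset;
    }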
cc: "Raz Ben-Jehuda(caro)" <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid5.c | 24
1 fi
If a bypass-the-cache read fails, we simply try again through
the cache. If it fails again it will trigger normal recovery
procedures.
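
Presumably this hangs off the completion handler of the bypass bio. A
hedged reconstruction, using the 2.6.19-era endio convention (return 1
while completions are still pending); add_bio_to_retry() is a
hypothetical helper, not confirmed by the mail:

    static int raid5_align_endio(struct bio *align_bi, unsigned int bytes,
                                 int error)
    {
            struct bio *raid_bio = align_bi->bi_private;
            int uptodate = test_bit(BIO_UPTODATE, &align_bi->bi_flags);

            if (align_bi->bi_size)
                    return 1;               /* more completions to come */
            bio_put(align_bi);

            if (!error && uptodate) {
                    /* bypass read succeeded: complete the original bio */
                    bio_endio(raid_bio, raid_bio->bi_size, 0);
                    return 0;
            }
            /* failed: queue the original bio to retry through the cache */
            add_bio_to_retry(raid_bio);     /* hypothetical helper */
            return 0;
    }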
cc: "Raz Ben-Jehuda(caro)" <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid5.c | 150 +
This allows udev to do something intelligent when an
array becomes available.
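
The diffstat below shows just two added lines in md.c; a plausible shape
for them (the exact uevent type is an assumption, not confirmed by the
mail) is:

    /* in do_md_run(), once the array is live: tell udev */
    kobject_uevent(&mddev->gendisk->kobj, KOBJ_ONLINE);

    /* in do_md_stop(): assumed matching notification */
    kobject_uevent(&mddev->gendisk->kobj, KOBJ_OFFLINE);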
cc: [EMAIL PROTECTED]
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/md.c |2 ++
1 file changed, 2 insertions(+)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/m
Currently md devices are created when first opened and remain in existence
until the module is unloaded.
This isn't a major problem, but it is somewhat ugly.
This patch changes the lifetime rules so that an md device will
disappear on the last close if it has no state.
Locking rules depend on bd_mutex
Following are 6 patches for md in -latest which I have been sitting
on for a while because I hadn't had a chance to test them properly.
I now have, so there shouldn't be too many bugs left :-)
The first is suitable for 2.6.19 (if it isn't too late and gregkh thinks it
is good). The rest are for 2.6.20.
On Tuesday October 31, [EMAIL PROTECTED] wrote:
>
> Well I have the following mdadm.conf:
>
> DEVICE /dev/hda /dev/hdc /dev/sd*
> ARRAY /dev/md1 level=raid5 num-devices=4 UUID=8ed64073:04d21e1c:33660158:a5bc892f
> ARRAY /dev/md0 level=raid1 num-devices=2 UUID=cab9de58:d20bffae:654d1910:6f4401
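
For reference - standard mdadm usage, not from the original mail - ARRAY
lines with correct UUID= tags need not be typed by hand:

    mdadm --detail --scan    # prints ARRAY lines for all running arrays

and the output can be appended to /etc/mdadm.conf.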
This would be good for 2.6.19 and even 2.6.18.2, if it is seen as acceptable.
raid0 at least (possibly others) can be made to Oops with a bad partition
table, and the best fix seems to be to not let out-of-range requests get down
to the device.
### Comments for Changeset
Partitions are not limited to live wit
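
A minimal sketch of the check being described, assuming 2.6.19-era
block-layer fields; the actual patch may differ in detail:

    /*
     * Fail a request whose sector range extends past the end of the
     * device before it can reach the driver and Oops it.
     */
    static int bio_out_of_range(struct bio *bio)
    {
            /* device size in 512-byte sectors, 0 if unknown */
            sector_t maxsector = bio->bi_bdev->bd_inode->i_size >> 9;
            unsigned nr_sectors = bio->bi_size >> 9;

            if (!maxsector)
                    return 0;       /* size unknown: let it through */

            if (nr_sectors > maxsector ||
                bio->bi_sector > maxsector - nr_sectors)
                    return 1;       /* caller should fail it with -EIO */

            return 0;
    }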
On Monday October 30, [EMAIL PROTECTED] wrote:
> Hi all,
>
> I'm running the following software-raid setup:
>
> two raid 0 with two 250GB disks each (sdd1-sdg1) named md_d2 and md_d3
> one raid 5 with three 500GB disks (sda2-sdc2) and the two raid0 as
> members named md_d5
> one raid 1 with 100MB
Michael Tokarev wrote:
> Neil Brown wrote:
>> On Sunday October 29, [EMAIL PROTECTED] wrote:
>>> Hi,
>>>
>>> I have 2 arrays whose numbers get inverted, creating havoc, when booting
>>> under different kernels.
>>>
>>> I have md0 (raid1) made up of ide drives and md1 (raid5) made up of five
>>> sat
Neil Brown wrote:
On Tuesday October 17, [EMAIL PROTECTED] wrote:
We talked about RAID5E a while ago; is there any thought that this would
actually happen, or is it one of the "would be nice" features? With
larger drives I suspect the number of drives in arrays is going down,
and anything
Hi all,
I'm running the following software-raid setup:
two raid 0 with two 250GB disks each (sdd1-sdg1) named md_d2 and md_d3
one raid 5 with three 500GB disks (sda2-sdc2) and the two raid0 as
members named md_d5
one raid 1 with 100MB of each of the 500GB disks (sda1-sdc1) named md_d1
The only r
If Linux RAID-10 is still much slower than RAID-1, this discussion is kind
of moot, right?
Jeff
Mario 'BitKoenig' Holbe wrote:
> Al Boldi <[EMAIL PROTECTED]> wrote:
> > Don't underestimate the effects mere layout can have on multi-disk array
> > performance, despite it being highly hw dependent.
>
> I can't see the difference between equal mirrors and somehow interleaved
> layout on RAID1. Si
Al Boldi <[EMAIL PROTECTED]> wrote:
> Don't underestimate the effects mere layout can have on multi-disk array
> performance, despite it being highly hw dependent.
I can't see the difference between equal mirrors and somehow interleaved
layout on RAID1. Since you have to seek anyways, there shoul
Mario 'BitKoenig' Holbe wrote:
> Al Boldi <[EMAIL PROTECTED]> wrote:
> > But what still isn't clear, why can't raid1 use something like the
> > raid10 offset=2 mode?
>
> RAID1 has equal data on all mirrors, so sooner or later you have to seek
> somewhere - no matter how you lay out the data on each
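
The raid10 offset=2 layout being referred to duplicates each stripe on the
following row, rotated by one device - roughly, with two disks and chunks
A-D (illustration, not from the original mail):

    disk0  disk1
      A      B
      B      A     <- copy of the row above, offset by one device
      C      D
      D      C

RAID1, by contrast, keeps byte-identical full mirrors, which is the point
being made here.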
On Mon, 30 Oct 2006, Brad Campbell wrote:
> Michael Tokarev wrote:
> > My guess is that it's using the mdrun shell script - the same as on Debian.
> > It's a long story; the thing is quite ugly and messy and does messy things
> > too, but they say it's compatibility stuff and continue shipping it.
On Mon, 30 Oct 2006, Neil Brown wrote:
> > [EMAIL PROTECTED]:~# mdadm --assemble /dev/md0 /dev/hde /dev/hdi
> > mdadm: cannot open device /dev/hde: Device or resource busy
>
> This is telling you that /dev/hde - or one of its partitions - is
> "Busy". This means more than just 'open'. It means
Michael Tokarev wrote:
Neil Brown wrote:
On Sunday October 29, [EMAIL PROTECTED] wrote:
Hi,
I have 2 arrays whose numbers get inverted, creating havoc, when booting
under different kernels.
I have md0 (raid1) made up of ide drives and md1 (raid5) made up of five
sata drives, when booting with
Neil Brown wrote:
> On Sunday October 29, [EMAIL PROTECTED] wrote:
>> Hi,
>>
>> I have 2 arrays whose numbers get inverted, creating havoc, when booting
>> under different kernels.
>>
>> I have md0 (raid1) made up of ide drives and md1 (raid5) made up of five
>> sata drives, when booting with my cu