Since we're all about nits, I'll do my part:
>diff .prev/drivers/md/multipath.c ./drivers/md/multipath.c
>--- .prev/drivers/md/multipath.c 2006-08-29 14:52:50.0 +1000
>+++ ./drivers/md/multipath.c 2006-08-29 14:33:34.0 +1000
>@@ -228,6 +228,28 @@ static int multipath_issue
On Tue, 29 Aug 2006 15:39:24 +1000
NeilBrown <[EMAIL PROTECTED]> wrote:
>
> Each backing_dev needs to be able to report whether it is congested,
> either by modulating BDI_*_congested in ->state, or by
> defining a ->congested_fn.
> md/raid did neither of these.  This patch adds a congested_fn
> which simply checks all component devices to see if they are
> congested.
It is possible to request a 'check' of an md/raid array, where
the whole array is read and any inconsistencies are reported.
This uses the same mechanisms as 'resync' and so reports in the
kernel logs that a resync is being started.
This understandably confuses/worries people.
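For reference, such a check is requested through sysfs (a command
fragment, not from the original mail; the device name md0 is an
assumption):

```shell
# Ask md to verify the whole array without rewriting anything
# (via the sync_action attribute on recent 2.6 kernels).
echo check > /sys/block/md0/md/sync_action

# Watch progress -- note it is currently reported as a "resync",
# which is exactly the confusing wording described above.
cat /proc/mdstat
```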
Also the text in /proc/md
raid1, raid10 and multipath don't report their 'congested' status
through bdi_*_congested, but should.
This patch adds the appropriate functions which just check the
'congested' status of all active members (with appropriate locking).
raid1 read_balance should be modified to prefer devices where
This is very different from other raid levels: all requests
go through a 'stripe cache', which already has congestion
management of its own.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid5.c | 21 +
1 file changed, 21 insertions(+)
diff .prev
Each backing_dev needs to be able to report whether it is congested,
either by modulating BDI_*_congested in ->state, or by
defining a ->congested_fn.
md/raid did neither of these.  This patch adds a congested_fn
which simply checks all component devices to see if they are
congested.
Signed-off-by
Following are 4 patches for md, suitable for 2.6.19.
The first three define "congested_fn" functions for various raid levels -
I only just found out about the need for these.  These could address
responsiveness problems that some people have reported, particularly
while resync is running.
The last
On Monday August 28, [EMAIL PROTECTED] wrote:
> Neil Brown <[EMAIL PROTECTED]> writes:
> >
> > You say some of the drives are 'spare'. How did that happen? Did you
> > try to add them back to the array after it has failed? That is a
> > mistake.
>
> Surely it was, although not mine.
>
;-)
>
Jim,
Can you try the attached (and below) patch for 2.6.17.11?
Also, please make sure you are running the latest firmware.
Thanks,
-Adam
diff -Naur linux-2.6.17.11/drivers/scsi/3w-9xxx.c
linux-2.6.17.12/drivers/scsi/3w-9xxx.c
--- linux-2.6.17.11/drivers/scsi/3w-9xxx.c 2006-08-23 14:16:33
Jim,
The 3ware driver reset code is doing msleep() polling with local interrupts
enabled... You shouldn't be getting soft lockups. I'll do some investigation
into this.
BTW, this is a linux-scsi issue, not linux-kernel.
-Adam
On 8/28/06, Jim Klimov <[EMAIL PROTECTED]> wrote:
Hello linux-kern
howdyy! =) come join mypix too, it's really cool here.. such
nice people *fg*
This invitation was sent by li (User-ID: 1139). It can be viewed
at:
http://www.mypix.ch/register/1139
Should this be impermissible content, it is requested that
James Brown wrote:
[...]
There is no mdadm/mdadm.conf! What should I do about this?
Having just read the post from Andreas Pelzner, perhaps I should create
a new initrd:
> Andreas Pelzner wrote:
you told me the right way. I had to add the lines "raid1" and "md_mod" to
/etc/mkinitrd/modules.
Hi,
I hope I picked the right list for this problem;
here's the formal report:
[1.] One line summary of the problem:
System crashes while accessing a 3TB RAID5 on an AMD64 with 2.6.17.11.
[2.] Full description of the problem/report:
We've been running a production SMB server with 5x400GB SA
Neil Brown wrote:
On Saturday August 26, [EMAIL PROTECTED] wrote:
All,
[...]
* Problem 1: Since moving from 2.4 -> 2.6 kernel, a reboot kicks one
device out of the array (c.f. post by Andreas Pelzner on 24th Aug 2006).
* Problem 2: When booting my system, unless both disks plugged in, I get
This might be a dumb question, but what causes md to use a large amount of
cpu resources when reading a large amount of data from a raid1 array?
Examples are on a 2.4GHz AMD64, 2GB, 2.6.15.1 (I realize there are md
enhancements to later versions; I had some other unrelated issues and
rolled back to
Neil Brown <[EMAIL PROTECTED]> writes:
> On Saturday August 26, [EMAIL PROTECTED] wrote:
>
>> after an intermittent network failure, our RAID6 array of AoE devices
>> can't run anymore. Looks like the system dropped each of the disks one
>> after the other, and at the third the array failed as ex
thanks a lot,
you told me the right way. I had to add the lines "raid1" and "md_mod" to
/etc/mkinitrd/modules. After recreating the initrd image "mkinitrd -o
/boot/initrd.img-2.6.17.8 /lib/modules/2.6.17.8" the server boots into
both raid disks correctly.
isp:~# cat /proc/mdstat
Personalities : [ra
> I don't think the processor is saturating. I've seen reports of this
> sort of thing before and until recently had no idea what was happening,
> couldn't reproduce it, and couldn't think of any more useful data to
> collect.
Well I can reproduce it easily enough. It's a production server, but
> The PCI bus is only capable of 133MB/s max. Unless you have dedicated
> SATA ports, each on its own PCI-e bus, you will not get speeds in excess
> of 133MB/s. For 200MB/s+, I have read reports of someone using 4-5 SATA
> controllers (SiI 3112 cards on PCI-e x1 ports) who got around 200MB/s.
>
On Monday August 28, [EMAIL PROTECTED] wrote:
> On Monday, 28 August 2006 04:03, you wrote:
> > The easiest thing to do is simply recreate the array, making sure to
> > have the drives in the correct order, and any options (like chunk
> > size) the same. This will not hurt the data (if done co
On Monday, 28 August 2006 04:03, you wrote:
> The easiest thing to do is simply recreate the array, making sure to
> have the drives in the correct order, and any options (like chunk
> size) the same. This will not hurt the data (if done correctly).
First time I hear this. Good to know.
Thoug
Hello, since I started forcing auto=yes to mdadm (due to udev)
I discovered a small problem.
Basically, when starting an array, the routine in mdopen.c that should
check for and create the device file errors out if the md is already active.
This is not needed, since trying to activate an already active array