Mario Holbe [EMAIL PROTECTED] wrote:
hdc2/hde2 and when this was finished, recovery took md5 and *recovery*
started from hda8 to hde8 *without* resync of hdc8. After md5 recovery
Is this a known issue and/or is it fixed in later kernels already?
Hm, I'm wondering if this got lost or just
Bailey, Scott scott.bailey at eds.com writes:
I think you need to tweak your kernel lines in /boot/grub/menu.lst to
include the following clause:
md=1,/dev/sda2,/dev/sdc2
...assuming I didn't misinterpret your configuration information. You
should be able to append this right
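For illustration, a menu.lst kernel line with that clause appended might look like the following (a sketch; the kernel version and root device here are placeholders, not taken from the original report):

```
title  Linux (RAID1 root)
root   (hd0,0)
kernel /boot/vmlinuz-2.6.11 root=/dev/md1 ro md=1,/dev/sda2,/dev/sdc2
```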
[EMAIL PROTECTED] wrote:
I've been going through the MD driver source, and to tell the truth, can't
figure out where the read error is detected and how to hook that event and
force a re-write of the failing sector. I would very much appreciate it if
I did that for RAID1, or at least most of
found a solution:
The whole thing worked after compiling md and raid1 modules into the kernel and
not as modules
A problem was mkinitrd in debian ubuntu, that is trying to be smart and
complains about a missing raid1 module when building the initial ramdisk with
mkinitrd
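One common workaround on Debian-style systems (a sketch, not the poster's exact patch; the config path is an assumption and differs between releases) is to list the module explicitly so the ramdisk builder includes it:

```
# Force the raid1 module into the initial ramdisk.
# Path is an assumption; some releases use /etc/mkinitrd/modules.
echo raid1 >> /etc/mkinitrd/modules
mkinitrd -o /boot/initrd.img-$(uname -r) $(uname -r)
```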
I had to patch the
Steve writes:
Thank you for the hint, but unfortunately it does not change the behaviour.
I now boot with a grub entry like 1.)
But still get 2. in dmesg after boot.
As one can see, md0 is fine while md1 is not. Does this possibly have something to do with the fact that /dev/md1 is also used as the
Max Waterman wrote:
Can I just make it a slave device? How will that effect performance?
AFAIR (CMIIW):
- The standard does not allow a slave without a master.
- The master has a role to play in that it does coordination of some
sort (commands perhaps?) between the slave drive and the
John McMonagle wrote:
All panics seem to be associated with accessing bad spot on sdb
It seems really strange that one can get panic from a drive problem.
&lt;sarcasm&gt;Wow, yeah, never seen that happen with Linux before!&lt;/sarcasm&gt;
Just for the fun of it, try digging up a disk which has a bad spot
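Before blaming md, the drive itself can be checked from userspace (a sketch; the device name /dev/sdb is taken from the report above, and smartmontools must be installed):

```
# Query the drive's SMART health status and error log.
smartctl -a /dev/sdb
# Read-only, non-destructive scan for unreadable sectors.
badblocks -sv /dev/sdb
```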
Molle Bestefich wrote:
&lt;sarcasm&gt;Wow, yeah, never seen that happen with Linux before!&lt;/sarcasm&gt;
Wait a minute, that wasn't a very productive comment.
Nevermind, I'm probably just ridden with faulty hardware.
On Monday March 7, [EMAIL PROTECTED] wrote:
I have no idea, but...
Is the disk IO reads or writes? If writes, scary! Maybe data destined
for the array goes to the spare sometimes. I hope not. I feel safe with my
2.4 kernel. :)
It is writes, but don't be scared. It is just
On Sunday February 27, [EMAIL PROTECTED] wrote:
Hello.
Just for your information: There is a deadlock in the following situation:
MD2 is Raid 0 with 3 disks. sda1 sdb1 sdc1
MD3 is Raid 0 with 3 disks. sdd1 sde1 sdf1
MD4 is Raid 1 with 2 disks. MD2 and MD3!!
If a disk in MD2 fails, MD2
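The layout described above (a RAID1 mirror built on top of two RAID0 stripes, i.e. 0+1) can be reproduced with something like the following (a sketch using mdadm; device names are those from the report):

```
# Two 3-disk RAID0 stripe sets...
mdadm --create /dev/md2 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md3 --level=0 --raid-devices=3 /dev/sdd1 /dev/sde1 /dev/sdf1
# ...mirrored together as a RAID1 pair (RAID 0+1).
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/md2 /dev/md3
```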
Neil Brown wrote:
It is writes, but don't be scared. It is just super-block updates.
In 2.6, the superblock is marked 'clean' whenever there is a period of
about 20ms of no write activity. This increases the chance that a
resync won't be needed after a crash.
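Whether an array is currently marked clean or active can be observed from userspace (a sketch; the array name is an example):

```
# Shows 'State : clean' vs 'State : active' in the summary.
mdadm --detail /dev/md0 | grep State
# Overall view of all arrays and any resync in progress.
cat /proc/mdstat
```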
(unfortunately) the superblocks
On Tuesday March 8, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
It is writes, but don't be scared. It is just super-block updates.
In 2.6, the superblock is marked 'clean' whenever there is a period of
about 20ms of no write activity. This increases the chance that a
resync won't be
Otherwise it could have a random value and might BUG.
This fixes a BUG during resync problem in raid1 introduced
by the bitmap-based-intent-logging patches.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/raid1.c |1 +
1 files changed, 1 insertion(+)
diff
This isn't a real bug as the smallest slab-size is 32 bytes
but please apply for consistency.
Found by the Coverity tool.
Signed-off-by: Alexander Nyberg [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/raid1.c |2 +-
1 files changed, 1
Instead of setting one value lots of times, let's
set lots of values once each, as we should.
From: Paul Clements [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c |2 +-
1 files changed, 1 insertion(+), 1 deletion(-)
diff
Before completing a 'write' the md superblock might need to be updated.
This is best done by the md_thread.
The current code schedules this up and queues the write request for later
handling by the md_thread.
However some personalities (Raid5/raid6) will deadlock if the
md_thread tries to submit
On Tuesday March 8, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
Then after 20ms with no write, they are all marked 'clean'.
Then before the next write they are all marked 'active'.
As the event count needs to be updated every time the superblock is
modified, the event count will be
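The per-device event counters being discussed can be inspected directly (a sketch; the member device is an example):

```
# The Events field increments on every superblock update;
# members of a consistent array should agree on its value.
mdadm --examine /dev/sda2 | grep Events
```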
NeilBrown [EMAIL PROTECTED] wrote:
The first two are trivial and should apply equally to 2.6.11
The second two fix bugs that were introduced by the recent
bitmap-based-intent-logging patches and so are not relevant
to 2.6.11 yet.
The changelog for the "Fix typo in super_1_sync" patch
Neil Brown wrote:
Is my perception of the situation correct?
No. Writing the superblock does not cause the array to be marked
active.
If the array is idle, the individual drives will be idle.
Ok, thank you for the clarification.
Seems like a design flaw to me, but then again, I'm biased