Neil Brown wrote:
On Thursday March 30, [EMAIL PROTECTED] wrote:
Is there any work going on to handle read errors on a raid5 disk by
recreating the faulty block from the other disks and just rewriting
the block, instead of kicking the disk out?
It's done. 2.6.15 I think, but definitely in 2.
On Thursday March 30, [EMAIL PROTECTED] wrote:
>
> Is there any work going on to handle read errors on a raid5 disk by
> recreating the faulty block from the other disks and just rewriting
> the block, instead of kicking the disk out?
It's done. 2.6.15 I think, but definitely in 2.
Is there any work going on to handle read errors on a raid5 disk by
recreating the faulty block from the other disks and just rewriting
the block, instead of kicking the disk out?
I've had problems on several occasions where two disks in a raid5 will
have single sector errors and th
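For anyone reading this thread in the archives, here is a toy sketch of the
reconstruct-and-rewrite idea being asked about. It is illustrative user-space
C, not the md driver code; NDISKS, CHUNK and rebuild_chunk() are made-up
names for the sketch.

#include <string.h>

#define NDISKS 4
#define CHUNK  4096

/* Rebuild the failed disk's chunk by XOR-ing the same chunk from every
 * surviving member (data and parity alike): for raid5, parity is the
 * XOR of all data chunks, so the XOR of the survivors is exactly the
 * lost chunk.  The caller would then write the result back over the
 * failing sectors, which normally makes the drive reallocate them,
 * instead of kicking the whole disk out of the array. */
static void rebuild_chunk(unsigned char disks[NDISKS][CHUNK],
                          int bad_disk, unsigned char *rebuilt)
{
    int d, i;

    memset(rebuilt, 0, CHUNK);
    for (d = 0; d < NDISKS; d++) {
        if (d == bad_disk)
            continue;
        for (i = 0; i < CHUNK; i++)
            rebuilt[i] ^= disks[d][i];
    }
}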
On Wednesday March 29, [EMAIL PROTECTED] wrote:
> NeilBrown <[EMAIL PROTECTED]> wrote:
> >
> > + if (!uptodate) {
> > + int sync_blocks = 0;
> > + sector_t s = r1_bio->sector;
> > + long sectors_to_go = r1_bio->sectors;
> > + /* make sure these bits doesn't
NeilBrown <[EMAIL PROTECTED]> wrote:
>
> + if (!uptodate) {
> + int sync_blocks = 0;
> + sector_t s = r1_bio->sector;
> + long sectors_to_go = r1_bio->sectors;
> + /* make sure these bits doesn't get cleared. */
> + do {
> +
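The quoted hunk is cut off by the archive preview; judging from the variables
it sets up, the loop presumably continues along these lines (a guess at the
shape of the code, not the actual patch text):

		do {
			bitmap_end_sync(mddev->bitmap, s,
					&sync_blocks, 1);
			s += sync_blocks;
			sectors_to_go -= sync_blocks;
		} while (sectors_to_go > 0);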
Signed-off-by: Brad Campbell <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
 ./drivers/md/raid6main.c |    2 ++
1 file changed, 2 insertions(+)
diff ./drivers/md/raid6main.c~current~ ./drivers/md/raid6main.c
--- ./drivers/md/raid6main.c~current~ 2006-03-
Following are three patches for md. The first fixes a problem that
can cause corruption in fairly unusual circumstances (re-adding a
device to a raid1, suffering write errors that are subsequently
fixed, and then re-adding the device again).
The other two fix minor problems
They are suitable to go
And remove the comments that were put in place of a fix, too.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
 ./drivers/md/md.c |    8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff ./drivers/md/md.c~current~ ./drivers/md/md.c
--- ./drivers/md/md.c~current
Currently, a device failure during recovery leaves bits set in the
bitmap. This normally isn't a problem, as the offending device will be
rejected because of its errors. However, if device re-adding is being
used with non-persistent bitmaps, this can be a problem.
Signed-off-by: Neil Brown <[EMAIL PRO
On Saturday March 25, [EMAIL PROTECTED] wrote:
> Raid-6 did not create sysfs entries for stripe cache
>
> Signed-off-by: Brad Campbell <[EMAIL PROTECTED]>
>
> ---
> diff -u vanilla/linux-2.6.16/drivers/md/raid6main.c linux-2.6.16/drivers/md/raid6main.c
> --- vanilla/linux-2.6.16/drivers/md/rai
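The matching diffstat earlier in this listing shows only two inserted lines,
so the fix presumably just registers and unregisters the stripe-cache
attribute group, assuming raid6main.c already defines a raid6_attrs_group
mirroring raid5's raid5_attrs_group (a guess; the hunk itself is truncated
here):

	/* in run(), once the conf is set up: */
	sysfs_create_group(&mddev->kobj, &raid6_attrs_group);

	/* in stop(), before the conf is torn down: */
	sysfs_remove_group(&mddev->kobj, &raid6_attrs_group);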
If what you say is true, then it was not a RAID0. It sounds like LINEAR.
Do you have the original command used to create the array?
Or the output from mdadm before you tried any recovery methods.
The output must be from before you re-created the array.
Output from commands like "mdadm -D /dev/md0"
I am pleased to announce the availability of
mdadm version 2.4
It is available at the usual places:
http://www.cse.unsw.edu.au/~neilb/source/mdadm/
and
http://www.{countrycode}.kernel.org/pub/linux/utils/raid/mdadm/
mdadm is a tool for creating, managing and monitoring
device arrays usi
OK, Guy and others,
this is a follow-up to the case I am currently trying (still) to solve.
Synopsis:
the general consensus is that raid0 writes in a striping fashion.
However, the test case I have here doesn't appear to operate in the
manner described above. What was observed was this: on /dev/
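For reference, a toy sketch (not the md code) of how a striped raid0 is
expected to map a logical sector onto its members: successive chunk-sized
pieces rotate across the disks, so a large sequential write should land on
every member rather than filling the first disk before touching the second,
which is what LINEAR does. chunk_sectors, ndisks and raid0_map() are made-up
names, and equal-sized members are assumed.

void raid0_map(unsigned long long sector,
               unsigned int chunk_sectors, int ndisks,
               int *disk, unsigned long long *disk_sector)
{
    unsigned long long chunk  = sector / chunk_sectors;  /* which chunk */
    unsigned int       within = sector % chunk_sectors;  /* offset inside it */

    *disk        = chunk % ndisks;                       /* rotate across disks */
    *disk_sector = (chunk / ndisks) * chunk_sectors + within;
}

So if data written through /dev/md0 only ever shows up on the first member
until that member is full, the array is behaving like LINEAR, not like the
striped layout sketched above.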
On Wednesday March 29, [EMAIL PROTECTED] wrote:
>
> Thanks for your reply. As you guessed, this was a problem
> with our hardware/config and nothing to do with the raid software.
I'm glad you have found your problem!
>
> Can anybody point me to the syntax I could use for saying:
>
> "force rebu
On Sat, Mar 18, 2006 at 08:13:48AM +1100, Neil Brown wrote:
> On Friday March 17, [EMAIL PROTECTED] wrote:
> > Dear All,
> >
> > We have a number of machines running 4TB raid5 arrays.
> > Occasionally one of these machines will lock up solid and
> > will need power cycling. Often when this happens
Man .. very, very good.
blockdev --getsz says 512.
On 3/29/06, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Wednesday March 29, [EMAIL PROTECTED] wrote:
> > I was referring to bios reaching make_request in raid5.c.
> > Let me be more precise.
> > I am dd'ing " dd if=/dev/md1 of=/dev/zero bs=1M coun
On Wednesday March 29, [EMAIL PROTECTED] wrote:
> I was referring to bios reaching make_request in raid5.c.
> Let me be more precise.
> I am dd'ing " dd if=/dev/md1 of=/dev/zero bs=1M count=1 skip=10"
> I have added the following printk in make_request "printk
> ("%d:",bio->bi_size)"
> I am getti
Hello, list,
I think this is generally a hardware error, but it looks like a software
problem too.
At this point there is no dirty data in memory!
Cheers,
Janos
[EMAIL PROTECTED] /]# cmp -b /dev/sda1 /dev/sdb1
/dev/sda1 /dev/sdb1 differ: byte 68881481729, line 308395510 is 301 M-A 74 <
[EMAIL PROTECT
I was referring to bios reaching make_request in raid5.c.
Let me be more precise.
I am dd'ing " dd if=/dev/md1 of=/dev/zero bs=1M count=1 skip=10"
I have added the following printk in make_request "printk ("%d:",bio->bi_size)"
I am getting sector sizes: 512:512:512:512:512
I suppose they gathe
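For what it's worth, a slightly fuller version of that debug printk (assuming
it sits at the top of make_request() in raid5.c of that kernel) would print
the starting sector next to the size, which makes the 512:512:... pattern
easier to interpret:

	printk(KERN_DEBUG "raid5 make_request: sector %llu size %u\n",
	       (unsigned long long)bio->bi_sector, bio->bi_size);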