Re: Md corruption using RAID10 on linux-2.6.21

2007-06-01 Thread Don Dupuis

Thanks Neil. This took care of my issue. I was doing a full set of
tests to make sure before I replied. Thanks for all your hard work.

Don

On 5/31/07, Neil Brown <[EMAIL PROTECTED]> wrote:

On Wednesday May 30, [EMAIL PROTECTED] wrote:
> Neil, I sent the scripts to you. Any update on this issue?

Sorry, I got distracted.

Your scripts are way more complicated than needed.  Most of the logic
in there is already in mdadm.

   mdadm --assemble /dev/md_d0 --run --uuid=$BOOTUUID /dev/sd[abcd]2

can replace most of it.  And you don't need to wait for resync to
complete before mounting filesystems.
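For illustration, the whole init step could then shrink to something like the
sketch below (the UUID value and the root partition name are placeholders, not
taken from this thread):

   # hedged sketch of the simplified init logic
   BOOTUUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd        # placeholder value
   mdadm --assemble /dev/md_d0 --run --uuid=$BOOTUUID /dev/sd[abcd]2
   # --run starts the array even with a member missing; no need to wait
   # for the resync before mounting.
   mount /dev/md_d0p1 /newroot                         # assumed root partition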

That said: I cannot see anything in your script that would actually do
the wrong thing.

Hmmm... I see now I wasn't quite testing the right thing.  I need to
trigger a resync with one device missing.
i.e.
  mdadm -C /dev/md0 -l10 -n4 -p n3 /dev/sd[abcd]1
  mkfs /dev/md0
  mdadm /dev/md0 -f /dev/sda1
  mdadm -S /dev/md0
  mdadm -A /dev/md0 -R --update=resync /dev/sd[bcd]1
  fsck -f /dev/md0
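
(A rough equivalent on loop devices, for anyone wanting to reproduce this
without touching real disks; the file names, sizes and loop numbers are
assumptions:)

  for i in 1 2 3 4; do
      dd if=/dev/zero of=/tmp/md-test-$i bs=1M count=128
      losetup /dev/loop$i /tmp/md-test-$i
  done
  mdadm -C /dev/md0 -l10 -n4 -p n3 /dev/loop[1-4]
  mkfs /dev/md0                                   # ext2 by default
  mdadm /dev/md0 -f /dev/loop1                    # fail one member
  mdadm -S /dev/md0                               # stop the array
  mdadm -A /dev/md0 -R --update=resync /dev/loop[2-4]
  fsck -f /dev/md0                                # reports errors without the fix below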

This fails just as you say.
The following patch fixes it, as well as another problem I found while
doing this testing.

Thanks for pursuing this.

NeilBrown

diff .prev/drivers/md/raid10.c ./drivers/md/raid10.c
--- .prev/drivers/md/raid10.c   2007-05-21 11:18:23.0 +1000
+++ ./drivers/md/raid10.c   2007-05-31 15:11:42.0 +1000
@@ -1866,6 +1866,7 @@ static sector_t sync_request(mddev_t *md
                         int d = r10_bio->devs[i].devnum;
                         bio = r10_bio->devs[i].bio;
                         bio->bi_end_io = NULL;
+                        clear_bit(BIO_UPTODATE, &bio->bi_flags);
                         if (conf->mirrors[d].rdev == NULL ||
                             test_bit(Faulty, &conf->mirrors[d].rdev->flags))
                                 continue;
@@ -2036,6 +2037,11 @@ static int run(mddev_t *mddev)
         /* 'size' is now the number of chunks in the array */
         /* calculate "used chunks per device" in 'stride' */
         stride = size * conf->copies;
+
+        /* We need to round up when dividing by raid_disks to
+         * get the stride size.
+         */
+        stride += conf->raid_disks - 1;
         sector_div(stride, conf->raid_disks);
         mddev->size = stride  << (conf->chunk_shift-1);
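
To make the rounding in the second hunk concrete, here is a small worked
example (the geometry numbers are invented for illustration only):

  size=1001 copies=3 raid_disks=4                      # hypothetical geometry
  stride=$(( size * copies ))                          # 3003 chunk copies to lay out
  echo $(( stride / raid_disks ))                      # 750: plain division truncates
  echo $(( (stride + raid_disks - 1) / raid_disks ))   # 751: rounded up, as the patch does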






Re: Md corruption using RAID10 on linux-2.6.21

2007-05-30 Thread Neil Brown
On Wednesday May 30, [EMAIL PROTECTED] wrote:
> Neil, I sent the scripts to you. Any update on this issue?

Sorry, I got distracted.

Your scripts are way more complicated than needed.  Most of the logic
in there is already in mdadm.

   mdadm --assemble /dev/md_d0 --run --uuid=$BOOTUUID /dev/sd[abcd]2

can replace most of it.  And you don't need to wait for resync to
complete before mounting filesystems.

That said: I cannot see anything in your script that would actually do
the wrong thing.

Hmmm... I see now I wasn't quite testing the right thing.  I need to
trigger a resync with one device missing.
i.e.
  mdadm -C /dev/md0 -l10 -n4 -p n3 /dev/sd[abcd]1
  mkfs /dev/md0
  mdadm /dev/md0 -f /dev/sda1
  mdadm -S /dev/md0
  mdadm -A /dev/md0 -R --update=resync /dev/sd[bcd]1
  fsck -f /dev/md0

This fails just as you say.
The following patch fixes it, as well as another problem I found while
doing this testing.

Thanks for pursuing this.

NeilBrown

diff .prev/drivers/md/raid10.c ./drivers/md/raid10.c
--- .prev/drivers/md/raid10.c   2007-05-21 11:18:23.0 +1000
+++ ./drivers/md/raid10.c   2007-05-31 15:11:42.0 +1000
@@ -1866,6 +1866,7 @@ static sector_t sync_request(mddev_t *md
                         int d = r10_bio->devs[i].devnum;
                         bio = r10_bio->devs[i].bio;
                         bio->bi_end_io = NULL;
+                        clear_bit(BIO_UPTODATE, &bio->bi_flags);
                         if (conf->mirrors[d].rdev == NULL ||
                             test_bit(Faulty, &conf->mirrors[d].rdev->flags))
                                 continue;
@@ -2036,6 +2037,11 @@ static int run(mddev_t *mddev)
         /* 'size' is now the number of chunks in the array */
         /* calculate "used chunks per device" in 'stride' */
         stride = size * conf->copies;
+
+        /* We need to round up when dividing by raid_disks to
+         * get the stride size.
+         */
+        stride += conf->raid_disks - 1;
         sector_div(stride, conf->raid_disks);
         mddev->size = stride  << (conf->chunk_shift-1);
 



Re: Md corruption using RAID10 on linux-2.6.21

2007-05-30 Thread Don Dupuis

Neil, I sent the scripts to you. Any update on this issue?

Thanks
Don

On 5/21/07, Neil Brown <[EMAIL PROTECTED]> wrote:

On Monday May 21, [EMAIL PROTECTED] wrote:
> I was going to get back with you concerning the low resync rates. The
> data corruption happens like this.
> 1.  Start with fully active array. Everything is fine.
> 2.  I remove a drive. Everything is fine. I then will power off the machine.
> 3.  I power up and load an initramfs which has my initial root
> filesystem and scripts for handling the assembly of the array. My init
> script will determine which drive was removed and assemble the
> remaining 3.

Could I see this script please?

NeilBrown




Re: Md corruption using RAID10 on linux-2.6.21

2007-05-21 Thread Neil Brown
On Monday May 21, [EMAIL PROTECTED] wrote:
> I was going to get back with you concerning the low resync rates. The
> data corruption happens like this.
> 1.  Start with fully active array. Everything is fine.
> 2.  I remove a drive. Everything is fine. I then will power off the machine.
> 3.  I power up and load an initramfs which has my initial root
> filesystem and scripts for handling the assembly of the array. My init
> script will determine which drive was removed and assemble the
> remaining 3.

Could I see this script please?

NeilBrown


Re: Md corruption using RAID10 on linux-2.6.21

2007-05-21 Thread Don Dupuis

On 5/21/07, Neil Brown <[EMAIL PROTECTED]> wrote:

On Monday May 21, [EMAIL PROTECTED] wrote:
> On 5/16/07, Don Dupuis <[EMAIL PROTECTED]> wrote:
> >
> > I am still trying to get back to where I had the low recovery rate with
> > the bitmap turned on. I will get back with you
> > Don
> >
> Any new updates, Neil?
> Anything new to try to get you additional info?
> Thanks
>
> Don

You said "I will get back with you" and I was waiting for that... I
hoped that your further testing might reveal some details that would
shine a light on the situation.

One question:  Your description seems to say that you get corruption
after the resync has finished.  Is the corruption there before the
resync starts?
I guess what I would like is (sketched as a rough script below):
  Start with fully active array.  Check for corruption.
  Remove one drive.  Check for corruption.
  Turn off system.  Turn it on again, array assembles with one
  missing device.  Check for corruption.
  Add device, resync starts.  Check for corruption.
  Wait for resync to finish.  Check for corruption.
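
A rough script for those checks might look like this (the array name, member
partition and the read-only fsck are assumptions, not details from this report):

  check() { fsck -n /dev/md_d0p1 >/dev/null 2>&1 && echo "$1: clean" || echo "$1: CORRUPT"; }
  check "full array"
  mdadm /dev/md_d0 -f /dev/sda2 && mdadm /dev/md_d0 -r /dev/sda2
  check "one drive removed"
  # ... power-cycle here, reassemble degraded from the initramfs ...
  check "degraded assembly"
  mdadm /dev/md_d0 -a /dev/sda2                        # re-add, recovery starts
  check "during resync"
  while grep -q recovery /proc/mdstat; do sleep 10; done
  check "after resync"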

NeilBrown


I was going to get back with you concerning the low resync rates. The
data corruption happens like this.
1.  Start with fully active array. Everything is fine.
2.  I remove a drive. Everything is fine. I then will power off the machine.
3.  I power up and load an initramfs which has my initial root
filesystem and scripts for handling the assembly of the array. My init
script will determine which drive was removed and assemble the
remaining 3. If a resync happens, I will wait for the resync to
complete. Once it is complete I will then do an fdisk -l /dev/md_d0 to make
sure the partition table is intact. Most of the time it will be
"unknown partition table". At this point I am dead in the water because
I can't pivot_root to my real root filesystem. If I get through the
resync and the partition table is correct, my other corruption will be
in the filesystems on the md device. All filesystems are ext3 with full
data journaling enabled. I could have one corrupted, or multiple; fsck
will not be able to clean them up. Under normal circumstances, once I am up
and running on the real root filesystem I would add the removed disk back
into the md device with the recovery running in the background. Sorry
for the confusion on the "get back with you". I basically have 2
issues; the corruption issue is my main priority at this point.
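
A compact sketch of those two checks, as they might run from the initramfs
(the partition names are assumptions):

  fdisk -l /dev/md_d0 | grep -q '^/dev/md_d0p' || echo "partition table missing"
  fsck.ext3 -n /dev/md_d0p1 || echo "root filesystem needs repair"
  # only pivot_root to the real root if both checks pass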

Thanks

Don


Re: Md corruption using RAID10 on linux-2.6.21

2007-05-21 Thread Neil Brown
On Monday May 21, [EMAIL PROTECTED] wrote:
> On 5/16/07, Don Dupuis <[EMAIL PROTECTED]> wrote:
> >
> > I am still trying to get back to where I had the low recovery rate with
> > the bitmap turned on. I will get back with you
> > Don
> >
> Any new updates, Neil?
> Anything new to try to get you additional info?
> Thanks
> 
> Don

You said "I will get back with you" and I was waiting for that... I
hoped that your further testing might reveal some details that would
shine a light on the situation.

One question:  Your description seems to say that you get corruption
after the resync has finished.  Is the corruption there before the
resync starts?
I guess what I would like is:
   Start with fully active array.  Check for corruption.
   Remove one drive.  Check for corruption.
   Turn off system.  Turn it on again, array assembles with one
   missing device.  Check for corruption.
   Add device, resync starts.  Check for corruption.
   Wait for resync to finish.  Check for corruption.

NeilBrown


Re: Md corruption using RAID10 on linux-2.6.21

2007-05-21 Thread Don Dupuis

On 5/16/07, Don Dupuis <[EMAIL PROTECTED]> wrote:

On 5/16/07, Don Dupuis <[EMAIL PROTECTED]> wrote:
> On 5/16/07, Don Dupuis <[EMAIL PROTECTED]> wrote:
> > On 5/16/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> > > On Wednesday May 16, [EMAIL PROTECTED] wrote:
> > > ...
> > > >
> > > > The problem arises when I do a drive removal such as sda and then I
> > > > remove power from the system. Most of the time I will have a corrupted
> > > > partition on the md device. Other corruption will be my root partition
> > > > which is an ext3 filesystem. I seem to have a better chance of booting
> > > > at least 1 time with no errors with bitmap turned on, but if I repeat
> > > > the process, I will have corruption as well. Also with bitmap turned
> > > > on, adding the new drive into the md device will take way too long.
> > > > I only get about 3MB per second on the resync. With bitmap turned off,
> > > > I will get between 10MB and 15MB resync rate. Has anyone else seen this
> > > > behavior, or is this situation not tested very often? I would think
> > > > that I shouldn't get corruption with this raid setup and journaling of
> > > > my filesystems? Any help would be appreciated.
> > >
> > >
> > > The resync rate should be the same whether you have a bitmap or not,
> > > so that observation is very strange.  Can you double check, and report
> > > the contents of "/proc/mdstat" in the two situations.
> > >
> > > You say you have corruption on your root filesystem.  Presumably that
> > > is not on the raid?  Maybe the drive doesn't get a chance to flush
> > > its cache when you power-off.  Do you get the same corruption if you
> > > simulate a crash without turning off the power. e.g.
> > >echo b > /proc/sysrq-trigger
> > >
> > > Do you get the same corruption in the raid10 if you turn it off
> > > *without* removing a drive first?
> > >
> > > NeilBrown
> > >
> > Powering off with all drives present will not cause corruption. When I have a
> > drive missing and the md device does a full resync, I will get the
> > corruption. Usually the md partition table is corrupt or gone, and
> > with the first drive gone it happens more frequently. If the partition
> > table is not corrupt, then the root filesystem or one of the other
> > filesystems on the md device will be corrupted. Yes, my root filesystem
> > is on the raid device. I will update with the bitmap resync rate stuff
> > later.
> >
> > Don
> >
> Forgot to tell you that I have the drive write cache disabled on all my
> drives.
>
> Don
>
Here is the /proc/mdstat output doing a recover after adding a drive
to the md device:
unused devices: <none>
-bash-3.1$ cat /proc/mdstat
Personalities : [raid10]
md_d0 : active raid10 sda2[4] sdd2[3] sdc2[2] sdb2[1]
  3646464 blocks 256K chunks 3 near-copies [4/3] [_UUU]
  [>]  recovery =  2.6% (73216/2734848)
finish=4.8min speed=9152K/sec

unused devices: <none>
-bash-3.1$ cat /proc/mdstat
Personalities : [raid10]
md_d0 : active raid10 sda2[4] sdd2[3] sdc2[2] sdb2[1]
  3646464 blocks 256K chunks 3 near-copies [4/3] [_UUU]
  [>]  recovery =  3.4% (93696/2734848)
finish=4.6min speed=9369K/sec

I am still trying to get back to where I had the low recovery rate with
the bitmap turned on. I will get back with you
Don


Any new updates, Neil?
Anything new to try to get you additional info?
Thanks

Don


Re: Md corruption using RAID10 on linux-2.6.21

2007-05-16 Thread Don Dupuis

On 5/16/07, Don Dupuis <[EMAIL PROTECTED]> wrote:

On 5/16/07, Don Dupuis <[EMAIL PROTECTED]> wrote:
> On 5/16/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> > On Wednesday May 16, [EMAIL PROTECTED] wrote:
> > ...
> > >
> > > The problem arises when I do a drive removal such as sda and then I
> > > remove power from the system. Most of the time I will have a corrupted
> > > partition on the md device. Other corruption will be my root partition
> > > which is an ext3 filesystem. I seem to have a better chance of booting
> > > at least 1 time with no errors with bitmap turned on, but if I repeat
> > > the process, I will have corruption as well. Also with bitmap turned
> > > on, adding the new drive into the md device will take way too long.
> > > I only get about 3MB per second on the resync. With bitmap turned off,
> > > I will get between 10MB and 15MB resync rate. Has anyone else seen this
> > > behavior, or is this situation not tested very often? I would think
> > > that I shouldn't get corruption with this raid setup and journaling of
> > > my filesystems? Any help would be appreciated.
> >
> >
> > The resync rate should be the same whether you have a bitmap or not,
> > so that observation is very strange.  Can you double check, and report
> > the contents of "/proc/mdstat" in the two situations.
> >
> > You say you have corruption on your root filesystem.  Presumably that
> > is not on the raid?  Maybe the drive doesn't get a chance to flush
> > its cache when you power-off.  Do you get the same corruption if you
> > simulate a crash without turning off the power. e.g.
> >echo b > /proc/sysrq-trigger
> >
> > Do you get the same corruption in the raid10 if you turn it off
> > *without* removing a drive first?
> >
> > NeilBrown
> >
> Powering off with all drives present will not cause corruption. When I have a
> drive missing and the md device does a full resync, I will get the
> corruption. Usually the md partition table is corrupt or gone, and
> with the first drive gone it happens more frequently. If the partition
> table is not corrupt, then the root filesystem or one of the other
> filesystems on the md device will be corrupted. Yes, my root filesystem
> is on the raid device. I will update with the bitmap resync rate stuff
> later.
>
> Don
>
Forgot to tell you that I have the drive write cache disabled on all my drives.

Don


Here is the /proc/mdstat output doing a recover after adding a drive
to the md device:
unused devices: <none>
-bash-3.1$ cat /proc/mdstat
Personalities : [raid10]
md_d0 : active raid10 sda2[4] sdd2[3] sdc2[2] sdb2[1]
 3646464 blocks 256K chunks 3 near-copies [4/3] [_UUU]
 [>]  recovery =  2.6% (73216/2734848)
finish=4.8min speed=9152K/sec

unused devices: <none>
-bash-3.1$ cat /proc/mdstat
Personalities : [raid10]
md_d0 : active raid10 sda2[4] sdd2[3] sdc2[2] sdb2[1]
 3646464 blocks 256K chunks 3 near-copies [4/3] [_UUU]
 [>]  recovery =  3.4% (93696/2734848)
finish=4.6min speed=9369K/sec

I am still trying to get back to where I had the low recovery rate with
the bitmap turned on. I will get back with you
Don


Re: Md corruption using RAID10 on linux-2.6.21

2007-05-16 Thread Don Dupuis

On 5/16/07, Don Dupuis <[EMAIL PROTECTED]> wrote:

On 5/16/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Wednesday May 16, [EMAIL PROTECTED] wrote:
> ...
> >
> > The problem arises when I do a drive removal such as sda and then I
> > remove power from the system. Most of the time I will have a corrupted
> > partition on the md device. Other corruption will be my root partition
> > which is an ext3 filesystem. I seem to have a better chance of booting
> > at least 1 time with no errors with bitmap turned on, but if I repeat
> > the process, I will have corruption as well. Also with bitmap turned
> > on, adding the new drive into the md device will take way too long.
> > I only get about 3MB per second on the resync. With bitmap turned off,
> > I will get between 10MB and 15MB resync rate. Has anyone else seen this
> > behavior, or is this situation not tested very often? I would think
> > that I shouldn't get corruption with this raid setup and journaling of
> > my filesystems? Any help would be appreciated.
>
>
> The resync rate should be the same whether you have a bitmap or not,
> so that observation is very strange.  Can you double check, and report
> the contents of "/proc/mdstat" in the two situations.
>
> You say you have corruption on your root filesystem.  Presumably that
> is not on the raid?  Maybe the drive doesn't get a chance to flush
> its cache when you power-off.  Do you get the same corruption if you
> simulate a crash without turning off the power. e.g.
>echo b > /proc/sysrq-trigger
>
> Do you get the same corruption in the raid10 if you turn it off
> *without* removing a drive first?
>
> NeilBrown
>
Powering off with all drives present will not cause corruption. When I have a
drive missing and the md device does a full resync, I will get the
corruption. Usually the md partition table is corrupt or gone, and
with the first drive gone it happens more frequently. If the partition
table is not corrupt, then the root filesystem or one of the other
filesystems on the md device will be corrupted. Yes, my root filesystem
is on the raid device. I will update with the bitmap resync rate stuff
later.

Don


Forgot to tell you that I have the drive write cache disabled on all my drives.

Don


Re: Md corruption using RAID10 on linux-2.6.21

2007-05-16 Thread Don Dupuis

On 5/16/07, Neil Brown <[EMAIL PROTECTED]> wrote:

On Wednesday May 16, [EMAIL PROTECTED] wrote:
...
>
> The problem arises when I do a drive removal such as sda and then I
> remove power from the system. Most of the time I will have a corrupted
> partition on the md device. Other corruption will be my root partition
> which is an ext3 filesystem. I seem to have a better chance of booting
> at least 1 time with no errors with bitmap turned on, but if I repeat
> the process, I will have corruption as well. Also with bitmap turned
> on, adding the new drive into the md device will take way too long.
> I only get about 3MB per second on the resync. With bitmap turned off,
> I will get between 10MB and 15MB resync rate. Has anyone else seen this
> behavior, or is this situation not tested very often? I would think
> that I shouldn't get corruption with this raid setup and journaling of
> my filesystems? Any help would be appreciated.


The resync rate should be the same whether you have a bitmap or not,
so that observation is very strange.  Can you double check, and report
the contents of "/proc/mdstat" in the two situations.

You say you have corruption on your root filesystem.  Presumably that
is not on the raid?  Maybe the drive doesn't get a chance to flush
its cache when you power-off.  Do you get the same corruption if you
simulate a crash without turning off the power. e.g.
   echo b > /proc/sysrq-trigger

Do you get the same corruption in the raid10 if you turn it off
*without* removing a drive first?

NeilBrown


Powering off with all drives present will not cause corruption. When I have a
drive missing and the md device does a full resync, I will get the
corruption. Usually the md partition table is corrupt or gone, and
with the first drive gone it happens more frequently. If the partition
table is not corrupt, then the root filesystem or one of the other
filesystems on the md device will be corrupted. Yes, my root filesystem
is on the raid device. I will update with the bitmap resync rate stuff
later.

Don


Re: Md corruption using RAID10 on linux-2.6.21

2007-05-16 Thread Neil Brown
On Wednesday May 16, [EMAIL PROTECTED] wrote:
...
> 
> The problem arises when I do a drive removal such as sda and then I
> remove power from the system. Most of the time I will have a corrupted
> partition on the md device. Other corruption will be my root partition
> which is an ext3 filesystem. I seem to have a better chance of booting
> at least 1 time with no errors with bitmap turned on, but if I repeat
> the process, I will have corruption as well. Also with bitmap turned
> on, adding the new drive into the md device will take way too long.
> I only get about 3MB per second on the resync. With bitmap turned off,
> I will get between 10MB and 15MB resync rate. Has anyone else seen this
> behavior, or is this situation not tested very often? I would think
> that I shouldn't get corruption with this raid setup and journaling of
> my filesystems? Any help would be appreciated.


The resync rate should be the same whether you have a bitmap or not,
so that observation is very strange.  Can you double check, and report
the contents of "/proc/mdstat" in the two situations.
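
One hedged way to capture the two situations side by side might be (array and
member names are assumed; --grow --bitmap toggles an internal write-intent
bitmap on a running array, if the mdadm in use supports it):

   mdadm --grow /dev/md_d0 --bitmap=internal     # case 1: bitmap on
   mdadm /dev/md_d0 -a /dev/sda2                 # re-add the drive, recovery starts
   cat /proc/mdstat                              # note the speed= figure
   # ... let it finish, then fail/remove the same drive again ...
   mdadm --grow /dev/md_d0 --bitmap=none         # case 2: bitmap off
   mdadm /dev/md_d0 -a /dev/sda2
   cat /proc/mdstat
   cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max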

You say you have corruption on your root filesystem.  Presumably that
is not on the raid?  Maybe the drive doesn't get a chance to flush
its cache when you power-off.  Do you get the same corruption if you
simulate a crash without turning off the power. e.g.
   echo b > /proc/sysrq-trigger
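
A slightly fuller sketch of that crash test (assuming sysrq support is
compiled in):

   echo 1 > /proc/sys/kernel/sysrq    # make sure the trigger is enabled
   echo b > /proc/sysrq-trigger       # immediate reboot: no sync, no unmount
   # after the box comes back, repeat the fdisk/fsck checks on the array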

Do you get the same corruption in the raid10 if you turn it off
*without* removing a drive first?

NeilBrown