On 10/19/07, Mike Snitzer <[EMAIL PROTECTED]> wrote:
> On 10/18/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> >
> > Sorry, I wasn't paying close enough attention and missed the obvious.
> > .....
> >
> > On Thursday October 18, [EMAIL PROTECTED] wrote:
> > > On 10/18/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> > > > On Wednesday October 17, [EMAIL PROTECTED] wrote:
> > > > > mdadm 2.4.1 through 2.5.6 work. mdadm-2.6's "Improve allocation and
> > > > > use of space for bitmaps in version1 metadata"
> > > > > (199171a297a87d7696b6b8c07ee520363f4603c1) would seem like the
> > > > > offending change.  Using 1.2 metadata works.
> > > > >
> > > > > I get the following using the tip of the mdadm git repo or any other
> > > > > version of mdadm 2.6.x:
> > > > >
> > > > > # mdadm --create /dev/md2 --run -l 1 --metadata=1.0 --bitmap=internal
> > > > > -n 2 /dev/sdf --write-mostly /dev/nbd2
> > > > > mdadm: /dev/sdf appears to be part of a raid array:
> > > > >     level=raid1 devices=2 ctime=Wed Oct 17 10:17:31 2007
> > > > > mdadm: /dev/nbd2 appears to be part of a raid array:
> > > > >     level=raid1 devices=2 ctime=Wed Oct 17 10:17:31 2007
> > > > > mdadm: RUN_ARRAY failed: Input/output error
> >                                ^^^^^^^^^^^^^^^^^^
> >
> > This means there was an IO error.  i.e. there is a block on the device
> > that cannot be read from.
> > It worked with earlier versions of mdadm because they used a much
> > smaller bitmap.  With the patch you mention in place, mdadm tries
> > harder to find a good location and good size for a bitmap and to
> > make sure that space is available.
> > The important fact is that the bitmap ends up at a different
> > location.
> >
> > You have a bad block at that location, it would seem.
>
> I'm a bit skeptical of that being the case, considering I get this
> error on _any_ pair of disks I try in an environment where I'm
> mirroring across servers that each have access to 8 of these disks.
> Each of the 8 mirrors consists of a local member and a remote (nbd)
> member.  I can't see all 16 disks having the very same bad block(s) at
> the end of the disk ;)
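>
> One way to rule that out directly would be to read the tail of each
> device and see whether it errors, e.g. (the 1MB read size is just a
> generous guess at where the superblock and bitmap could land;
> blockdev and dd are standard tools):
>
> # SECTORS=$(blockdev --getsz /dev/sdf)
> # dd if=/dev/sdf of=/dev/null bs=512 skip=$(($SECTORS - 2048)) count=2048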
>
> It feels to me like the calculation you're making isn't leaving
> adequate room for the 128K bitmap without hitting the superblock...
> but I don't have hard proof yet ;)
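>
> Back-of-envelope, assuming the v1.0 superblock sits within the last
> 8-16K of the device (4K-aligned) and the internal bitmap is placed
> just before it:
>
>   superblock:    ~8K from the end of the device
>   128K bitmap:   would have to start ~136K from the end
>
> If the size/offset calculation doesn't keep the bitmap clear of that
> region, writing it would either overlap the superblock or run off the
> end of the device, and either could plausibly surface as this kind of
> I/O error.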

To further test this I used 2 local sparse 732456960K loopback devices
and attempted to create the raid1 in the same manner.  It failed in
exactly the same way.  This should cast further doubt on the bad block
theory, no?
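
For reference, a sketch of the loopback setup (paths are illustrative;
the backing files are sparse, so they take almost no real space):

# dd if=/dev/zero of=/tmp/md-test0 bs=1K count=1 seek=732456959
# dd if=/dev/zero of=/tmp/md-test1 bs=1K count=1 seek=732456959
# losetup /dev/loop0 /tmp/md-test0
# losetup /dev/loop1 /tmp/md-test1
# mdadm --create /dev/md2 --run -l 1 --metadata=1.0 --bitmap=internal \
    -n 2 /dev/loop0 /dev/loop1

Since the backing files are freshly created and sparse, every sector
reads back as zeros, so a real bad block can't be what's failing here.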

I'm using a stock 2.6.19.7 kernel to which I backported various MD
fixes from 2.6.20 -> 2.6.23...  this kernel has worked great until I
attempted a v1.0 sb w/ bitmap=internal using mdadm 2.6.x.

But would you like me to try a stock 2.6.22 or 2.6.23 kernel?

Mike