When I try to create a RAID1 array with a ver 1.0 superblock using mdadm
2.2 I'm getting:
WARNING - superblock isn't sized correctly
Looking at the code (and adding a bit more debugging) it is clear that
all 3 checks in super1.c's calc_sb_1_csum() "make sure I can
count..." test fail.
Is this a
On 4/7/06, Neil Brown [EMAIL PROTECTED] wrote:
On Friday April 7, [EMAIL PROTECTED] wrote:
Seeing this hasn't made it into a released kernel yet, I might just
change it. But I'll have to make sure that old mdadm's don't mess
things up ... I wonder how I will do that :-(
Thanks for
On 4/12/06, Neil Brown [EMAIL PROTECTED] wrote:
One thing that is on my todo list is supporting shared raid1, so that
several nodes in the cluster can assemble the same raid1 and access it
- providing that the clients all do proper mutual exclusion as
e.g. OCFS does.
Very cool... that would
On 7/25/06, Paul Clements [EMAIL PROTECTED] wrote:
This patch (tested against 2.6.18-rc1-mm1) adds a new sysfs interface
that allows the bitmap of an array to be dirtied. The interface is
write-only, and is used as follows:
echo 1000 > /sys/block/md2/md/bitmap
(dirty the bit for chunk 1000)
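A rough way to confirm the write took effect, assuming the array has an
internal bitmap and that /dev/sdd1 is (hypothetically) one of md2's members:
mdadm --examine-bitmap /dev/sdd1   # the reported "dirty" chunk count should now be non-zero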
On 7/26/06, Paul Clements [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
I tracked down the thread you referenced and these posts (by you)
seem to summarize things well:
http://marc.theaimsgroup.com/?l=linux-raid&m=16563016418&w=2
http://marc.theaimsgroup.com/?l=linux-raid&m
On 7/26/06, Paul Clements [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
I tracked down the thread you referenced and these posts (by you)
seem to summarize things well:
http://marc.theaimsgroup.com/?l=linux-raid&m=16563016418&w=2
http://marc.theaimsgroup.com/?l=linux-raid&m
Aside from this write-mostly sysfs support, is there a way to toggle
the write-mostly bit of an md member with mdadm? I couldn't identify
a clear way to do so.
It'd be nice if mdadm --assemble would honor --write-mostly...
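A possible interim workaround is the per-device sysfs state attribute; a
sketch, assuming md0 with a member sdb1 (the keywords are as documented for
the kernel's md sysfs interface):
echo writemostly > /sys/block/md0/md/dev-sdb1/state    # set write-mostly on an active member
echo -writemostly > /sys/block/md0/md/dev-sdb1/state   # clear it again
Newer mdadm releases also apply --write-mostly to devices listed after it on
an --add command line, though whether --assemble honors it depends on the
version.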
On 6/1/06, NeilBrown [EMAIL PROTECTED] wrote:
It appears in
On 8/5/06, Mike Snitzer [EMAIL PROTECTED] wrote:
Aside from this write-mostly sysfs support, is there a way to toggle
the write-mostly bit of an md member with mdadm? I couldn't identify
a clear way to do so.
It'd be nice if mdadm --assemble would honor --write-mostly...
I went ahead
FYI, with both mdadm ver 2.4.1 and 2.5.2 I can't mdadm --create with a
ver1 superblock and a write intent bitmap on x86_64.
running: mdadm --create /dev/md2 -e 1.0 -l 1 --bitmap=internal -n 2
/dev/sdd --write-mostly /dev/nbd2
I get: mdadm: RUN_ARRAY failed: Invalid argument
Mike
When using raid1 with one local member and one nbd member (marked as
write-mostly) MD hangs when trying to format /dev/md0 with ext3. Both
'cat /proc/mdstat' and 'mdadm --detail /dev/md0' hang indefinitely.
I've not tried to reproduce on 2.6.18 or 2.6.19ish kernel.org kernels
yet but this issue
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
On Tuesday June 12, [EMAIL PROTECTED] wrote:
I can provide more detailed information; please just ask.
A complete sysrq trace (all processes) might help.
I'll send it to you off list.
thanks,
Mike
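For reference, such a trace can be gathered along these lines (a sketch;
assumes sysrq is available on the box):
echo 1 > /proc/sys/kernel/sysrq    # make sure sysrq is enabled
echo t > /proc/sysrq-trigger       # dump the stack of every task to the kernel log
dmesg > /tmp/sysrq-tasks.txt       # capture the result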
On 6/13/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
...
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
On Tuesday June 12, [EMAIL PROTECTED] wrote:
I can provide more detailed information; please just ask.
A complete
Is the goal to have the MD device be directly accessible from all
nodes? This strategy seems flawed in that it speaks to updating MD
superblocks, and then in-memory Linux data structures, across a cluster.
The reality is that if we're talking about shared storage, the MD management
only needs to happen in
On 6/13/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 6/13/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
...
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
On Tuesday June 12, [EMAIL PROTECTED] wrote:
I can provide more detailed
On 6/14/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
On 6/13/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 6/13/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
...
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote
On 6/14/07, Paul Clements [EMAIL PROTECTED] wrote:
Bill Davidsen wrote:
Second, AFAIK nbd hasn't been working in a while. I haven't tried it in ages,
but was told it wouldn't work with smp and I kind of lost interest. If
Neil thinks it should work in 2.6.21 or later I'll test it, since I have
a
On 6/14/07, Paul Clements [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
Here are the steps to reproduce reliably on SLES10 SP1:
1) establish a raid1 mirror (md0) using one local member (sdc1) and
one remote member (nbd0)
2) power off the remote machine, thereby severing nbd0's connection
3
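A sketch of steps 1 and 2 above (host, port, and device names are
placeholders only):
nbd-client remote-host 2000 /dev/nbd0                                  # step 1: attach the remote member
mdadm --create /dev/md0 -l 1 -n 2 /dev/sdc1 --write-mostly /dev/nbd0   # ... and build the mirror
# step 2: power off remote-host, severing nbd0's connection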
On 6/14/07, Paul Clements [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
On 6/14/07, Paul Clements [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
Here are the steps to reproduce reliably on SLES10 SP1:
1) establish a raid1 mirror (md0) using one local member (sdc1) and
one remote member
On 6/1/06, NeilBrown [EMAIL PROTECTED] wrote:
When an array has a bitmap, a device can be removed and re-added
and only blocks changed since the removal (as recorded in the bitmap)
will be resynced.
Neil,
Does the same apply when a bitmap-enabled raid1's member goes faulty?
Meaning even if a
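For reference, the remove/re-add cycle described above looks roughly like
this (the member name is only an example); with a bitmap in place, only the
chunks dirtied in between should be resynced:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ... writes continue to the surviving member, dirtying the bitmap ...
mdadm /dev/md0 --re-add /dev/sdb1    # resync only the dirtied chunks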
On 7/23/07, Neil Brown [EMAIL PROTECTED] wrote:
On Saturday July 21, [EMAIL PROTECTED] wrote:
Could you share the other situations where a bitmap-enabled raid1
_must_ perform a full recovery?
When you add a new drive. When you create a new bitmap. I think that
should be all.
- Correct
On 8/3/07, Neil Brown [EMAIL PROTECTED] wrote:
On Monday July 23, [EMAIL PROTECTED] wrote:
On 7/23/07, Neil Brown [EMAIL PROTECTED] wrote:
Can you test this out and report a sequence of events that causes a
full resync?
Sure, using an internal-bitmap-enabled raid1 with 2 loopback
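A sketch of such a test rig, assuming two small image files backing loop
devices (/dev/md9 is just a placeholder):
dd if=/dev/zero of=/tmp/member0 bs=1M count=128
dd if=/dev/zero of=/tmp/member1 bs=1M count=128
losetup /dev/loop0 /tmp/member0
losetup /dev/loop1 /tmp/member1
mdadm --create /dev/md9 -l 1 -n 2 --bitmap=internal /dev/loop0 /dev/loop1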
On 8/17/07, Mike Accetta [EMAIL PROTECTED] wrote:
Neil Brown writes:
On Wednesday August 15, [EMAIL PROTECTED] wrote:
Neil Brown writes:
On Wednesday August 15, [EMAIL PROTECTED] wrote:
...
This happens in our old friend sync_request_write()? I'm dealing with
Yes, that
On 9/19/07, Wiesner Thomas [EMAIL PROTECTED] wrote:
Has there been any progress on this? I think I saw it, or something
similar, during some testing of recent 2.6.23-rc kernels; one mke2fs took
about 11 min, longer than all the others (~2 min), and it was not
repeatable. I worry that process
mdadm 2.4.1 through 2.5.6 works. mdadm-2.6's "Improve allocation and
use of space for bitmaps in version1 metadata"
(199171a297a87d7696b6b8c07ee520363f4603c1) would seem like the
offending change. Using 1.2 metadata works.
I get the following using the tip of the mdadm git repo or any other
version
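Until that is resolved, the workaround implied above is to create with 1.2
metadata instead, i.e. the same command that fails with -e 1.0 earlier in
the thread:
mdadm --create /dev/md2 -e 1.2 -l 1 --bitmap=internal -n 2 /dev/sdd --write-mostly /dev/nbd2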
On 10/17/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
mdadm 2.4.1 through 2.5.6 works. mdadm-2.6's "Improve allocation and
use of space for bitmaps in version1 metadata"
(199171a297a87d7696b6b8c07ee520363f4603c1) would seem like the
offending change. Using 1.2 metadata
On 10/18/07, Neil Brown [EMAIL PROTECTED] wrote:
On Wednesday October 17, [EMAIL PROTECTED] wrote:
mdadm 2.4.1 through 2.5.6 works. mdadm-2.6's "Improve allocation and
use of space for bitmaps in version1 metadata"
(199171a297a87d7696b6b8c07ee520363f4603c1) would seem like the
offending
On 10/18/07, Goswin von Brederlow [EMAIL PROTECTED] wrote:
Mike Snitzer [EMAIL PROTECTED] writes:
All,
I have repeatedly seen that when a 2 member raid1 becomes degraded,
and IO continues to the lone good member, if the array is then
stopped and reassembled you get:
md
On 10/19/07, Neil Brown [EMAIL PROTECTED] wrote:
On Friday October 19, [EMAIL PROTECTED] wrote:
I'm using a stock 2.6.19.7 that I then backported various MD fixes to
from 2.6.20 - 2.6.23... this kernel has worked great until I
attempted v1.0 sb w/ bitmap=internal using mdadm 2.6.x.
On 10/18/07, Neil Brown [EMAIL PROTECTED] wrote:
Sorry, I wasn't paying close enough attention and missed the obvious.
.
On Thursday October 18, [EMAIL PROTECTED] wrote:
On 10/18/07, Neil Brown [EMAIL PROTECTED] wrote:
On Wednesday October 17, [EMAIL PROTECTED] wrote:
mdadm 2.4.1
On 10/19/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 10/18/07, Neil Brown [EMAIL PROTECTED] wrote:
Sorry, I wasn't paying close enough attention and missed the obvious.
.
On Thursday October 18, [EMAIL PROTECTED] wrote:
On 10/18/07, Neil Brown [EMAIL PROTECTED] wrote
On 10/22/07, Neil Brown [EMAIL PROTECTED] wrote:
On Friday October 19, [EMAIL PROTECTED] wrote:
On 10/19/07, Neil Brown [EMAIL PROTECTED] wrote:
On Friday October 19, [EMAIL PROTECTED] wrote:
I'm using a stock 2.6.19.7 that I then backported various MD fixes to
from 2.6.20 -
lvm2's MD v1.0 superblock detection doesn't work at all (because it
doesn't use v1 sb offsets).
I've tested the attached patch to work on MDs with v0.90.0, v1.0,
v1.1, and v1.2 superblocks.
please advise, thanks.
Mike
Index: lib/device/dev-md.c
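The v1.0 case is the awkward one since that superblock lives near the end of
the device; a quick manual check of the offset the patch has to use might
look like this (offset formula assumed from mdadm's super1.c, so treat it as
a sketch):
dev=/dev/sdd1
sectors=$(blockdev --getsz $dev)        # device size in 512-byte sectors
sb_sector=$(( (sectors - 16) & ~7 ))    # v1.0 sb: at least 8K before the end, 4K-aligned
dd if=$dev bs=512 skip=$sb_sector count=1 2>/dev/null | od -An -tx1 | head -1
# a v1.x superblock begins with magic 0xa92b4efc, i.e. "fc 4e 2b a9" on disk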
On 10/23/07, Alasdair G Kergon [EMAIL PROTECTED] wrote:
On Tue, Oct 23, 2007 at 11:32:56AM -0400, Mike Snitzer wrote:
I've tested the attached patch to work on MDs with v0.90.0, v1.0,
v1.1, and v1.2 superblocks.
I'll apply this, thanks, but need to add comments (or reference) to explain
On 10/24/07, John Stoffel [EMAIL PROTECTED] wrote:
Bill == Bill Davidsen [EMAIL PROTECTED] writes:
Bill John Stoffel wrote:
Why do we have three different positions for storing the superblock?
Bill Why do you suggest changing anything until you get the answer to
Bill this question? If you
On Dec 7, 2007 12:42 AM, NeilBrown [EMAIL PROTECTED] wrote:
Currently an md array with a write-intent bitmap does not update
that bitmap to reflect successful partial resync. Rather the entire
bitmap is updated when the resync completes.
This is because there is no guarantee that resync
Under 2.6.22.16, I physically pulled a SATA disk (/dev/sdac, connected to
an aacraid controller) that was acting as the local raid1 member of
/dev/md30.
Linux MD didn't see an /dev/sdac1 error until I tried forcing the issue by
doing a read (with dd) from /dev/md30:
Jan 21 17:08:07 lab17-233
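The read used to force the issue was nothing special; something along these
lines (count is arbitrary):
dd if=/dev/md30 of=/dev/null bs=1M count=64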
cc'ing Tanaka-san given his recent raid1 BUG report:
http://lkml.org/lkml/2008/1/14/515
On Jan 21, 2008 6:04 PM, Mike Snitzer [EMAIL PROTECTED] wrote:
Under 2.6.22.16, I physically pulled a SATA disk (/dev/sdac, connected to
an aacraid controller) that was acting as the local raid1 member
On Jan 22, 2008 12:29 AM, Mike Snitzer [EMAIL PROTECTED] wrote:
cc'ing Tanaka-san given his recent raid1 BUG report:
http://lkml.org/lkml/2008/1/14/515
On Jan 21, 2008 6:04 PM, Mike Snitzer [EMAIL PROTECTED] wrote:
Under 2.6.22.16, I physically pulled a SATA disk (/dev/sdac, connected
and as such Linux (both
scsi and raid1) is completely unaware of any disconnect of the
physical device.
thanks,
Mike
-Original Message-
From: Mike Snitzer [mailto:[EMAIL PROTECTED]
Sent: Tuesday, January 22, 2008 7:10 PM
To: linux-raid@vger.kernel.org; NeilBrown
Cc: [EMAIL PROTECTED]; K