On Wed, Jan 09, 2008 at 07:16:34PM +1100, CaT wrote:
But I suspect that --assemble --force would do the right thing.
Without more details, it is hard to say for sure.
I suspect so as well, but throwing caution to the wind irks me w.r.t. this
raid array. :)
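The forced assembly suggested above would be run roughly as follows; the array and member device names here are hypothetical placeholders, not taken from the thread, and the commands need root and the real devices:

```shell
# Stop the array first if it came up partially assembled (array name assumed).
mdadm --stop /dev/md1

# --force asks mdadm to assemble even when the members' superblock event
# counts disagree slightly.  Device names below are examples only --
# substitute the actual member partitions.
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sde2 /dev/sdf2
```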
Sorry. Not to be a pain but
I'm sorry- is this an inappropriate list to ask for help? There seemed
to be a fair amount of that when I searched the archives, but I don't
want to bug developers with my problems!
Please let me know if I should find another place to ask for help (and
please let me know where that might
On Thu, 10 Jan 2008, Neil Brown wrote:
On Wednesday January 9, [EMAIL PROTECTED] wrote:
On Sun, 2007-12-30 at 10:58 -0700, dean gaudet wrote:
i have evidence pointing to d89d87965dcbe6fe4f96a2a7e8421b3a75f634d1
On Jan 10, 2008 12:13 AM, dean gaudet [EMAIL PROTECTED] wrote:
w.r.t. dan's cfq comments -- i really don't know the details, but does
this mean cfq will misattribute the IO to the wrong user/process? or is
it just a concern that CPU time will be spent on someone's IO? the latter
is fine to
Jed Davidow wrote:
I have a RAID5 (5+1spare) setup that works perfectly well until I
reboot. I have 6 drives (two different models) partitioned to give me
2 arrays, md0 and md1, that I use for /home and /var respectively.
When I reboot, the system assembles each array, but swaps out what was
Hi Bill,
Maybe I'm using the wrong words...
In this instance, on the previous boot, md1 was assembled from
sd[efbac]2 and sdg2 was the spare. When I rebooted it assembled from
sd[efbgc]2 and had no spare (appears that sdg was swapped in for sda).
Since sdg2 had been the spare, the array is
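One common way to make assembly independent of the kernel's device-name ordering between boots is to identify the array by UUID in mdadm.conf rather than by member names. A minimal sketch (the UUID below is a placeholder; the real one comes from `mdadm --detail /dev/md1`):

```shell
# /etc/mdadm/mdadm.conf -- assemble by array UUID, not by device names,
# so sdX letters shuffling between boots does not matter.
DEVICE partitions
ARRAY /dev/md1 UUID=00000000:00000000:00000000:00000000
```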
Jed Davidow wrote:
I'm sorry- is this an inappropriate list to ask for help? There seemed
to be a fair amount of that when I searched the archives, but I don't
want to bug developers with my problems!
Please let me know if I should find another place to ask for help (and
please let me know
On Thursday January 10, [EMAIL PROTECTED] wrote:
On Wed, Jan 09, 2008 at 07:16:34PM +1100, CaT wrote:
But I suspect that --assemble --force would do the right thing.
Without more details, it is hard to say for sure.
I suspect so as well, but throwing caution to the wind irks me w.r.t.
On Thursday January 10, [EMAIL PROTECTED] wrote:
It looks to me like md inspects and attempts to assemble after each
drive controller is scanned (from dmesg, there appears to be a failed
bind on the first three devices after they are scanned, and then again
when the second controller is
On Fri, Jan 11, 2008 at 07:21:42AM +1100, Neil Brown wrote:
On Thursday January 10, [EMAIL PROTECTED] wrote:
On Wed, Jan 09, 2008 at 07:16:34PM +1100, CaT wrote:
But I suspect that --assemble --force would do the right thing.
Without more details, it is hard to say for sure.
I
Hello,
I am starting to dig into the Block subsystem to try and uncover the
reason for some data I lost recently. My situation is that I have
multiple block drivers on top of each other and am wondering how the
effects of a raid 5 rebuild would affect the block devices above it.
The layers are
distro: Ubuntu 7.10
Two files show up...
85-mdadm.rules:
# This file causes block devices with Linux RAID (mdadm) signatures to
# automatically cause mdadm to be run.
# See udev(8) for syntax
SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*", \
RUN+="watershed"
One quick question about those rules. The 65-mdadm rule looks like it
checks ACTIVE arrays for filesystems, and the 85 rule assembles arrays.
Shouldn't they run in the other order?
distro: Ubuntu 7.10
Two files show up...
85-mdadm.rules:
# This file causes block devices with Linux RAID
On Thursday January 10, [EMAIL PROTECTED] wrote:
distro: Ubuntu 7.10
Two files show up...
85-mdadm.rules:
# This file causes block devices with Linux RAID (mdadm) signatures to
# automatically cause mdadm to be run.
# See udev(8) for syntax
SUBSYSTEM=="block", ACTION=="add|change",
On Thursday January 10, [EMAIL PROTECTED] wrote:
One quick question about those rules. The 65-mdadm rule looks like it
checks ACTIVE arrays for filesystems, and the 85 rule assembles arrays.
Shouldn't they run in the other order?
They are fine. The '65' rule applies to arrays. I.e. it
(Sorry- yes it looks like I posted an incorrect dmesg extract)
$ egrep 'sd|md|raid|scsi' /var/log/dmesg.0
[ 36.112449] md: linear personality registered for level -1
[ 36.117197] md: multipath personality registered for level -4
[ 36.121795] md: raid0 personality registered for level 0
[
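As an aside, the alternation pattern in a command like the egrep above must be quoted, or the shell interprets the `|` characters as pipes. A minimal illustration on made-up sample log lines:

```shell
# Unquoted, `egrep sd|md|raid|scsi file` becomes a shell pipeline.
# Quoting makes the alternation a single regex argument.
# Sample input only; grep -E is the portable spelling of egrep.
printf 'md: raid5 personality registered\nusb 1-1: new device found\n' \
  | grep -E 'sd|md|raid|scsi'
# -> md: raid5 personality registered
```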
On Thursday January 10, [EMAIL PROTECTED] wrote:
(Sorry- yes it looks like I posted an incorrect dmesg extract)
This still doesn't seem to match your description.
I see:
[ 41.247389] md: bind<sdf1>
[ 41.247584] md: bind<sdb1>
[ 41.247787] md: bind<sda1>
[ 41.247971] md: bind<sdc1>
[
On Thursday January 10, [EMAIL PROTECTED] wrote:
Hello,
I am starting to dig into the Block subsystem to try and uncover the
reason for some data I lost recently. My situation is that I have
multiple block drivers on top of each other and am wondering how the
effects of a raid 5 rebuild
On Thursday January 10, [EMAIL PROTECTED] wrote:
On Jan 10, 2008 12:13 AM, dean gaudet [EMAIL PROTECTED] wrote:
w.r.t. dan's cfq comments -- i really don't know the details, but does
this mean cfq will misattribute the IO to the wrong user/process? or is
it just a concern that CPU time
On Fri, 11 Jan 2008, Neil Brown wrote:
Thanks.
But I suspect you didn't test it with a bitmap :-)
I ran the mdadm test suite and it hit a problem - easy enough to fix.
damn -- i lost my bitmap 'cause it was external and i didn't have things
set up properly to pick it up after a reboot :)
if