Christian Pernegger wrote:
The fact that the disk had changed device names (and minor numbers) after it
was plugged back in bugs me a bit (it was sdc before, sde after). Additionally,
udev removed the sdc device file, so I had to recreate it manually to be
able to remove the 'faulty' disk from its md array.
That's b
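The manual recreation step described above can be sketched. Under the classic
Linux sd numbering scheme, whole disks use major 8 and minor = disk_index * 16,
with partitions counted from there; the device and array names below are taken
from this thread, but treat the exact numbers as assumptions for your system:

```shell
# Classic sd block device numbering: major 8, minor = disk_index * 16 + part.
# sdc is the third disk (index 2), so sdc1 is minor 2*16 + 1 = 33.
disk_index=2
part=1
minor=$(( disk_index * 16 + part ))
echo "$minor"
# With that, the vanished node can be recreated and the member removed
# (both need root; shown as comments since they touch real devices):
#   mknod /dev/sdc1 b 8 33
#   mdadm /dev/md3 --remove /dev/sdc1
```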
On Tue, 11 Jul 2006, Jan Engelhardt wrote:
md3 : active raid5 sdc1[7] sde1[6] sdd1[5] hdk1[2] hdi1[4] hde1[3] hdc1[1] hda1[0]
      2344252416 blocks super 0.91 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      [>....................]  reshape =  0.2% (1099280/390708736) finish=1031.7min speed=6293K/sec
> md3 : active raid5 sdc1[7] sde1[6] sdd1[5] hdk1[2] hdi1[4] hde1[3] hdc1[1] hda1[0]
>       2344252416 blocks super 0.91 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
>       [>....................]  reshape =  0.2% (1099280/390708736) finish=1031.7min speed=6293K/sec
>
> It is working, th
On Sat, 8 Jul 2006, Neil Brown wrote:
On Friday July 7, [EMAIL PROTECTED] wrote:
Jul 7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough stripes. Needed 512
Jul 7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array info. -28
So the RAID5 reshape only works if y
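For context on the log above: -28 is -ENOSPC, and in the 2.6.17-era raid5 code
the reshape is refused unless the stripe cache can hold at least
4 * (chunk_size / 4 KiB page) stripes — which with the 512k chunk from this
thread works out to exactly the "Needed 512" in the message. A sketch (array
name md3 assumed from the thread):

```shell
# 2.6.17 raid5 reshape precondition (as I read the code; treat as a sketch):
#   stripe_cache_size >= 4 * (chunk_size / page_size)
chunk_kib=512
page_kib=4
needed=$(( 4 * chunk_kib / page_kib ))
echo "$needed"
# Remedy before retrying the reshape (needs root, array name assumed):
#   echo 512 > /sys/block/md3/md/stripe_cache_size
```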
I finally got around to testing 2.6.17.4 with libata-tj-stable-20060710.
Hardware: ICH7R in ahci mode + WD5000YS's.
EH: much, much better. Before the patch it seemed like errors were
only printed to dmesg but never handed up to any layer above. Now md
actually fails the disk when I pul
Molle Bestefich gmail.com> writes:
> Karl Voit wrote:
> > Before that, I'd like to check again now with
> > the latest kernel and the latest mdadm:
> >
> > # mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
> > mdadm: No suitable drives found for /dev/md0
> > [ ... snip: same mes
Karl Voit wrote:
Before that, I'd like to check again now with
the latest kernel and the latest mdadm:
# mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: No suitable drives found for /dev/md0
[ ... snip: same message with --run and --force ... ]
No idea what that means,
I'm (still) trying to set up an md array on the ICH7 SATA controller of
an Intel SE7230NH1-E with 4 WD5000YS disks.
On this controller (in ahci mode) I have not yet managed to get a disk
marked as failed.
- a bad cable just led to hangs and timeouts
- pulling the power on one of the SATA drives (whi
Nope, EVMS is not the culprit.
I installed the test system from scratch, EVMS nowhere in sight -- it
now boots successfully from a partitionable md array, courtesy of a
yaird-generated initrd I adapted for the purpose. Yay!
Or not. I get the "md: md_d0 still in use." error again :(
This is wit
Molle Bestefich gmail.com> writes:
> Karl Voit wrote:
> > OK, I upgraded my kernel and mdadm:
> >
> > "uname -a":
> > Linux ned 2.6.13-grml #1 Tue Oct 4 18:24:46 CEST 2005 i686 GNU/Linux
>
> That release is 10 months old.
> Newest release is 2.6.17.
Sorry, my fault. "dpkg -i does not boot the
Karl Voit wrote:
OK, I upgraded my kernel and mdadm:
"uname -a":
Linux ned 2.6.13-grml #1 Tue Oct 4 18:24:46 CEST 2005 i686 GNU/Linux
That release is 10 months old.
Newest release is 2.6.17.
You can see changes to MD since 2.6.13 here:
http://www.kernel.org/git/?p=linux%2Fkernel%2Fgit%2Fstable
Molle Bestefich gmail.com> writes:
> Karl Voit wrote:
>
> > root ned ~ # mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1\
> > /dev/sdc1 /dev/sdd1
> > mdadm: cannot open device /dev/sda1: Device or resource busy
> > mdadm: /dev/sda1 has no superblock - assembly aborted
>
> Odd message. Does "ls
Henrik Holst wrote:
Is sda1 occupying the entire disk? Since the superblock is in the /last/
"128Kb" (I'm assuming 128*1024 bytes), the superblocks should be one and
the same.
Ack, never considered that.
Ugly!!!
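The "last 128Kb" detail can be made concrete. If I recall the 0.90 format
correctly, the superblock sits 64 KiB below the device size rounded down to a
64 KiB boundary (mdadm's MD_NEW_SIZE_SECTORS); the disk size below is an
illustrative assumption, not from the thread:

```shell
# 0.90 superblock offset in 512-byte sectors (64 KiB = 128 sectors):
#   offset = (device_size_in_sectors rounded down to 128) - 128
size_sectors=976773168          # example: a ~465 GiB disk (assumption)
sb_offset=$(( (size_sectors & ~127) - 128 ))
echo "$sb_offset"
# Because the offset is computed from the *end* of whatever device you hand
# to mdadm, a partition that starts at a nonzero sector generally yields a
# different absolute offset than the whole disk -- which is how superblocks
# on both sd[abcd] and sd[abcd]1 can coexist.
```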
Karl Voit wrote:
> > 443: root ned ~ # mdadm --examine /dev/sd[abcd]
>
> Shows that all 4 devices are ACTIVE SYNC
Please note that there is no "1" behind sda up to sdd!
Yes, you're right.
Seems you've created an array/superblocks on both sd[abcd] (line 443
onwards), and on sd[abcd]1 (lin
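When stale superblocks exist on both the whole disks and the partitions, the
usual cleanup is mdadm's --zero-superblock on the devices that should not
carry one. The real command is destructive, so this sketch (device names taken
from the thread) only prints what would be run:

```shell
# Dry run: echo prints each command instead of executing it.
# Drop the echo (and run as root) only after backing up and double-checking
# which set of superblocks -- whole-disk or partition -- is the stale one.
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    echo mdadm --zero-superblock "$d"
done
```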
Henrik Holst idgmail.se> writes:
> I don't understand this. Do you have more than a single partition on sda?
> Is sda1 occupying the entire disk? since the superblock is the /last/
> "128Kb" (I'm assuming 128*1024 bytes) the superblocks should be one and
> the same.
I should have mentioned that I
Henrik Holst idgmail.se> writes:
> Karl Voit wrote:
> [snip]
> > Well this is because of the false(?) superblocks of sda-sdd in comparison
> > to sda1 to sdd1.
>
> I don't understand this.
Me neither *g*
This is a hint from a friend of mine, who is a lot more experienced with
sw-raids.
> Do yo
Molle Bestefich gmail.com> writes:
> You should probably upgrade at some point, there's always a better
> chance that devels will look at your problem if you're running the
> version that they're sitting with..
OK, I upgraded my kernel and mdadm:
"uname -a":
Linux ned 2.6.13-grml #1 Tue Oct 4 1
Karl Voit wrote:
[snip]
> Well this is because of the false(?) superblocks of sda-sdd in comparison to
> sda1 to sdd1.
I don't understand this. Do you have more than a single partition on sda?
Is sda1 occupying the entire disk? since the superblock is the /last/
"128Kb" (I'm assuming 128*1024 bytes)
Molle Bestefich gmail.com> writes:
> From the paste bin:
>
> > 443: root ned ~ # mdadm --examine /dev/sd[abcd]
>
> Shows that all 4 devices are ACTIVE SYNC
Please note that there is no "1" behind sda up to sdd!
> Then:
>
> > 568: root ned ~ # mdadm --examine /dev/sd[abcd]1
>
> Suddenl