Re: only 4 spares and no access to my data

2006-07-10 Thread Karl Voit
Molle Bestefich molle.bestefich at gmail.com writes: From the paste bin: 443: root at ned ~ # mdadm --examine /dev/sd[abcd] shows that all 4 devices are ACTIVE SYNC. Please note that there is no 1 behind sda up to sdd! Then: 568: root at ned ~ # mdadm --examine /dev/sd[abcd]1 ...
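
A quick way to see the discrepancy Karl describes is to compare what mdadm finds on the whole disks with what it finds on the partitions. A minimal sketch, using the same commands as the paste bin (device names as in the thread):

    # Superblocks as seen on the whole disks ...
    mdadm --examine /dev/sd[abcd]
    # ... and on the first partitions; differing UUIDs or event
    # counters between the two runs point to two distinct sets
    # of superblocks.
    mdadm --examine /dev/sd[abcd]1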

Re: only 4 spares and no access to my data

2006-07-10 Thread Henrik Holst
Karl Voit wrote: [snip] Well this is because of the false(?) superblocks of sda-sdd in comparison to sda1 to sdd1. I don't understand this. Do you have more than a single partition on sda? Is sda1 occupying the entire disk? Since the superblock is in the /last/ 128 Kb (I'm assuming 128*1024 bytes) ...
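
For reference: with a v0.90 superblock, md looks in the last 64 KiB-aligned 64 KiB block of the device, which is why a partition that ends at (or very near) the end of the disk can share its superblock bytes with the whole disk. A rough sketch of the offset calculation, assuming the classic formula (the 128 Kb figure above is the poster's own guess; check md_p.h for your kernel if precision matters):

    # Where a v0.90 superblock would sit on /dev/sda (assumed formula).
    SIZE=$(blockdev --getsize64 /dev/sda)   # device size in bytes
    OFFSET=$(( (SIZE & ~65535) - 65536 ))   # last 64 KiB-aligned 64 KiB block
    dd if=/dev/sda bs=512 skip=$(( OFFSET / 512 )) count=8 2>/dev/null \
        | hexdump -C | head
    # A valid superblock starts with magic a92b4efc
    # (stored little-endian, so hexdump shows: fc 4e 2b a9).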

Re: only 4 spares and no access to my data

2006-07-10 Thread Karl Voit
Molle Bestefich molle.bestefich at gmail.com writes: You should probably upgrade at some point; there's always a better chance that developers will look at your problem if you're running the version that they're sitting with. OK, I upgraded my kernel and mdadm: uname -a: Linux ned 2.6.13-grml ...

Re: only 4 spares and no access to my data

2006-07-10 Thread Karl Voit
Henrik Holst henrik.holst at idgmail.se writes: Karl Voit wrote: [snip] Well this is because of the false(?) superblocks of sda-sdd in comparison to sda1 to sdd1. I don't understand this. Me neither *g* This is the hint of a friend of mine, who is a lot more experienced with software RAID.

Re: only 4 spares and no access to my data

2006-07-10 Thread Karl Voit
Henrik Holst henrik.holst at idgmail.se writes: I don't understand this. Do you have more than a single partition on sda? Is sda1 occupying the entire disk? Since the superblock is in the /last/ 128 Kb (I'm assuming 128*1024 bytes), the superblocks should be one and the same. I should have ...

Re: only 4 spares and no access to my data

2006-07-10 Thread Molle Bestefich
Karl Voit wrote: 443: root at ned ~ # mdadm --examine /dev/sd[abcd] shows that all 4 devices are ACTIVE SYNC. Please note that there is no 1 behind sda up to sdd! Yes, you're right. Seems you've created an array/superblocks on both sd[abcd] (line 443 onwards) and on sd[abcd]1 (line ...
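
If the sd[abcd] superblocks really are stale leftovers and the live array is on sd[abcd]1, one conceivable cleanup is to zero the unwanted set -- but this is destructive, and per Henrik's observation above the disk and partition superblocks may be the very same bytes on disk, in which case zeroing one kills the other. A hedged sketch, only after backups and a careful --examine of both sets:

    mdadm --stop /dev/md0                # array must not be running
    mdadm --examine /dev/sda             # confirm THIS superblock is the stale one
    mdadm --zero-superblock /dev/sda     # repeat for sdb, sdc, sdd if appropriate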

Re: only 4 spares and no access to my data

2006-07-10 Thread Molle Bestefich
Henrik Holst wrote: Is sda1 occupying the entire disk? Since the superblock is in the /last/ 128 Kb (I'm assuming 128*1024 bytes), the superblocks should be one and the same. Ack, never considered that. Ugly!!!

Re: only 4 spares and no access to my data

2006-07-10 Thread Molle Bestefich
Karl Voit wrote: OK, I upgraded my kernel and mdadm: uname -a: Linux ned 2.6.13-grml #1 Tue Oct 4 18:24:46 CEST 2005 i686 GNU/Linux That release is 10 months old. Newest release is 2.6.17. You can see changes to MD since 2.6.13 here: ...

Re: only 4 spares and no access to my data

2006-07-10 Thread Karl Voit
Molle Bestefich molle.bestefich at gmail.com writes: Karl Voit wrote: OK, I upgraded my kernel and mdadm: uname -a: Linux ned 2.6.13-grml #1 Tue Oct 4 18:24:46 CEST 2005 i686 GNU/Linux That release is 10 months old. Newest release is 2.6.17. Sorry, my fault. dpkg -i kernel does not ...

Re: Can't get md array to shut down cleanly

2006-07-10 Thread Christian Pernegger
Nope, EVMS is not the culprit. I installed the test system from scratch, EVMS nowhere in sight -- it now boots successfully from a partitionable md array, courtesy of a yaird-generated initrd I adapted for the purpose. Yay! Or not. I get the "md: md_d0 still in use." error again :( This is with ...
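
For anyone else chasing the same message: md refuses to stop an array while something still holds it open, so the shutdown scripts need to unmount (and release) everything on it before stopping it. A minimal sketch for the partitionable array from this thread:

    # Unmount every filesystem living on the array's partitions.
    umount /dev/md_d0p1          # repeat for any other md_d0p* mounts
    # Then stop the array; "still in use" means some opener remains
    # (a mount, swap, a dm/EVMS mapping, or a process holding the node).
    mdadm --stop /dev/md_d0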

ICH7 sata-ahci + software raid warning

2006-07-10 Thread Christian Pernegger
I'm (still) trying to set up an md array on the ICH7 SATA controller of an Intel SE7230NH1-E with 4 WD5000YS disks. On this controller (in ahci mode) I have not yet managed to get a disk marked as failed. - a bad cable just led to hangs and timeouts - pulling the power on one of the SATA drives ...
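
Whether or not the controller ever reports the failure upward, the md side of the path can at least be exercised by failing a member by hand. A minimal sketch (array and member names are placeholders):

    mdadm /dev/md0 --fail /dev/sdb1      # mark the member faulty
    mdadm /dev/md0 --remove /dev/sdb1    # take it out of the array
    cat /proc/mdstat                     # watch the degraded state / rebuild
    # Note: this only tests md's handling, not the ahci driver's
    # ability to detect and report real hardware errors.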

Re: only 4 spares and no access to my data

2006-07-10 Thread Karl Voit
Molle Bestefich molle.bestefich at gmail.com writes: Karl Voit wrote: Before that, I'd like to check again now with the latest kernel and the latest mdadm: # mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 mdadm: No suitable drives found for /dev/md0 [ ... snip ... ]
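
When --assemble answers "No suitable drives found", running it verbosely usually explains, per device, why each one was rejected; a sketch:

    # -v makes mdadm say why each device was not used.
    mdadm --assemble --verbose /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # Cross-check what each superblock claims before forcing anything:
    mdadm --examine /dev/sd[abcd]1 | grep -E 'UUID|Events|State'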

Test feedback 2.6.17.4+libata-tj-stable (EH, hotplug)

2006-07-10 Thread Christian Pernegger
I finally got around to testing 2.6.17.4 with libata-tj-stable-20060710. Hardware: ICH7R in ahci mode + WD5000YS's. EH: much, much better. Before the patch it seemed like errors were only printed to dmesg but never handed up to any layer above. Now md actually fails the disk when I pull ...

Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-10 Thread Justin Piszcz
On Sat, 8 Jul 2006, Neil Brown wrote: On Friday July 7, [EMAIL PROTECTED] wrote: Jul 7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough stripes. Needed 512 Jul 7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array info. -28 So the RAID5 reshape only works if ...
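
The "not enough stripes. Needed 512" message means the stripe cache is too small for the reshape to make progress; the usual remedy is to enlarge it at runtime through sysfs and retry. A minimal sketch (1024 is an example value, not a tuned one):

    # stripe_cache_size counts cache entries (one page per device each).
    echo 1024 > /sys/block/md3/md/stripe_cache_size
    cat /sys/block/md3/md/stripe_cache_size    # confirm the new value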

Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-10 Thread Jan Engelhardt
md3 : active raid5 sdc1[7] sde1[6] sdd1[5] hdk1[2] hdi1[4] hde1[3] hdc1[1] hda1[0] 2344252416 blocks super 0.91 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU] [>....................] reshape = 0.2% (1099280/390708736) finish=1031.7min speed=6293K/sec It is working, thanks!

Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-10 Thread Justin Piszcz
On Tue, 11 Jul 2006, Jan Engelhardt wrote: md3 : active raid5 sdc1[7] sde1[6] sdd1[5] hdk1[2] hdi1[4] hde1[3] hdc1[1] hda1[0] 2344252416 blocks super 0.91 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU] [>....................] reshape = 0.2% (1099280/390708736) finish=1031.7min ...

Re: Test feedback 2.6.17.4+libata-tj-stable (EH, hotplug)

2006-07-10 Thread Tejun Heo
Christian Pernegger wrote: The fact that the disk had changed minor numbers after it was plugged back in bugs me a bit (was sdc before, sde after). Additionally udev removed the sdc device file, so I had to manually recreate it to be able to remove the 'faulty' disk from its md array. That's ...
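
If udev has already removed the node for a disk that md still lists as a member, the node can be recreated by hand so mdadm has a device number to address, which appears to be what Christian did. A sketch, assuming the traditional numbering (verify the major/minor against /proc/partitions or dmesg first):

    # sd disks are block major 8; sdc is conventionally minor 32.
    mknod /dev/sdc b 8 32
    mdadm /dev/md0 --remove /dev/sdc1    # now the stale member can be dropped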