Re: once again raid5
Hi!

Neil Brown wrote:
> Your best bet would be:
>    mdadm --create /dev/md2 --level 5 -n 4 /dev/hdi1 /dev/hdk1 missing /dev/hdo1
> and hope that the data you find on md2 isn't too corrupted. You might be
> lucky, but I'm not holding my breath - sorry.

This worked, as far as I can see, but there is trouble with the filesystems
(no superblocks found, neither ext3 nor xfs) :-(

-snip-
~# mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Sun Apr  3 12:34:42 2005
     Raid Level : raid5
     Array Size : 735334848 (701.27 GiB 752.98 GB)
    Device Size : 245111616 (233.76 GiB 250.99 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sun Apr  3 12:34:42 2005
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 39e90883:c8f824d7:16732793:9ba70289
         Events : 0.60470023

    Number   Major   Minor   RaidDevice State
       0      56        1        0      active sync   /dev/hdi1
       1      57        1        1      active sync   /dev/hdk1
       2       0        0        -      removed
       3      89        1        3      active sync   /dev/hdo1
-snap-

Thanks,
Ronny

PS: For people who are interested:

-snip-
server:~# mount -t ext3 /dev/mapper/raid5--volume-home /mnt/data
mount: wrong fs type, bad option, bad superblock on /dev/mapper/raid5--volume-home,
       or too many mounted file systems
       (could this be the IDE device where you in fact use
       ide-scsi so that sr0 or sda or so is needed?)
server:~# mount -t xfs /dev/mapper/raid5--volume-data /mnt/data
mount: /dev/mapper/raid5--volume-data: can't read superblock
-snap-

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
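One quick consistency check on a recreated array's geometry: for RAID5, the reported Array Size must equal Device Size times (Raid Devices - 1). A sketch in plain shell arithmetic, using the values from the --detail output above:

```shell
# RAID5 usable capacity = per-device data size * (raid devices - 1).
# Values (in 1K blocks) copied from the mdadm --detail output above.
device_size=245111616
raid_devices=4
array_size=$(( device_size * (raid_devices - 1) ))
echo "$array_size"    # should print 735334848, the reported Array Size
```

If that check passes but the filesystems still show no superblock, the disk order or chunk size given to --create may not match the original array; while experimenting, read-only checks such as `e2fsck -n` or `xfs_repair -n` are safer than repeated mount attempts.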
Re: once again raid5
On Thursday March 31, [EMAIL PROTECTED] wrote:
> Hi,
> we still have trouble with our raid5 array. You can find the history of
> the fault in detail in my other postings (11.3.2005). I will show you my
> attempts.
> There are 4 discs (Maxtor 250GB) in a raid5 array. One disc failed and
> we sent it back to Maxtor. Now the array consists of 3 discs. I tried to
> reassemble it:
>    mdadm -A --run --force /dev/md2 /dev/hdi1 /dev/hdk1 /dev/hdo1
> but I got an error:
> -snip-
> mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument
> -snap-

It looks like hdi1 doesn't think it is an active part of the array. It is
just a spare. It is as though the array was not fully synced when hdm1 (?)
failed.

Looking back through previous emails, it looks like you had 2 drives fail
in a raid5 array. This means you lose. :-(

Your best bet would be:

   mdadm --create /dev/md2 --level 5 -n 4 /dev/hdi1 /dev/hdk1 missing /dev/hdo1

and hope that the data you find on md2 isn't too corrupted. You might be
lucky, but I'm not holding my breath - sorry.

NeilBrown
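How one can see that a member has fallen out of sync: each disk's 0.90 superblock carries an event counter, visible with `mdadm --examine /dev/hdX1`, and a disk whose counter lags behind the newest one is stale. A sketch of that comparison, with hypothetical counter values standing in for real --examine output:

```shell
# Hypothetical event counters standing in for real `mdadm --examine`
# output on each member. A member whose counter lags the newest one
# is out of date, and md will not run the array with it as an active
# device -- it gets treated as a spare, as happened to hdi1 here.
events_hdi1=60470001
events_hdk1=60470023
events_hdo1=60470023
newest=$events_hdk1
if [ "$events_hdi1" -lt "$newest" ]; then
    echo "hdi1 is stale"
fi
```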
Re: once again raid5
Hello,

Neil Brown wrote:
> It looks like hdi1 doesn't think it is an active part of the array. It
> is just a spare. It is as though the array was not fully synced when
> hdm1 (?) failed.

Mhm.

> Looking back through previous emails, it looks like you had 2 drives
> fail in a raid5 array. This means you lose. :-(

hdm has been stable for two weeks now with 3 reallocated sectors... so
maybe not much data is lost.

> Your best bet would be:
>    mdadm --create /dev/md2 --level 5 -n 4 /dev/hdi1 /dev/hdk1 missing /dev/hdo1
> and hope that the data you find on md2 isn't too corrupted. You might be
> lucky, but I'm not holding my breath - sorry.

Okay. But isn't it better to use --build instead of --create? In the man
page (printed 5.4.2004) I can see:

-snip-
mdadm --build device ... --raid-devices=Z devices

This usage is similar to --create. The difference is that it creates a
legacy array without a superblock. With these arrays there is no
difference between initially creating the array and subsequently
assembling the array, except that hopefully there is useful data there
in the second case.
-snap-

Thank you :-) ... so, there would be no problems with the superblocks?

Regards,
Ronny
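Two points bear on the --build question, both assumptions to verify against your mdadm man page: --build only accepts superblock-less legacy levels (linear, raid0, raid1, multipath), so it cannot build a raid5 at all; and the 0.90 superblock that --create writes sits in a 64 KiB block near the end of each device, so it does not overwrite filesystem data at the start. Where that superblock lands, per the kernel's MD_NEW_SIZE_SECTORS formula, sketched in shell (the raw partition size here is hypothetical, borrowed from the Device Size reported above):

```shell
# 0.90 superblock offset, per the kernel's MD_NEW_SIZE_SECTORS macro:
# (size & ~(128 - 1)) - 128, everything in 512-byte sectors
# (128 sectors = 64 KiB reserved at the end of the device).
size_kib=245111616                  # hypothetical raw partition size, 1K blocks
size_sectors=$(( size_kib * 2 ))    # convert 1K blocks to 512-byte sectors
sb_offset=$(( (size_sectors & ~127) - 128 ))
echo "superblock at sector $sb_offset of $size_sectors"
```

So the new superblocks written by --create land in the reserved area at the tail of each partition, which is why the data blocks themselves have a chance of surviving the recreate.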