Re: md raid 5 not working
Kanwar Ranbir Sandhu wrote:
> I realized I had once used the entire drives in an md RAID 5 set instead
> of building the RAID 5 on partitions. I had outdated md superblocks on
> /dev/sd[bde]!

In fact, I forgot to mention in my reply that this was a little suspicious:

  Preferred Minor : 2

I guess you had md2 on entire disks, and then switched to md3 for partitions.

--
Roberto Ragusa
mail at robertoragusa.it
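A quick way to see which old array a leftover superblock belonged to is to read it back with mdadm; a minimal sketch, assuming the stale superblock on the raw disk is still readable:

  # the Preferred Minor field records which md device (md2, md3, ...)
  # this superblock was last a member of
  mdadm --examine /dev/sdb | grep 'Preferred Minor'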
Re: md raid 5 not working
Kanwar Ranbir Sandhu wrote:
> /dev/sdb:
>     0     0       8       16        0      active sync   /dev/sdb
>     1     1       0        0        1      faulty removed
>     2     2       8       64        2      active sync   /dev/sde
>          Events : 13144
>
> /dev/sdd:
>     0     0       8       16        0      active sync   /dev/sdb
>     1     1       8       48        1      active sync   /dev/sdd
>     2     2       8       64        2      active sync   /dev/sde
>          Events : 11958
>
> /dev/sde:
>     0     0       0        0        0      removed
>     1     1       0        0        1      faulty removed
>     2     2       8       64        2      active sync   /dev/sde
>          Events : 13146

Your three disks have three different ideas about the state of the RAID
array. The event counters are probably saying that:
- sdd was removed from the array at 11958 (faulty!)
- sdb was removed from the array at 13144
- we are now at 13146

I think you should try to assemble the array in degraded mode with only sdb
and sde, which have almost the same age; sdd is probably stale and should be
re-added later (assuming the disk is really working). I'm not sure which
options are needed to assemble in degraded mode using only sdb and sde, so I
will avoid giving you bad advice which could destroy your data. :-)

You may retry the rescue CD; does it start the array in degraded mode? If so,
you can try to re-add the missing disk and let it sync. Maybe everything will
be clean on the next boot. But be careful: as soon as the array is
reassembled, run a read-only fsck to be sure you are reassembling something
consistent (I personally would not trust any array involving sdd).

--
Roberto Ragusa
mail at robertoragusa.it
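For the record, the general shape of a degraded assemble with mdadm looks roughly like the sketch below; /dev/mdX is only a placeholder for whatever array number applies here, the filesystem is assumed to sit directly on the array, and none of this has been tested against these particular disks:

  # assemble from the two members with matching event counts; --run starts
  # the array even though one member is missing, --force accepts the small
  # event-count mismatch between sdb and sde
  mdadm --assemble --run --force /dev/mdX /dev/sdb /dev/sde

  # read-only filesystem check before trusting the data
  fsck -n /dev/mdX

  # only once the data looks sane, re-add the stale disk and let it resync
  mdadm /dev/mdX --add /dev/sdd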
Re: md raid 5 not working
On Sat, 2009-12-26 at 15:28 -0500, Kanwar Ranbir Sandhu wrote:
> My Linux RAID skills/experience aren't that deep, so I'm not sure how to
> fix this. I'd appreciate any pointers. Some details:
>
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1               1       60801   488384001   fd  Linux raid autodetect
>
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdd1               1       60801   488384001   fd  Linux raid autodetect
>
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sde1               1       60801   488384001   fd  Linux raid autodetect

Solved my problem! As you can see here, the md superblock is on the first
primary partition of each of these drives. But Linux wasn't seeing those
partitions: a simple 'ls -l /dev/sd*' only showed me /dev/sdb, /dev/sdd, and
/dev/sde. The reason I provided mdadm details on the whole drives was that I
couldn't see /dev/sdb1, etc., so I couldn't give md information on the
partitions.

I realized I had once used the entire drives in an md RAID 5 set instead of
building the RAID 5 on partitions. I had outdated md superblocks on
/dev/sd[bde]! I suppose when I rebuilt the array properly, I didn't wipe the
drives completely. Basically, the old md superblocks were confusing the
kernel.

To fix this, I ran the following from the rescue CD:

  mdadm --zero-superblock --force /dev/sdb
  mdadm --zero-superblock --force /dev/sdd
  mdadm --zero-superblock --force /dev/sde

When I rebooted, /dev/md3 was detected properly and came up without any
problems. Sweet!

Thanks to everyone who replied. I didn't think I could solve this one on my
own, but the man page plus my realization of what was going on helped
immensely.

Regards,

Ranbir
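If anyone runs into the same confusion, a quick way to spot a leftover whole-disk superblock is to examine the raw disk and the partition separately; a minimal sketch, assuming mdadm is available on the rescue CD:

  # a stale superblock left over from a whole-disk array shows up here
  mdadm --examine /dev/sdb

  # the member of the "real" array is the partition
  mdadm --examine /dev/sdb1

With the old 0.90 metadata format (which these superblocks appear to use, given the Preferred Minor field), the superblock sits near the end of the device, which is why it can survive repartitioning and still confuse autodetection later.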
Re: md raid 5 not working
Kanwar Ranbir Sandhu wrote:
> Solved my problem! [...] I realized I had once used the entire drives in an
> md RAID 5 set instead of building the RAID 5 on partitions. I had outdated
> md superblocks on /dev/sd[bde]! [...] When I rebooted, /dev/md3 was detected
> properly and came up without any problems. Sweet!

For future reference, there is also the linux-raid mailing list, which
handles these issues regardless of release.

--
Bill Davidsen
david...@tmr.com
  We have more to fear from the bungling of the incompetent than from
  the machinations of the wicked. - from Slashdot