As I recall, you usually can't put /boot on an LVM volume, because the bootloader needs to read /boot before the LVM driver is loaded. Doing so will leave your system unable to boot.
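As a rough sketch of what I mean (device names, sizes, and the vg0/root names here are just examples, not anything from your setup): keep /boot on a plain md RAID1 that the bootloader can read, and hand everything else to LVM.

```shell
# Hypothetical layout: /boot on a plain md RAID1 (readable by the
# bootloader), the rest given to LVM. Device names are examples only.

# /boot: small RAID1 from sda1/sdb1, formatted directly -- no LVM here.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0           # bootloader reads this without LVM support

# Everything else: RAID1 from sda2/sdb2, used as an LVM physical volume.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 20G -n root vg0
mkfs.ext3 /dev/vg0/root      # / can live inside LVM; /boot cannot
```

(Obviously these commands need root and real disks, so treat them as an illustration rather than something to paste in.)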
A quick search turned up this: http://www.tldp.org/HOWTO/LVM-HOWTO/benefitsoflvmsmall.html. It claims bootloaders don't understand LVM partitions (so don't put /boot in an LVM). As for the errors in fdisk, my understanding is that you shouldn't use fdisk to modify /dev/md, because RAID devices don't contain true partitions. You can use fdisk to modify your other partitions, but if you touch /dev/md and write, it could blow away your LVM setup. Again, someone else feel free to chime in if I've said something incorrect.

--
Brett Johnson
simpleroute | 1690 Williston Road | South Burlington, VT 05401
tel: 802-578-3983 | email: [email protected] | web: simpleroute.com

On Fri, Sep 24, 2010 at 11:27 PM, Rion D'Luz <[email protected]> wrote:
> Howdy doo y'all
>
> Elbow deep in WD drives and the particulars of SATA, green, blue, black,
> TLER (Time Limited Error Recovery), jumper settings, etc., to set up a
> server for LVM over RAID, I got to asking myself this question:
>
> For simple RAID1 arrays created either by distro installers or by
> convention, what is the sense of multiple partitions? If sda1/sdb1 and
> sda2/sdb2 are assigned as devices to create two arrays, what happens if
> one partition gets munged up (say, a bad superblock)? Am I going to fail
> and remove it and then try to fsck it into submission? I'm going to toss
> it, because drives are cheap and I'll never trust it anyway!
> And no data loss; HD recovery and the usual suspects of ext tools aside,
> I'll never underestimate the capacity for losing data; that's what
> backups are for :-)
>
> I was thinking of /boot as md0 (sda2/sdb2), / as md1 (sda3/sdb3), and
> LVM as md2 (sda5/sdb5).
> (Can anyone provide a better reason for /boot to be part of the /
> filesystem?)
> But if any of those partitions goes south, I'd still have to fail and
> remove the good ones on that drive as well, no?
>
> So, why not just throw one big partition on each drive, creating one
> array md0 (8e, fd), pvcreate on that, and use volumes?
> Is it that a 500GB or 1TB drive is too large to maintain an (ext4 or
> reiser) filesystem? Where does grub put the MBR? Device or partition?
> Is LVM recovery a PITA?
>
> A second question is related to aligning cylinder boundaries, fdisk, and
> "Disk /dev/md? doesn't contain a valid partition table".
> Most forum posts suggest that message is normal and OK and to ignore it;
> but partitions that already had a filesystem on them before raiding seem
> particularly vulnerable to fsck errors.
> It's been suggested that putting a FS on each partition before raiding
> creates big potential problems, and that
> "A better way is to create partitions without a filesystem, then format
> to the desired filesystem after raid creation",
> because the central issue with (resizing and) fsck'ing large drives is
> the time to rebuild and (presumably for raid5 or raid10) the
> vulnerability of the data during that time.
>
> "Yes, I am afraid it's perfectly normal. I had to do it on a 500GB array
> lately and it took 10 hours.
> .......................
> If it's not a problem you'd be better off starting over and trying to
> get things right from the beginning. Large arrays are a pain in the neck
> anyway; it will take ages to fsck a filesystem on reboot (a lot faster
> with ext4), and if one of your drives fails your data will be
> jeopardized as long as the replacement or spare is syncing... Yesterday's
> technologies like RAID and ext3 are out of sync with today's
> hardware..."
>
> Thanks a mega for your inputs,
>
> Rion
>
> --
> web: http://dluz.com/
> AIM/Jabber/MSN: riondluz
> Google: xmpp:[email protected]<xmpp%[email protected]>
> email: riondluz_at_gmail.com
> Phone: 802.644.2255
> http://www.linkedin.com/pub/6/126/769
>
> CLI forever!
>
> L I N U X .~.
> Choice /V\
> of a GNU /( )\
> Generation ^^-^^
> POSIX RULES
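P.S. For what it's worth, the "format after raid creation" workflow your quote describes, plus the fail/remove dance you asked about, would look roughly like this (device names are my own examples, not from your setup):

```shell
# Sketch of the quoted advice: build the array from raw (type fd)
# partitions first, then put the filesystem on the md device itself,
# never on the member partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0          # filesystem lives on the array, not the members

# And replacing a munged member: fail it, remove it, add the
# replacement, then watch the resync.
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdb1   # resync starts; redundancy is gone until it finishes
cat /proc/mdstat                 # shows rebuild progress
```

These need root and real block devices, so take them as an illustration of the sequence rather than a recipe.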
