Howdy doo, y'all. Elbow deep in WD drives and the particulars of SATA, Green/Blue/Black, TLER (Time Limited Error Recovery), jumper settings, etc., to set up a server for LVM over RAID, I got to asking myself this question:
For simple raid1 arrays created either by distro installers or by convention,
what is the sense of multiple partitions? If sda1/sdb1 and sda2/sdb2 are
assigned as devices to create two arrays, what happens if one partition gets
munged up (say, a bad superblock)? Am I going to fail and remove it and then
try to fsck it into submission? No, I'm going to toss it, because drives are
cheap and I'll never trust it anyway!
And no data loss; HD recovery and the usual suspects of ext tools aside, I'll
never underestimate the capacity for losing data; that's what backups are
for :-)
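By "fail and remove" I mean roughly this (a sketch only; /dev/md0 and sdb1 as
the munged member are placeholder names):

  mdadm /dev/md0 --fail /dev/sdb1     # mark the bad member faulty
  mdadm /dev/md0 --remove /dev/sdb1   # pull it out of the array
  # ...swap in the new (cheap) drive, partition it to match...
  mdadm /dev/md0 --add /dev/sdb1      # resync starts automatically
  cat /proc/mdstat                    # watch the rebuild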
I was thinking of /boot as md0 (sda2/sdb2), / as md1 (sda3/sdb3), and LVM as
md2 (sda5/sdb5).
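Roughly this, as a sketch (device names from above; mdadm metadata defaults
assumed):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # /boot
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  # /
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5  # LVM PV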
(Can anyone provide a better reason for /boot to be part of the / filesystem?)
But if any of those partitions goes south and the drive has to be replaced,
I'd still have to fail and remove the good ones on that drive as well, no?
So, why not just throw one big partition on each drive, create one array md0
(8e/fd), pvcreate on that, and use logical volumes?
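Something like this sketch, I imagine (vg0 and the LV names/sizes are
placeholders I made up):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  pvcreate /dev/md0              # the whole array becomes one PV
  vgcreate vg0 /dev/md0
  lvcreate -L 20G -n root vg0    # carve out volumes as needed
  lvcreate -L 100G -n home vg0
  mkfs.ext4 /dev/vg0/root        # filesystems go on the LVs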
Is it that a 500GB or 1TB (ext4 or reiser) filesystem is too large to
maintain? Where does grub put the MBR: device or partition?
Is LVM recovery a PITA?
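For what it's worth, my understanding so far (correct me if I'm wrong) is that
grub-install targets the whole device, i.e. the MBR, and for raid1 you run it
against both drives so either one can boot:

  grub-install /dev/sda
  grub-install /dev/sdb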
A second question relates to aligning cylinder boundaries, fdisk, and the
message "Disk /dev/md? doesn't contain a valid partition table".
Most forum posts suggest that message is normal and safe to ignore;
but partitions that already had a filesystem on them before raiding seem
particularly vulnerable to fsck errors.
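If I've read those posts right, the explanation is that the md device carries
the filesystem directly, with no partition table for fdisk to find; a quick
sketch of how I'd check:

  fdisk -l /dev/md0   # -> "doesn't contain a valid partition table"
  blkid /dev/md0      # ...yet blkid reports the filesystem that's there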
It's been suggested that putting an FS on each partition before raiding
creates big potential problems, and that "a better way is to create partitions
without a filesystem, then format to the desired filesystem after raid
creation", because the central issue with (resizing and) fsck'ing large drives
is the time to rebuild and (presumably for raid5 or raid10) the vulnerability
of the data during that time.
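As a sketch of that suggested order (placeholder device names again):

  fdisk /dev/sda      # create sda1, type fd; no mkfs yet
  fdisk /dev/sdb      # mirror the same layout on sdb
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mkfs.ext4 /dev/md0  # format the array, not the member partitions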
" Yes, I am afraid it's perfectly normal. I had to do it on a 500GB array
lately and it took 10 hours.
.......................
If it's not a problem you'd be better of starting over and trying to get things
right from the beginning.
Large arrays are a pain in the neck anyway, it will take ages to fsck a
filesystem on reboot (a lot faster with ext4), and if one of your drive fails
your data will be jeopardized as long as the replacement or spare is syncing...
Yesterday's technologies like raid and ext3 are out of sync with today's
hardware... "
Thanks a Mega for your inputs,
Rion
--
web: http://dluz.com/
AIM/Jabber/MSN: riondluz
Google: xmpp:[email protected]
email: riondluz_at_gmail.com
Phone: 802.644.2255
http://www.linkedin.com/pub/6/126/769
CLI forever!
L I N U X .~.
Choice /V\
of a GNU /( )\
Generation ^^-^^
POSIX
RULES
