Brian Martin gave an (as always) excellent talk about LVM
Thursday night.  As I get older and more cynical, I look at
software that seems very nice and wonder "what happens if
this goes wrong?  How do I debug it?"

My concerns about LVM center on rapidly dealing with mistakes
and hardware failure in stressful situations (ever had a
laptop drive go flaky 4 hours before a presentation, and
copied the presentation to a live spare with minutes to
spare?).  Brian gave the example of moving data from a
failing drive to a working drive in the same volume group.
But if all you have for checking data integrity is fsck at
the file system level, and a file system is spread across
many drives in a volume group, how do you tell which block
errors correspond to which drive - quickly?
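
As it happens, that extent-to-drive bookkeeping is queryable:
lvs can report which physical volume backs each segment of a
logical volume, so a block number out of fsck can be turned
into a drive name with a little arithmetic.  Here's a minimal
Python sketch of the idea, assuming a plain linear LV with the
file system directly on it; the function names and the
vg0/home example are mine, and I'm relying on lvm2's
seg_start_pe / seg_size_pe / devices report fields:

  #!/usr/bin/env python3
  # Sketch: map an fsck-reported block number on a linear LV
  # back to the physical volume (drive) holding it.  Assumes
  # the file system sits directly on a linear LV; names are
  # illustrative, not a polished tool.

  import subprocess

  def lv_segments(vg, lv):
      """(start_extent, extent_count, device) per LV segment."""
      out = subprocess.run(
          ["lvs", "--segments", "--noheadings", "--separator", "|",
           "-o", "seg_start_pe,seg_size_pe,devices", f"{vg}/{lv}"],
          capture_output=True, text=True, check=True).stdout
      segs = []
      for line in out.strip().splitlines():
          start, size, devs = (f.strip() for f in line.split("|"))
          segs.append((int(start), int(size), devs))
      return segs

  def find_pv_for_block(vg, lv, fs_block, fs_block_size, extent_size):
      """Name the PV backing a file system block.  Sizes are in
      bytes; extent_size per `vgs -o vg_extent_size`."""
      extent = (fs_block * fs_block_size) // extent_size
      for start, size, devs in lv_segments(vg, lv):
          if start <= extent < start + size:
              return devs          # e.g. "/dev/sdb1(0)"
      return None

  # Example: fsck complained about block 123456 of a 4 KiB-block
  # file system on vg0/home, with the default 4 MiB extents.
  print(find_pv_for_block("vg0", "home", 123456, 4096, 4 * 1024**2))

Not something I'd want to be writing at 2 AM next to a dying
drive, but at least the mapping is there to be asked for.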

For those of us with file systems smaller than the largest
drives available, there is something to be said for buying
a drive that is 3x larger than the current data set, and
buying a new and larger drive when we fill the current drive
to 80% or more.  For those of us using laptops, adding a
second drive usually isn't practical.  So in many situations
LVM doesn't buy much.  OTOH, if one needs a 20TB file system
on a server-class machine, and assuming there are simple ways
to identify starting-to-fail drives, then LVM sounds great.
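
As for "simple ways": even something as crude as polling SMART
health for every PV in the group would go a long way.  A rough
Python sketch, assuming smartmontools is installed and the PVs
sit on ordinary /dev/sdXN partitions; the helpers and the
naive partition-to-disk stripping are my own:

  #!/usr/bin/env python3
  # Sketch: flag any PV in a volume group whose disk no longer
  # passes its SMART overall-health check.  Assumes smartctl
  # from smartmontools; helper names are illustrative.

  import re
  import subprocess

  def vg_pvs(vg):
      """Device paths of the PVs in a volume group."""
      out = subprocess.run(
          ["pvs", "--noheadings", "-o", "pv_name",
           "--select", f"vg_name={vg}"],
          capture_output=True, text=True, check=True).stdout
      return [line.strip() for line in out.splitlines() if line.strip()]

  def smart_healthy(partition):
      """True if SMART health passes for the underlying disk."""
      # Naive: /dev/sdb1 -> /dev/sdb (won't handle NVMe names).
      disk = re.sub(r"\d+$", "", partition)
      out = subprocess.run(["smartctl", "-H", disk],
                           capture_output=True, text=True).stdout
      return "PASSED" in out

  for pv in vg_pvs("vg0"):
      print(pv, "ok" if smart_healthy(pv) else "<-- check this drive")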

I run Red Hat-derived distros, but bypass the default LVM
disk setup and use plain (non-LVM) partitions instead.  It
sounds like I should keep doing that until I cannot fit my
system on one drive.

Keith

-- 
Keith Lofstrom          [email protected]         Voice (503)-520-1993
KLIC --- Keith Lofstrom Integrated Circuits --- "Your Ideas in Silicon"
Design Contracting in Bipolar and CMOS - Analog, Digital, and Scan ICs