Re: Advice on raid/lvm

2009-04-16 Thread Henrique de Moraes Holschuh
On Thu, 09 Apr 2009, Mark Allums wrote:
 Is there an advantage of software raid10 over multiple raid1 arrays
 joined with LVM?  Capacity can be dynamically added with pairs of disks.


 Only one: simplicity.  It would make it easier for someone to  
 understand, in the beginning.

Well, md-raid10 is actually raid 10, not striping stacked on top of a
mirror.  It has the concept of near and far copies of raid blocks, and one
of the possible configurations reduces to the same layout you get when you
stripe data over two mirror sets.  It is supposed to be able to perform a
lot better than device-mapper on top of two md raid1 sets ever could, but I
didn't test that to check.
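
To make the near/far distinction concrete, here is a simplified model (my own illustration, not md's actual chunk math, which also involves chunk sizes and offsets) of the "near=2" placement; with four disks it reduces exactly to a stripe over two mirror pairs:

```python
def near2_placement(block, ndisks=4):
    """Simplified model of md raid10 'near=2' placement: each data
    block gets two copies on adjacent disks, advancing round-robin.
    Illustrative only -- real md layouts work in chunks, not blocks."""
    first = (block * 2) % ndisks
    return (first, (first + 1) % ndisks)

# With 4 disks, even blocks land on the (0,1) pair and odd blocks on
# the (2,3) pair -- the same layout as RAID0 over two RAID1 mirrors:
assert near2_placement(0) == (0, 1)
assert near2_placement(1) == (2, 3)
assert near2_placement(2) == (0, 1)
```

The "far" layout instead spreads the second copies across all disks at a different offset, which is where the equivalence to RAID1+0 breaks down.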

What I *dislike* in md-raid10 is that it is more difficult to choose the
right disks to remove when you want to remove all possible disks from an
array without losing data, and also that either the kernel or the tools
didn't let me create a raid set with half the devices missing last time I
tried that (but that may have been fixed already on latest mdadm + latest
kernel).

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: Advice on raid/lvm

2009-04-16 Thread Henrique de Moraes Holschuh
On Fri, 10 Apr 2009, Mark Allums wrote:
 I also think that RAID 10 is pretty simple to understand.  Take four  
 disks.  Make two pairs.  Mirror each pair (RAID 1), then stripe across  
 the pairs (RAID 0).  It's just a combination.

That's just the most basic layout for raid-10...  It can get a LOT more
confusing (and probably less useful) than that.
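
For reference, the extra layouts are selected with mdadm's --layout flag. A sketch (device names are made up, and --create is destructive; check mdadm(8) before running anything like this):

```shell
# 'near' layout, 2 copies -- with 4 disks this reduces to RAID1+0:
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# 'far' (f2) and 'offset' (o2) layouts place the copies differently to
# favour sequential reads; these are the "more confusing" variants:
mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=4 \
    /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
```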

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Advice on raid/lvm

2009-04-11 Thread James Youngman
On Wed, Apr 8, 2009 at 9:04 PM, Miles Fidelman
mfidel...@traversetechnologies.com wrote:

 I'm currently in my third day of rebuilding a machine that had /boot and /
 on an LVM volume on raided disks.  After one drive died, I ended up in a
 weird mode where LVM was mounting one of the component drives, rather than
 the raid volume - with the long result being that I'm reinstalling the o/s
 from scratch and hoping that my backups are good enough that I haven't lost
 any user data.

I had a similar experience a while back (an MD RAID1 set degraded and
LVM just accepted one of the two mirrored drives as a PV), though I
didn't need to reinstall and didn't lose any data.  Still, this is
certainly an area that could have worked better.

James.





Re: Advice on raid/lvm

2009-04-10 Thread randall

Tapani Tarvainen wrote:

On Thu, Apr 09, 2009 at 08:50:34PM -0500, Mark Allums (m...@allums.com) wrote:

Douglas A. Tutty wrote:

Is there an advantage of software raid10 over multiple raid1 arrays
joined with LVM?  Capacity can be dynamically added with pairs of disks.

Only one: simplicity.  It would make it easier for someone to
understand, in the beginning.

Curiously, I would've thought the opposite, that is, bunch of separate
raid1 arrays would be easier to understand than raid10.
Raid1 is conceptually simple compared to any other raid level,
and if you're using lvm anyway, it doesn't make much difference
whether physical volumes are disks or disk pairs.

Anybody want to claim being a newbie and having an opinion here?

  

ehhmm,,

not a complete newbie anymore but the memory is still fresh ;)

one might argue that at least with etch, raid1 + LVM would be easier
since raid10 was not covered by the installer.
you might also say that people often become familiar with
raid1 first, since it is the simplest affordable solution for
the simplest scenarios.
raid1 vs raid10: i think i would say that raid1 is easier to understand
since it is 50% less complex.


but then again, those 3 points for raid1 being easier assume one
already knows and understands LVM; if not, it becomes a different story.


i still remember that when i was fresh and checking the difference
between raid levels on the wiki, doing things like this

http://www.songshu.org/index.php/setup-raid-10

seemed complex to me at first, but so did LVM with its groups and
volumes, so i think it depends on your previous experience and comfort
level what really is easier.


--

www.songshu.org
Just another collection of nuts






Re: Advice on raid/lvm

2009-04-10 Thread Miles Bader
Tapani Tarvainen deb...@tapanitarvainen.fi writes:
 What load of gunk will be dumped into / to take it bigger than 500 MB?

 I've got a box where /lib takes 200MB now, of which /lib/modules is
 140MB - and that's per kernel, during kernel updates it temporarily
 doubles, taking /lib to 340MB or thereabouts.

 I don't see it at all impossible that the 500MB I have for / there now
 will get too small before the machine is retired.

I've got a box with a 200MB root partition, so I'm very sensitive to
such bloat.  500MB seems like luxury!  :-)

For a short period (2.6.28), the size of the kernel module tree in
debian bloated up dramatically (they increased the max cpus to 512 and
some per-module data structures used a fair amount of initialized space
proportional to the number of cpus), but due to data structure
improvements, it's back down to ~80MB per kernel in 2.6.29...

-Miles

-- 
Neighbor, n. One whom we are commanded to love as ourselves, and who does all
he knows how to make us disobedient.





Re: Advice on raid/lvm

2009-04-10 Thread Alex Samad
On Fri, Apr 10, 2009 at 08:05:32AM +0300, Tapani Tarvainen wrote:
 On Thu, Apr 09, 2009 at 11:09:15AM -0500, Boyd Stephen Smith Jr. 
 (b...@iguanasuicide.net) wrote:
 
   Is there an advantage of software raid10 over multiple raid1 arrays
   joined with LVM?
  
  Speed.
  
  Not much, if any.  LVM can stripe data across pvs ala RAID-0.
 
 Well, then you are doing software raid10, even though not with md. :-)
 But yes, the above description is ambiguous, and indeed lvm can
 do striping and mirroring (could before md was invented, or even
 before Linux ever got either).
 
 I once compared raid0 with md vs. lvm striping (being used with
 the latter from hp-ux), and decided to go with the former.
 There wasn't much difference in speed as I recall, but md allowed
 making bootable raid1 and it seemed better supported in Linux.
 

the last time I looked at raid1 with md vs. lvm, the conclusions I reached
were: md uses 2 block devices, whereas setting up a raid1 lv required 3
blocks from the vg.

Plus the day-to-day functionality / management of lvm raid1 is not
close to that of md.
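
The third block Alex mentions is, I believe, the mirror log: classic LVM mirroring defaults to a persistent on-disk log, which needs a small extent on a third PV. A sketch with a hypothetical volume group name:

```shell
# LVM classic mirroring: 2 data legs plus 1 small log device by default,
# so the vg needs 3 PVs available:
lvcreate -m 1 -L 10G -n mirrorlv vg0

# An in-memory log avoids the third device, at the cost of a full
# resync of the mirror after every reboot:
lvcreate -m 1 --mirrorlog core -L 10G -n mirrorlv vg0
```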

Alex


-- 
Dick Cheney and I do not want this nation to be in a recession. We want 
anybody who can find work to be able to find work.

- George W. Bush
12/05/2000
60 minutes II, CBS




Re: Advice on raid/lvm

2009-04-10 Thread Mark Allums

randall wrote:

Tapani Tarvainen wrote:

On Thu, Apr 09, 2009 at 08:50:34PM -0500, Mark Allums (m...@allums.com) wrote:

Douglas A. Tutty wrote:

Is there an advantage of software raid10 over multiple raid1 arrays
joined with LVM?  Capacity can be dynamically added with pairs of disks.

Only one: simplicity.  It would make it easier for someone to
understand, in the beginning.

Curiously, I would've thought the opposite, that is, bunch of separate
raid1 arrays would be easier to understand than raid10.
Raid1 is conceptually simple compared to any other raid level,
and if you're using lvm anyway, it doesn't make much difference
whether physical volumes are disks or disk pairs.

Anybody want to claim being a newbie and having an opinion here?

ehhmm,,

not a complete newbie anymore but the memory is still fresh ;)

one might argue that at least with etch, raid1 + LVM would be easier
since raid10 was not covered by the installer.
you might also say that people often become familiar with
raid1 first, since it is the simplest affordable solution for
the simplest scenarios.
raid1 vs raid10: i think i would say that raid1 is easier to understand
since it is 50% less complex.


but then again, those 3 points for raid1 being easier assume one
already knows and understands LVM; if not, it becomes a different story.


i still remember that when i was fresh and checking the difference
between raid levels on the wiki, doing things like this

http://www.songshu.org/index.php/setup-raid-10

seemed complex to me at first, but so did LVM with its groups and
volumes, so i think it depends on your previous experience and comfort
level what really is easier.




I of course meant raid 10 is simpler than 92 LVM volume groups, under
any physical scheme.


raid 1 is simplest, other than a single basic disk, but I seem to 
remember the option being RAID 5 or something else, and I was trying to 
say that I like RAID 10 better than I like RAID 5.


I also think that RAID 10 is pretty simple to understand.  Take four 
disks.  Make two pairs.  Mirror each pair (RAID 1), then stripe across 
the pairs (RAID 0).  It's just a combination.


And then, you can do any logical thing you want on top of that.

Mark Allums











Re: Advice on raid/lvm

2009-04-09 Thread Tapani Tarvainen
On Thu, Apr 09, 2009 at 02:17:03PM +1000, Alex Samad (a...@samad.com.au) wrote:

  Aiming to get a couple of 1Tb drives to migrate the 3x500Gb RAID5
  array to a RAID1 and use two of the 500Gb drives for the new boot
  drive with LVM (With /boot / /home and so on on it).
 
 so you are going to have 
 
 md0 = raid1 2 x 500Gb 
 md1 = raid1 2 x 1Tb

Not necessarily, it might make sense to partition the drives
in smaller pieces and raid them separately.
But:

 I would create 3 partitions on the 500GB drives
 500M /boot (ext2 or ext3)
 20G / (ext3)

Could you explain the rationale behind this?
It doesn't make any sense to me.
The only (?) point in having a separate /boot
is when you can't boot directly off /,
like when it's in LVM or RAID5 or encrypted
or something like that.

If you are going to make non-LVM, non-encrypted, RAID1 ext3 /,
you can boot off it directly, without separate /boot.

If you are making separate /boot, you might as well
put / under LVM (which is what I'd do - and indeed
what I have done with just about every machine I have
installed since I've forgotten when).

-- 
Tapani Tarvainen





Re: Advice on raid/lvm

2009-04-09 Thread Tapani Tarvainen
On Wed, Apr 08, 2009 at 06:02:26PM -0300, Henrique de Moraes Holschuh 
(h...@debian.org) wrote:

 On Wed, 08 Apr 2009, Miles Fidelman wrote:
  One suggestion: think very carefully about whether you really want to do  
  this.
 
 I second that.  It is really not smart to have / (or /boot) in LVM if you
 can help it.
 
 I suggest that a small (1GB-4GB) partition for simple md-raid1 be used for
 / instead.  That won't give you any headaches, including on disaster
 recovery scenarios.

I would respectfully disagree. There are significant advantages in
putting / in LVM, it is a well-supported, standard configuration,
and avoiding it only gives false sense of security: in a disaster
situation you need to know basics of mdadm and lvm anyway, if
you use them.

Yes, leaving / out of LVM does give you a more complete
environment to work with when system crashes in a way that LVM
(the volume group containing /) is inaccessible.
It doesn't help much though unless you also leave /usr out,
and I've lost count on how often I've enlarged /usr and
been grateful it was under LVM.

All the essential tools for managing software raid and lvm are,
however, available even without / - indeed they're in initrd,
and if you can't use them, you're out of luck anyway.

On the other hand, having / in LVM means:
* you can enlarge / when necessary;
* you can encrypt / if desired;
* you can use other RAID configurations besides RAID1 with /;
* you don't have to create separate volumes for each of
  /usr, /var and /tmp (although you probably should anyway);
* it's the standard configuration, offered as automatic default
  installation option, and many people are using it so finding
  someone to help when needed shouldn't be hard.
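
The first point on the list, enlarging /, is then a two-step job online (a sketch; vg0/root is a hypothetical volume name, and resize2fs assumes an ext3 filesystem):

```shell
lvextend -L +1G /dev/vg0/root   # grow the logical volume by 1 GB
resize2fs /dev/vg0/root         # grow the ext3 filesystem into the new space
```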

As for the rest of your points, well, both software raid
and lvm do increase complexity and require learning some
new tricks, but they're well worth the trouble if you
manage any system more complex than a simple workstation,
IMHO.

-- 
Tapani Tarvainen





Re: Advice on raid/lvm

2009-04-09 Thread Alex Samad
On Thu, Apr 09, 2009 at 09:45:21AM +0300, Tapani Tarvainen wrote:
 On Thu, Apr 09, 2009 at 02:17:03PM +1000, Alex Samad (a...@samad.com.au) 
 wrote:
 
   Aiming to get a couple of 1Tb drives to migrate the 3x500Gb RAID5
   array to a RAID1 and use two of the 500Gb drives for the new boot
   drive with LVM (With /boot / /home and so on on it).
  
  so you are going to have 
  
  md0 = raid1 2 x 500Gb 
  md1 = raid1 2 x 1Tb
 
 Not necessarily, it might make sense to partition the drives
badly described; that is what I meant

 in smaller pieces and raid them separately.
 But:
 
  I would create 3 partitions on the 500GB drives
  500M /boot (ext2 or ext3)
  20G / (ext3)
 
 Could you explain the rationale behind this?
 It doesn't make any sense to me.
 The only (?) point in having a separate /boot
 is when you can't boot directly off /,
 like when it's in LVM or RAID5 or encrypted
 or something like that.

/boot can be mounted read-only, and I like to have a rescue image on there
just in case I stuff something up on /.  For the cost of one partition and
a separate /boot it feels a bit safer; when it's part of /, anything can
happen to it.

 
 If you are going to make non-LVM, non-encrypted, RAID1 ext3 /,
 you can boot off it directly, without separate /boot.
 
 If you are making separate /boot, you might as well
 put / under LVM (which is what I'd do - and indeed
 what I have done with just about every machine I have
 installed since I've forgotten when).

I have had to recover too many servers, and I like having root on a
non-lvm partition: it's just another layer I don't need when there is a
problem.  Data is different.

So I can have a fully functional machine, or there could be something
wrong with the machine but I can boot into root, or the next stage
is being able to boot into the rescue image on /boot.

raid1 is easy to deal with when things go wrong, and again the cost of it
is minimal: 1 partition slot, plus managing your data needs.

Alex

 

-- 
Ann and I will carry out this equivocal message to the world: Markets must be 
open.

- George W. Bush
03/02/2001
at the swearing-in ceremony for Secretary of Agriculture Ann Veneman




Re: Advice on raid/lvm

2009-04-09 Thread Tapani Tarvainen
On Thu, Apr 09, 2009 at 06:59:48PM +1000, Alex Samad (a...@samad.com.au) wrote:

   I would create 3 partitions on the 500GB drives
   500M /boot (ext2 or ext3)
   20G / (ext3)
  
  Could you explain the rationale behind this?

 /boot can be mounted read-only, and I like to have a rescue image on there
 just in case I stuff something up on /.  For the cost of one partition and
 a separate /boot it feels a bit safer; when it's part of /, anything can
 happen to it

Hmm. Are you mounting /boot read-only?
That should work and it'd indeed protect it better.
Otherwise the difference in safety against things
like mistyped rm commands and whatnot would be
marginal, I think.
If you have a concrete example where separate /boot
saved (or would have saved) the day, I'd be interested.

As for rescue image use, maybe it would make sense in some
types of use, but then I'd rather create a separate
partition just for that (and leave it unmounted
in normal use). But if the machine is physically
easily accessible, a removable medium like USB stick
would be better for that.

 I have had to recover too many servers, and I like having root on a
 non-lvm partition: just another layer I don't need when there is a problem.

That's always the tradeoff, yes. LVM gives flexibility at the
cost of increased complexity, and whether it's worth the price
in any given situation, well, depends.

One scenario where I'd consider non-LVM root (and /usr and /var) is
where the machine is located in a hard-to-get-at place, with only less
experienced people present: being able to get the machine up by
talking them through the hoops up to a point where I could get in with
ssh could indeed be easier then.

 raid1 is easy to deal with when things go wrong

Well, it has its own issues, but yes, less moving parts than in LVM
(and lvm over raid1 is obviously more complex than either alone).

 and again the cost of it
 is minimal 1 partition slot and managing your data needs

Cost in disk space is minimal, agreed, but there are other costs
caused by the inflexibility it implies.
Nowadays I usually encrypt / anyway (or rather, encrypt the
mdadm devices used as lvm physical volumes and put / in there),
and then there's no choice.

-- 
Tapani Tarvainen





Re: Advice on raid/lvm

2009-04-09 Thread Douglas A. Tutty
On Thu, Apr 09, 2009 at 10:00:40AM +0300, Tapani Tarvainen wrote:
 On Wed, Apr 08, 2009 at 06:02:26PM -0300, Henrique de Moraes Holschuh 
 (h...@debian.org) wrote:
  On Wed, 08 Apr 2009, Miles Fidelman wrote:
 
  I suggest that a small (1GB-4GB) partition for simple md-raid1 be used for
  / instead.  That won't give you any headaches, including on disaster
  recovery scenarios.

If you're going to have separate partitions (e.g. /usr, /var, /home),
then I would not call a 1-4 GB / small.  I have a 500 MB / of which
only 117 MB is used.

 I would respectfully disagree. There are significant advantages in
 putting / in LVM, it is a well-supported, standard configuration,
 and avoiding it only gives false sense of security: in a disaster
 situation you need to know basics of mdadm and lvm anyway, if
 you use them.

 Yes, leaving / out of LVM does give you a more complete
 environment to work with when system crashes in a way that LVM
 (the volume group containing /) is inaccessible.
 It doesn't help much though unless you also leave /usr out,
 and I've lost count of how often I've enlarged /usr and
 been grateful it was under LVM.

What is in /usr that you'd need (ok, other than man pages)?

 All the essential tools for managing software raid and lvm are,
 however, available even without / - indeed they're in initrd,
 and if you can't use them, you're out of luck anyway.

You don't have whatever notes you've left yourself in /root.

 On the other hand, having / in LVM means:
 * you can enlarge / when necessary;

You should never have to enlarge a 500 MB /

 * you can encrypt / if desired;

Why would you need / encrypted (if swap, /tmp, /home, and parts of /var
are encrypted)?

 * you can use other RAID configurations besides RAID1 with /;

True, but for 500 MB is that helpful?  If you have more than 2 disks,
just put a 500 MB partition on each and have more than 2 components to
the raid1 array.
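
mdadm builds such an n-way mirror directly (a sketch; device names are made up, and --create overwrites the named partitions):

```shell
# A 3-way RAID1: every component holds a full copy of the data.
mdadm --create /dev/md0 --level=1 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1
```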

 * you don't have to create separate volumes for each of
   /usr, /var and /tmp (although you probably should anyway);

 * it's the standard configuration, offered as automatic default
   installation option, and many people are using it so finding
   someone to help when needed shouldn't be hard.

I've never used the automatic default; it always wastes resources on my
boxes.

 As for the rest of your points, well, both software raid
 and lvm do increase complexity and require learning some
 new tricks, but they're well worth the trouble if you
 manage any system more complex than a simple workstation,
 IMHO.


Figure out what all documentation, man pages (in text format), notes,
etc that you would want and put them in /root/doc.  Any scripts that you
find helpful for rebuilding arrays you could put in /root/bin.

Doug.





Re: Advice on raid/lvm

2009-04-09 Thread Douglas A. Tutty
On Wed, Apr 08, 2009 at 06:34:09PM -0500, Mark Allums wrote:

 Not really answering your question directly, but may I suggest, if cost 
 is not *absolutely* critical, that you consider RAID 10?  If it is a 
 server, then certainly you will want to get away from a three-drive RAID 
 5.  A RAID 10 is a good compromise between redundancy, speed, and cost. 
  It just takes four drives instead of three (or two.)

Is there an advantage of software raid10 over multiple raid1 arrays
joined with LVM?  Capacity can be dynamically added with pairs of disks.
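
The scheme described above can be sketched like this (all names are hypothetical and every command here is destructive to the named devices):

```shell
# Two mirror pairs, each used as a PV in one volume group:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1

# Later, grow capacity dynamically by adding another mirrored pair:
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
pvcreate /dev/md2
vgextend vg0 /dev/md2
```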

Doug.





Re: Advice on raid/lvm

2009-04-09 Thread Tapani Tarvainen
On Thu, Apr 09, 2009 at 09:35:57AM -0400, Douglas A. Tutty (dtu...@vianet.ca) 
wrote:

 Is there an advantage of software raid10 over multiple raid1 arrays
 joined with LVM? 

Speed.
Also reduced complexity, if you can forgo LVM entirely.

Disadvantages are slightly bigger danger of data loss
and increased complexity, if you are using LVM anyway.

 Capacity can be dynamically added with pairs of disks.

Yes. I'd prefer that if speed isn't critical.

-- 
Tapani Tarvainen





Re: Advice on raid/lvm

2009-04-09 Thread Tapani Tarvainen
On Thu, Apr 09, 2009 at 09:32:47AM -0400, Douglas A. Tutty (dtu...@vianet.ca) 
wrote:

  Yes, leaving / out of LVM does give you a more complete
  environment to work with when system crashes in a way that LVM
  (the volume group containing /) is inaccessible.
  It doesn't help much though unless you also leave /usr out,

 What is in /usr that you'd need (ok, other than man pages)?

Besides man pages (which are useful if you don't have access
to the Internet), there's some useful stuff in /usr/bin and
/usr/sbin, like grub, mkinitramfs, ssh, find ...
Nothing essential, but as I said, everything essential is
on the initramdisk anyway.

 You don't have whatever notes you've left yourself in /root

You could keep your notes in /boot.

  On the other hand, having / in LVM means:
  * you can enlarge / when necessary;
 
 You should never have to enlarge a 500 MB /

Probably not, if you have separate /usr, /var, /tmp and /home,
as you generally should.
But never is a long time. I've had to increase / at
least twice when it was too small for OS upgrade
(last in a system where it was just 50MB - which had been
plenty when the box was first installed, but not anymore).

  * you can encrypt / if desired;
 
 Why would you need / encrypted (if swap, /tmp, /home, and parts of /var
 are encrypted)?

To protect the notes you left in /root, of course. :-)

Seriously, there is a lot of potentially sensitive information
in there, like /etc/passwd, /etc/shadow, ssh keys, root's
shell history, etc.

  * you can use other RAID configurations besides RAID1 with /;
 
 True, but for 500 MB is that helpful?  If you have more than 2 disks,
 just put a 500 MB partition on each and have more than 2 components to
 the raid1 array.

Last I tried, booting off 3+ -way raid1 wasn't supported
and didn't work (it's been a while though). So, you might want
to use raid6 for reliability (or raid10 for speed,
not that I can think why speed could matter in /).

Of course, this point is moot if you have both /boot and /
as separate, non-lvm partitions.
Come to think of it, that'd allow encrypting / as well,
although I can't see why that kind of non-standard setup
would be better than having / in lvm.

  * it's the standard configuration, offered as automatic default
installation option, and many people are using it so finding
someone to help when needed shouldn't be hard.
 
 I've never used the automatic default; it always wastes resources on my
 boxes.

To tell the truth, neither have I, but the point was that
it is a well-known, well-supported setup.

 Figure out what all documentation, man pages (in text format), notes,
 etc that you would want

Figuring that out in advance may not be that easy.

 and put them in /root/doc.  Any scripts that you
 find helpful for rebuilding arrays you could put in /root/bin.

You could just as well use /boot/doc and /boot/bin.

But, yeah: the issue is debatable, there's no really overwhelming
reason to go either way in every case.
There are situations where the advantages of lvm are not important
and its complexity may be a reason to avoid it. I think they're rare,
but my view may be biased by the fact I've used lvm for so long that
I no longer remember it ever being difficult. :-)

-- 
Tapani Tarvainen





Re: Advice on raid/lvm

2009-04-09 Thread martin f krafft
also sprach Douglas A. Tutty dtu...@vianet.ca [2009.04.09.1532 +0200]:
  On the other hand, having / in LVM means:
  * you can enlarge / when necessary;
 
 You should never have to enlarge a 500 MB /

I bet you'll be wrong in 10 years.

  * you can encrypt / if desired;
 
 Why would you need / encrypted (if swap, /tmp, /home, and parts of /var
 are encrypted)?

Because it contains e.g. /bin/ls and you don't want that to be
trojaned. Obviously, an integrity checker can also help.

-- 
 .''`.   martin f. krafft madd...@d.o  Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduck   http://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
in any hierarchy, each individual rises
 to his own level of incompetence,
 and then remains there.
   -- murphy (after dr. laurence j. peter)




Re: Advice on raid/lvm

2009-04-09 Thread Boyd Stephen Smith Jr.
In 20090409135738.ga4...@hamsu.tarvainen.info, Tapani Tarvainen wrote:
On Thu, Apr 09, 2009 at 09:35:57AM -0400, Douglas A. Tutty 
(dtu...@vianet.ca) wrote:
 Is there an advantage of software raid10 over multiple raid1 arrays
 joined with LVM?

Speed.

Not much, if any.  LVM can stripe data across pvs ala RAID-0.
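
That striping is requested per logical volume (sketch with hypothetical names; -i is the stripe count, -I the stripe size in KB):

```shell
# Stripe a logical volume across 2 PVs, RAID0-style:
lvcreate -i 2 -I 64 -L 100G -n stripedlv vg0
```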
-- 
Boyd Stephen Smith Jr.   ,= ,-_-. =.
b...@iguanasuicide.net  ((_/)o o(\_))
ICQ: 514984 YM/AIM: DaTwinkDaddy `-'(. .)`-'
http://iguanasuicide.net/\_/





Re: Advice on raid/lvm

2009-04-09 Thread Douglas A. Tutty
On Thu, Apr 09, 2009 at 04:43:17PM +0200, martin f krafft wrote:
 also sprach Douglas A. Tutty dtu...@vianet.ca [2009.04.09.1532 +0200]:
   On the other hand, having / in LVM means:
   * you can enlarge / when necessary;
  
  You should never have to enlarge a 500 MB /
 
 I bet you'll be wrong in 10 years.

What load of gunk will be dumped into / to take it bigger than 500 MB?  

If ever / becomes bigger than 500M, then booting my old boxes will again
require a separate /boot (so that they can boot lower than the 504 MB
limit).  

 
   * you can encrypt / if desired;
  
  Why would you need / encrypted (if swap, /tmp, /home, and parts of /var
  are encrypted)?
 
 Because it contains e.g. /bin/ls and you don't want that to be
 trojaned. Obviously, an integrity checker can also help.
 

How does encrypting / prevent trojaning a binary?  I suppose it prevents
an attacker gaining root when the box is turned off and not physically
secured, but I don't know.  Does encrypting root counteract the age-old
wisdom that physical access to the hardware will allow root compromise?

An integrity checker would only help if it's being run from a
known-secure box, not the box with the questionable /bin/ls.

Encryption is great to protect secret content, while the box is
powered-off.  It doesn't help while the box is powered-on (since the
filesystems will be decrypted).  

Doug.





Re: Advice on raid/lvm

2009-04-09 Thread Boyd Stephen Smith Jr.
In 20090409181432.ga6...@blitz.hooton, Douglas A. Tutty wrote:
Does encrypting root counteract the age-old
wisdom that physical access to the hardware will allow root compromise?

For the most part, yes.  But, when so configured, it also makes the box 
incapable of booting unattended.
-- 
Boyd Stephen Smith Jr.   ,= ,-_-. =.
b...@iguanasuicide.net  ((_/)o o(\_))
ICQ: 514984 YM/AIM: DaTwinkDaddy `-'(. .)`-'
http://iguanasuicide.net/\_/





Re: Advice on raid/lvm

2009-04-09 Thread Mark Allums

Douglas A. Tutty wrote:

On Wed, Apr 08, 2009 at 06:34:09PM -0500, Mark Allums wrote:

Not really answering your question directly, but may I suggest, if cost 
is not *absolutely* critical, that you consider RAID 10?  If it is a 
server, then certainly you will want to get away from a three-drive RAID 
5.  A RAID 10 is a good compromise between redundancy, speed, and cost. 
 It just takes four drives instead of three (or two.)


Is there an advantage of software raid10 over multiple raid1 arrays
joined with LVM?  Capacity can be dynamically added with pairs of disks.

Doug.



Only one: simplicity.  It would make it easier for someone to 
understand, in the beginning.



Mark Allums






Re: Advice on raid/lvm

2009-04-09 Thread Mark Allums

Mark Allums wrote:

Douglas A. Tutty wrote:

On Wed, Apr 08, 2009 at 06:34:09PM -0500, Mark Allums wrote:

Not really answering your question directly, but may I suggest, if 
cost is not *absolutely* critical, that you consider RAID 10?  If it 
is a server, then certainly you will want to get away from a 
three-drive RAID 5.  A RAID 10 is a good compromise between 
redundancy, speed, and cost.  It just takes four drives instead of 
three (or two.)


Is there an advantage of software raid10 over multiple raid1 arrays
joined with LVM?  Capacity can be dynamically added with pairs of disks.

Doug.



Only one: simplicity.  It would make it easier for someone to 
understand, in the beginning.



Mark Allums




My assumption was that the OP was not experienced.

If this is not the case, then under *nix, RAID 10 may not be the first
choice.  I run RAID 10 with a nice Adaptec hardware RAID card under
Windows Vista 64, but my Linux box is RAID 1 with /boot under simple
mdraid and the rest (including /) under LVM.  And with /boot outside of
LVM, I can use GRUB.



Mark Allums






Re: Advice on raid/lvm

2009-04-09 Thread Tapani Tarvainen
On Thu, Apr 09, 2009 at 02:14:32PM -0400, Douglas A. Tutty (dtu...@vianet.ca) 
wrote:

 What load of gunk will be dumped into / to take it bigger than 500 MB?

I've got a box where /lib takes 200MB now, of which /lib/modules is
140MB, and that's per kernel; during kernel updates it temporarily
doubles, taking /lib to 340MB or thereabouts.

I don't see it as at all impossible that the 500MB I have for / there
now will get too small before the machine is retired.

-- 
Tapani Tarvainen





Re: Advice on raid/lvm

2009-04-09 Thread Tapani Tarvainen
On Thu, Apr 09, 2009 at 11:09:15AM -0500, Boyd Stephen Smith Jr. 
(b...@iguanasuicide.net) wrote:

  Is there an advantage of software raid10 over multiple raid1 arrays
  joined with LVM?
 
 Speed.
 
 Not much, if any.  LVM can stripe data across pvs ala RAID-0.

Well, then you are doing software raid10, even though not with md. :-)
But yes, the above description is ambiguous, and indeed LVM can
do striping and mirroring (it could before md was invented, even
before Linux ever got either).

I once compared raid0 with md vs. lvm striping (having used the
latter on HP-UX), and decided to go with the former.
There wasn't much difference in speed as I recall, but md allowed
making a bootable raid1 and seemed better supported in Linux.

-- 
Tapani Tarvainen





Re: Advice on raid/lvm

2009-04-09 Thread Tapani Tarvainen
On Thu, Apr 09, 2009 at 08:50:34PM -0500, Mark Allums (m...@allums.com) wrote:

 Douglas A. Tutty wrote:
 Is there an advantage of software raid10 over multiple raid1 arrays
 joined with LVM?  Capacity can be dynamically added with pairs of disks.

 Only one: simplicity.  It would make it easier for someone to  
 understand, in the beginning.

Curiously, I would've thought the opposite: that a bunch of separate
raid1 arrays would be easier to understand than raid10.
Raid1 is conceptually simple compared to any other raid level,
and if you're using lvm anyway, it doesn't make much difference
whether physical volumes are disks or disk pairs.

Anybody want to claim being a newbie and having an opinion here?

-- 
Tapani Tarvainen





Advice on raid/lvm

2009-04-08 Thread Kelly Harding
Hi,

My server has got debian on it currently in the following configuration:

250Gb boot drive with partitions for / /boot /home
3x500Gb drives in RAID5 array, with XFS on top directly.

Aiming to get a couple of 1Tb drives to migrate the 3x500Gb RAID5
array to a RAID1 and use two of the 500Gb drives for the new boot
drive with LVM (with /boot, /, /home and so on on it).

Just wondering if there are any suggestions on how best to go about
migrating like this? Not really worried too much about downtime, as
long as data is preserved. Reasons are mostly that I believe the 250Gb
drive could be getting close to failure (I have backed up important
data on it to another machine using xfsdump).

Any suggestions gratefully received.

Thanks,

Kelly





Re: Advice on raid/lvm

2009-04-08 Thread Tapani Tarvainen
On Wed, Apr 08, 2009 at 03:34:57PM +0100, Kelly Harding 
(kelly.hard...@gmail.com) wrote:

 My server has got debian on it currently in the following configuration:
 
 250Gb boot drive with partitions for / /boot /home
 3x500Gb drives in RAID5 array, with XFS on top directly.

 Aiming to get a couple of 1Tb drives to migrate the 3x500Gb RAID5
 array to a RAID1 and use two of the 500Gb drives for the new boot
 drive with LVM (With /boot / /home and so on on it).

So the new configuration will be 2x1TB + 2x500GB drives in RAID1
setup and one of the 500GB drives will be removed along with
the 250GB one, correct?

 Just wondering if theres any suggestions on how best to go about
 migrating like this? not really worried too much about downtime, as
 long as data is preserved.

It would've been much easier if you'd used LVM to begin with
and partitioned the drives with expansion in mind, but it
shouldn't be too hard anyway.

A few clarifying questions:

Is there some reason against putting /boot and / on the new TB
disks instead of the old ones?
Can you leave all six disks connected during the transition
or are you limited to four simultaneously connected disks?
How full is the XFS filesystem presently on the RAID5?

-- 
Tapani Tarvainen





Re: Advice on raid/lvm

2009-04-08 Thread Mike Castle
On Wed, Apr 8, 2009 at 7:34 AM, Kelly Harding kelly.hard...@gmail.com wrote:

 Aiming to get a couple of 1Tb drives to migrate the 3x500Gb RAID5
 array to a RAID1 and use two of the 500Gb drives for the new boot
 drive with LVM (With /boot / /home and so on on it).

Before you do this, you may want to do some serious investigation of
failure modes.

The article at http://blogs.zdnet.com/storage/?p=162 was just the
first example I looked at after searching for [raid 5 terabyte].

Essentially, with today's larger disks, a rebuild takes long enough
that the risk of a second drive failure during the rebuild is high
enough to be troublesome.
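As a rough sanity check (a sketch, not hard data: it assumes the commonly
quoted consumer-drive spec of one unrecoverable read error per 1e14 bits),
the chance of hitting a URE somewhere while reading a full 1TB disk during
a rebuild is already a few percent:

```shell
# Probability of at least one unrecoverable read error (URE) while
# reading 1TB in full, assuming an error rate of 1e-14 per bit.
awk 'BEGIN {
  bits = 1e12 * 8                          # bits read during the rebuild
  p = 1 - exp(bits * log(1 - 1e-14))       # P(at least one URE)
  printf "P(URE during 1TB rebuild) ~ %.2f\n", p
}'
```

With 2TB disks the same arithmetic roughly doubles the exponent, which is
why the linked article is pessimistic about large RAID5 arrays.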

mrc





Re: Advice on raid/lvm

2009-04-08 Thread Tapani Tarvainen
On Wed, Apr 08, 2009 at 11:57:59AM -0700, Mike Castle 
(dalgoda+deb...@gmail.com) wrote:

 On Wed, Apr 8, 2009 at 7:34 AM, Kelly Harding kelly.hard...@gmail.com wrote:
 
  Aiming to get a couple of 1Tb drives to migrate the 3x500Gb RAID5
  array to a RAID1 and use two of the 500Gb drives for the new boot
  drive with LVM (With /boot / /home and so on on it).

 with today's larger disks, the time it takes to do a
 rebuild is sufficiently long enough that the risk of a second drive
 failure during the rebuild is high enough to be troublesome.

That is true, but moving from RAID5 to RAID1 will improve the odds.
If the quality of the disks is the same, the probability of an
unrecoverable read error while reading one surviving 1TB disk (RAID1
rebuild) should be the same as while reading two surviving 500GB disks
(RAID5 rebuild), but the probability of a whole-disk failure is higher
with two surviving disks than with one. Moreover, a RAID5 rebuild is
much slower than RAID1; I don't have hard data at hand, but I think
rebuilding a 3x500GB RAID5 would take longer than a 2x1TB RAID1.
So 2x1TB as RAID1 is safer than 3x500GB as RAID5.

-- 
Tapani Tarvainen





Re: Advice on raid/lvm

2009-04-08 Thread Miles Fidelman

Kelly Harding wrote:

Aiming to get a couple of 1Tb drives to migrate the 3x500Gb RAID5
array to a RAID1 and use two of the 500Gb drives for the new boot
drive with LVM (With /boot / /home and so on on it).


Any suggestions gratefully received.

  
One suggestion: think very carefully about whether you really want to do 
this.


I'm currently in my third day of rebuilding a machine that had /boot and 
/ on an LVM volume on raided disks.  After one drive died, I ended up in 
a weird mode where LVM was mounting one of the component drives rather 
than the raid volume, with the end result that I'm reinstalling the o/s 
from scratch and hoping that my backups are good enough that I haven't 
lost any user data.


Sigh...

--
Miles R. Fidelman, Director of Government Programs
Traverse Technologies 
145 Tremont Street, 3rd Floor

Boston, MA  02111
mfidel...@traversetechnologies.com
857-362-8314
www.traversetechnologies.com






Re: Advice on raid/lvm

2009-04-08 Thread Henrique de Moraes Holschuh
On Wed, 08 Apr 2009, Miles Fidelman wrote:
 One suggestion: think very carefully about whether you really want to do  
 this.

I second that.  It is really not smart to have / (or /boot) in LVM if you
can help it.

I suggest using a small (1GB-4GB) partition on a simple md-raid1 for /
instead.  That won't give you any headaches, even in disaster recovery
scenarios.

 I'm currently in my third day of rebuilding a machine that had /boot and  
 / on an LVM volume on raided disks.  After one drive died, I ended up in  
 a weird mode where LVM was mounting one of the component drives, rather  
 than the raid volume - with the long result being that I'm reinstalling  
 the o/s from scratch and hoping that my backups are good enough that I  
 haven't lost any user data.

If you forget to have THIS in lvm.conf when using software raid:

filter = [ "a|/md|", "r/.*/" ]

(or any other combination that makes sure lvm won't ever touch the md
component devices)

and lvm manages to put its filthy hands on one of the md component devices
(e.g.  because the raid didn't come up yet, or it was degraded and lvm
found the component device first), you're screwed.

Oh, and that needs to go inside the initrd as well, if you have / in lvm.
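On Debian, one way to apply and sanity-check this (a sketch; exact paths
depend on your kernel and initramfs tooling) is:

```shell
# After editing /etc/lvm/lvm.conf, regenerate the initramfs so the
# early-boot copy of lvm.conf carries the same filter (essential when
# / itself lives on LVM).
grep -n '^[[:space:]]*filter' /etc/lvm/lvm.conf            # confirm the filter line
update-initramfs -u                                        # rebuild the initrd
lsinitramfs "/boot/initrd.img-$(uname -r)" | grep lvm.conf # verify it went in
```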

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Advice on raid/lvm

2009-04-08 Thread Mark Allums

Kelly Harding wrote:

Hi,

My server has got debian on it currently in the following configuration:

250Gb boot drive with partitions for / /boot /home
3x500Gb drives in RAID5 array, with XFS on top directly.

Aiming to get a couple of 1Tb drives to migrate the 3x500Gb RAID5
array to a RAID1 and use two of the 500Gb drives for the new boot
drive with LVM (With /boot / /home and so on on it).

Just wondering if there are any suggestions on how best to go about
migrating like this? not really worried too much about downtime, as
long as data is preserved. Reasons are mostly because the 250Gb drive
I believe could be getting close to failure (have backed up important
data on it to another machine using xfsdump).

Any suggestions gratefully received.

Thanks,

Kelly




Not really answering your question directly, but may I suggest, if cost 
is not *absolutely* critical, that you consider RAID 10?  If it is a 
server, then certainly you will want to get away from a three-drive RAID 
5.  A RAID 10 is a good compromise between redundancy, speed, and cost. 
 It just takes four drives instead of three (or two.)


Mark Allums






Re: Advice on raid/lvm

2009-04-08 Thread Tapani Tarvainen
On Wed, Apr 08, 2009 at 06:34:09PM -0500, Mark Allums (m...@allums.com) wrote:

 Not really answering your question directly, but may I suggest, if cost  
 is not *absolutely* critical, that you consider RAID 10?  If it is a  
 server, then certainly you will want to get away from a three-drive RAID  
 5.  A RAID 10 is a good compromise between redundancy, speed, and cost.  
 It just takes four drives instead of three (or two.)

Whether it's a good compromise depends on how much speed vs. reliability
matter. A 4-disk RAID10 can survive 2-disk failure with 2/3 probability
(RAID 01 only with 1/3), whereas RAID6 can handle loss of any two disks
but is also much slower. With RAID5 the compromise is between capacity and
reliability, and as noted, the balance keeps getting worse as disks grow.
It may still make sense in some scenarios though.
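The 2/3 figure for RAID10 is easy to verify by brute force; a quick
sketch (disk numbering is arbitrary):

```shell
# 4-disk RAID10 = two mirror pairs: disks {0,1} and {2,3}.
# Enumerate all C(4,2) = 6 two-disk failures; the array is lost only
# when both failed disks belong to the same mirror pair.
survivors=0 total=0
for a in 0 1 2 3; do
  for b in 0 1 2 3; do
    [ "$a" -lt "$b" ] || continue
    total=$((total + 1))
    if [ $((a / 2)) -ne $((b / 2)) ]; then  # failures hit different pairs:
      survivors=$((survivors + 1))          # one copy of every block remains
    fi
  done
done
echo "$survivors of $total two-disk failures survivable"   # prints 4 of 6
```

Redefining "lost" as "both failures in the same stripe half" gives the
2-of-6, i.e. 1/3, figure for RAID01.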

But simply two separate RAID1 instances (which I understand OP was
planning) is actually more robust than RAID10 (in that even loss
of three disks won't lose _all_ data in there), and it can be done
with disks of different sizes. So I'd prefer that if speed is not
critical but reliability requirements don't quite warrant RAID6 either.

Of course it would also be possible to divide the 1TB disks in half,
use one half as RAID1 and the other as RAID10 together with the
500GB disks for speed, but if the latter are slower to begin with,
the speed gain may not be all that great.

So, I think OP's original plan (as I understood it) is a sound
compromise between cost, reliability, speed and simplicity
under a wide range of requirements.

-- 
Tapani Tarvainen





Re: Advice on raid/lvm

2009-04-08 Thread Alex Samad
On Wed, Apr 08, 2009 at 03:34:57PM +0100, Kelly Harding wrote:
 Hi,
 
 My server has got debian on it currently in the following configuration:
 
 250Gb boot drive with partitions for / /boot /home
 3x500Gb drives in RAID5 array, with XFS on top directly.
 
 Aiming to get a couple of 1Tb drives to migrate the 3x500Gb RAID5
 array to a RAID1 and use two of the 500Gb drives for the new boot
 drive with LVM (With /boot / /home and so on on it).

so you are going to have 

md0 = raid1 2 x 500Gb 
md1 = raid1 2 x 1Tb

I would create 3 partitions on the 500GB drives
500M /boot (ext2 or ext3)
20G / (ext3)
the REST as an LVM PV

I would raid1 these

on the 1TB, 1 partition

1TB - raid 1 - LVM PV

so you end up with 
/boot and / on raid1, no LVM,
and then 2 LVM PVs: 1 x 1TB and 1 x ~460GB

You can create 1 VG or 2 depending on what you want to put on there; from
there, create these LVs:

/tmp
swap
/var/log (my personal preference, after a badly configured syslog server
brought a server down)
/var/tmp

then carve up whatever else you need, say
/home/you

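For what it's worth, a sketch of how that layout might be built with
mdadm and the LVM tools; the device names (sda/sdb for the 500GB pair,
sdc/sdd for the 1TB pair), partition numbers, and LV sizes are all
assumptions for illustration, not something from the mail above:

```shell
# Assumed: sda/sdb are the 500GB disks (partitioned 500M / 20G / rest),
# sdc/sdd the 1TB disks (one whole-disk partition each).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # /
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  # PV, ~460GB
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1  # PV, 1TB

pvcreate /dev/md2 /dev/md3
vgcreate vg0 /dev/md2 /dev/md3   # or two separate VGs, as noted above
lvcreate -L 2G -n tmp    vg0
lvcreate -L 2G -n swap   vg0
lvcreate -L 4G -n varlog vg0
lvcreate -L 2G -n vartmp vg0
```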

Alex


 
 Just wondering if there are any suggestions on how best to go about
 migrating like this? not really worried too much about downtime, as
 long as data is preserved. Reasons are mostly because the 250Gb drive
 I believe could be getting close to failure (have backed up important
 data on it to another machine using xfsdump).
 
 Any suggestions gratefully received.
 
 Thanks,
 
 Kelly
 
 

-- 
Brook's Law:
Adding manpower to a late software project makes it later.

