Re: LVM root?

2006-10-14 Thread Goswin von Brederlow
Lennart Sorensen [EMAIL PROTECTED] writes:

 Well the documentation for LVM certainly said there was no way to keep
 the stripe setting if you added more disks to a VG.  Only the initial
 set could have a stripe setting.  After all, the striping was based on
 what disks were available initially, and divided among them.  If you
 add more volumes later, it would be silly to force it to rearrange all
 your data just to stripe it across the new PVs.  Maybe they added such
 an option for people to be silly.  Not impossible to do after all, just
 seems silly.

Striping is something LVM does on a LV basis. When you create the LV
you set the number of stripes, which can be less than the number of
PVs. The number of stripes obviously can't just change when you add a
PV to the VG.

But according to the manpage when you change the size of a striped LV
it will by default use the same number of stripes for the new segment
or whatever you specify with --stripes. So you need free space on
enough PVs to extend a striped LV with the same striping. A single new
PV won't do it.
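As a sketch of what the manpage describes (VG and LV names here are hypothetical), extending a 3-stripe LV while keeping its striping might look like:

```shell
# Hypothetical names; assumes vg0 still has free extents on at
# least 3 PVs.  The new segment keeps 3 stripes by default:
lvextend -L +10G /dev/vg0/video

# Or state the stripe count for the new segment explicitly:
lvextend -L +10G --stripes 3 /dev/vg0/video
```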

One thing missing is support in pvmove to change the striping of an
existing segment, so you could change it from, say, 3 to 4 stripes when
you add a new PV. Seems like that's what you want.

 --
 Len Sorensen

MfG
Goswin


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: LVM root?

2006-10-14 Thread Goswin von Brederlow
Daniel Tryba [EMAIL PROTECTED] writes:

 Tell me more about raid1 support inside lvm, because that's what I'm
 looking at for /.  I know about raid0 (lvm's striping) but can't find
 mirroring in lvm.  

 There is no such thing, it's lvm in a md device that does the trick.

There is such a thing. But only in unstable.

'lvcreate --help' shows:

[-m|--mirrors Mirrors [--nosync] [--corelog]]

That's about the only documentation so far, unless the kernel docs say
something about it.
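Going by that usage line, creating a mirrored LV would look something like this (VG and LV names are made up):

```shell
# 1 mirror = 2 copies of the data; --corelog keeps the mirror log
# in memory instead of on a separate disk:
lvcreate --size 5G --mirrors 1 --corelog --name mirrorlv vg0
```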

But even if it weren't so new and shiny, you still wouldn't want it for
/. Keep that outside the lvm and use a normal raid1.

MfG
Goswin





Re: LVM root?

2006-10-14 Thread Goswin von Brederlow
[EMAIL PROTECTED] writes:

 In Debian, is it only the loss of / that requires a reinstall?  What
 happens if /var (especially /var/lib) or /usr get corrupted?  Doesn't
 that also make the system extremely difficult/time-consuming to restore?

 If I were to get a second 80 GB SATA drive so that I could raid1 /,
 might I just as well have the whole base system on raid1?  

Yes, that is best. The loss of / or /var is pretty much the same. A
lot of work to recover. /usr (excluding /usr/local) can be recovered
easily though. Just get the list of installed packages from
/var/lib/dpkg/ and reinstall all of them. But it is still a nuisance to
have to do so.
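A minimal sketch of that recovery, using dpkg's selections interface:

```shell
# On a healthy system (or from a backup of /var/lib/dpkg), save the list:
dpkg --get-selections > selections.txt

# After reinstalling the base system, feed it back and reinstall:
dpkg --set-selections < selections.txt
apt-get dselect-upgrade
```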

 I would then have something like this:

 Disk a:
 Part: Size:   
 1 64MB
 5 remainder

 Disk b:
 Part: Size:
 1 64 MB
 5 remainder

 a1 and b1 raid1 to make md0 and mounted as /boot
   since md0 can't be resized, can anyone imagine /boot ever needing 
   more than 64 MB to hold 2 kernels (old and new)?
   Boot will be via grub installed on both drive's mbrs for
   auto-failover booting.
 a5 and b5 raid1 to make md1 which becomes pv0 of vg0
 lvs made for rest of base system including /
 This also ends up with swap in a lv on raid1 so drive crash shouldn't
 crash the system.

Unless it crashes the controller or driver in the process (which can
easily happen with inexpensive non-hardware raid), the system will
keep working. If in doubt get a hotplug carrier for your drive and
pull one out while running.
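Short of pulling a drive, a failure can also be simulated in software; a sketch with hypothetical device names:

```shell
# Mark one member of the mirror as failed, then remove it:
mdadm --fail /dev/md0 /dev/sdb1
mdadm --remove /dev/md0 /dev/sdb1

# The system should keep running on the remaining disk; re-add
# the partition to resync:
mdadm --add /dev/md0 /dev/sdb1
```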

 If this looks good, do the d-i menus let me start out with a degraded
 raid1 with only one drive, and add the second drive later?

I'm not sure, and would say no if I had to guess. But try it, and if
not, try creating the raid in degraded mode on the second console.
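Creating the array in degraded mode from the installer's second console might look like this (device names are examples):

```shell
# "missing" reserves the slot for the second disk you don't have yet:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

# Later, when the second drive arrives, add it and let it sync:
mdadm --add /dev/md0 /dev/sdb1
```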

 Thanks to your patient help, I'm gradually getting my head around this
 new world.  Thank you.

 Doug.

MfG
Goswin





Re: LVM root?

2006-10-14 Thread Goswin von Brederlow
Lennart Sorensen [EMAIL PROTECTED] writes:

 You can only add drives to raid5 with a good hardware raid card I
 believe (fake raids generally suck at raid5 since it is rather cpu
 intensive and needs a dedicated xor engine).  Raid1 is simpler, and is
 all the boot loaders currently support (since with raid1 the disks are
 identical, the boot loader just ignores raid and reads from one disk).

Support was recently added to the kernel and mdadm for this. By the
time one gets another hard disk and needs this it will probably be
tested enough to be reliable.
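With a recent kernel and mdadm, growing a raid5 onto a new disk is roughly (device names hypothetical):

```shell
# Add the new disk as a spare, then reshape the array to use it:
mdadm --add /dev/md1 /dev/sde1
mdadm --grow /dev/md1 --raid-devices=4

# The reshape runs in the background; watch progress with:
cat /proc/mdstat
```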

MfG
Goswin





Re: LVM root?

2006-10-13 Thread Jean-Luc Coulon (f5ibh)

On 13.10.2006 01:14:01, [EMAIL PROTECTED] wrote:

On Thu, Oct 12, 2006 at 04:35:02PM +0200, Jean-Luc Coulon (f5ibh)
wrote:
 
 You suggest ext3 for the / system.  Why would I not just use JFS for
 everything?
 It is often easier to repair / have access to ext3 (which is
 ext2+journal) from a system you have booted from a live CD, just in
 case of a weird problem on the filesystem.

Would that live CD (or the etch install CD) be able to work with a weird
problem on the filesystem if it's on raid1?  Since the installer can
install to jfs anyway, and my previous boot floppies predate SATA so
won't work, is this an issue?


Well,

I've 2 SATA disks, identical, 80 GB.
I've 2 partitions on each.
The first partition is 0xfd (raid autodetect); this partition is 100MB.
The second partition is 0xfd also; this partition is the rest of the disk.

I've built 2 raid arrays, one with the 100MB partitions, the other with
the space left.


On the first md device, I've an XFS partition mounted on /boot.
On the second md, I've LVM2 with the following logical volumes (only
one VG/PV):

/
/usr
/var
/opt
/swap
/video

All these logical volumes have an XFS filesystem, including the root
filesystem.


I use grub (installed on both disks) as a bootloader.

I have had a bad problem with my power supply. This resulted in a
corrupted root filesystem. I've repaired it from an Ubuntu live CD.


Regards

Jean-Luc




Re: LVM root?

2006-10-13 Thread dtutty
On Fri, Oct 13, 2006 at 06:43:33AM +0200, Goswin von Brederlow wrote:
 Lennart Sorensen [EMAIL PROTECTED] writes:
 
  You might want to stripe the volume for editing across both
  disks. Lvm can do that without you having to resort to raid0. Gives
  you more speed on file I/O. It's a per volume thing so you can keep
  /usr, /var, /home on the first disk and just stripe the editing LV
  when you get the 2nd disk.
 
  LVM can do that, but as soon as you start to add to the LVM later you
  lose it.  In general it isn't recommended to use the striping features
  in LVM.
 
 You do? That would be a bug or at least a missing feature then. Recently
 there has been some work on raid support inside lvm (raid1 support was
 added) so I don't think it is totally deprecated.
 
  --
  Len Sorensen
 
 MfG
 Goswin
 
Tell me more about raid1 support inside lvm, because that's what I'm
looking at for /.  I know about raid0 (lvm's striping) but can't find
mirroring in lvm.  

Doug.





Re: LVM root?

2006-10-13 Thread Daniel Tryba
 Tell me more about raid1 support inside lvm, because that's what I'm
 looking at for /.  I know about raid0 (lvm's striping) but can't find
 mirroring in lvm.  

There is no such thing, it's lvm in a md device that does the trick.

-- 

 When you do things right, people won't be sure you've done anything at all.

   Daniel Tryba





Re: LVM root?

2006-10-13 Thread dtutty
In Debian, is it only the loss of / that requires a reinstall?  What
happens if /var (especially /var/lib) or /usr get corrupted?  Doesn't
that also make the system extremely difficult/time-consuming to restore?

If I were to get a second 80 GB SATA drive so that I could raid1 /,
might I just as well have the whole base system on raid1?  

I would then have something like this:

Disk a:
Part:   Size:   
1   64MB
5   remainder

Disk b:
Part:   Size:
1   64 MB
5   remainder

a1 and b1 raid1 to make md0 and mounted as /boot
since md0 can't be resized, can anyone imagine /boot ever needing 
more than 64 MB to hold 2 kernels (old and new)?
Boot will be via grub installed on both drive's mbrs for
auto-failover booting.
a5 and b5 raid1 to make md1 which becomes pv0 of vg0
lvs made for rest of base system including /
This also ends up with swap in a lv on raid1 so drive crash shouldn't
crash the system.


If this looks good, do the d-i menus let me start out with a degraded
raid1 with only one drive, and add the second drive later?

Thanks to your patient help, I'm gradually getting my head around this
new world.  Thank you.

Doug.





Re: LVM root?

2006-10-13 Thread Lennart Sorensen
On Thu, Oct 12, 2006 at 07:04:59PM -0400, [EMAIL PROTECTED] wrote:
 On Wed, Oct 11, 2006 at 05:20:58PM -0400, dtutty wrote:
  On Wed, Oct 11, 2006 at 12:27:12PM -0400, Lennart Sorensen wrote:
   On Wed, Oct 11, 2006 at 06:04:43PM +0200, Daniel Tryba wrote:
Adding a disk creates another copy of /, and with the newer
kernels a raid5 array can be expanded, so it can be used by the LVM.

  
 Where can I find documentation on what is currently possible with raid
 and lvm?  The software-raid howto says that resizing is impossible/very
 difficult.  

True but you generally have no need to resize a raid.  If you get more
disks, you make another raid and add a partition on that raid to your
existing LVM, and then you can resize your existing LVM volumes to take
up more space spread across all your disks.
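Sketched with hypothetical device and volume names, that sequence is:

```shell
# Turn the new raid into a PV and add it to the existing VG:
pvcreate /dev/md2
vgextend vg0 /dev/md2

# Grow a volume into the new space, then grow its filesystem (ext3 here):
lvextend -L +50G /dev/vg0/home
resize2fs /dev/vg0/home
```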

 If I install without raid, how difficult is it to add raid1 to / when I
 add a second drive later on?  I had planned that my first upgrade (in a
 few months) would be a second 1GB ram stick then a TV tuner card, then a
 second larger drive for space/performance when removing commercials from
 old VHS tapes (and making DVDs).  By then, larger drives will be
 cheaper.

Simple answer: You reinstall.  There are ways to kind of do it, but it
is a lot of manual work, and somewhat prone to screwing up.

 The downside of raid1 to me has always been that if I want to add 'a'
 drive, I have to add 'a pair/set'.  I would like to be able to do this:
 
 Start with one drive with lvm
 Add a second drive and provide raid redundancy to / (e.g. raid1)
   and improve performance to the video-editing working directory
   (since I haven't got it installed, I don't know if this is /home
   or /var/tmp or what)
 Add a third or more drives to add capacity while getting raid
 redundancy.  This sounds like raid5.  Later-added drives will probably
 be larger capacity.

You can only add drives to raid5 with a good hardware raid card I
believe (fake raids generally suck at raid5 since it is rather cpu
intensive and needs a dedicated xor engine).  Raid1 is simpler, and is
all the boot loaders currently support (since with raid1 the disks are
identical, the boot loader just ignores raid and reads from one disk).

Technically you could install pretending to have raid1, by starting with
degraded mode, and then adding an identical disk later as if you were
replacing a broken disk.  Given the cost of disks though, it hardly
seems worth it.  I don't think 250GB disks are going to get that much
cheaper.  For example checking a local computer store's prices:
SATA II:
 80GB: $59
160GB: $72
250GB: $90
320GB: $125

Seems to me that the electronics and case and such must cover about the
first $50 of the drive price, with the capacity being the rest of the
cost.  A pair of 250GB drives is hardly expensive anymore.  Compared to
the cost of RAM, CPU, motherboard, video card, etc, you are nuts (in my
not so humble opinion) if you think you can justify not buying two hard
drives for raid1 anymore.

 If one can add a drive to raid5 and extend the pv onto it, then extend
 the LV and the filesystem on it, while maintaining data redundancy,
 that's almost perfect.  Perfect would be transparent data integrity
 verification.
 
 The question for me, on a limited budget, is how to start.  One disk,
 with everything in place for adding a second, larger disk later.

I always think if you are on a limited budget, stick with your current
machine until you can afford to buy something worth your money.  My main
machine is an athlon 700 with 768MB ram and a lot of disk space (running
raid1 on everything).  I have a mythtv system with an athlon 1700+ with
a pair of 120GB SATA drives running raid1.  The time it would take to
reinstall a machine and restore from backups is worth an awful lot more
than a few dollars for a second disk.

--
Len Sorensen





Re: LVM root?

2006-10-13 Thread Lennart Sorensen
On Thu, Oct 12, 2006 at 07:14:01PM -0400, [EMAIL PROTECTED] wrote:
 Would that live CD (or the etch install-cd) be able to work with a weird
 problem on the filesystem if its on raid1?  Since the installer can
 install to jfs anyway, and my previous bootfloppies predate SATA so
 won't work, is this an issue?  
 
 Are there other issues in choosing ext3 vs jfs?

You might be able to install to JFS, but I am fairly sure you can't boot
from JFS, so using JFS for / (I still don't see the point of a fancy
filesystem for an almost static small filesystem) requires using a
separate /boot with ext2 or ext3 anyhow.  Why not just have ext3 on /
and use JFS if you feel like it for the other volumes?

--
Len Sorensen





Re: LVM root?

2006-10-13 Thread Lennart Sorensen
On Fri, Oct 13, 2006 at 06:43:33AM +0200, Goswin von Brederlow wrote:
 Because it is way too fragile (fails half the time) and you can't use
 init=/bin/sh anymore.

init=/bin/sh works for me using initramfs and has never been a problem.
On 2.4 kernels it was a different story.  I used to build my own kernels
when I ran 2.4 to avoid the need for initrd.  With 2.6 there aren't any
of those problems and hence no reason to waste time building a kernel
that will be exactly the same in use as the one debian provides (and
gives me security updates for).

 Maybe I've been using the wrong tools back then (initramfs is somewhat
 new) but that never worked right for me.

Well, it always works for me.  yaird almost always failed for me so I don't
use it.

 Seems that some desktop users like to not shut down all their apps
 when they turn off the system for the night.

My machine is usually busy doing something all the time.  I also like
not to decrease the life span of my disks.

 You do? That would be a bug or at least a missing feature then. Recently
 there has been some work on raid support inside lvm (raid1 support was
 added) so I don't think it is totally deprecated.

Well the documentation for LVM certainly said there was no way to keep
the stripe setting if you added more disks to a VG.  Only the initial
set could have a stripe setting.  After all, the striping was based on
what disks were available initially, and divided among them.  If you
add more volumes later, it would be silly to force it to rearrange all
your data just to stripe it across the new PVs.  Maybe they added such
an option for people to be silly.  Not impossible to do after all, just
seems silly.

--
Len Sorensen





Re: LVM root?

2006-10-13 Thread Lennart Sorensen
On Fri, Oct 13, 2006 at 08:04:32AM -0400, [EMAIL PROTECTED] wrote:
 Tell me more about raid1 support inside lvm, because that's what I'm
 looking at for /.  I know about raid0 (lvm's striping) but can't find
 mirroring in lvm.  

Never heard of anything other than striping in LVM.

--
Len Sorensen





Re: LVM root?

2006-10-13 Thread Lennart Sorensen
On Fri, Oct 13, 2006 at 08:36:17AM -0400, [EMAIL PROTECTED] wrote:
 In Debian, is it only the loss of / that requires a reinstall?  What
 happens if /var (especially /var/lib) or /usr get corrupted?  Doesn't
 that also make the system extremely difficult/time-consuming to restore?

What would cause such corruption?  I have a machine running the same
debian install I did on it in 1998, and it has simply upgraded ever
since.  Still runs perfectly.  Corruption simply doesn't seem to happen
if you stick with debian packages for everything (or keep other stuff in
/usr/local where it belongs) with no exceptions.

 If I were to get a second 80 GB SATA drive so that I could raid1 /,
 might I just as well have the whole base system on raid1?  

I raid1 everything.  Why make exceptions unless you really happen to be
working with large temporary data files that would be trivial to
regenerate, and for which you would rather have the speed of raid0 (in
which case make a raid0 fast data drive, and raid1 everything else)

 I would then have something like this:
 
 Disk a:
 Part: Size:   
 1 64MB
 5 remainder
 
 Disk b:
 Part: Size:
 1 64 MB
 5 remainder
 
 a1 and b1 raid1 to make md0 and mounted as /boot
   since md0 can't be resized, can anyone imagine /boot ever needing 
   more than 64 MB to hold 2 kernels (old and new)?

Or 3 or 4 for that matter.  No I can't imagine it needing to be bigger.

   Boot will be via grub installed on both drive's mbrs for
   auto-failover booting.

Remember to grub-install /dev/sda and /dev/sdb (or whatever your two
drives are).  The install only does the first one for you.
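That is, after the install finishes (assuming sda and sdb are the two disks):

```shell
# The installer covers the first disk only; do the second yourself:
grub-install /dev/sda
grub-install /dev/sdb
```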

 a5 and b5 raid1 to make md1 which becomes pv0 of vg0
 lvs made for rest of base system including /
 This also ends up with swap in a lv on raid1 so drive crash shouldn't
 crash the system.

Why not just a2 and b2?  Logical partitions are a pain in the ass to
restore if you ever have to replace a disk.  If you just have primary
partitions, then simply dd'ing the first 512-byte sector gives you the
boot sector and the partition table all at once.  If you have any
logical partitions you have to start doing the primary partitions, then
reread, then do the first 512 bytes of the extended partition, then
reread, and then any further extended partitions inside those, and it
just becomes a mess.  Two primaries with the two raids are much simpler,
and even gain you a few MB of disk space since you don't have to waste a
track for the extended partition table.  Logical partitions are a rather
wasteful design.
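The primary-partitions-only copy described above is then a one-liner (assuming sda is the surviving disk and sdb the blank replacement):

```shell
# MBR = 446 bytes boot code + 64-byte partition table + 2-byte magic:
dd if=/dev/sda of=/dev/sdb bs=512 count=1

# Make the kernel re-read the new partition table:
blockdev --rereadpt /dev/sdb
```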

 If this looks good, do the d-i menus let me start out with a degraded
 raid1 with only one drive, and add the second drive later?

Don't know.  Never tried.

 Thanks to your patient help, I'm gradually getting my head around this
 new world.  Thank you.

--
Len Sorensen





Re: LVM root?

2006-10-13 Thread Albert Dengg

On Fri, Oct 13, 2006 at 08:36:17AM -0400, [EMAIL PROTECTED] wrote:
 In Debian, is it only the loss of / that requires a reinstall?  What
 happens if /var (especially /var/lib) or /usr get corrupted?  Doesn't
 that also make the system extremely difficult/time-consuming to restore?
/var could definitely be a problem since that's where the package database
is stored...
/usr on the other hand should not be a big problem (ok, /usr/lib can make
things more difficult)...
but on the other hand...
you _should_ have a backup (i don't always have one...)
and recently i restored a fucked up system (libc was broken after a crash
while a security update was underway)... so there are always ways to
do things (though i could have lived without that experience).

yours
albert

-- 
Albert Dengg [EMAIL PROTECTED]





Re: LVM root?

2006-10-13 Thread dtutty
Sorry Len for the off-list post.  Wrong 'to', no excuses.
Doug.

On Fri, Oct 13, 2006 at 10:11:19AM -0400, Lennart Sorensen wrote:
 On Thu, Oct 12, 2006 at 07:14:01PM -0400, [EMAIL PROTECTED] wrote:
  Are there other issues in choosing ext3 vs jfs?
 
 You might be able to install to JFS, but I am fairly sure you can't boot
 from JFS, so using JFS for / (I still don't see the point of a fancy
 filesystem for an almost static small filesystem) requires using a
 seperate /boot with ext2 or ext3 anyhow.  Why not just have ext3 on /
 and use JFS if you feel like it for the other volumes?
 

Yes you can boot directly to JFS, have done it for a while now.  Rock
solid, even full filesystem checks are quite fast (which I think was one
of the design criteria) and since I live in the country with no UPS,
power-failure resilience is an issue (I know, not only should I buy two
disks, but a UPS as well).  

JFS is also faster (very noticeable on a 486, I'm sure less so on an
Athlon).  






Re: LVM root?

2006-10-13 Thread Albert Dengg

On Fri, Oct 13, 2006 at 10:07:29AM -0400, Lennart Sorensen wrote:
 You can only add drives to raid5 with a good hardware raid card I
 believe (fake raids generally suck at raid5 since it is rather cpu
 intensive and needs a dedicated xor engine).  Raid1 is simpler, and is
 all the boot loaders currently support (since with raid1 the disks are
 identical, the boot loader just ignores raid and reads from one disk).
there is code in the kernel to do just that and even though it is marked
experimental i've read on this list that it works...
though it is time consuming, especially when the system is under load...

... 
 I always think if you are on a limited budget, stick with your current
 machine until you can afford to buy something worth your money.  My main
 machine is an athlon 700 with 768MB ram and a lot of disk space (running
 raid1 on everything).  I have a mythtv system with an athlon 1700+ with
 a pair of 120GB SATA drives running raid1.  The time it would take to
 reinstall a machine and restore from backups is worth an awful lot more
 than a few dollars for a second disk.
well...
a full restore of one of my machines (actually my mother's) took me 2
hours... (fucked up libs)...
and that only because i didn't have a live cd handy and no internet
connection, so i had to do it with what i had available on the initrd...
i typically don't charge more than say 30€/hour (the new disk i
installed a few weeks earlier cost about 60€)...
and most of the time i was just reading a book beside the machine
(which does not mean don't do raid... the machine does raid1 for nearly all 
data besides some more or less temporary files (they don't count)... this
system _will_ most likely crash with a faulty disk since, for the
sake of availability, 2 ide disks run on the same channel)

yours
albert

-- 
Albert Dengg [EMAIL PROTECTED]





Re: LVM root?

2006-10-13 Thread Albert Dengg

On Fri, Oct 13, 2006 at 10:44:02AM -0400, [EMAIL PROTECTED] wrote:
 Sorry Len for the off-list post.  Wrong 'to', no excuses.
 Doug.
 
 On Fri, Oct 13, 2006 at 10:11:19AM -0400, Lennart Sorensen wrote:
  On Thu, Oct 12, 2006 at 07:14:01PM -0400, [EMAIL PROTECTED] wrote:
   Are there other issues in choosing ext3 vs jfs?
  
  You might be able to install to JFS, but I am fairly sure you can't boot
  from JFS, so using JFS for / (I still don't see the point of a fancy
  filesystem for an almost static small filesystem) requires using a
  seperate /boot with ext2 or ext3 anyhow.  Why not just have ext3 on /
  and use JFS if you feel like it for the other volumes?
  
 
 Yes you can boot directly to JFS, have done it for a while now.  Rock
 solid, even full filesystem checks are quite fast (which I think was one
 of the design criteria) and since I live in the country with no UPS,
 power-failure resilience is an issue (I know, not only should I buy two
 disks, but a UPS as well).  
 
 JFS is also faster (very noticeable on a 486, I'm sure less so on an
 Athlon).  
i personally use xfs... though for /boot (/ is on lvm), i use ext3...
(historically grub couldn't read xfs, and why change...)
besides, i can mount /boot with data=journal (and most of the time ro)
(performance does not count for /boot i think)

so fs corruption of /boot is unlikely
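As an fstab sketch of that setup (the md0 device name is an assumption):

```
# /etc/fstab entry: /boot on ext3, read-only, full data journaling
/dev/md0  /boot  ext3  ro,data=journal  0  2
```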

yours
albert

-- 
Albert Dengg [EMAIL PROTECTED]





Re: LVM root?

2006-10-13 Thread dtutty
On Fri, Oct 13, 2006 at 10:28:10AM -0400, Lennart Sorensen wrote:
 On Fri, Oct 13, 2006 at 08:36:17AM -0400, [EMAIL PROTECTED] wrote:
  In Debian, is it only the loss of / that requires a reinstall?  What
  happens if /var (especially /var/lib) or /usr get corrupted?  Doesn't
  that also make the system extremely difficult/time-consuming to restore?
 
 What would cause such corruption?  I have a machine running the same
 debian install I did on it in 1998, and it has simply upgraded ever
 since.  Still runs perfectly.  Corruption simply doesn't seem to happen
 if you stick with debian packages for everything (or keep other stuff in
 /usr/local where it belongs) with no exceptions.

That would be if / was on raid1 but /var wasn't and the drive that /var
was on failed and had to be replaced.

 
  If I were to get a second 80 GB SATA drive so that I could raid1 /,
  might I just as well have the whole base system on raid1?  
 
 I raid1 everything.  Why make exceptions unless you really happen to be
 working with large temporary data files that would be trivial to
 regenerate, and for which you would rather have the speed of raid0 (in
 which case make a raid0 fast data drive, and raid1 everything else)
 

Ok Len,  not only have you convinced me that this is the way to go (that
part wasn't too hard) but I have learned enough to be comfortable doing
it.  I'll order the second 80 GB drive today.
 
 Why not just a2 and b2.  Logical partitions are a pain in the ass to
 restore if you ever have to replace a disk.  If you just have primary
 partitions, then simply dd'ing the first 512byte sector gives you the
 boot sector and the partition table all at once.  If you have any
 logical paritions you have to start doing the primary partitions, then
 reread, then do the first 512bytes of the extended partition, and then
 reread and than any further extended partitions inside those, and it
 just becomes a mess.  Two primaries with the two raids are much simpler,
 and even gains you a few MB of disk space since you don't have waste a
 track for the extended partition table.  Logical partitions are a rather
 wasteful design.
 
Good point, wilco.

  Thanks to your patient help, I'm gradually getting my head around this
  new world.  Thank you.
 





Re: LVM root?

2006-10-13 Thread Lennart Sorensen
On Fri, Oct 13, 2006 at 04:51:54PM +0200, Albert Dengg wrote:
 there is code in the kernel to do just that and even though it is marked
 experimental i've read on this list that it works...
 though it is time consuming, especially when the system is under load...

Neat.  I will have to play with that some day.

 well...
 a full restore of one of my machine (acctually my mothers) took me 2
 hours... (fucked up libs)...
 and that only because i didn't have a live cd handy and no internet
 connection so i had to do it with what i had avalible on the initrd...
 i typically don't charge more than say 30€/hour (the new disk i
 installed a few weeks earlier cost about 60€)...
 and most of the time i was just reading a book besides the machine
 (which does not mean don't do raid...the machine does raid1 for nearly all 
 data besides some more or less temporary files (they don't count...this
 system _will_ most likly crash with a faulty disc since for the
 reason of availability, 2 ide disks run on the same channel)

Most machines have at least two ide ports.  Just spread every raid1 over
different controllers.  If you lose a disk, you may lose a disk from
two different raid1's but at least the system doesn't die.  I can't
think of any raid1 setup where you would have to put both disks on one
controller unless it is your only controller.

--
Len Sorensen





Re: LVM root?

2006-10-13 Thread Albert Dengg

On Fri, Oct 13, 2006 at 11:45:44AM -0400, Lennart Sorensen wrote:
... 
 Most machines have at least two ide ports.  Just spread every raid1 over
 different controllers.  If you loose a disk, you may loose a disk from
 two different raid1's but at least the system doesn't die.  I can't
 think of any raid1 setup where you would have to put both disks on one
 controller unless it is your only controller.
the problem is that the ide cd recorder makes problems when not alone on
the bus... and basically, i don't really care... that's not the reason
for the raid... a reboot isn't a real problem for a workstation... the
bigger problem is that for some time now it sometimes restarts without
reason (just like someone pressed the reset button)... i'll have to
investigate that...

and please, stop CC'ing me...i do read the list

yours
albert

-- 
Albert Dengg [EMAIL PROTECTED]



Re: LVM root?

2006-10-12 Thread Lennart Sorensen
On Wed, Oct 11, 2006 at 05:20:58PM -0400, [EMAIL PROTECTED] wrote:
 I am lucky in that my backup-set size is small; it's been tight on a 100
 MB Zip disk and absolutely essential stuff (stuff I need to be able to
 access absolutely from anywhere, any time) fits on one floppy in gzipped
 plain text.  My approach for the new root drive was to buy the most
 reliable drive I could.  If it dies, get a new drive and reinstall,
 restore from backup and carry on.  
 
 Your approach using Raid may be overkill for me, I don't know.  
 
 The board itself has hardware SATA raid available.  If I go for raid,
 then I'll ask here for the advantages/disadvantages.

Unless you have a high end server board, you do not have onboard
hardware raid.  You have onboard fake raid (which is software raid done
in the bios and the windows driver).  Linux's software raid is faster,
and more portable than fake raid, and fake raids don't have particularly
good support in linux yet.  The only time they make sense to use is on
dual boot systems where you want to use the fake raid for windows, in
which case having linux able to use it too can make life simpler.

 More to the point for me, though, is where can I get current howtos or
 guides on fixing problems when things are in raid or LVM?  Its a whole
 new world for me and the LDP HOWTOs are too out of date, and
 debian-reference doesn't cover it.  

The installer supports setting it all up.  It isn't very hard.  Create
two partitions on each disk (identical layout for each disk) and set both
partitions to be used for software raid.  Then select the software raid
menu in the partition menu and create a raid1 on each pair of matching
partitions.  Return to the partition menu, set the use type of the large
second raid to be a physical volume for lvm, then go to the lvm menu and
set up your lvm's for swap, home, usr, and so on.  Finally, return and
set the use type of the first raid to / (root) (I recommend ext3), set
the mount point and filesystem for each of the lvm's, and set the swap
lvm to be used for swap.  Then exit the partitioner and complete the
install.
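For reference, roughly the same layout can be built by hand with mdadm and the lvm tools.  This is only a sketch: the device names (/dev/sda, /dev/sdb), the VG name (vg0) and the sizes are examples, not anything the installer mandates.

```shell
# Two raid1 arrays: a small one for / and a big one to carry LVM.
# Assumes matching partitions already exist on both disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Turn the big array into a physical volume and carve out LVs.
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 1G  -n swap vg0
lvcreate -L 10G -n home vg0

# Filesystems: ext3 on /, swap on its LV.
mkfs.ext3 /dev/md0
mkswap /dev/vg0/swap
mkfs.ext3 /dev/vg0/home
```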

 At this point I'm leaning to the single disk, regular / at 512 MB (/tmp
 is separate LV) and the rest PV.
 
 Since my / doesn't change much, when I add a second disk, just copy /
 and have a non-raid copy available to boot into.  When I do make
 changes, just copy it over; could even make it part of the regular
 backup strategy.  

raid is not a substitute for backups.  It is a way to prevent downtime
and data loss due to disk failure.  Preventing loss due to corruption or
user error is what backups are for.  Given that I can buy 250GB disks
for $90 (CDN), I can't justify not using raid1: a disk failure would
require doing a reinstall, which would take much longer than I could
ever imagine being willing to work for $90, and I would rather be using
my system than doing a reinstall.

--
Len Sorensen


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: LVM root?

2006-10-12 Thread dtutty
Thanks Len,

comments embedded below.

On Thu, Oct 12, 2006 at 09:26:53AM -0400, Lennart Sorensen wrote:
 On Wed, Oct 11, 2006 at 05:20:58PM -0400, [EMAIL PROTECTED] wrote:
  
  The board itself has hardware SATA raid available.  If I go for raid,
  then I'll ask here for the advantages/disadvantages.
 
 Unless you have a high end server board, you do not have onboard
 hardware raid.  You have onboard fake raid (which is software raid done
 in the bios and the windows driver).  Linux's software raid is faster,
 
Board is an Asus M2N-SLI Deluxe (AM2); it says it has hardware raid
(RAID 0, RAID 1, RAID 0+1, RAID 5, and JBOD) via the onboard NVIDIA
MediaShield RAID controller.  This sounds like hardware raid to me and
is configured via the bios menus.

  More to the point for me, though, is where can I get current howtos or
  guides on fixing problems when things are in raid or LVM?  Its a whole
  new world for me and the LDP HOWTOs are too out of date, and
  debian-reference doesn't cover it.  
 
 The installer supports setting it all up.  It isn't very hard...

Can you give me either a URL or a thumbnail sketch of how to deal with a
disk failure if I set it up as you suggest?


You suggest ext3 for the / system.  Why would I not just use JFS for
everything?

Thanks,

Doug.





Re: LVM root?

2006-10-12 Thread Jean-Luc Coulon (f5ibh)

Le 12.10.2006 16:09:26, [EMAIL PROTECTED] a écrit :

Thanks Len,

comments embedded below.

On Thu, Oct 12, 2006 at 09:26:53AM -0400, Lennart Sorensen wrote:
 On Wed, Oct 11, 2006 at 05:20:58PM -0400, [EMAIL PROTECTED]
wrote:
 
  The board itself has hardware SATA raid available.  If I go for
raid,
  then I'll ask here for the advantages/disadvantages.

 Unless you have a high end server board, you do not have onboard
 hardware raid.  You have onboard fake raid (which is software raid
done
 in the bios and the windows driver).  Linux's software raid is
faster,

Board is an Asus M2N-SLI Deluxe (AM2); it says it has hardware raid
(RAID 0, RAID 1, RAID 0+1, RAID 5, and JBOD) via the onboard NVIDIA
MediaShield RAID controller.  This sounds like hardware raid to me and
is configured via the bios menus.


*Real* hardware raid doesn't need an OS layer / driver to work.
This kind of raid relies on the BIOS *and* on a Windows driver.
It is more a raid feature enabled in the BIOS and managed by the  
Windows driver.

Linux can support this BIOS feature or not, depending on the chipset.

Most of the time, you have to disable the raid in the BIOS and use pure  
software raid.




  More to the point for me, though, is where can I get current
howtos or
  guides on fixing problems when things are in raid or LVM?  Its a
whole
  new world for me and the LDP HOWTOs are too out of date, and
  debian-reference doesn't cover it.

 The installer supports setting it all up.  It isn't very hard...

Can you give me either a URL or a thumbnail sketch of how to deal with
a
disk failure if I set it up as you suggest?

In the case of a raid1, if a disk fails you get a message from the system.
If you have spare disks (configured and installed as such in the raid),
the raid is rebuilt on the spare disk. You will notice disk activity  
related to this mirroring. Then you can wait until you can shutdown  
your system and remove/replace the defective disk. Then restart the  
system and use mdadm to reinstall the disk in the array.
If you have no spare, the raid is degraded and you are running on the  
safe disk without any redundancy.
You have, the same way as before, to shutdown your system and remove /  
replace the defective disk.


In case of a raid0, you have no redundancy and the filesystem relying  
on this raid will die.


Remarks :
- SATA is said to support hotplug, but most motherboards don't support
hotplugging of disks on their SATA controller. This is why you have to
shutdown your system.
- There are disk failures and there are controller failures. If both
your disks are on the same controller, your system will crash.
- The swap has to be on the raid also (or on a logical volume of LVM  
which is built over the raid) otherwise, you will probably crash your  
system at the failure.


You can download and install mdadm : the doc files in
/usr/share/doc/mdadm contain valuable information.
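A couple of quick checks are worth knowing before anything fails; /dev/md0 is just an example array name here:

```shell
# Overall state of all md arrays (look for [UU] versus [U_])
cat /proc/mdstat

# Detailed state of one array
mdadm --detail /dev/md0

# mdadm's monitor daemon mails the address set in mdadm.conf on failure
grep MAILADDR /etc/mdadm/mdadm.conf
```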





You suggest ext3 for the / system.  Why would I not just use JFS for
everything?

It is often easier to repair or get access to ext3 (which is
ext2+journal) from a system you have booted from a live CD, just in
case of a weird problem on the filesystem.


Regards

Jean-Luc




Re: LVM root?

2006-10-12 Thread Lennart Sorensen
On Thu, Oct 12, 2006 at 04:35:02PM +0200, Jean-Luc Coulon (f5ibh) wrote:
 *Real* hardware raid doesn't need an OS layer / driver to work.
 This kind of raid relies on the BIOS *and* on a Windows driver.
 It is more a raid feature enabled in the BIOS and managed by the  
 Windows driver.
 Linux can support this BIOS feature or not, depending on the chipset.
 
 Most of the time, you have to disable the raid in the BIOS and use pure  
 software raid.
 
 In the case of a raid1, if a disk fails you get a message from the system.
 If you have spare disks (configured and installed as such in the raid),
 the raid is rebuilt on the spare disk. You will notice disk activity  
 related to this mirroring. Then you can wait until you can shutdown  
 your system and remove/replace the defective disk. Then restart the  
 system and use mdadm to reinstall the disk in the array.
 If you have no spare, the raid is degraded and you are running on the  
 safe disk without any redundancy.
 You have, the same way as before, to shutdown your system and remove /  
 replace the defective disk.

 In case of a raid0, you have no redundancy and the filesystem relying  
 on this raid will die.
 
 Remarks :
 - SATA is said to support hotplug, but most motherboards don't support
 hotplugging of disks on their SATA controller. This is why you have to
 shutdown your system.

Actually most SATA controllers DO support hotplug.  The linux kernel
doesn't support SATA hot plug yet (although supposedly that is being
worked on).

 - There are disk failures and there are controller failures. If both
 your disks are on the same controller, your system will crash.

If your controller fails, most likely the system will crash when the
driver gets very unexpected results.  This is a rather unusual case, and
very expensive to protect against.  Disk failures are much more common
and fortunately much easier to protect against.

 - The swap has to be on the raid also (or on a logical volume of LVM  
 which is built over the raid) otherwise, you will probably crash your  
 system at the failure.

That is for sure.

 You can download and install mdadm : the doc files in
 /usr/share/doc/mdadm contain valuable information.
 
 It is often easier to repair or get access to ext3 (which is
 ext2+journal) from a system you have booted from a live CD, just in
 case of a weird problem on the filesystem.

Also grub only supports some filesystems for /boot, so it makes sense to
keep / as something supported and simple.  After all if your / is small
and doesn't change much why would you need any fancy filesystem for it?

--
Len Sorensen





Re: LVM root?

2006-10-12 Thread Lennart Sorensen
On Thu, Oct 12, 2006 at 10:09:26AM -0400, [EMAIL PROTECTED] wrote:
 Board is an Asus M2N-SLI Deluxe (AM2); it says it has hardware raid
 (RAID 0, RAID 1, RAID 0+1, RAID 5, and JBOD) via the onboard NVIDIA
 MediaShield RAID controller.  This sounds like hardware raid to me and
 is configured via the bios menus.

It is fake raid.  It is not hardware raid.  At best it may have an xor
engine in the chipset that the driver/bios can use to reduce the cpu
load.  Marketing people think anything done in the bios makes it
hardware.  They would claim a winmodem is entirely a hardware modem too.

 Can you give me either a URL or a thumbnail sketch of how to deal with a
 disk failure if I set it up as you suggest?

If a disk fails, mdadm will send an email about it (you can see it in
/proc/mdstat too).  You then shutdown at a convenient time, replace the
broken disk, boot up again and copy the partition table from the working
disk to the new disk (making the still working disk now be the first
disk makes this easier) using something like dd if=/dev/sda of=/dev/sdb
bs=512 count=1, and then reread the partition table with hdparm -z
/dev/sdb, then you ask mdadm to add the new partitions on the new sdb
using mdadm --add /dev/md0 /dev/sdb1; mdadm --add /dev/md1 /dev/sdb2

Then it will resync the mirror, and when done it will be all back to
normal.
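Put together, the replacement procedure sketched above looks like this (device names are examples).  Note that dd of the first 512 bytes only copies a classic MBR primary partition table; `sfdisk -d /dev/sda | sfdisk /dev/sdb` is a safer general alternative.

```shell
# 1. Copy the partition table from the surviving disk to the new one
dd if=/dev/sda of=/dev/sdb bs=512 count=1

# 2. Have the kernel re-read the new partition table
hdparm -z /dev/sdb

# 3. Add the fresh partitions back into their arrays
mdadm --add /dev/md0 /dev/sdb1
mdadm --add /dev/md1 /dev/sdb2

# 4. Watch the mirrors resync
watch cat /proc/mdstat
```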

With a hardware raid card (not fakeraid), you would be able to just
hotswap the broken drive, and the raid would start resyncing.  No user
intervention required.

 You suggest ext3 for the / system.  Why would I not just use JFS for
 everything?

Why would you use JFS for anything?

ext3 is solid, reliable, decent performance, and supported by everything
(including the boot loader).

--
Len Sorensen





Re: LVM root?

2006-10-12 Thread dtutty
On Wed, Oct 11, 2006 at 05:20:58PM -0400, dtutty wrote:
 On Wed, Oct 11, 2006 at 12:27:12PM -0400, Lennart Sorensen wrote:
  On Wed, Oct 11, 2006 at 06:04:43PM +0200, Daniel Tryba wrote:
   Adding a disk creates another copy of /, and with the newer
   kernels a raid5 array can be expanded, so it can be used by the LVM.
   
 
Where can I find documentation on what is currently possible with raid
and lvm?  The software-raid howto says that resizing is impossible/very
difficult.  

If I install without raid, how difficult is it to add raid1 to / when I
add a second drive later on?  I had planned that my first upgrade (in a
few months) would be a second 1GB ram stick then a TV tuner card, then a
second larger drive for space/performance when removing commercials from
old VHS tapes (and making DVDs).  By then, larger drives will be
cheaper.

The downside of raid1 to me has always been that if I want to add 'a'
drive, I have to add 'a pair/set'.  I would like to be able to do this:

Start with one drive with lvm
Add a second drive and provide raid redundancy to / (e.g. raid1)
and improve performance to the video-editing working directory
(since I havn't got it installed, I don't know if this is /home
or /var/tmp or what)
Add a third or more drives to add capacity while getting raid
redundancy.  This sounds like raid5.  Later-added drives will probably
be larger capacity.

If one can add a drive to raid5 and extend the pv onto it, then extend
the LV and the filesystem on it, while maintaining data redundancy,
that's almost perfect.  Perfect would be transparent data integrity
verification.

The question for me, on a limited budget, is how to start.  One disk,
with everything in place for adding a second, larger disk later.






Re: LVM root?

2006-10-12 Thread dtutty
On Thu, Oct 12, 2006 at 04:35:02PM +0200, Jean-Luc Coulon (f5ibh) wrote:
 
 You suggest ext3 for the / system.  Why would I not just use JFS for
 everything?
 It is often easier to repair have access to ext3 (which is  
 ext2+journal) from a system you have booted from a live CD, just in  
 case of a weird problem on the filesystem.

Would that live CD (or the etch install-cd) be able to work with a weird
problem on the filesystem if its on raid1?  Since the installer can
install to jfs anyway, and my previous bootfloppies predate SATA so
won't work, is this an issue?  

Are there other issues in choosing ext3 vs jfs?

Doug.





Re: LVM root?

2006-10-12 Thread Goswin von Brederlow
Lennart Sorensen [EMAIL PROTECTED] writes:

 On Mon, Oct 09, 2006 at 03:01:24AM +0200, Goswin von Brederlow wrote:
 You need an initrd or initramfs even if you compile a custom kernel.

 Why would you bother building your own kernel for most machines?  Why
 would you not use an initramfs?

Because it is way too fragile (fails half the time) and you can't use
init=/bin/sh anymore.

 I like to boot into my / with init=/bin/sh, have an editor, netcat,
 the lvm tools and all that available to look around and fix things in
 case something does go wrong. With a standard initrd that is pretty
 much an impossibility and you need that for / on lvm.

 If the initramfs is done right (initramfs-tools seems to do the job
 well), then you should still be able to boot and get at your root
 filesystem and do that.

Maybe I've been using the wrong tools back then (initramfs is somewhat
new) but that never worked right for me.

 You can put swap on lvm. You should also think about suspend to disk,
 which needs enough swap to store all active memory. Twice your ram
 isn't a bad idea. Same as ram is pretty much a must.

 Why would anyone without a laptop care?

Seems that some desktop users like to not shut down all their apps
when they turn off the system for the night.

 You might want to stripe the volume for editing accross both
 disks. Lvm can do that without you having to resrot to raid0. Gives
 you more speed on file I/O. It's a per volume thing so you can keep
 /usr, /var, /home on the first disk and just stripe the editing LV
 when you get the 2nd disk.

 LVM can do that, but as soon as you start to add to the LVM later you
 lose it.  In general it isn't recommended to use the striping features
 in LVM.

You do? That would be a bug or at least a missing feature then. Recently
there has been some work on raid support inside lvm (raid1 support was
added) so I don't think it is totally deprecated.

 --
 Len Sorensen

MfG
Goswin





Re: LVM root?

2006-10-12 Thread Goswin von Brederlow
Lennart Sorensen [EMAIL PROTECTED] writes:

 On Wed, Oct 11, 2006 at 07:55:07AM -0400, [EMAIL PROTECTED] wrote:
 Isn't there a performance hit doing this?  If a programme is putting stuff
 in /tmp to otherwise reduce its memory footprint, does it make sense to
 circumvent that and put /tmp back in memory?  If a program is accessing

Older programs have odd defaults for what counts as too big a memory
footprint. Like sort starting to use tempfiles at 1MB. Compared to a
browser using 500MB that is laughable. So a lot of the time it is
perfectly fine to keep the tempfiles in ram. And if not, then it swaps
them.

 both its /tmp file and another working file, with /tmp effectively in
 swap, one has no control over which spindle that page is on (assuming
 more than one disk) whereas if /tmp and /var/tmp are on a different
 spindle from the working directory they could be accessed at the same
 time.  

Then put your swap on a different spindle from your working directory. :)
 
 I wouldn't want to use /tmp for a temporary iso file.  If I get the iso
 created and then have a power failure, I don't want it gone when
 rebooting cleans out /tmp.  I thought that's what /var/tmp is for.

Change your nautilus tmp dir to something more permanent then.
 
 Disk space is cheap.  Other than saving space in /tmp when it's not
 needed, is there an advantage to having it use tmpfs instead of a
 'normal' device (partition, LV, whatever)?

Could be faster. Never measured it. Saving that extra partition or LV
was always a good enough reason for me.

Another reason is that tmpfs can never fail the fsck on boot or
otherwise become a corrupted filesystem and cause troubles on
boot. You get an instant fresh one every time. And you can't get I/O
errors from a broken disk.

 /var/tmp is for things you do NOT want to lose on a reboot, but which
 are otherwise still temporary files.  /tmp is for stuff that you really
 don't care about, and which a reboot should remove (I know there used to
 be a setting in debian that would clean /tmp on boot automatically).
 For generating CD images I think I would use my home dir or some other
 data storage location which makes sense for working on large files.
 /tmp isn't it and never was.

I always pipe the data directly to the burner. growisofs does it by
default. No temporary iso to create, no time wasted. Nautilus should
really do that too.
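For example, a direct burn with growisofs looks roughly like this (the device name and directory are examples; -R/-J add Rock Ridge and Joliet extensions):

```shell
# Master the filesystem and stream it straight to the burner,
# with no intermediate iso file on disk
growisofs -Z /dev/dvd -R -J /path/to/files
```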

But even so, my /tmp is limited to 1G currently so creating an CD iso
in there is no problem. With 2G ram it probably wouldn't even swap
anything if I don't have galeon running.

MfG
Goswin





Re: LVM root?

2006-10-12 Thread Goswin von Brederlow
[EMAIL PROTECTED] writes:

 On Sun, Oct 08, 2006 at 10:15:22PM +0200, Jean-Luc Coulon (f5ibh) wrote:
 With regular partitions, if the partition table gets corrupted, it's
 simple to fix if I have what it looked like before using sfdisk -d (I
 never ran into this problem after the table corruption that prompted me
 to keep this information on a floppy with other essential backups).

 What is the LVM equivalent of this?

There is /etc/lvm/backup/* and /etc/lvm/archive/* that keeps a backup
and a history of the metadata respectively for your volume group. If
your lvm meta data ever gets corrupted or you did something stupid
like erasing your $HOME logical volume, you can use vgcfgrestore to put
it back in order.
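A sketch of what that looks like in practice, with vg0 as an example VG name (the archive filename is illustrative; yours will differ):

```shell
# See which archived metadata versions exist for the VG
vgcfgrestore --list vg0

# Restore the latest backup from /etc/lvm/backup
vgcfgrestore vg0

# Or restore one particular archived version
vgcfgrestore -f /etc/lvm/archive/vg0_00042-1234567890.vg vg0
```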

 The whole HOWTO package is getting woefully out of date.  LVM-HOWTO
 doesn't cover recovering from errors, MULTI-DISK-HOWTO doesn't cover LVM,
 and the various recovery howtos focus on bare-metal recovery, not fixing
 a broken system.

 Or, is it the case now that LVM is so reliable that any errors will be
 hardware and unrecoverable anyway, requiring a bare-metal recovery?

It's been years without any corruption for me. I had hardware fail or
user errors where I needed to restore things but I can't remember lvm
screwing up its meta data once.

 Thanks,

 Doug.

MfG
Goswin





Re: LVM root?

2006-10-12 Thread Goswin von Brederlow
[EMAIL PROTECTED] writes:

 I am lucky in that my backup-set size is small; it's been tight on a 100
 MB Zip disk and absolutely essential stuff (stuff I need to be able to
 access absolutely from anywhere, any time) fits on one floppy in gzipped
 plain-text.  My approach for the new root drive was to buy the most
 reliable drive I could.  If it dies, get a new drive and reinstall,
 restore from backup and carry on.  

 Your approach using Raid may be overkill for me, I don't know.  

If you have 2 disks (and you said you would have anyway) there really
is no reason not to use raid.

If you have 2 disks for the extra size you don't want to use raid1 for
the main data though. Keep it to the essentials. So you give up say
10G of each disk for raid1 for the essential stuff and then raid0 for
the rest for the video data. On a (single) disk failure you would
lose your video data but not your system. Pop in a new disk, restore
the video data and just keep going.

It takes only a few minutes to think this through and then select it
in Debian-Installer but when a disk fails it saves you hours of
reinstalling. You give up 10G space for future hours of work. I find
that always worth it.

[Raid is no replacement for a real backup though.]

 The board itself has hardware SATA raid available.  If I go for raid,
 then I'll ask here for the advantages/disadvantages.

I doubt that. All the desktop boards with onboard raid have only
softraid. They can boot from the raid but it is just software support
in the bios and windows/linux has to have its own driver for it. The
proper linux software raid is better than emulating the bios software
so that is preferable.

 More to the point for me, though, is where can I get current howtos or
 guides on fixing problems when things are in raid or LVM?  Its a whole
 new world for me and the LDP HOWTOs are too out of date, and
 debian-reference doesn't cover it.  

 At this point I'm leaning to the single disk, regular / at 512 MB (/tmp
 is separate LV) and the rest PV.

 Since my / doesn't change much, when I add a second disk, just copy /
 and have a non-raid copy available to boot into.  When I do make
 changes, just copy it over; could even make it part of the regular
 backup strategy.  

 Thanks for the ideas,

You could also set up / as a degraded raid1. That means you set it up
with one disk there and one missing. When you get your second disk you
just add it with mdadm --add /dev/md0 /dev/sdb1 and a few minutes
later the raid1 will be fully up.
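The mdadm keyword for the absent member is literally `missing`; a sketch with example device names:

```shell
# Create the raid1 with one slot empty (degraded but fully usable)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

# When the second disk arrives, add its partition and let it sync
mdadm --add /dev/md0 /dev/sdb1
cat /proc/mdstat    # shows the rebuild progress
```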

Does anyone know if the D-I supports this from the menus?

 Doug.

MfG
Goswin





Re: LVM root?

2006-10-12 Thread Goswin von Brederlow
[EMAIL PROTECTED] writes:

 On Wed, Oct 11, 2006 at 05:20:58PM -0400, dtutty wrote:
 On Wed, Oct 11, 2006 at 12:27:12PM -0400, Lennart Sorensen wrote:
  On Wed, Oct 11, 2006 at 06:04:43PM +0200, Daniel Tryba wrote:
   Adding a disk creates another copy of /, and with the newer
   kernels a raid5 array can be expanded, so it can be used by the LVM.
   
  
 Where can I find documentation on what is currently possible with raid
 and lvm?  The software-raid howto says that resizing is impossible/very
 difficult.  

 If I install without raid, how difficult is it to add raid1 to / when I
 add a second drive later on?  I had planned that my first upgrade (in a
 few months) would be a second 1GB ram stick then a TV tuner card, then a
 second larger drive for space/performance when removing commercials from
 old VHS tapes (and making DVDs).  By then, larger drives will be
 cheaper.

Not much. But it requires some fiddling around. What you do is you set
up raid1 on the second disk in degraded mode (one disk missing). Then
you copy your system over and reboot into the raid (still in degraded
mode). If everything works then you add the old disk to the raid.
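A rough sketch of that migration, with /dev/sda as the old disk and /dev/sdb as the new one (all names are examples; the fstab/bootloader step is deliberately left as a comment since it depends on your setup):

```shell
# Degraded raid1 on the new disk only
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0

# Copy the running system over
mount /dev/md0 /mnt
cp -ax / /mnt

# ...edit /mnt/etc/fstab and the bootloader to use /dev/md0,
# then reboot into the raid...

# Finally absorb the old disk; the mirror resyncs in the background
mdadm --add /dev/md0 /dev/sda1
```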

 The downside of raid1 to me has always been that if I want to add 'a'
 drive, I have to add 'a pair/set'.  I would like to be able to do this:

 Start with one drive with lvm
 Add a second drive and provide raid redundancy to / (e.g. raid1)
   and improve performance to the video-editing working directory
   (since I havn't got it installed, I don't know if this is /home
   or /var/tmp or what)
 Add a third or more drives to add capacity while getting raid
 redundancy.  This sounds like raid5.  Later-added drives will probably
 be larger capacity.

Indeed, with raid1 you have to add disks in pairs, triplets, or however
many copies you have set up. So when you go from 2 disks to 3 disks
you convert from raid1 to raid5. The kernel has recently gotten
support to do this on-the-fly in the background so you don't even have
to unmount and shutdown the raid anymore.

You do have a problem with the disk size though. The raid1 and raid5
will always be as small as the smallest disk. So when you get a bigger
disk later you would make a partition roughly as big as the old disks
for the raid and the rest you can use for something else.

For example I have a raid5 over all disks the size of the smallest. A
raid1 on the bigger disks for / and swap and whatever is still left
over I use for a debian mirror without raid. If a disk fails I lose
nothing that can't be easily replaced.

 If one can add a drive to raid5 and extend the pv onto it, then extend
 the LV and the filesystem on it, while maintaining data redundancy,
 that's almost perfect.  Perfect would be transparent data integrity
 verification.

 The question for me, on a limited budget, is how to start.  One disk,
 with everything in place for adding a second, larger disk later.

Sounds good to me.

MfG
Goswin





Re: LVM root?

2006-10-12 Thread Goswin von Brederlow
[EMAIL PROTECTED] writes:

 On Mon, Oct 09, 2006 at 03:01:24AM +0200, Goswin von Brederlow wrote:
  
 Also amd64 has /emul/ia32-linux/ taking up some space if you need
 32bit support libs.
 
 If I find I need them later (when did /emul get in the FHS?), can I put
 them on a separate LV mounted on /emul?

 Doug.

Sure. Or just mkdir /usr/emul && mount --bind /usr/emul /emul.

MfG
Goswin





Re: LVM root?

2006-10-11 Thread dtutty
On Tue, Oct 10, 2006 at 09:35:54AM -0400, Lennart Sorensen wrote:
 
 I thought etch now defaulted to tmpfs for /tmp meaning putting it in ram
 where it is faster, and backed by swap if needed.
 

Isn't there a performance hit doing this?  If a programme is putting stuff
in /tmp to otherwise reduce its memory footprint, does it make sense to
circumvent that and put /tmp back in memory?  If a program is accessing
both its /tmp file and another working file, with /tmp effectively in
swap, one has no control over which spindle that page is on (assuming
more than one disk) whereas if /tmp and /var/tmp are on a different
spindle from the working directory they could be accessed at the same
time.  

I wouldn't want to use /tmp for a temporary iso file.  If I get the iso
created and then have a power failure, I don't want it gone when
rebooting cleans out /tmp.  I thought that's what /var/tmp is for.

Disk space is cheap.  Other than saving space in /tmp when it's not
needed, is there an advantage to having it use tmpfs instead of a
'normal' device (partition, LV, whatever)?

Thanks,

Doug.





Re: LVM root?

2006-10-11 Thread Lennart Sorensen
On Wed, Oct 11, 2006 at 07:55:07AM -0400, [EMAIL PROTECTED] wrote:
 Isn't there a performance hit doing this?  If a programme is putting stuff
 in /tmp to otherwise reduce its memory footprint, does it make sense to
 circumvent that and put /tmp back in memory?  If a program is accessing
 both its /tmp file and another working file, with /tmp effectively in
 swap, one has no control over which spindle that page is on (assuming
 more than one disk) whereas if /tmp and /var/tmp are on a different
 spindle from the working directory they could be accessed at the same
 time.  
 
 I wouldn't want to use /tmp for a temporary iso file.  If I get the iso
 created and then have a power failure, I don't want it gone when
 rebooting cleans out /tmp.  I thought that's what /var/tmp is for.
 
 Disk space is cheap.  Other than saving space in /tmp when it's not
 needed, is there an advantage to having it use tmpfs instead of a
 'normal' device (partition, LV, whatever)?

/var/tmp is for things you do NOT want to lose on a reboot, but which
are otherwise still temporary files.  /tmp is for stuff that you really
don't care about, and which a reboot should remove (I know there used to
be a setting in debian that would clean /tmp on boot automatically).
For generating CD images I think I would use my home dir or some other
data storage location which makes sense for working on large files.
/tmp isn't it and never was.

Given the cost of ram, in general you should not be swapping at all.
/tmp is not for storing large files you intend to keep (like iso
images).  It is used to store state information for some running programs,
temp files for compiling (when not doing -pipe), etc.  Having them in
ram makes them faster, and means the disk isn't kept busy writing files
to the disk that are going to be removed again in 10 seconds (if you are
lucky the cache will avoid them being written at all I guess, assuming
it is even that smart about write caching).  Solaris has been putting
/tmp in ram for many years now, and many linux systems do it too.  It
just makes sense (unless you have an old machine with almost no ram).
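On Debian this is just an fstab line; the 1G size cap below is an example, and it is a maximum, not a reservation:

```shell
# /etc/fstab entry for a ram-backed /tmp
# tmpfs   /tmp   tmpfs   defaults,size=1G   0   0

# Or try it on the live system first
mount -t tmpfs -o size=1G tmpfs /tmp
```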

--
Len Sorensen





Re: LVM root?

2006-10-11 Thread dtutty
On Sun, Oct 08, 2006 at 10:15:22PM +0200, Jean-Luc Coulon (f5ibh) wrote:
 Le 08.10.2006 18:05:23, [EMAIL PROTECTED] a écrit :
 
 
 Obviously, I don't know how LV works internally.  If the root filesystem
 gets corrupted, how do I fix it from a recovery shell (e.g. the install
 USB) if its on an LV?  If this is trivial, then is the thing to do to
 make all of the disk a PV then have LVs for everything?
 
 It is probably a confidence problem.
 A corrupted root file system is not better than a corrupted root over  
 LVM.

I'm confident that I don't know enough about LVM yet to rescue a mangled
system.  In the past, if something got corrupted (lightening strike,
powerfailure, disk failure, etc), I could pop in my grub disk, and
either boot my regular kernel or the one on the copy of /boot on a
different drive, tell it root=/dev/hda5 init=/bin/sh and start fixing it.

If that didn't work, I'd boot a rescue system (e.g. the install floppy
or CD) and manually fsck the partitions.

I suppose this is the key to my understanding and will make all clear:
Where do I tell the grub disk (USB stick?) to find the kernel and what
root= line do I give the kernel?  With / and /boot part of LVM, how does
grub find the kernel to boot?  The grub-howto and docs don't mention LVM
at all.

With regular partitions, if the partition table gets corrupted, it's
simple to fix if I have a record of what it looked like before, taken
with sfdisk -d (I never ran into this problem again after the table
corruption that prompted me to keep this information on a floppy with
other essential backups).

What is the LVM equivalent of this?
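For what it's worth, LVM does have a rough counterpart to sfdisk -d: the
VG metadata can be dumped to (and restored from) a plain-text file. A
minimal sketch, assuming a volume group named vg0 (the name and the
backup path are only examples); these commands need root and a real VG:

```shell
# Dump the VG metadata to a text file you can keep with your other
# essential backups (the LVM analogue of "sfdisk -d > table.txt"):
vgcfgbackup -f /root/vg0-metadata.txt vg0

# Later, restore that metadata onto the (re-created) PVs:
# vgcfgrestore -f /root/vg0-metadata.txt vg0
```

Recent lvm2 also writes automatic backups under /etc/lvm/backup on every
metadata change, which is worth copying off the LVM itself.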

The whole HOWTO package is getting woefully out of date.  LVM-HOWTO
doesn't cover recovering from errors, MULTI-DISK-HOWTO doesn't cover LVM,
and the various recovery howtos focus on bare-metal recovery, not fixing
a broken system.

Or is it the case now that LVM is so reliable that any errors will be
hardware and unrecoverable anyway, requiring a bare-metal recovery?

Thanks,

Doug.





Re: LVM root?

2006-10-11 Thread dtutty
On Mon, Oct 09, 2006 at 10:35:03AM +0200, Manuele Rampazzo wrote:
 Ciao,
 
 Albert Dengg disse:
  On Mon, Oct 09, 2006 at 03:01:24AM +0200, Goswin von Brederlow wrote:
  [EMAIL PROTECTED] writes:
   Part.  mount  size
   ==
   1  /boot  32 MB
   5  /  200 MB
  Merge them and maybe give it some extra space.
  i would not merge them since grub does not support lvm at the moment...
 
 The example above was with / on a phisical partition... You _DON'T_ need
 lvm support in grub if you _DON'T_ put / (with /boot inside it) on lvm.
 
 Ciao,
 Manuele
 
Right, but I'm trying to decide if I should put / on LVM.  

To summarize what I've heard so far:

Advantage:  Able to resize.

Disadvantages:
-   Grub doesn't support LVM so need /boot on a regular
partition.

-   Difficult or impossible to boot up a rescue CD and
rescue a corrupted root fs.

If I do this:

1   /boot   32 MB
5   /   200 MB
6   PV1

and have everything else including swap and /tmp on LVs, since I only
would have at most 2 kernels (old and new debian standard package
kernels) during a kernel upgrade;

Would there be any disadvantage to this?


Doug.






Re: LVM root?

2006-10-11 Thread dtutty
On Mon, Oct 09, 2006 at 03:01:24AM +0200, Goswin von Brederlow wrote:
 
 Also amd64 has /emul/ia32-linux/ taking up some space if you need
 32bit support libs.
 
If I find I need them later (when did /emul get in the FHS?), can I put
them on a separate LV mounted on /emul?

Doug.





Re: LVM root?

2006-10-11 Thread dtutty
On Mon, Oct 09, 2006 at 09:20:06AM -0400, [EMAIL PROTECTED] wrote:
 
 Perhaps more to the point, make sure you have room for several
 kernels in /boot. You don't have to uninstall a kernel in order
 to install a new one.  If you are careful with your lilo, or grub,
 you can get a choice of kernels at boot time.  This in invaluable
 if the new kernel doesn't work!
 
 I've found 50MB for / to be too cramped at times.
 
I've found 100 MB a bit tight but 124 MB fine.  This is why I went with
200 MB, since I've got the room.  I only ever have 2 kernels installed
(old and new).  Do 64-bit kernel modules and libs take more space than
32-bit?

How much space do people find they need in / (minus /usr, /var, /home,
/tmp, and swap)?






Re: LVM root?

2006-10-11 Thread Mike Reinehr
Doug,

I'm certainly no expert on this, but I have been using LVM2 for a year or so, 
so ...

On Wednesday 11 October 2006 07:45, [EMAIL PROTECTED] wrote:
 On Sun, Oct 08, 2006 at 10:15:22PM +0200, Jean-Luc Coulon (f5ibh) wrote:
  On 08.10.2006 18:05:23, [EMAIL PROTECTED] wrote:
  Obviously, I don't know how LV works internally.  If the root
  filesystem
  get corrupted, how do I fix it from a recovery shell (e.g. the install
  USB) if its on an LV?  If this is trivial, then is the thing to do to
  make all of the disk a PV then have LVs for everything?
 
  It is probably a confidence problem.
  A corrupted root file system is not better than a corrupted root over
  LVM.

 I'm confident that I don't know enough about LVM yet to rescue a mangled
 system.  In the past, if something got corrupted (lightening strike,
 powerfailure, disk failure, etc), I could pop in my grub disk, and
 either boot my regular kernel or the one on the copy of /boot on a
 diferent drive, tell it root=/dev/hda5 init=/bin/sh and start fixing it.

First off, you're partially right here. GRUB, so far as I know, does
not yet support LVM, but that is not an insurmountable problem. As long
as your boot partition and the included kernel boot files are on a
regular partition you still will be able to boot an LVM root partition.
All that's necessary is that LVM be included in the kernel (and the
initrd).

 If that didn't work, I'd boot a rescue system (e.g. the install floppy
 or CD) and manually fsck the partitions.


Ditto above.  Plus, my experience is that both the new Debian installer
and late-model Knoppix live CDs include support for LVM, so just boot
from one of those, mount the appropriate LVM volumes, and go to work.
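The rescue sequence is short enough to spell out. A sketch, assuming a
volume group vg0 with an LV named root (names are examples); run as root
from the rescue shell:

```shell
# Make the LVM volumes visible from a rescue CD and mount the root LV.
modprobe dm-mod            # load device-mapper if not built in
vgscan                     # scan disks for volume groups
vgchange -ay               # activate all logical volumes
fsck /dev/vg0/root         # repair the filesystem if needed
mount /dev/vg0/root /mnt   # then chroot or edit files under /mnt
```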

 I suppose this is the key to my understanding and will make all clear:
 Where do I tell the grub disk (USB stick?) to find the kernel and what
 root= line do I give the kernel?  With / and /boot part of LVM, how does
 grub find the kernel to boot?  The grub-howto and docs don't mention LVM
 at all.

Your boot partition must be on a regular file system. The root can be
an LVM volume, so long as your kernel/initrd supports LVM.

 With regular partitions, if the partition table gets corrupted, its
 simple to fix if I have what it looked like before using sfdisk -d (I
 never ran into this problem after the table corruption that prompted me
 to keep this information on a floppy with other essential backups).

 What is the LVM equivalent of this?

Either the entire hard disk (no partitions) or individual partitions
can be LVM physical volumes, so there's really no difference here. If
your partition table becomes corrupted you still would have to fix it
with sfdisk or dd.

 The whole HOWTO package is getting woefully out of date.  LVM-HOWTO
 doesn't cover recovering from errors, MULTI-DISK-HOWTO doen't cover LVM,
 and the various recovery howtos focus on bare-metal recovery not fixing
 a broken system.

 Or, it it the case now that LVM is so reliable that any errors will be
 hardware and unrecoverable anyway, requiring a bare-metal recovery.

As ever, the likelihood of file corruption is directly proportional to
the length of time since your last backup!

 Thanks,

 Doug.

HTH!

Cheers,

cmr
-- 
Debian 'Etch': Registered Linux User #241964

More laws, less justice. -- Marcus Tullius Cicero, ca. 42 BC






Re: LVM root?

2006-10-11 Thread Mike Reinehr
On Wednesday 11 October 2006 08:21, [EMAIL PROTECTED] wrote:
 On Mon, Oct 09, 2006 at 09:20:06AM -0400, [EMAIL PROTECTED] wrote:
  Perhaps more to the point, make sure you have room for several
  kernels in /boot. You don't have to uninstall a kernel in order
  to install a new one.  If you are careful with your lilo, or grub,
  you can get a choice of kernels at boot time.  This in invaluable
  if the new kernel doesn't work!
 
  I've found 50MB for / to be too cramped at times.

 I've found 100 MB a bit tight but 124 MB fine.  This is why I went with
 200 MB since I've got the room.  I only ever have 2 kernels installed,
 old and new).  Do 64-bit kernel modules and libs take more space than
 32-bit?

 How much space to people find they need in / ( minus /usr, /var, /home,
 /tmp, and swap)?

For what it's worth, I discovered last month that the new Debian installer--if 
you select LVM  automatic partitioning--will put every partition except 
for /boot under LVM and will size all of the partitions you mention above 
automatically. In fact, I was surprised at how small the LVM volumes were for 
some of these, but so far everything is working with their suggested sizes. I 
would suggest doing a trial install with the new Debian installer and see how 
it goes.

cmr
-- 
Debian 'Etch': Registered Linux User #241964

More laws, less justice. -- Marcus Tullius Cicero, ca. 42 BC






Re: LVM root?

2006-10-11 Thread Daniel Tryba
On Wed, Oct 11, 2006 at 09:15:33AM -0400, [EMAIL PROTECTED] wrote:
 Right, but I'm trying to decide if I should put / on LVM.  
 
 To summarize what I've heard so far:
 
 Advantage:Able to resize.

Which is a negligible advantage. How often do you actually need this?
Disk space for / varies between 100 MB and 500 MB on my machines.
Install with a generous 2 GB for / and you never need to worry about it
filling up.

 Disadvantages:
   -   Grub doesn't support LVM so need /boot on a regular
   partition.
 
   -   Difficult or impossible to boot up a rescue CD and
   rescue a corrupted root fs.
[snip]
 Would there be any disadvantage to this?

/ is too valuable to lose. IMHO a single-disk setup is a no-go.

Just to add my 2cents:

new machines get (multiple (identical) disks with) 2 partitions on them:
1 - a small 2Gb (type fd)
2 - the rest (type fd)

The small partitions are combined into a RAID1 array (md0); the others
into whatever level you like (most likely 5, otherwise 1) as md1.

/dev/md0 will be used for /.
/dev/md1 will be a pv for lvm.

This adds redundancy, plus any of the partitions that make up the RAID1
for / can be mounted on its own (but writing to one will break the
array). Adding a disk creates another copy of /, and with the newer
kernels a RAID5 array can be expanded, so it can be used by the LVM.

But this still creates a static sized /, which IMHO is no problem IF the
initial size is big enough.
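The layout described above can be sketched in a few commands. Device
names, the filesystem, and the VG name are only examples for a
two-disk box; all of this requires root and blank disks:

```shell
# Small first partitions -> RAID1 for /; large second partitions -> RAID1
# (or RAID5 with more disks) that becomes the LVM physical volume.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

mkfs.ext3 /dev/md0     # the static-sized / filesystem
pvcreate /dev/md1      # the rest becomes a PV...
vgcreate vg0 /dev/md1  # ...in a VG holding everything else as LVs
```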

-- 

 When you do things right, people won't be sure you've done anything at all.

   Daniel Tryba





Re: LVM root?

2006-10-11 Thread Lennart Sorensen
On Wed, Oct 11, 2006 at 06:04:43PM +0200, Daniel Tryba wrote:
 Which is a negligible advantage. How often is the need for this? Disk
 space for / varies between 100Mb to 500Mb on my machines. Instal
 with a generous 2Gb for / only and you never need to worry about it
 filling up.

Certainly if you are already making / small, it probably is because you
don't intend to ever have it be big.

 / is to valuable to lose. IMHO a single disk setup is a no go.
 
 Just to add my 2cents:
 
 new machines get (multiple (identical) disks with) 2 partitions on them:
 1 - a small 2Gb (type fd)
 2 - the rest (type fd)
 
 The small partitions are combined in a md0 array raid1, the others in
 whatever you like (most likely 5, 1 otherwise) md1 array.
 
 /dev/md0 will be used for /.
 /dev/md1 will be a pv for lvm.
 
 This adds redundancy, plus any of the partition that make up the raid1
 for / can be mounted on its own (but writing to one will break the
 array). Adding a disk creates an other copy of /, and with the newer
 kernels a raid5 array can be expanded, so it can be used by the LVM.
 
 But this still creates a static sized /, which IMHO is no problem IF the
 initial size is big enough.

I like it.  It matches how I set up all my machines.  After having done
the work to recover data from a machine where the drive started to fail,
I can't justify not using raid1 on every machine anymore.  Disks are
just too cheap and my time has value to me.

--
Len Sorensen





Re: LVM root?

2006-10-11 Thread dtutty
On Wed, Oct 11, 2006 at 12:27:12PM -0400, Lennart Sorensen wrote:
 On Wed, Oct 11, 2006 at 06:04:43PM +0200, Daniel Tryba wrote:
  with a generous 2Gb for / only and you never need to worry about it
  filling up.
  / is to valuable to lose. IMHO a single disk setup is a no go.
  
  new machines get (multiple (identical) disks with) 2 partitions on them:
  1 - a small 2Gb (type fd)
  2 - the rest (type fd)
  
  The small partitions are combined in a md0 array raid1, the others in
  whatever you like (most likely 5, 1 otherwise) md1 array.
  
  /dev/md0 will be used for /.
  /dev/md1 will be a pv for lvm.
  
  This adds redundancy, plus any of the partition that make up the raid1
  for / can be mounted on its own (but writing to one will break the
  array). Adding a disk creates an other copy of /, and with the newer
  kernels a raid5 array can be expanded, so it can be used by the LVM.
  
 
 I like it.  It matches how I setup all my machines.  After having done
 
 Len Sorensen
 

I am lucky in that my backup-set size is small; it's been tight on a
100 MB Zip disk, and absolutely essential stuff (stuff I need to be able
to access from anywhere, any time) fits on one floppy in gzipped
plain text.  My approach for the new root drive was to buy the most
reliable drive I could.  If it dies, get a new drive, reinstall,
restore from backup and carry on.

Your approach using Raid may be overkill for me, I don't know.  

The board itself has hardware SATA raid available.  If I go for raid,
then I'll ask here for the advantages/disadvantages.

More to the point for me, though, is where can I get current howtos or
guides on fixing problems when things are in RAID or LVM?  It's a whole
new world for me; the LDP HOWTOs are too out of date, and
debian-reference doesn't cover it.

At this point I'm leaning to the single disk, regular / at 512 MB (/tmp
is separate LV) and the rest PV.

Since my / doesn't change much, when I add a second disk I can just
copy / and have a non-RAID copy available to boot into.  When I do make
changes, just copy it over; I could even make it part of the regular
backup strategy.

Thanks for the ideas,

Doug.







Re: LVM root?

2006-10-10 Thread Lennart Sorensen
On Sun, Oct 08, 2006 at 11:03:47AM -0400, [EMAIL PROTECTED] wrote:
 re /boot:  old habits die hard.  The wisdom I learned was that its less
 likely to get corrupted.  If that's not an issue anymore, then I can
 forget it.

I haven't seen any corruption in a long time running ext3.  A corrupt
root filesystem is also likely to be a bigger problem in general, where
a separate /boot wouldn't help at all.

 Re root:  200 MB is twice what I've ever needed, with /tmp, /usr, /var,
 and /home on separate partitions.  I doubled it so I wouldn't have to
 resize it later.  What would you suggest?

I didn't see /tmp listed anywhere, so I was wondering.

 If the swap on an LV doesn't add overhead, then it seems like a great
 idea.

No overhead I have ever noticed.  Any overhead would be completely
insignificant compared to the overhead of swapping to disk.

--
Len Sorensen





Re: LVM root?

2006-10-10 Thread Lennart Sorensen
On Sun, Oct 08, 2006 at 09:57:25PM +0200, Manuele Rampazzo wrote:
 Maybe it's better to make a bigger /tmp, say 1 GB, because of some
 programs (for example, nautilus-cd-burner uses it to create a temporary
 iso file - OK, you can change it's tmp directory, but by default it uses
 /tmp)...

I thought etch now defaulted to tmpfs for /tmp, meaning it lives in ram
where it is faster, backed by swap if needed.
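For reference, /tmp on tmpfs is a one-line fstab change; the size cap
here is only an example (tmpfs uses RAM/swap up to that limit, not a
fixed allocation):

```shell
# /etc/fstab entry putting /tmp in RAM, spilling to swap under pressure:
tmpfs  /tmp  tmpfs  defaults,size=512m  0  0
```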

--
Len Sorensen





Re: LVM root?

2006-10-10 Thread Lennart Sorensen
On Mon, Oct 09, 2006 at 10:21:42AM +0200, Albert Dengg wrote:
 i would not merge them since grub does not support lvm at the moment...
 (there are rumours about lvm2 support in grub2 though i havent
 tried/tested it

Keeping /etc off LVM makes it easier to start and fix your LVM if it
ever gets corrupted.  As a result, keeping /boot and / together makes
perfectly good sense.

--
Len Sorensen





Re: LVM root?

2006-10-09 Thread Albert Dengg

On Mon, Oct 09, 2006 at 03:01:24AM +0200, Goswin von Brederlow wrote:
 [EMAIL PROTECTED] writes:
  Part.  mount  size
  ==
  1  /boot  32 MB
  5  /  200 MB
 
 Merge them and maybe give it some extra space. When you collect a few
 different kernels the /lib/modules dir grows on you. A bit of
 breathing room saves you from having to clean up on every kernel
 upgrade.
I would not merge them since grub does not support lvm at the moment
(there are rumours about lvm2 support in grub2, though I haven't
tried/tested it).

yours
albert

-- 
Albert Dengg [EMAIL PROTECTED]





Re: LVM root?

2006-10-09 Thread Manuele Rampazzo
Ciao,

Albert Dengg disse:
 On Mon, Oct 09, 2006 at 03:01:24AM +0200, Goswin von Brederlow wrote:
 [EMAIL PROTECTED] writes:
  Part.  mount  size
  ==
  1  /boot  32 MB
  5  /  200 MB
 Merge them and maybe give it some extra space.
 i would not merge them since grub does not support lvm at the moment...

The example above was with / on a physical partition... You _DON'T_ need
lvm support in grub if you _DON'T_ put / (with /boot inside it) on lvm.

Ciao,
Manuele

-- 
It is by seeking the impossible that man has always achieved the
possible. Those who have wisely limited themselves to what seemed
possible to them have never advanced a single step.
Michail Bakunin (1814 - 1876)





Re: LVM root?

2006-10-09 Thread hendrik
On Mon, Oct 09, 2006 at 10:21:42AM +0200, Albert Dengg wrote:
 On Mon, Oct 09, 2006 at 03:01:24AM +0200, Goswin von Brederlow wrote:
  [EMAIL PROTECTED] writes:
   Part.  mount  size
   ==
   1  /boot  32 MB
   5  /  200 MB
  
  Merge them and maybe give it some extra space. When you collect a few
  different kernels the /lib/modules dir grows on you. A bit of
  breathing room saves you from having to clean up on every kernel
  upgrade.
 i would not merge them since grub does not support lvm at the moment...
 (there are rumours about lvm2 support in grub2 though i havent
 tried/tested it

Perhaps more to the point, make sure you have room for several
kernels in /boot. You don't have to uninstall a kernel in order
to install a new one.  If you are careful with your lilo or grub
config, you can get a choice of kernels at boot time.  This is
invaluable if the new kernel doesn't work!

I've found 50MB for / to be too cramped at times.

-- hendrik





LVM root?

2006-10-08 Thread dtutty
Hi,

I'm planning the install of amd64 on my new box (Athlon 3800+, 1 GB ram,
Asus M2N-SLI MB, one Seagate 7200 80 GB SATA drive).

What are the advantages to using LVM for root?

I'm concerned about methods of recovery if something goes wrong.

If I don't do LVM root, here's my current drive layout:

Part.  mount  size
==
1  /boot  32 MB
5  /  200 MB
6  swap   512 MB
7  PV1remainder (78 GB +)

VG1only need one volume group, currently containing PV1 only

LVusr  /usr   3 GB
LVvar  /var   15 GB
LVhome /home  10 GB
LVtmp  /tmp   200 MB

This leaves most of the VG as spare to be allocated as needed.

I plan to use JFS for all partitions.  I've been very happy with JFS in
the past.

The most memory I've ever had is 64 MB.  Now I've got a gig.  The only
time I've been memory bound has been thanks to Mozilla.  I'm assuming
that transferring VHS tapes to DVD (editing out the commercials)
will take more memory, but I'm unsure of how much swap I need.  When I
start video editing, I'll be adding a second drive but since that's for
working space for the editing, I don't know if I should put a swap
partition on it.

Can/should one put swap in an LV or is it no better than a swap file
then?

Please comment.

Thanks,

Doug.





Re: LVM root?

2006-10-08 Thread Lennart Sorensen
On Sun, Oct 08, 2006 at 09:09:11AM -0400, [EMAIL PROTECTED] wrote:
 Hi,
 
 I'm planning the install of amd64 on my new box (Athlon 3800+, 1 GB ram,
 Asus M2N-SLI MB, one Seagate 7200 80 GB SATA drive).
 
 What are the advantages to using LVM for root?

Being able to resize it later if needed, I guess.

 I'm concerned about methods of recovery if something goes wrong.
 
 If I don't do LVM root, here's my current drive layout:
 
 Part.  mount  size
 ==
 1  /boot  32 MB

Why a separate /boot?  Why such a puny root partition?  Are you putting
/tmp on tmpfs in ram?

 5  /  200 MB
 6  swap   512 MB
 7  PV1remainder (78 GB +)
 
 VG1only need one volume group, currently containing PV1 only
 
 LVusr  /usr   3 GB
 LVvar  /var   15 GB
 LVhome /home  10 GB
 LVtmp  /tmp   200 MB
 
 This leaves most of the VG as spare to be allocated as needed.
 
 I plan to use JFS for all partitions.  I've been very happy with JFS in
 the past.
 
 The most memory I've ever had is 64 MB.  Now I've got a gig.  The only
 time I've been memory bound has been thanks to Mozilla.  I'm assuming
 that the transfering of VHS tapes to DVD (editing out the commercials)
 will take more memory, but I'm unsure of how much swap I need.  When I
 start video editing, I'll be adding a second drive but since that's for
 working space for the editing, I don't know if I should put a swap
 partition on it.
 
 Can/should one put swap in an LV or is it no better than a swap file
 then?

I always put swap on an LV.  That way I can add to it later easily,
or get rid of it if I don't need it.

 --
 Len Sorensen





Re: LVM root?

2006-10-08 Thread dtutty
Having read this, I realize I was unclear (it's still morning here).  I'll try
to clarify.

Thanks,
Doug.
On Sun, Oct 08, 2006 at 09:09:11AM -0400, [EMAIL PROTECTED] wrote:
 Hi,
 
 I'm planning the install of amd64 on my new box (Athlon 3800+, 1 GB ram,
 Asus M2N-SLI MB, one Seagate 7200 80 GB SATA drive).
 
 What are the advantages to using LVM for root?
 
 I'm concerned about methods of recovery if something goes wrong.
 
 If I don't do LVM root, here's my current drive layout:

I mean current plan.  Without LVM, I'd just use partitions instead of
the LVs, leaving free space to move stuff around as I need more room.
 
 Part.  mount  size
 ==
 1  /boot  32 MB
 5  /  200 MB
 6  swap   512 MB
 7  PV1remainder (78 GB +)
 
 VG1only need one volume group, currently containing PV1 only
 
 LVusr  /usr   3 GB
 LVvar  /var   15 GB
 LVhome /home  10 GB
 LVtmp  /tmp   200 MB
 
 This leaves most of the VG as spare to be allocated as needed.
 
 I plan to use JFS for all partitions.  I've been very happy with JFS in
 the past.
 
 The most memory I've ever had is 64 MB.  Now I've got a gig.  The only
 time I've been memory bound has been thanks to Mozilla.  I'm assuming
 that the transfering of VHS tapes to DVD (editing out the commercials)
 will take more memory, but I'm unsure of how much swap I need.  When I
 start video editing, I'll be adding a second drive but since that's for
 working space for the editing, I don't know if I should put a swap
 partition on it.
 
 Can/should one put swap in an LV or is it no better than a swap file
 then?
 
 Please comment.
 
 Thanks,
 
 Doug.
 
 
 
 





Re: LVM root?

2006-10-08 Thread dtutty
On Sun, Oct 08, 2006 at 09:41:31AM -0400, Lennart Sorensen wrote:
 On Sun, Oct 08, 2006 at 09:09:11AM -0400, [EMAIL PROTECTED] wrote:
  Hi,
  
  I'm planning the install of amd64 on my new box (Athlon 3800+, 1 GB ram,
  Asus M2N-SLI MB, one Seagate 7200 80 GB SATA drive).
  
  What are the advantages to using LVM for root?
 
 Being able to resizeit later if needed I guess.
 
  I'm concerned about methods of recovery if something goes wrong.
  
  If I don't do LVM root, here's my current drive layout:
  
  Part.  mount  size
  ==
  1  /boot  32 MB
 
 Why a seperate boot?  Why such a puny root partition?  Are you putting
 tmp on tmpfs in ram?
 

re /boot:  old habits die hard.  The wisdom I learned was that it's less
likely to get corrupted.  If that's not an issue anymore, then I can
forget it.

Re root:  200 MB is twice what I've ever needed, with /tmp, /usr, /var,
and /home on separate partitions.  I doubled it so I wouldn't have to
resize it later.  What would you suggest?

  5  /  200 MB
  6  swap   512 MB
  7  PV1remainder (78 GB +)
  
  VG1only need one volume group, currently containing PV1 only
  
  LVusr  /usr   3 GB
  LVvar  /var   15 GB
  LVhome /home  10 GB
  LVtmp  /tmp   200 MB
  
  This leaves most of the VG as spare to be allocated as needed.
  
  Can/should one put swap in an LV or is it no better than a swap file
  then?
 
 I always put swap on a lv volume.  that way I can add to it latereasily,
 or get rid of it if i don't need it.
 
If the swap on an LV doesn't add overhead, then it seems like a great
idea.






Re: LVM root?

2006-10-08 Thread dtutty
On Sun, Oct 08, 2006 at 11:03:47AM -0400, dtutty wrote:
 On Sun, Oct 08, 2006 at 09:41:31AM -0400, Lennart Sorensen wrote:
  On Sun, Oct 08, 2006 at 09:09:11AM -0400, [EMAIL PROTECTED] wrote:
   Hi,
   
   I'm planning the install of amd64 on my new box (Athlon 3800+, 1 GB ram,
   Asus M2N-SLI MB, one Seagate 7200 80 GB SATA drive).
   
   What are the advantages to using LVM for root?
  
  Being able to resizeit later if needed I guess.
  
   I'm concerned about methods of recovery if something goes wrong.
 
The LVM-HOWTO, which may be out of date by now, talks about the
difficulty of upgrading from LVM1 to LVM2 if one has used an LV for /.
What are the prospects for future difficulties when LVM3 comes along?

Obviously, I don't know how LVM works internally.  If the root filesystem
gets corrupted, how do I fix it from a recovery shell (e.g. the install
USB) if it's on an LV?  If this is trivial, then is the thing to do to
make all of the disk a PV and have LVs for everything?

Thanks,

Doug.





Re: LVM root?

2006-10-08 Thread Manuele Rampazzo
Hi,

[EMAIL PROTECTED] ha scritto:
 What are the advantages to using LVM for root?

you can resize it later if you need more space, but I think it's easier
to split your system into several separate filesystems than to (for
example) put everything in / on an LV and give it more space later.

 Part.  mount  size
 ==
 1  /boot  32 MB
 5  /  200 MB

I'd suggest a little more space (and to put /boot into /)... maybe 512
MB is OK

 LVusr  /usr   3 GB
 LVvar  /var   15 GB
 LVhome /home  10 GB
 LVtmp  /tmp   200 MB

Maybe it's better to make a bigger /tmp, say 1 GB, because some
programs use it (for example, nautilus-cd-burner uses it to create a
temporary iso file - OK, you can change its tmp directory, but by
default it uses /tmp)...

 Can/should one put swap in an LV or is it no better than a swap file
 then?

Yes, you can:

[EMAIL PROTECTED]:~$ grep swap /etc/fstab
/dev/vg00/swap  noneswapsw  0   0
[EMAIL PROTECTED]:~$
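Creating such a swap LV in the first place is only a few commands; a
sketch assuming the vg00 name from the fstab line above (the 1G size
is an example), run as root:

```shell
# Create a swap logical volume, format it, and enable it:
lvcreate -L 1G -n swap vg00
mkswap /dev/vg00/swap
swapon /dev/vg00/swap

# Growing it later: swapoff, lvextend, mkswap, swapon again.
```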

Bye,
Manu

-- 
It is by seeking the impossible that man has always achieved the
possible. Those who have wisely limited themselves to what seemed
possible to them have never advanced a single step.
Michail Bakunin (1814 - 1876)





Re: LVM root?

2006-10-08 Thread Jean-Luc Coulon (f5ibh)

On 08.10.2006 18:05:23, [EMAIL PROTECTED] wrote:

On Sun, Oct 08, 2006 at 11:03:47AM -0400, dtutty wrote:
 On Sun, Oct 08, 2006 at 09:41:31AM -0400, Lennart Sorensen wrote:
  On Sun, Oct 08, 2006 at 09:09:11AM -0400, [EMAIL PROTECTED]
wrote:
   Hi,
  
   I'm planning the install of amd64 on my new box (Athlon 3800+, 1
GB ram,
   Asus M2N-SLI MB, one Seagate 7200 80 GB SATA drive).
  
   What are the advantages to using LVM for root?
 
  Being able to resizeit later if needed I guess.
 
   I'm concerned about methods of recovery if something goes wrong.

In the LVM-HOWTO, which may be out-of-date by now, talks about the
difficulty of upgrading from LV1 to LV2 if one has used an LV for /.
What are the prospects for future difficulties when LV3 comes along?


I've upgraded from LVM1 to LVM2 with / on an LV, without any problem.



Obviously, I don't know how LV works internally.  If the root
filesystem
get corrupted, how do I fix it from a recovery shell (e.g. the install
USB) if its on an LV?  If this is trivial, then is the thing to do to
make all of the disk a PV then have LVs for everything?


It is probably a confidence problem.
A corrupted root file system is no better than a corrupted root over
LVM.


Jean-Luc




Re: LVM root?

2006-10-08 Thread Goswin von Brederlow
[EMAIL PROTECTED] writes:

 Hi,

 I'm planning the install of amd64 on my new box (Athlon 3800+, 1 GB ram,
 Asus M2N-SLI MB, one Seagate 7200 80 GB SATA drive).

 What are the advantages to using LVM for root?

Better ask: what are the drawbacks?

The backup of the metadata is stored in /etc/lvm by default, so in case
of an error you won't have it if / is on lvm. You have to move it
to e.g. /boot.

Some kernels had deadlock issues with / on lvm when you tried to
resize or pvmove it.

You need an initrd or initramfs even if you compile a custom kernel.

 I'm concerned about methods of recovery if something goes wrong.

I like to boot into my / with init=/bin/sh and have an editor, netcat,
the lvm tools and all that available to look around and fix things in
case something does go wrong. With a standard initrd that is pretty
much an impossibility, and you need an initrd for / on lvm.

 If I don't do LVM root, here's my current drive layout:

 Part.  mount  size
 ==
 1  /boot  32 MB
 5  /  200 MB

Merge them and maybe give it some extra space. When you collect a few
different kernels the /lib/modules dir grows on you. A bit of
breathing room saves you from having to clean up on every kernel
upgrade.

Also amd64 has /emul/ia32-linux/ taking up some space if you need
32bit support libs.

 6  swap   512 MB

You can put swap on lvm. You should also think about suspend-to-disk,
which needs enough swap to store all active memory. Twice your ram
isn't a bad idea; the same as ram is pretty much a must.

 7  PV1remainder (78 GB +)

 VG1only need one volume group, currently containing PV1 only

 LVusr  /usr   3 GB
 LVvar  /var   15 GB
 LVhome /home  10 GB
 LVtmp  /tmp   200 MB

I always put /tmp on tmpfs, which gets swapped out only when
necessary. I make swap a little bit bigger than I think I need and
get double the value out of it, as it can be used for tmpfs or swap as
needed.

 This leaves most of the VG as spare to be allocated as needed.

 I plan to use JFS for all partitions.  I've been very happy with JFS in
 the past.

 The most memory I've ever had is 64 MB.  Now I've got a gig.  The only
 time I've been memory bound has been thanks to Mozilla.  I'm assuming
 that the transfering of VHS tapes to DVD (editing out the commercials)
 will take more memory, but I'm unsure of how much swap I need.  When I
 start video editing, I'll be adding a second drive but since that's for
 working space for the editing, I don't know if I should put a swap
 partition on it.

You might want to stripe the volume for editing across both
disks. LVM can do that without you having to resort to raid0, which
gives you more speed on file I/O. It's a per-volume thing, so you can
keep /usr, /var and /home on the first disk and just stripe the editing
LV when you get the 2nd disk.
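Per-LV striping is just a flag on lvcreate; a sketch assuming a VG vg0
that spans both disks (LV name, size, and stripe size are examples):

```shell
# Stripe only the editing LV across 2 PVs; other LVs stay unstriped.
# -i = number of stripes, -I = stripe size in KB.
lvcreate -L 40G -i 2 -I 64 -n video vg0
mkfs.jfs /dev/vg0/video   # JFS, per the original plan
```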

 Can/should one put swap in an LV or is it no better than a swap file
 then?

Yes you can. I'm not sure there is any noticeable difference whatever
you do. What takes time when swapping is the seeking of the disk, not
the few cycles the I/O layer takes to request or save data.

Also, when you actively have to use swap (not just storing dead memory
Mozilla keeps allocated for hours or something) then you have a
problem. Everything will take forever. If it takes forever and then
some, you will hardly notice the difference. If swap becomes a problem,
then buy more ram. (I have 1G too on my desktop and that's fine, by
the way.)

 Please comment.

 Thanks,

 Doug.

MfG
Goswin

