Re: [SLUG] LVM

2010-06-16 Thread Adrian Chadd
You won't be able to. If you configured them up as a stripe (ie, no mirroring)
with interleaving every x megabytes on each disk, you'll basically end up
with a virtual hard disk with holes evenly spread out across 1/3rd of
the image. I don't know of any (easy) tools to recover from that.

I can think of what I'd write to try and recover -something- but it'd
involve writing a whole lot of rather hairy looking filesystem-scraping
code. I'm sure there are tools to do this kind of partial data recovery
but they're bound to be -very- expensive.



Adrian

On Wed, Jun 16, 2010, Gerald C.Catling wrote:
 Many thanks to all who responded to try to solve this LVM problem.
 I could not recover any data from the crashed system. I could not find any
 method of mounting drive 2 or 3 as individual drives, and the system would
 not create a volume group without the now non-existent first drive.
 Once again, many thanks.
 I will have to try RAID 1.
 Gerald
 -- 
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $24/pm+GST entry-level VPSes w/ capped bandwidth charges available in WA -
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-16 Thread Daniel Pittman
Adrian Chadd adr...@creative.net.au writes:

 You won't be able to. If you configured them up as a stripe (ie, no
 mirroring) with interleaving every x megabytes on each disk, you'll
 basically end up with a virtual hard disk with holes evenly spread out
 across 1/3rd of the image. I don't know of any (easy) tools to recover from
 that.

LVM, by default, is a boring old linear mapping, so he probably has two disks
worth of data ... starting a third (or whatever) of the way through the file
system.  So, no superblock on whatever.

 I can think of what I'd write to try and recover -something- but it'd
 involve writing a whole lot of rather hairy looking filesystem-scraping
 code. I'm sure there are tools to do this kind of partial data recovery
 but they're bound to be -very- expensive.

The 'testdisk' package available in Debian, and fully OSS, can do quite a lot
of data recovery without a file system.  It just block-scrapes the device.
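For anyone wanting to try it, a rough sketch (the device name here is an
assumption, and both tools are interactive rather than one-shot):

apt-get install testdisk
testdisk /dev/sdb     # partition table and filesystem structure recovery
photorec /dev/sdb     # raw file carving, shipped in the same package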

[...]

 On Wed, Jun 16, 2010, Gerald C.Catling wrote:

 Many thanks to all who responded to try to solve this LVM problem.
 I could not recover any data from the crashed system. I could not find any
 method of mounting drive 2 or 3 as individual drives, and the system would
 not create a volume group without the now non-existent first drive.  Once
 again, many thanks.  I will have to try RAID 1.

I strongly suggest that using some sort of RAID is absolutely worth the money.

I would also encourage you to try the testdisk tools to recover some of the
content of your device: you will probably get back more than nothing, and
that might be worth the hassle.

Daniel
-- 
✣ Daniel Pittman    ✉ dan...@rimspace.net    ☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-16 Thread Ben Donohue

Hi,

What is the problem with the disk? Is it not spinning up or is it making 
strange noises?

How is the raid connected?
Many times I've managed to get data off dead disks that won't start up.
Sometimes it's just a matter of getting the thing spinning and then getting
the data off.


If the disk is just not starting to spin up, sometimes it is possible to
turn the physical disk in your hand and apply the power.
Many times I've managed to get a disk up and running long enough to get
the data off by vigorously spinning the disk in the direction of the
platters, this way and that, and then applying the power.

This loosens the disk, the disk spins up, and I get the data off.

Not sure of your physical setup, but if you can manage to spin the disk
it works more than 50% of the time for me (if it is a start-up problem).
I'm seriously suggesting this and it's not an old wives' tale. I've done
it heaps of times. Worth a shot as a last resort!


Ben

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-16 Thread Amos Shapira
On 16 June 2010 11:30, Gerald C.Catling gcsgcatl...@bigpond.com wrote:
 Many thanks to all who responded to try to solve this LVM problem.
 I could not recover any data from the crashed system. I could not find any
 method of mounting drive 2 or 3 as individual drives, and the system would not
 create a volume group without the now non-existent first drive.

Just to get the VG going without the missing drive, try using vgreduce to
remove the dead drive from the VG.
This might allow you to access the rest of the PVs at least.
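Something like this might do it (wg0 is the VG name from the original post;
pvscan first shows what LVM can still see):

pvscan
vgreduce --removemissing wg0
vgchange -ay wg0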

Cheers,

--Amos
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html



Re: [SLUG] LVM

2010-06-16 Thread Amos Shapira
On 16 June 2010 16:26, Daniel Pittman dan...@rimspace.net wrote:
 LVM, by default, is a boring old linear mapping, so he probably has two disks
 worth of data ... starting a third (or whatever) of the way through the file
 system.  So, no superblock on whatever.

Why no superblock? The ext3 filesystem (and I guess the most usual
suspects) writes multiple copies of the superblock across the entire
data partition just for such cases. It's tricky to find them (the only
way I saw documented was to execute mke2fs -n on a partition of
identical size and record the block numbers it emits; I'm sure there
must be ways to identify superblocks by their magic numbers somehow),
but the data should be there.
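A sketch of that documented method (device names are placeholders; mke2fs -n
is a dry run that only prints where the superblock backups would live, and
32768 is merely the usual first backup on a 4KB-block filesystem):

mke2fs -n /dev/sdb1        # prints "Superblock backups stored on blocks: ..."
e2fsck -b 32768 /dev/sdb1  # then point fsck at one of those backup blocks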


 I can think of what I'd write to try and recover -something- but it'd
 involve writing a whole lot of rather hairy looking filesystem-scraping
 code. I'm sure there are tools to do this kind of partial data recovery
 but they're bound to be -very- expensive.

 The 'testdisk' package available in Debian, and fully OSS, can do quite a lot
 of data recovery without a file system.  It just block-scrapes the device.

Right. I used it a couple of times and can tell that without file
names it's a chore to wade through all the data and try to find which
is important and which isn't. 'file' is handy to do that (as in, e.g.,
run find ... | xargs file | grep ... and move all files of each type
to their own directories).
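For instance, a rough sketch of that sorting pass (directory names are made
up):

mkdir -p sorted/jpeg
find recovered/ -type f | xargs file | grep 'JPEG image' \
    | cut -d: -f1 | while read f; do mv "$f" sorted/jpeg/; done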

Cheers,

--Amos
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-16 Thread Daniel Pittman
Amos Shapira amos.shap...@gmail.com writes:
 On 16 June 2010 16:26, Daniel Pittman dan...@rimspace.net wrote:

 LVM, by default, is a boring old linear mapping, so he probably has two disks
 worth of data ... starting a third (or whatever) of the way through the file
 system.  So, no superblock on whatever.

 Why no superblock? The ext3 filesystem (and I guess the most usual suspects)
 write multiple copies of the superblock across the entire data partition
 just for such cases.

Actually, they do.  My mistake.

[...]

 I can think of what I'd write to try and recover -something- but it'd
 involve writing a whole lot of rather hairy looking filesystem-scraping
 code. I'm sure there are tools to do this kind of partial data recovery
 but they're bound to be -very- expensive.

 The 'testdisk' package available in Debian, and fully OSS, can do quite a lot
 of data recovery without a file system.  It just block-scrapes the device.

 Right. I used it a couple of times and can tell that without file names it's
 a chore to wade through all the data and try to find which is important and
 which isn't. 'file' is handy to do that (as in, e.g., run find ... | xargs
 file | grep ... and move all files of each type to their own directories).

No question there.  Thankfully, my only use for it has been recovering photos,
and those have nice internal meta-data to help with the process. :)

Daniel

-- 
✣ Daniel Pittman    ✉ dan...@rimspace.net    ☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-16 Thread Nick Andrew
On Wed, Jun 16, 2010 at 02:06:32PM +0800, Adrian Chadd wrote:
 You won't be able to. If you configured them up as a stripe (ie, no mirroring)
 with interleaving every x megabytes on each disk, you'll basically end up
 with a virtual hard disk with holes evenly spread out across 1/3rd of
 the image. I don't know of any (easy) tools to recover from that.

If configured with striping (which I expect is improbable) then the data
will have holes through it and is probably unrecoverable.

However if configured with linear mapping (append) then it is
possible to make the volume group active by using the --partial flag
to vgchange:

   -P | --partial
          When set, the tools will do their best to provide access to
          volume groups that are only partially available.  Where part of
          a logical volume is missing, /dev/ioerror will be substituted,
          and you could use dmsetup(8) to set this up to return I/O
          errors when accessed, or create it as a large block device of
          nulls.  Metadata may not be changed with this option.  To
          insert a replacement physical volume of the same or larger
          size use pvcreate -u to set the uuid to match the original
          followed by vgcfgrestore(8).

If I was doing it then I'd do what the last sentence said: get a block
device of the exact same size as the lost one; use pvcreate -u to add
the LVM PV metadata and set the uuid to match; vgcfgrestore;
vgchange -a y $VGNAME to make the volume group active, then run fsck
on all filesystems.
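As a sketch, with placeholder names (depending on the LVM2 version, pvcreate
may want --restorefile alongside --uuid; the original uuid is in
/etc/lvm/backup/<vgname> or in the error messages):

pvcreate --uuid <uuid-of-lost-pv> --restorefile /etc/lvm/backup/<vgname> /dev/sdd1
vgcfgrestore <vgname>
vgchange -a y <vgname>
fsck -n /dev/<vgname>/<lvname>    # read-only check before touching anything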

If the VG was separated into several filesystems then any wholly on the
last two disks should be perfect; anything wholly on the first disk
is lost, and anything which straddles both may be unrecoverable or
partially recoverable (e.g. through use of e2fsck and a backup superblock).

Now if ZFS were used instead of ext2/ext3, then the filesystem should
be able to tell which particular files are OK, due to the data checksum.

 I can think of what I'd write to try and recover -something- but it'd
 involve writing a whole lot of rather hairy looking filesystem-scraping
 code. I'm sure there are tools to do this kind of partial data recovery
 but they're bound to be -very- expensive.

Or you could just run 'debugfs'.
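For example (device name is a guess; -c opens the filesystem in catastrophic
read-only mode, ignoring the inode and group bitmaps):

debugfs -c /dev/vg0/lv0
debugfs:  ls -l /
debugfs:  rdump /home/important /mnt/rescue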

Nick.
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-15 Thread Michael Chesterton

On 15/06/2010, at 2:15 PM, Nick Andrew wrote:
 Due to heat, or what? That paper seems to concern itself primarily with the
 differences between PS (personal storage) drives and ES (enterprise storage),
 in order to justify why the SCSI drives have so much higher cost per bit.
 
 The only mention I could see about multiple disks affecting failure rate
 was "A high density server rack with many disc drives grouped close together
 may experience much higher temperatures than a single drive mounted in a
 desktop computer." Nothing about whether multiple disks in a machine affect
 failure rate for any reason other than high temperature (which is usually
 controlled in server environments).

Google released a study to suggest heat didn't affect the life of a disk much.

I don't think multiple disks in a machine affect failure rate; it's just that
the more disks you have, the more likely you are to have a dud one that will
fail.

It doesn't matter how the disks are arranged: if a company has 1000 PCs with
a single disk in each spread throughout Australia, they're more likely to see a
disk fail than if you have one PC with one disk.


-- 

http://chesterton.id.au/blog/
http://barrang.com.au/linux/


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-15 Thread james
On Tuesday 15 June 2010 12:15:52 you wrote:
 On Tue, Jun 15, 2010 at 11:14:38AM +0800, james wrote:
  The stuff  below is interesting and a reference, but this highlights my
  favourite rant: Seagate's 'ATA more than an interface' says multiple
  disks in a machine *will* result in a higher failure rate, maybe much
  higher.
 
 Due to heat, or what? That paper seems to concern itself primarily with the
 differences between PS (personal storage) drives and ES (enterprise
  storage), in order to justify why the SCSI drives have so much higher cost
  per bit.

The bit that says:
Disk#1 seeks knocking Disk#2, Disk#3 off track
so
Disk#2 seeks knocking (mechanical coupling) Disk#1 off track
so
Disk#1 seeks again
etc

My own experience is that n-disk arrays fail more often than n single disks,
but that is oh so subjective, and so subject to the ravages of stats.

James

 The only mention I could see about multiple disks affecting failure rate
 was "A high density server rack with many disc drives grouped close
  together may experience much higher temperatures than a single drive
  mounted in a desktop computer." Nothing about whether multiple disks in a
  machine affect failure rate for any reason other than high temperature
  (which is usually controlled in server environments).
 
  So raid is a less bad option than LVM. Heed the advice in slug talks
  about backup (Sorry Sonia and Margurite, I don't remember who presented
  them)
 
 Yes.
 
  It is possible, but not likely, that *every* file on your disks is
  distributed over all 3 disks, so worst case is that you lost 1/3 of every
  file you have.
 
 Only if the Logical Volume is defined with striping (the -i argument to
  lvcreate).
 
 Rule #1 is always ... make backups.
 
 After that:
 
 - RAID1 can reduce the impact of a single-drive failure
 
 - RAID5 will increase the impact of failures
 
 - When combining multiple disks into a large Volume Group (VG), it is
  possible to create Logical Volumes within the VG so that they do not span
  physical devices. That way, if a disk dies (or 2, in a RAID1 setup) the
  entire VG contents will not be lost, only those filesystems on the failing
  devices. Hence it is a good idea to make multiple filesystems sized
  according to need.
 
 - Make multiple types of backups: backup to HDD (on a different server),
  offsite backup, Internet backup, incremental backups, DVD backups,
  external HDDs are dirt cheap these days.
 
 - Separate data according to importance and increase the redundancy level
  for the most important data. Data which is unimportant or can be recreated
  need not be backed up at all. Precious data might have multiple backups to
  onsite, offsite and write-once media.
 
 Nick.
 
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-15 Thread Jake Anderson

On 15/06/10 17:10, james wrote:

On Tuesday 15 June 2010 12:15:52 you wrote:

On Tue, Jun 15, 2010 at 11:14:38AM +0800, james wrote:

The stuff  below is interesting and a reference, but this highlights my
favourite rant: Seagate's 'ATA more than an interface' says multiple
disks in a machine *will* result in a higher failure rate, maybe much
higher.

Due to heat, or what? That paper seems to concern itself primarily with the
differences between PS (personal storage) drives and ES (enterprise
  storage), in order to justify why the SCSI drives have so much higher cost
  per bit.

The bit that says:
Disk#1 seeks knocking Disk#2, Disk#3 off track
so
Disk#2 seeks knocking (mechanical coupling) Disk#1 off track
so
Disk#1 seeks again
etc

my own experience is that n-disk arrays fail more than n times 1 disk
but that is oh so subjective, and so subject to the ravages of stats.

James

In subjective land, disks in arrays are probably used a lot more heavily
than those sitting as singles (i.e. server vs. desktop usage).


I understand that server grade disks have accelerometers built in these 
days to pick up accelerations caused by other disks in the stack seeking.


I also recall seeing a video of a really large array (40+ disks in a
dedicated storage box) in a data centre: they had pulled up a
latency-monitoring tool and then had a guy scream at the drives, which
caused a noticeable spike in latency.

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-15 Thread Tony Sceats
Not really on topic, but a fun little site detailing some disk-vibration
issues:

http://blogs.sun.com/brendan/entry/unusual_disk_latency



On Tue, Jun 15, 2010 at 3:10 PM, james j...@tigger.ws wrote:

 On Tuesday 15 June 2010 12:15:52 you wrote:
  On Tue, Jun 15, 2010 at 11:14:38AM +0800, james wrote:
   The stuff  below is interesting and a reference, but this highlights my
   favourite rant: Seagate's 'ATA more than an interface' says multiple
   disks in a machine *will* result in a higher failure rate, maybe much
   higher.
 
  Due to heat, or what? That paper seems to concern itself primarily with
 the
  differences between PS (personal storage) drives and ES (enterprise
   storage), in order to justify why the SCSI drives have so much higher
 cost
   per bit.

 The bit that says:
 Disk#1 seeks knocking Disk#2, Disk#3 off track
 so
 Disk#2 seeks knocking (mechanical coupling) Disk#1 off track
 so
 Disk#1 seeks again
 etc

 my own experience is that n-disk arrays fail more than n times 1 disk
 but that is oh so subjective, and so subject to the ravages of stats.

 James

  The only mention I could see about multiple disks affecting failure rate
  was "A high density server rack with many disc drives grouped close
   together may experience much higher temperatures than a single drive
   mounted in a desktop computer." Nothing about whether multiple disks in
 a
   machine affect failure rate for any reason other than high temperature
   (which is usually controlled in server environments).
 
   So raid is a less bad option than LVM. Heed the advice in slug talks
   about backup (Sorry Sonia and Margurite, I don't remember who presented
   them)
 
  Yes.
 
   It is possible, but not likely, that *every* file on your disks is
   distributed over all 3 disks, so worst case is that you lost 1/3 of
 every
   file you have.
 
  Only if the Logical Volume is defined with striping (the -i argument to
   lvcreate).
 
  Rule #1 is always ... make backups.
 
  After that:
 
  - RAID1 can reduce the impact of a single-drive failure
 
  - RAID5 will increase the impact of failures
 
  - When combining multiple disks into a large Volume Group (VG), it is
   possible to create Logical Volumes within the VG so that they do not
 span
   physical devices. That way, if a disk dies (or 2, in a RAID1 setup) the
   entire VG contents will not be lost, only those filesystems on the
 failing
   devices. Hence it is a good idea to make multiple filesystems sized
   according to need.
 
  - Make multiple types of backups: backup to HDD (on a different server),
   offsite backup, Internet backup, incremental backups, DVD backups,
   external HDDs are dirt cheap these days.
 
  - Separate data according to importance and increase the redundancy level
   for the most important data. Data which is unimportant or can be
 recreated
   need not be backed up at all. Precious data might have multiple backups
 to
   onsite, offsite and write-once media.
 
  Nick.
 
 --
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] LVM

2010-06-15 Thread Gerald C.Catling
Many thanks to all who responded to try to solve this LVM problem.
I could not recover any data from the crashed system. I could not find any
method of mounting drive 2 or 3 as individual drives, and the system would not
create a volume group without the now non-existent first drive.
Once again, many thanks.
I will have to try RAID 1.
Gerald
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] LVM

2010-06-14 Thread Gerald C.Catling
Hi Guys,
I am a PCLinuxOS user and I have seen references to LVM here (at SLUG).
I have 3 drives LVM'd to give me 1.3TB of storage space on my server.
The first drive of this set has died.
I was wondering if any of you Gurus could suggest a method of getting any
remaining data from the LVM drives, that is drives 2 and 3, that are left.
I have tried rebuilding the set, wg0, but the system wants to reformat the
drive wg0, just created. Is this formatting going to format the real drives
rather than just the LVM component?
Your help will be much appreciated.
Gerald
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-14 Thread Matthew Hannigan
On Mon, Jun 14, 2010 at 06:14:15PM +1000, Gerald C.Catling wrote:
 Hi Guys,
 I am a PCLinuxOS user and I have seen references to LVM here (at SLUG).
 I have 3 drives LVM'd to give me 1.3TB of storage space on my server.
 The first drive of this set has died.
 I was wondering if any of you Gurus could suggest a method of getting any
 remaining data from the LVM drives, that is drives 2 and 3, that are left.
 I have tried rebuilding the set, wg0, but the system wants to reformat the
 drive wg0, just created. Is this formatting going to format the real drives
 rather than just the LVM component?
 Your help will be much appreciated.
 Gerald
 -- 
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

lvscan might get you started.  I've not done much recovery work with LVM
under Linux, so I'm not really willing to suggest things that might make it
worse!

You might want to do a dd-level backup of the drives just in case.
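For example (device and destination are assumptions):

dd if=/dev/sdb of=/mnt/usb/sdb.img bs=1M conv=noerror,sync
# or, gentler on a failing disk, GNU ddrescue with a log file:
ddrescue /dev/sdb /mnt/usb/sdb.img /mnt/usb/sdb.log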
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-14 Thread Daniel Pittman
Gerald C.Catling gcsgcatl...@bigpond.com writes:

 I am a PCLinuxos user and I have seen references to LVM here ( at SLUG)
 I have 3 drives LVM'd to give me 1.3TB of storage space on my server.
 The first drive of this set has died.

I am guessing that by "LVM'd" you mean "concatenated together, no redundancy",
right?  So, basically, you lost one disk and you have lost (more or less) a
third of the data under the file system, etc.

I further assume that "first" means the one that contains the superblock, as
in the linearly first space on the disk.

 I was wondering if any of you Gurus could suggest a method of getting any
 remaining data from the LVM drives, that is drives 2 and 3, that are left.

I can identify three approaches:

One: Get the dead drive working long enough to actually recover the content
from the file system with all the data around.

That should work provided "died" means "has a bunch of bad sectors" rather
than "will not respond to SATA commands".


Two: Use something that scans the disk and looks for file content, then
extracts it.  This is unlikely to bring much joy, but might be better than
nothing.

I have used some of the tools packaged in Debian before, especially
'testdisk', with reasonable success, on *simple* cases like recovering JPEG/RAW
images from CF cards.  For a complex case like a 1.3TB file system, I
wouldn't hold much hope for getting a *lot* of content back.


Three: talk to the upstream file system developers, and see if they can help
identify a mechanism that might recover data without the first chunk.


I suspect those are in decreasing order of what you get back, and other than
the first that will be very little.


Er, and there is another option: pay a data recovery company to do this.  It
shouldn't cost more than a few thousand dollars for a fairly simple case, and
might have a better recovery rate than the alternatives if, say, disk one
*isn't* responding, but they can get it back talking for a bit without too
much trouble.

 I have tried rebuilding the set, wg0, but the system wants to reformat the
 drive wg0, just created. Is this formatting going to format the real drives
 rather than just the LVM component?

All LVM does, in this case, is rewrite the write command so that it talks to
the appropriate bit of the underlying physical device.  So, yes, because there
is no difference between the two.



Anyway, for the future: if you concatenate drives, which is all that LVM does,
you *increase* the chance of total data loss in your system by the number of
devices; in your case — three disks, triple the chances you lose.

So, the take-away lesson is that if you intend to do this take one of these
three approaches:

1. Format each device as a separate file system, rather than concatenating
   them, so that loss of one device only takes away one set of data, not all
   three.  Penalty: you now have a PITA job using all that space.

2. Keep good backups, so that when (and it is when, not if) you lose a
   device you recover much more gracefully.

3. Use some sort of redundancy: software RAID is a pretty decent choice, and
   is pretty inexpensive these days.  Certainly, I bet that the extra few
   hundred dollars for a second set of disks is less than the cost of trying
   to recover all that data.
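As a sketch of option 3 (device names are assumptions), mirror two disks
with md and then put LVM on top of the mirror:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
pvcreate /dev/md0
vgcreate vg0 /dev/md0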


Um, and sorry: it sucks that you are now probably going to lose all that data.

Daniel
-- 
✣ Daniel Pittman    ✉ dan...@rimspace.net    ☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-14 Thread Nick Andrew
On Mon, Jun 14, 2010 at 06:14:15PM +1000, Gerald C.Catling wrote:
 I was wondering if any of you Gurus could suggest a method of getting any
 remaining data from the LVM drives, that is drives 2 and 3, that are left.
 I have tried rebuilding the set, wg0, but the system wants to reformat the
 drive wg0, just created. Is this formatting going to format the real drives
 rather than just the LVM component?

Before doing anything else, take a backup of the contents of the drives.
Work on the backup if possible - e.g. buy a 1.5T external drive and copy
the data before touching anything.

LVM can bring up a volume group with one or more devices missing if it
knows the existing allocation of Physical Extents. Ext2/Ext3 keeps
copies of superblocks at various places within the filesystem so it
is possible to recover a corrupted superblock, which may help you to
obtain data from the remaining good disks.

Nick.
-- 
PGP Key ID = 0x418487E7  http://www.nick-andrew.net/
PGP Key fingerprint = B3ED 6894 8E49 1770 C24A  67E3 6266 6EB9 4184 87E7
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-14 Thread james
On Tuesday 15 June 2010 10:00:03 slug-requ...@slug.org.au wrote:
  I am a PCLinuxos user and I have seen references to LVM here ( at SLUG)
  I have 3 drives LVM'd to give me 1.3TB of storage space on my server.
  The first drive of this set has died.
 
 I am guessing that by LVM'd you mean concatenated together, no
  redundancy, right?  So, basically, you lost one disk and you have lost
  (more or less) a third of the data under the file system, etc.

The stuff below is interesting and a reference, but this highlights my
favourite rant: Seagate's 'ATA more than an interface' says multiple disks in
a machine *will* result in a higher failure rate, maybe much higher.
So raid is a less bad option than LVM. Heed the advice in slug talks about
backup (sorry Sonia and Margurite, I don't remember who presented them).

It is possible, but not likely, that *every* file on your disks is distributed
over all 3 disks, so worst case is that you lost 1/3 of every file you have.

James
  
 I further assume that first means the one that contains the superblock,
  as in the linearly first space on the disk.
 
  I was wondering if any of you Guru's could suggest a method of getting
  any remaing data from the LVM drives, that is drive 2 and 3, that are
  left.
 
 I can identify three approaches:
 
 One: Get the dead drive working long enough to actually recover the
  content from the file system with all the data around.
 
 That should work provided died means has a bunch of bad sectors rather
 than will not respond to SATA commands.
 
 
 Two: Use something that scans the disk and looks for file content, then
 extracts it.  This is unlikely to bring much joy, but might be better than
 nothing.
 
 I have used some of the tools packaged in Debian before, especially
 'testdisk', with reasonable success, on simple cases like recover JPEG/RAW
 images from CF cards.  For a complex case like a 1.3TB file system, I
 wouldn't hold much hope for getting a lot of content back.
 
 
 Three: talk to the upstream file system developers, and see if they can
  help identify a mechanism that might recover data without the first chunk.
 
 
 I suspect those are in decreasing order of what you get back, and other
  than the first that will be very little.
 
 
 Er, and there is another option: pay a data recovery company to do
  this.  It shouldn't cost more than a few thousand dollars for a fairly
  simple case, and might have a better recovery rate than the alternatives
  if, say, disk one *isn't* responding, but they can get it back talking for
  a bit without too much trouble.
 
  I have tried rebuilding the set, wg0, but the system wants to reformat the
  drive wg0, just created. Is this formatting going to format the real
  drives rather than just the LVM component?
 
 All LVM does, in this case, is rewrite the write command so that it talks
  to the appropriate bit of the underlying physical device.  So, yes,
  because there is no difference between the two.
 
 
 
 Anyway, for the future: if you concatenate drives, which is all that LVM
  does, you increase the chance of total data loss in your system by the
  number of devices; in your case — three disks, triple the chances you
  lose.
 
 So, the take-away lesson is that if you intend to do this take one of these
 three approaches:
 
 1. Format each device as a separate file system, rather than concatenating
them, so that loss of one device only takes away one set of data, not
  all three.  Penalty: you now have a PITA job using all that space.
 
 2. Keep good backups, so that when (and it is when, not if) you lose a
device you recover much more gracefully.
 
 3. Use some sort of redundancy: software RAID is a pretty decent choice,
  and is pretty inexpensive these days.  Certainly, I bet that the extra few
  hundred dollars for a second set of disks is less than the cost of trying
  to recover all that data.
 
 
 Um, and sorry: it sucks that you are now probably going to lose all that
  data.
 
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-14 Thread Daniel Pittman
james j...@tigger.ws writes:
 On Tuesday 15 June 2010 10:00:03 slug-requ...@slug.org.au wrote:

  I am a PCLinuxos user and I have seen references to LVM here ( at SLUG) I
  have 3 drives LVM'd to give me 1.3TB of storage space on my server.  The
  first drive of this set has died.

 I am guessing that by LVM'd you mean concatenated together, no
 redundancy, right?  So, basically, you lost one disk and you have lost
 (more or less) a third of the data under the file system, etc.

 The stuff below is interesting and a reference, but this highlights my
 favourite rant: Seagate's 'ATA more than an interface' says multiple disks
 in a machine *will* result in a higher failure rate, maybe much higher.

Without needing reference to a vendor study, you can work this out yourself
with some very basic probability math, and the MTBF values from the hard
disks.

Er, and watch out that things like bad batches of disks can result in failures
that are *not* independent events in probability terms.
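For example (the failure rate is purely illustrative): if a single disk has a
3% chance of dying in a given year, three independent disks give

  P(at least one failure) = 1 - (1 - 0.03)^3 ≈ 8.7%

so concatenating three disks without redundancy nearly triples the chance of
losing the whole volume.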

 So raid is a less bad option than LVM.

They serve entirely different purposes, and have some cross-over; you would
see the same problem with a RAID0, or RAID-concatenated-space system as with
an LVM concatenated space system — or the same redundancy if you used the
LVM/DM RAID1 target as if you used the MD version of the same.

So, it isn't as simple as saying "RAID is better than LVM" without talking
about the additional details.

(...and, FWIW, my preference is *both* RAID and LVM. :)

Daniel

-- 
✣ Daniel Pittman    ✉ dan...@rimspace.net    ☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2009-05-16 Thread Amos Shapira
2009/5/15 Steve Kowalik ste...@wedontsleep.org:
 Daniel Bush wrote:
 That sounds fraught.
 Are you sure I can't just go with the alternate cd which will walk me thru
 lvm and still give me a desktop kernel/system?


 Indeed you can. You can even download the alternate CD with jigdo!

Apparently yes, from googling around now.

When I picked up this procedure about a year ago I figured from
searching around that it was a known shortcoming of the Ubuntu
installer that it didn't support LVM.

Thanks for the correction.

--Amos
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2009-05-15 Thread Daniel Bush
2009/5/15 fos...@tpg.com.au

 Quoting Daniel Bush dlb.id...@gmail.com:

  2009/5/15 fos...@tpg.com.au

   - LVM is really cool and well worth the time to read up on it.   I
   am now going to LVM my home system.
  
  
  I'm planning to do this as well.
  I was thinking back to Mary's backup post last year and thinking if I
  could do lvm snapshots with an external harddrive.  Still a bit new to
 lvm
  though.
  I think you have to install the alternate ubuntu cd to get lvm
  right?
  (unless you are using the server install instead of the desktop).




 The distinction between desktop and server in ubuntu is an install
 option not anything else.  To add lvm to your existing system just
 'apt-get install lvm2'.


Aware of this.  Just weighing up whether to do a clean install, which is
why I think I have to use the alternate CD instead of the desktop one.



 To convert an existing setup to lvm you have to have some free space
 (partitions or whole harddisks to use).

 First create a volume group (chunk of hard disk spread across one or more
 hard drives)

 sudo lvm
 pvdiskscan
 pvcreate /dev/yourpartitions
 vgcreate vg1  /dev/part1  /dev/part2


 create a logical volume somewhere in that volume group  (say 300 gig
 named yourname in vg1)

 lvcreate -L300G -n yourname  vg1

 You can then mkfs.ext3 /dev/vg1/yourname  (replace ext3 with whatever
 is appropriate) and then mount it.


Since I've got you on this subject and maybe others reading this:

I was working with a test server using VMware ESX.  It runs on a virtual
disk which is just a file.  I decided to resize the file to a larger size,
which created a whole bunch of extra space at the end of the disk.  I made
this an LVM partition (/dev/sda4) using fdisk and then I did something
stupid, which was to run mke2fs directly on /dev/sda4.
I then tried to add this as a physical volume to my existing volume group
and then extend one of the existing logical volumes.
So far so dumb, right.

So now lvm tells me I've got X gigs and df -h tells me I've got Y gigs (the
old number).
I think all I have to do is resize the existing fs on the logical volume
(/dev/vg1/lv1).  I'm thinking there won't be any trouble because even
though /dev/sda4 had some sort of file system added to it (even though it
was an lvm partition), it never got used.  But not sure if running mke2fs on
/dev/sda4 has/will bork something.  (This is just a test system)
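For what it's worth, a sketch of that resize (the size is a placeholder;
growing a mounted ext3 filesystem online needs a reasonably recent 2.6
kernel, and an unmounted one should get an e2fsck -f first):

vgdisplay vg1                  # confirm the new free extents joined the VG
lvextend -L +10G /dev/vg1/lv1  # or however much space sda4 added
resize2fs /dev/vg1/lv1         # grow the fs to fill the LV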

On a separate issue:

Is it safe to grow a root/bootable ext3 partition, or do I have to unmount
it?  The resize2fs man page doesn't say anything, but I read somewhere that I
had to unmount and use a rescue disk (maybe this was just for ext2).  And I
also assumed I had to remove the journal, resize, check and then add the
journal back.

Is XFS a better solution for server lvm stuff and for growing?  Or maybe
even JFS?

-- 
Daniel Bush

http://blog.web17.com.au
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2009-05-15 Thread Amos Shapira
2009/5/15 fos...@tpg.com.au
 The distinction between desktop and server in ubuntu is an install
 option not anything else.  To add lvm to your existing system just
 'apt-get install lvm2'.

 To convert an existing setup to lvm you have to have some free space
 (partitions or whole harddisks to use).

That's probably the way to convert an existing system to LVM.

If you want to install Ubuntu with LVM straight away then it's a bit
more tricky since the LVM package is not included in the installation
CD so you have to:

1. Boot live cd.
2. open shell
3. apt-get install lvm2
4. insmod dm_mod
5. create PV, VG, LV's. Remember that you need to keep /boot on a
regular partition since grub can't read LVM.

(I can't remember off the top of my head now whether the GUI installer
will support creation of LV's once it finds PV's and VG's. In any case
it will be able to recognize the LV's and allow creation of
filesystem/swap partitions on top of them).

6. install system from live to hard disk
7. mount --bind ... special filesystems (proc, sys, dev) under the
hard-disk mount point
8. mount /boot under the right mount point on hard disk

(the above two steps are required because the lvm package installs
kernel modules and updates the initramfs)

9. chroot to the hard disk partition
10. apt-get install lvm2 again on the hard disk.

That's more or less it.
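Step 5 might look something like this (device and sizes are assumptions,
with /boot staying on a plain partition such as /dev/sda1):

pvcreate /dev/sda2
vgcreate vg0 /dev/sda2
lvcreate -L 8G -n root vg0
lvcreate -L 1G -n swap vg0
mkfs.ext3 /dev/vg0/root
mkswap /dev/vg0/swap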

--Amos
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2009-05-15 Thread Daniel Bush
2009/5/15 Amos Shapira amos.shap...@gmail.com

 2009/5/15 fos...@tpg.com.au
  The distinction between desktop and server in ubuntu is an install
  option not anything else.  To add lvm to your existing system just
  'apt-get install lvm2'.
 
  To convert an existing setup to lvm you have to have some free space
  (partitions or whole harddisks to use).

 That's probably the way to convert an existing system to LVM.

 If you want to install Ubuntu with LVM straight away then it's a bit
 more tricky since the LVM package is not included in the installation
 CD so you have to:

 1. Boot live cd.
 2. open shell
 3. apt-get install lvm2
 4. insmod dm_mod
 5. create PV, VG, LV's. Remember that you need to keep /boot on a
 regular partition since grub can't read LVM.

 (I can't remember off the top of my head now whether the GUI installer
 will support creation of LV's once it finds PV's and VG's. In any case
 it will be able to recognize the LV's and allow creation of
 filesystem/swap partitions on top of them).

 6. install system from live to hard disk
 7. mount --bind ... special filesystems (proc, sys, dev) under the
 hard-disk mount point
 8. mount /boot under the right mount point on hard disk

 (the above two steps are required because the lvm package installs
 kernel modules and updates the initramfs)

 9. chroot to the hard disk partition
 10. apt-get install lvm2 again on the hard disk.

 That's more or less it.


That sounds fraught.
Are you sure I can't just go with the alternate cd which will walk me thru
lvm and still give me a desktop kernel/system?

-- 
Daniel Bush

http://blog.web17.com.au
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2009-05-15 Thread Daniel Bush
2009/5/15 Daniel Bush dlb.id...@gmail.com



 2009/5/15 fos...@tpg.com.au

 Quoting Daniel Bush dlb.id...@gmail.com:

  2009/5/15 fos...@tpg.com.au

   - LVM is really cool and well worth the time to read up on it.   I
   am now going to LVM my home system.
  
  
  I'm planning to do this as well.
  I was thinking back to Mary's backup post last year and thinking if I
  could do lvm snapshots with an external harddrive.  Still a bit new to
 lvm
  though.
  I think you have to install the alternate ubuntu cd to get lvm
  right?
  (unless you are using the server install instead of the desktop).




 The distinction between desktop and server in ubuntu is an install
 option not anything else.  To add lvm to your existing system just
 'apt-get install lvm2'.


 Aware of this.  Just weighing up whether to do a clean install, which is
 why I think I have to use the alternate CD instead of the desktop one.



 To convert an existing setup to lvm you have to have some free space
 (partitions or whole harddisks to use).

 First create a volume group (chunk of hard disk spread across one or more
 hard drives)

 sudo lvm
 pvdiskscan
 pvcreate /dev/yourpartitions
 vgcreate vg1  /dev/part1  /dev/part2


 create a logical volume somewhere in that volume group  (say 300 gig
 named yourname in vg1)

 lvcreate -L300G -n yourname  vg1

 You can then mkfs.ext3 /dev/vg1/yourname  (replace ext3 with whatever
 is appropriate) and then mount it.


 Since I've got you on this subject and maybe others reading this:

 I was working with a test server using vmware esx.  It runs on a virtual
 disk which is just a file.  I decided to resize the file to a larger size
 which created a whole bunch of extra space at the end of the disk.  I made
 this an lvm partition (/dev/sda4) using fdisk and then I did something
 stupid which was to run mke2fs directly on /dev/sda4.
 I then tried to add this as a physical volume to my existing volume group
 and then extend one of the existing logical volumes.
 So far so dumb, right.

 So now lvm tells me I've got X gigs and df -h tells me I've got Y gigs (the
 old number).
 I think all I have to do is resize the existing fs on the logical volume
 (/dev/vg1/lv1) .  I'm thinking there won't be any trouble because even
 though /dev/sda4 had some sort of file system added to it (even though it
 was an lvm partition), it never got used.  But not sure if running mke2fs on
 /dev/sda4 has/will bork something.  (This is just a test system)

 On a separate issue:

 Is it safe to grow a root/bootable ext3 partition or do I have to unmount
 it - the resize2fs man page doesn't say anything but I read


What I meant to say was "grow the root ext3 fs which is on the bootable
first partition".

somewhere that I had to unmount and use a rescue disk (maybe this was just
 for ext2)? And I also assumed I had to remove the journal, resize, check and
 then add the journal.




 Is XFS a better solution for server lvm stuff and for growing? -  or maybe
 even JFS ??


Found this thread on file systems:
http://linux.derkeiler.com/Mailing-Lists/Debian/2008-01/msg01789.html
Guess I'm sticking with ext3 for the moment.


-- 
Daniel Bush

http://blog.web17.com.au
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2009-05-15 Thread Steve Kowalik
Daniel Bush wrote:
 That sounds fraught.
 Are you sure I can't just go with the alternate cd which will walk me thru
 lvm and still give me a desktop kernel/system?
 

Indeed you can. You can even download the alternate CD with jigdo!

Cheers,
-- 
Steve
Wrong is endian little that knows everyone but.
  - Sam Hocevar

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html



[SLUG] LVM

2009-05-14 Thread foskey
Quoting Daniel Bush dlb.id...@gmail.com:

 2009/5/15 fos...@tpg.com.au

  - LVM is really cool and well worth the time to read up on it.   I
  am now going to LVM my home system.
 
 
 I'm planning to do this as well.
 I was thinking back to Mary's backup post last year and thinking if I
 could do lvm snapshots with an external harddrive.  Still a bit new to lvm
 though.
 I think you have to install the alternate ubuntu cd to get lvm
 right?
 (unless you are using the server install instead of the desktop).


The distinction between desktop and server in ubuntu is an install
option not anything else.  To add lvm to your existing system just
'apt-get install lvm2'.

To convert an existing setup to lvm you have to have some free space
(partitions or whole harddisks to use).

First create a volume group (chunk of hard disk spread across one or more
hard drives)

sudo lvm
pvdiskscan
pvcreate /dev/yourpartitions
vgcreate vg1  /dev/part1  /dev/part2


create a logical volume somewhere in that volume group  (say 300 gig
named yourname in vg1)

lvcreate -L300G -n yourname  vg1

You can then mkfs.ext3 /dev/vg1/yourname  (replace ext3 with whatever
is appropriate) and then mount it.


Snapshots: I cannot vouch for snapshots.  There is something different
between lvm1 and lvm2 snapshots and they cannot be used across V1 and V2;
since I am not using them I have not done any further reading.
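For reference, the lvm2 snapshot commands look something like this (untested
here, and the names and sizes are made up; the -L size only has to cover
changes made while the snapshot exists, not the whole origin volume):

lvcreate -s -L 5G -n homesnap /dev/vg1/yourname
mount -o ro /dev/vg1/homesnap /mnt/snap
rsync -a /mnt/snap/ /media/external/backup/
umount /mnt/snap
lvremove -f /dev/vg1/homesnap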

I think gparted can be used to manage LVM after it is set up.  I cannot
verify this though.  You can add hard drives to the VG once you have
created it.

Ta
Ken
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM Re-mounting Hell

2006-07-05 Thread Sonia Hamilton
* On Mon, Jul 03, 2006 at 05:44:46PM +1000, Kevin Fitzgerald wrote:
 Hi All. Hoping someone can help.
 
 First the question, then the background
 
 Q: Can anyone help me re-mount LVM disks to a new machine

There's an article in Linux Journal June '06 that may help with this -
should be on the website by now.

--
Sonia Hamilton. GPG key A8B77238.
.
Complaining that Linux doesn't work well with Windows is like ... oh,
say, evaluating an early automobile and complaining that there's no
place to hitch up a horse. (Daniel Dvorkin)
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] LVM Re-mounting Hell

2006-07-03 Thread Kevin Fitzgerald
Hi All. Hoping someone can help. 

First the question, then the background

Q: Can anyone help me re-mount LVM disks to a new machine

Background
Machine 1, Fedora Core 4, 3 disks in an LVM. Something went horribly
wrong and the box won't boot anymore (I suspect one of the disks has
failed). So I have built a 2nd machine with the hope of plugging in the
disks from Machine 1, mounting them and retrieving some of the data
(whatever is there).

Machine 2 has a single 13GB drive in it, also set up with LVM.

I plug in one of the drives from the old machine and it comes up as
/dev/hdc with a single partition on it (hdc1). If I run fdisk -l on
the box I get the following:

Disk /dev/hda: 13.0 GB, 13022324736 bytes
255 heads, 63 sectors/track, 1583 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        1583    12611025   8e  Linux LVM

Disk /dev/hdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1   *           1       30401   244196001   8e  Linux LVM

All good. So, trying to follow the directions I found on the web at
http://forums.teamphoenixrising.net/archive/index.php/t-32150.html
(three quarters of the way down the page), I come up no good. I type
lvm and get the correct prompt. I type vgs and get:

[EMAIL PROTECTED] ~]# vgs
 Couldn't find device with uuid 'eGeBrP-O0XF-K5qr-g76h-XYut-1bBQ-PbzVdK'.
 Couldn't find all physical volumes for volume group VolGroup01.
 Couldn't find device with uuid 'eGeBrP-O0XF-K5qr-g76h-XYut-1bBQ-PbzVdK'.
 Couldn't find all physical volumes for volume group VolGroup01.
 Volume group VolGroup01 not found
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   1   2   0 wz--n- 12.00G 32.00M

And then, of course, nothing goes right after that. Can anyone help me? What have I missed?

kev
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

Re: [SLUG] LVM Re-mounting Hell

2006-07-03 Thread Gavin Carr
On Mon, Jul 03, 2006 at 05:44:46PM +1000, Kevin Fitzgerald wrote:
 Hi All. Hoping someone can help.
 
 First the question, then the background
 
 Q: Can anyone help me re-mount LVM disks to a new machine
 
 Background
 Machine 1, Fedora core 4, 3 disks in a LVM. 

What do you mean by that? Do you mean you are striping across the disks
using LVM i.e. a volume group/logical volume spanning all three disks?

If so, I suspect you are hosed. I _think_ LVM requires the entire volume
group for you to be able to bring it up on a new machine. Striping/RAID0
is dangerous in this respect - you're increasing your likelihood of 
volume failure by the number of disks in your stripe. You should only
use striped volumes for data you don't care about, or you should stripe
over an underlying mirror instead (which is RAID 10).

Cheers,
Gavin


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM Re-mounting Hell

2006-07-03 Thread Matthew Hannigan
On Tue, Jul 04, 2006 at 06:56:12AM +1000, Gavin Carr wrote:
 On Mon, Jul 03, 2006 at 05:44:46PM +1000, Kevin Fitzgerald wrote:
  Hi All. Hoping someone can help.
  
  First the question, then the background
  
  Q: Can anyone help me re-mount LVM disks to a new machine
  
  Background
  Machine 1, Fedora core 4, 3 disks in a LVM. 
 
 What do you mean by that? Do you mean you are striping across the disks
 using LVM i.e. a volume group/logical volume spanning all three disks?

Well, the problem is not hopeless if you have the vg information
and any single lv is entirely on a single disk.

The recovery could still be messy though.
We need more information.
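To gather it, something along these lines might help (VolGroup01 is the name
from the error output; --partial tells the tools to work around missing PVs):

pvscan
vgdisplay --partial VolGroup01
vgchange -ay --partial VolGroup01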

 If so, I suspect you are hosed. I _think_ LVM requires the entire volume
 group for you to be able to bring it up on a new machine. Striping/RAID0
 is dangerous in this respect - you're increasing your likelihood of 
 volume failure by the number of disks in your stripe. You should only
 use striped volumes for data you don't care about, or you should stripe
 over an underlying mirror instead (which is RAID 10).

Nod.

FWIW note that even in raid 10 or raid 01 the probability 
of data loss increases a lot as the number of disks increases.

Matt

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM problems

2006-03-26 Thread Daniel Pottumati
Hi Dave,
this is what I get:

# lvremove /dev/storage/
  Volume group storage not found
# lvremove /dev/storage/storage
  Volume group storage not found
# lvremove /dev/mapper/storage-storage
  Volume group mapper not found
Thanks..

Regards;
Daniel Pottumati

IT Services Unit
Economic and Financial Studies Division
Macquarie Unversity
 David Kempe [EMAIL PROTECTED] 03/24/06 6:21 PM 
Daniel Pottumati wrote:
 # vgremove storage
 returns: 
 Volume group storage not found or inconsistent.
   Consider vgreduce --removemissing if metadata is inconsistent.


had this problem just yesterday.
what about lvremove /dev/storage?

dave

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] LVM problems

2006-03-23 Thread Daniel Pottumati
Hi,

Having a few issues with LVM when trying to create VG:

# vgcreate storage /dev/hda16
returns:  /dev/storage: already exists in filesystem

# vgremove storage
returns: 
Volume group storage not found or inconsistent.
  Consider vgreduce --removemissing if metadata is inconsistent.

# vgreduce --removemissing storage
gives:  Volume group storage not found

Could be something stupid, but I don't know.
Your help is much appreciated.
Thanks
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM problems

2006-03-23 Thread Shane

Does vgdisplay actually display a VG called 'storage'?

Did you previously have a VG called 'storage' and remove it or  
something similar?


Shane

On 24/03/2006, at 12:20 PM, Daniel Pottumati wrote:


Hi,

Having a few issues with LVM when trying to create VG:

# vgcreate storage /dev/hda16
returns:  /dev/storage: already exists in filesystem

# vgremove storage
returns:
Volume group storage not found or inconsistent.
  Consider vgreduce --removemissing if metadata is inconsistent.

# vgreduce --removemissing storage
gives:  Volume group storage not found

Could be something stupid, but I don't know.
Your help is much appreciated.
Thanks
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM problems

2006-03-23 Thread Daniel Pottumati
No, vgdisplay doesn't show anything about storage.

I did, I had a partition, I did a pvcreate on it, then vgcreate.
I then did a mkfs.reiserfs on it, and that's when I lost the pv.

But storage is still in /dev/storage/storage and
/dev/mapper/storage-storage

Thanks
Daniel

Regards;
Daniel Pottumati

IT Services Unit
Economic and Financial Studies Division
Macquarie Unversity
 Shane [EMAIL PROTECTED] 03/24/06 5:24 PM 
Does vgdisplay actually display a VG called 'storage'?

Did you previously have a VG called 'storage' and remove it or  
something similar?

Shane

On 24/03/2006, at 12:20 PM, Daniel Pottumati wrote:

 Hi,

 Having a few issues with LVM when trying to create VG:

 # vgcreate storage /dev/hda16
 returns:  /dev/storage: already exists in filesystem

 # vgremove storage
 returns:
 Volume group storage not found or inconsistent.
   Consider vgreduce --removemissing if metadata is inconsistent.

 # vgreduce --removemissing storage
 gives:  Volume group storage not found

 Could be something stupid, but I don't know.
 Your help is much appreciated.
 Thanks
 -- 
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM problems

2006-03-23 Thread Shane
If the /dev/storage/storage stuff is still there, but you don't have
the LV or VG when using the display commands, that's why it's failing.


I don't know enough about it to say for certain how to proceed from  
here to get rid of /dev/storage/storage, hopefully someone else can  
help out?


I think if you can't reclaim 'storage' you might be able to just name  
the VG something else and proceed as long as nothing tries to use  
'storage'.
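One hedged guess at how, since /dev/mapper/storage-storage suggests a stale
device-mapper entry:

dmsetup ls                      # should list storage-storage
dmsetup remove storage-storage  # drop the stale mapping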


Shane


On 24/03/2006, at 5:42 PM, Daniel Pottumati wrote:


No, vgdisplay doesn't show anything about storage.

I did, I had a partition, I did a pvcreate on it, then vgcreate.
I then did a mkfs.reiserfs on it, and that's when I lost the pv.

But storage is still in /dev/storage/storage and
/dev/mapper/storage-storage

Thanks
Daniel

Regards;
Daniel Pottumati

IT Services Unit
Economic and Financial Studies Division
Macquarie Unversity

Shane [EMAIL PROTECTED] 03/24/06 5:24 PM 

Does vgdisplay actually display a VG called 'storage'?

Did you previously have a VG called 'storage' and remove it or
something similar?

Shane

On 24/03/2006, at 12:20 PM, Daniel Pottumati wrote:


Hi,

Having a few issues with LVM when trying to create VG:

# vgcreate storage /dev/hda16
returns:  /dev/storage: already exists in filesystem

# vgremove storage
returns:
Volume group storage not found or inconsistent.
  Consider vgreduce --removemissing if metadata is inconsistent.

# vgreduce --removemissing storage
gives:  Volume group storage not found

Could be something stupid, but I don't know.
Your help is much appreciated.
Thanks

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM problems

2006-03-23 Thread David Kempe

Daniel Pottumati wrote:

# vgremove storage
returns:
Volume group storage not found or inconsistent.
  Consider vgreduce --removemissing if metadata is inconsistent.



Had this problem just yesterday.
What about lvremove /dev/storage?
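
If that clears the LV, the rest of the teardown might run along these
lines (a sketch; the device name comes from the original post):

# lvremove /dev/storage/storage    (remove the leftover logical volume)
# vgremove storage                 (the VG should now remove cleanly)
# pvremove /dev/hda16              (only if you also want to wipe the PV label)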

dave
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] LVM and software RAID

2005-09-22 Thread Raphael Kraus
G'day,

We're trying to set up a host with software RAID (mirroring) and LVM at
work (for a backup server).

Just trying to install Debian with RAID partitions is proving painful.

Anyone done this before? Any recommendations, tips, suggested methods?

I'm thinking I should install to one drive, putting a plain /boot
partition and then LVM on it, and leaving the second drive alone. Once
the install is complete, create the RAID with the second drive marked
as failed. (I can remember doing this a while back, but I think RAID
has changed on Linux since then.)
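
With current mdadm the degraded-mirror trick looks roughly like this (a
sketch; partition names are illustrative, and the array is created on
the untouched drive with the installed drive's slot left 'missing'):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# pvcreate /dev/md0                 (or mkfs it directly for a plain fs)
  (copy the installed system across, then fix fstab and the bootloader)
# mdadm --add /dev/md0 /dev/sda1    (add the first drive; the mirror resyncs)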

Sorry if I'm answering my own questions here. I've struggled with
Debian's installer and seen my colleague spend longer on it while head
down in programming. Things seem to have clicked a bit more since
relaxing.

Look forward to seeing you all again on Friday week.

Raphael
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM and software RAID

2005-09-22 Thread Trevor D. Manning
* Raphael Kraus ([EMAIL PROTECTED]) wrote:
 G'day,
 
 We're trying to set up a host with software RAID (mirroring) and LVM
 at work (for a backup server).
 
 Just trying to install Debian with RAID partitions is proving painful.
 
 Anyone done this before? Any recommendations, tips, suggested methods?

Perhaps this could help:

http://www.epimetrics.com/topics/one-page?page_id=421&topic=Bit-head%20Stuff&page_topic_id=120

Perhaps you could also find this useful:

http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Software_RAID_mirror_and_LVM2_on_top_of_RAID

-- 
- Trevor Manning
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM and software RAID

2005-09-22 Thread Ricky
I don't think you need LVM for your /boot partition (in fact, it may not
even be possible). If you want to be super safe, just give it its own
partition of 100MB or so.

Assuming you've created 4 partitions and used fdisk to change their type
to Linux raid autodetect (0xfd):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/hda{9,10,11,12}

This should give you a RAID 5 across partitions 9 to 12. Now format md0
as an ext3 filesystem:

mke2fs -j -b 4096 /dev/md0

Watch the progress in /proc/mdstat. Of course, you still have to do all
the usual mount stuff and update fstab, as sketched below.
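
Something like this, with an illustrative mount point:

# cat /proc/mdstat                 (watch the array build)
# mkdir /data
# mount /dev/md0 /data

plus an fstab line along the lines of:

/dev/md0  /data  ext3  defaults  0  2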

LVM is a different beast: you first create an LVM partition, then put it
into a volume group, then create logical volumes on top.

Use fdisk to change the type to Linux LVM (0x8e), then:

# pvcreate /dev/hda9

To group them into a volume group, say vdisk:

# vgcreate vdisk /dev/hda9 /dev/hda10

Now create a logical volume named data:

# lvcreate -L 512M -n data vdisk

Make sure you read the man pages for all these commands; there are too
many parameters to discuss here.

I never did try mixing RAID with LVM. Can anyone comment on their
experience?
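
One common way to stack them is LVM on top of md, with the array acting
as the physical volume. A sketch with illustrative names:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda9 /dev/hdb9
# pvcreate /dev/md0
# vgcreate vdisk /dev/md0
# lvcreate -L 512M -n data vdisk
# mke2fs -j /dev/vdisk/data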

Cheers

- Original Message - 
From: Raphael Kraus [EMAIL PROTECTED]


 G'day,

 We're trying to set up a host with software RAID (mirroring) and LVM at
 work (for a backup server).

 Just trying to install Debian with RAID partitions is proving painful.

 Anyone done this before? Any recommendations, tips, suggested methods?

 I'm thinking I should install to one drive, putting a plain /boot
 partition and then LVM on it, and leaving the second drive alone. Once
 the install is complete, create the RAID with the second drive marked
 as failed. (I can remember doing this a while back, but I think RAID
 has changed on Linux since then.)

 Sorry if I'm answering my own questions here. I've struggled with
 Debian's installer and seen my colleague spend longer on it while head
 down in programming. Things seem to have clicked a bit more since
 relaxing.

 Look forward to seeing you all again on Friday week.

 Raphael

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] lvm == raid0

2005-09-10 Thread ashley maher
I'm just about to do an install.

I have two SCSI disks I wish to combine. Am I correct in understanding
that striped LVM will give the same performance as RAID0?

Some references suggest that striped LVM on SCSI drives does not match
the performance of RAID0.

Comments (good references) appreciated.

Regards,

Ashley

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] lvm == raid0

2005-09-10 Thread Glen Turner

ashley maher wrote:

I'm just about to do an install.

I have two SCSI disks I wish to combine. Am I correct in understanding
that striped LVM will give the same performance as RAID0?

Some references suggest that striped LVM on SCSI drives does not match
the performance of RAID0.

Comments (good references) appreciated.


I hate RAID jargon.  The cheat sheet is:

 0  striping
 1  mirroring
 4  parity on separate disk (useful if the parity disk is NVRAM)
 5  parity distributed across all disks

There is another disk function -- concatenation -- which has
no RAID level.

You can stack these -- RAID10 is mirrored (1) disks which
are then striped (0).  And concatenations of RAID5 disks
are pretty common at the high end.

So as you can see, both md and LVM can stripe. Your choice should be
guided by the reason you are striping. If you are doing it to build
bigger volumes, then doing it with LVM makes sense, as the LVM tools
can then better administer the result. If you are striping for
performance, then md makes more sense; both setups are sketched below.
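
Roughly (illustrative devices; for lvcreate, -i is the stripe count and
-I the stripe size in KB):

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

versus, inside an existing volume group:

# lvcreate -i 2 -I 64 -L 100G -n fast vg0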

I'd run bonnie++ over both and make sure the results weren't
wildly different before making a choice.  Then I'd go with
LVM because you might want to add a third disk in the future.
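
One run per configuration is enough to compare (the mount point and
user are illustrative):

# bonnie++ -d /mnt/test -u nobody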

The other thing you should do is consider exploiting the fact
you now have two disk heads.  So if you are running a fixed
application you can tune the application disk access paths
to the disks.  As a trivial example, Apache does two things,
reads content and writes logs. So you'd put those on separate
disks, so the disk head is always at the end of the log file.
You'll find professional database systems run lots of small
fast disks to do this sort of optimisation (eg, run the
transaction log on a pair of mirrored 15,000 RPM disks,
split the index and data across differing RAID arrays, etc).

The other thing that's not clear from your e-mail is what
your backup and recovery strategy is.  This often determines
the RAID configuration.  And given two disks I'd be mirroring
rather than going for increased capacity.

Cheers,
Glen
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html