Hi guys,

The new server was coming along nicely. Note the stress on "was." I now have the pleasure of starting over again.

I'm not sure what caused the problem, so if any of you have had similar experiences with this setup, I'd like to hear about it so I can try something different.

The quick setup:

The server has two 160 GB SATA drives in a RAID1 configuration, with a few regular partitions and one large 140 GB partition used as an LVM2 physical volume.

There are logical volumes for /usr, /home, /var, /srv, and /tmp.
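
For reference, the array and volumes were created roughly like this (device names, the volume group name "lfs", and the sizes are from memory, so take them as approximate):

    # Mirror the large partition on each drive into a RAID1 array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4

    # Put LVM2 on top of the mirror and carve out the volumes
    pvcreate /dev/md0
    vgcreate lfs /dev/md0
    lvcreate -L 10G -n usr lfs
    lvcreate -L 80G -n home lfs
    lvcreate -L 20G -n var lfs
    lvcreate -L 20G -n srv lfs
    lvcreate -L 5G -n tmp lfs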

Everything seemed fine last night when I finished working and shut the system down. Before powering off, I unmounted the partitions, de-activated LVM, and then de-activated the RAID1 array (I had booted from a livecd).
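
The shutdown sequence was along these lines (mount points are from memory, and I'm assuming the volume group is called "lfs" as above):

    # Unmount all the logical volumes
    umount /mnt/lfs/usr /mnt/lfs/home /mnt/lfs/var /mnt/lfs/srv /mnt/lfs/tmp

    # De-activate the volume group, then stop the RAID1 array
    vgchange -a n lfs
    mdadm --stop /dev/md0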

I started it up again today and could no longer mount the /usr logical volume. I ran reiserfsck on it and had to fix a number of things, including the superblock.
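
In case it helps with diagnosis, the repair went roughly like this (I don't remember the exact messages):

    # Check first; it complained about the superblock among other things
    reiserfsck --check /dev/lfs/usr

    # Rebuild the superblock, then the internal tree
    reiserfsck --rebuild-sb /dev/lfs/usr
    reiserfsck --rebuild-tree /dev/lfs/usr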

After things were fixed, I was able to mount the /usr volume, but it came up empty: not a single file or directory remained. Not good.

I'm not sure what to blame: LVM, reiserfs, a combination of the two, or something else.

Searching Google for file corruption in conjunction with LVM and/or RAID yields some results, but the same is true for every filesystem you can think of, so it's hardly conclusive. There are reports that reiserfs on LVM sometimes corrupts under high I/O, but I wasn't doing anything I'd classify as high I/O.

So I have to start over again. I can try the previous setup again, or maybe use the ext3 filesystem instead of reiserfs.
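
If I go with ext3, the change should just be a matter of re-creating the filesystems on the same logical volumes, something like:

    mkfs.ext3 /dev/lfs/usr
    mkfs.ext3 /dev/lfs/home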

Any ideas, suggestions?

--
Gerard Beekmans

/* If Linux doesn't have the solution, you have the wrong problem */

