WOW! That was a lot of information...

> Hi Chris:
>         I had to go through the painful process of recovering lost files on an
> ext FS (twice!) and it really is a PITA. Eventually I discovered that my
> problems were card-specific, tied to having a kernel built for dual
> processors. Nevertheless, if it weren't for the fact that the data was
> really, really important to me (digital photos, personal coding libraries,
> etc.) I wouldn't have bothered. Allow for spending bunches of time in
> inode hell :(

Interesting... I think the dual proc status of this server caused some
issues too.

> [snip]
> If you find you need to go to these lengths then I'll be happy to get back
> to you with the particulars of what exactly I did before blowing away the
> FS. (The drive was good and still runs today.)

I know I keep saying it... but I can't even pretend I comprehend what
you went through. Damn!!

This is where I am... the RAID seems to be staying stable for now. I did
rebuild it again today after it freaked out or something...

The last three lines [in dmesg] after the rebuild:
3w-xxxx: scsi2: AEN: INFO: Rebuild complete: Unit #1.
hfs: unable to find HFS+ superblock
VFS: Can't find ext3 filesystem on dev sda1.

That's what I got when I tried to mount it, in other words.
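For what it's worth, the hfs and VFS lines just mean mount probed a list of filesystem types and none of them found a valid superblock on sda1, so the hfs message is noise rather than a clue. A low-risk way to see what's actually sitting on the partition is `file -s`, which reads the superblock directly. A sketch on a throwaway image (on the real box you'd run `file -s /dev/sda1` as root; the paths here are made up):

```shell
# build a scratch ext2 image so this can run without touching real hardware
dd if=/dev/zero of=/tmp/probe.img bs=1M count=8 status=none
mkfs.ext2 -F -q /tmp/probe.img

# file -s reads the superblock and reports what is really there;
# on the real machine: file -s /dev/sda1   (as root)
file -s /tmp/probe.img
```

If `file -s /dev/sda1` reports "data" instead of an ext2/ext3 filesystem, the primary superblock really is gone and it's time to look at the backups.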

Then:
# fdisk -l

Disk /dev/ida/c0d0: 36.4 GB, 36406394880 bytes
255 heads, 63 sectors/track, 4426 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

        Device Boot      Start         End      Blocks   Id  System
/dev/ida/c0d0p1   *           1          13      104391   83  Linux
/dev/ida/c0d0p2              14        4426    35447422+  8e  Linux LVM

Disk /dev/sda: 501.9 GB, 501998288896 bytes
255 heads, 63 sectors/track, 61031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       61031   490231476   83  Linux

sda1 is the raid in question.
pvscan doesn't come up with anything wrt this raid.
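Since fdisk still sees the partition but mount can't find a superblock, it may be worth pointing e2fsck at one of the backup superblocks explicitly, in read-only mode. The backup locations on your array will differ from this sketch; the usual trick is that `mke2fs -n` (a dry run, it writes nothing) prints where the backups would live for a filesystem made with the same parameters. A minimal sketch on a scratch image:

```shell
# scratch 16 MiB image with 1 KiB blocks, so backup superblocks land at 8193
dd if=/dev/zero of=/tmp/ext.img bs=1M count=16 status=none
mkfs.ext3 -F -q -b 1024 /tmp/ext.img

# -n is a dry run: it only *prints* "Superblock backups stored on blocks: ..."
mkfs.ext3 -n -F -b 1024 /tmp/ext.img

# -n: open read-only, answer "no" to everything
# -b/-B: use the backup superblock at block 8193, block size 1024
e2fsck -n -b 8193 -B 1024 /tmp/ext.img
```

On the real partition that would be something like `e2fsck -n -b 32768 /dev/sda1` for a 4 KiB-block filesystem, but the right block number comes from the `mke2fs -n` output, not from this example.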

I do see lots of issues with inodes here, but I don't know what that
means, nor do I really understand your descriptions of what you did
with inode repairs:
# e2fsck -n /dev/sda1
e2fsck 1.38 (30-Jun-2005)
Couldn't find ext2 superblock, trying backup blocks...
/dev/sda1 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Inode 7 has illegal block(s).  Clear? no

Illegal block #32304 (469831937) in inode 7.  IGNORED.
Illegal block #32309 (1082130496) in inode 7.  IGNORED.
Illegal block #32318 (2149711872) in inode 7.  IGNORED.
Illegal block #32373 (1073741824) in inode 7.  IGNORED.
Illegal block #32379 (536870919) in inode 7.  IGNORED.
Illegal block #32380 (1426063360) in inode 7.  IGNORED.
Inode 7, i_blocks is 147464, should be 147584.  Fix? no

/dev/sda1: e2fsck canceled.

/dev/sda1: ********** WARNING: Filesystem still has errors **********

Obviously I'm not going to try to blindly "Fix" it right now... that's
just knowing enough to be dangerous.
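That's probably the right instinct. One cautious route, before letting fsck write a single block, is to image the partition and run every repair attempt against the copy, so the original stays untouched if a "fix" makes things worse. A sketch using a scratch file as a stand-in for the array (the device and backup paths are placeholders):

```shell
# stand-in for /dev/sda1 so this runs without the hardware
dd if=/dev/urandom of=/tmp/sda1.src bs=64k count=16 status=none

# conv=noerror,sync keeps dd going past read errors, padding bad blocks
dd if=/tmp/sda1.src of=/tmp/sda1.img bs=64k conv=noerror,sync status=none

# verify the image matches the source before trusting it
cmp /tmp/sda1.src /tmp/sda1.img && echo "image matches"

# on the real box (paths are examples):
#   dd if=/dev/sda1 of=/backup/sda1.img bs=4M conv=noerror,sync
#   e2fsck /backup/sda1.img      # repairs touch only the copy
```

You'd need roughly 500 GB of scratch space somewhere to hold the image, but it turns the "blindly Fix it" gamble into something reversible.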

It's like I've just hit a major wall here...
As for the data... there's a LOT of it, and while most of it is
limited in importance, I believe a good deal of it is very important,
just thankfully not currently mission-critical.
Chris
--
  c  h  r  i  s  .  m  o  r  a  n  @  g  m  a  i  l  .  c  o  m
b  u  t   y  o  u    k  n  o  w    t  h  a  t    a  l  r  e  a  d  y

http://www.uvm.edu/~cmoran Dare risk the ChrisPedia
