Having just finished various dist-upgrades of my desktop from Intrepid
to Lucid, and healed a sick RAID in the process, after booting to Lucid
I am presented with a warning that disk failure is imminent. The disk
in question is not part of a RAID; it is a 1 TB disk that I added
separately. The disk has always been empty, so there is no worry about
losing stuff. 

The warning comes from Palimpsest, the new GUI disk utility that comes
with Lucid. I launched Palimpsest as root so I could use its GUI to see
if I could fix the drive. After unmounting it I clicked on the Check
Filesystem button. Palimpsest then reported in a popup window
that the disk was clean, while the main window still insisted that disk
failure was imminent. According to the SMART Data window, the
Reallocated Sector Count is shown in red and labeled as failing, as is
the Airflow Temperature, which it reports as 49 C.
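
For comparison, the same SMART attributes can be read from the command
line with smartmontools, if that package is installed (a sketch;
I'm assuming the disk is /dev/sdc):

```shell
# Overall health verdict (PASSED or FAILED) from the drive itself
sudo smartctl -H /dev/sdc

# Full attribute table; the ones Palimpsest flags are
# Reallocated_Sector_Ct (ID 5) and Airflow_Temperature_Cel (ID 190)
sudo smartctl -A /dev/sdc
```

The raw value of Reallocated_Sector_Ct, rather than Palimpsest's
red/green summary, is probably the number to quote in an RMA request.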

While wondering about these reports I reformatted the drive as ext4 (the
Lucid default, evidently). The reformat went fine without errors.

At that point I concluded that Palimpsest was given to prevarication.
But just to be sure I decided to run e2fsck on the disk. This gave me
a strange error message:

Superblock invalid, trying backup blocks ...
Bad magic number in super-block while trying to open /dev/sdc
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate
superblock:
        e2fsck -b 8193 <device>

First, note that e2fsck thinks it is looking for an ext2 filesystem. I
read all the way through the e2fsck man page looking for an option to
specify a particular filesystem, but couldn't find it.
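
As far as I can tell there is no such option in e2fsck itself, but the
generic fsck front-end does take a -t flag to name a filesystem type
(a sketch; on most systems fsck.ext4 is the same binary as e2fsck,
which autodetects ext2/ext3/ext4 anyway):

```shell
# Ask the fsck wrapper to dispatch to the ext4 checker explicitly
sudo fsck -t ext4 /dev/sdc
```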

Second, "8193" is the number to use if the block size is 1K, 16384 for
filesystems with 2K block sizes, and 32768 for 4K block sizes. So
before I try to fix the superblock I need to figure out what the block
size is. The Palimpsest GUI doesn't say, and I don't know how to find
out from the command line. 
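
One way to get the block size from the command line is to read the
superblock fields with the e2fsprogs tools (a sketch; I'm assuming the
filesystem sits directly on /dev/sdc as described above -- if it lives
on a partition, substitute e.g. /dev/sdc1):

```shell
# Print the superblock, including the "Block size:" line
sudo tune2fs -l /dev/sdc | grep 'Block size'

# dumpe2fs lists the backup superblock locations directly,
# which saves guessing at 8193 vs 16384 vs 32768
sudo dumpe2fs /dev/sdc | grep -i superblock
```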

This is not an urgent matter, but if the drive is really failing I'd
like to know so I can return it to Maxtor for replacement.

Any suggestions welcome.
_______________________________________________
PLUG mailing list
[email protected]
http://lists.pdxlinux.org/mailman/listinfo/plug
