The density of bits on the magnetic surface has exploded since disks
were 5-10MB, especially in the last 10 years. Other storage
technologies haven't been able to keep up. You can't even back up your
disks any more because they're so huge. Ok there's DLT - far too
expensive for home use, and another disk. DVDs only cut it up to a
limit. A 200GB hard disk is over that limit. The high magnetic bit
density implies it's unreliable. Other factors like mechanical failure
and had crashes haven't changed in 20 years. And the lot has got
cheaper - you couldn't buy any size hard disk for 200 bucks 20 years
ago. Cheaper bigger better faster cheaper -> unreliable. The fact that
disks blow up hasn't changed a bit.

Forget about that bad sector scanning and bad blocks handling - it's
all in the disk already. You can't even influence it. From my last
discussion about how to make a program which salvages as many
still-readable sectors as possible from a damaged disk, it emerged that
when the disk returns an I/O error, it has already tried several
hundred times. No point trying a second time yourself, unless you alter
other parameters - head positioning geometry via temperature for
example. However the more you try and read faulty areas, the more you
risk taking even more particles off the surface and spreading them
around.
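
For the curious, a naive salvage reader boils down to something like
the sketch below (Python, untested; the device and image names are
made up, and the 512-byte read granularity is just an assumption -
adjust to taste). It reads the device sector by sector, writes zeros
wherever the drive returns an I/O error, and deliberately never
retries:

import os

DEV = "/dev/hdc"         # source device - hypothetical, use your own
IMG = "hdc-salvage.img"  # destination image file - hypothetical
SECTOR = 512             # bytes per read attempt

dev = os.open(DEV, os.O_RDONLY)
size = os.lseek(dev, 0, os.SEEK_END)   # total device size in bytes
bad = 0

with open(IMG, "wb") as out:
    for offset in range(0, size, SECTOR):
        chunk = min(SECTOR, size - offset)
        os.lseek(dev, offset, os.SEEK_SET)
        try:
            data = os.read(dev, chunk)
        except OSError:
            # The drive has already retried internally many times;
            # don't hammer the bad spot, just record a hole of zeros.
            data = b"\x00" * chunk
            bad += 1
        out.write(data)

os.close(dev)
print("unreadable sectors:", bad)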

Disks now have more or less substantial integrated self-monitoring
(this SMART stuff). They can measure whether head positioning is
mechanically wonky, whether spin-up is impaired, and similar things.
You can kickstart these tests externally too. If disks go *bang* and
keel over dead, then there's no warning. I've also had plenty of warning
from the disk's selftest; more of a problem is interpreting the signs
I found. You want the smartmontools package from sourceforge.
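
Once smartmontools is installed, kicking the tests off and reading the
results back is just a couple of smartctl calls. A rough Python wrapper
as illustration (the device name is made up, and you need root):

import subprocess

DEV = "/dev/hda"   # hypothetical device name - point it at your own disk

# Overall health verdict as the drive itself reports it.
subprocess.run(["smartctl", "-H", DEV])

# Kick off the drive's short offline self-test.
subprocess.run(["smartctl", "-t", "short", DEV])

# A few minutes later, read the self-test log back.
subprocess.run(["smartctl", "-l", "selftest", DEV])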

Volker

-- 
Volker Kuhlmann                 is possibly list0570 with the domain in header
http://volker.dnsalias.net/             Please do not CC list postings to me.
