On 4/17/08, Michael Tinsay <[EMAIL PROTECTED]> wrote:
>
> With disks these days, by the time you start getting bad sectors, disk failure
> ain't that far behind.

In normal operation, disk drives have a life span indicated by the mean
time between failure (MTBF) rating the manufacturer gives them, and
sometimes they last even longer than the rated MTBF.

Some disks, especially the SCSI ones, have spare sectors so that bad
sectors can be remapped to them. That is one reason SCSI drives are more
expensive than other drives: they have features you don't see elsewhere.
You only start seeing bad sectors once all of the spare sectors have
been used up.
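The remapping the drive does is visible through SMART: the
Reallocated_Sector_Ct attribute counts how many sectors have already
been moved to spares, so a rising count warns you before the spares run
out. Here is a minimal sketch that parses that attribute out of
`smartctl -A` style output; the sample text is made up for illustration,
and on a real system you would capture the output from the actual
smartctl tool instead.

```python
# Minimal sketch: pull the Reallocated_Sector_Ct row out of text in the
# format printed by `smartctl -A /dev/sda`. SAMPLE_SMARTCTL_OUTPUT is a
# fabricated example, not output from a real drive.

SAMPLE_SMARTCTL_OUTPUT = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12
  9 Power_On_Hours          0x0032   095   095   000    Old_age   Always       -       24013
"""

def reallocated_sectors(smart_output: str) -> int:
    """Return the raw count of sectors already remapped to spares."""
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[9])
    raise ValueError("Reallocated_Sector_Ct attribute not found")

if __name__ == "__main__":
    # A count that keeps rising means the drive is eating into its
    # spare sectors and it is time to back up.
    print(reallocated_sectors(SAMPLE_SMARTCTL_OUTPUT))
```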

In normal operation, by the time you see bad sectors, it means your
drive is wearing out from age because you are approaching its MTBF.

But sometimes you see bad sectors earlier than expected, for reasons
like these:

1. mechanical failure
2. particles getting inside the disk because of a bad air filter
3. external vibration while the disk is actively reading or writing
4. the wrong cable, like a drive that needs an 80-conductor cable
connected with a 40-conductor one
5. an erratic internal power supply, AVR, or UPS that can't deliver a
constant voltage
6. a power surge or spike
7. excessive disk heat
8. and others

By the time you see a bad sector, you still have time to back up your
important data.

Back to Migs's problem: he doesn't need reliability, so RAID 0 is a
good candidate for him.

Take note, when we talk about reliability, even RAID mirroring can't
fully provide it: I once saw both mirrored disks get hit by a power
spike caused by an erratic UPS.

fooler.
_________________________________________________
Philippine Linux Users' Group (PLUG) Mailing List
http://lists.linux.org.ph/mailman/listinfo/plug
Searchable Archives: http://archives.free.net.ph
