Just want to reiterate what a bad idea it is to:

a) make up your own seat-of-the-pants algorithm to decide how many bad
sectors are "too many", based on no significant data.

b) do so when you can't even read the raw number correctly (due to the
varying format of raw values).

My wife's 120G laptop drive has 10 bad sectors, but palimpsest still
reads this as 655424.  (The 0x0a is the low-order byte in Intel byte
order; see https://bugzilla.redhat.com/show_bug.cgi?id=498115#c61 for
details.  This still fails in Fedora 16, gnome-disk-utility-3.0.2.)  The
1024 factor *still* sees the disk as failing: it does not address the
underlying problem of not having a reliable raw value, and of not
knowing the design parameters or even the type of technology.
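For what it's worth, here is a minimal sketch of reading the raw field
in Intel byte order, assuming the common case where the attribute's
6-byte raw field is a plain little-endian counter (as it is for
Reallocated_Sector_Ct on most drives; the whole point above is that
vendors are free to pack it differently):

```python
def raw_to_int(raw_bytes):
    """Decode a 6-byte SMART raw field as a little-endian integer
    (Intel byte order: byte 0 is the low-order byte)."""
    value = 0
    for i, b in enumerate(raw_bytes):
        value |= b << (8 * i)
    return value

# A drive with 10 reallocated sectors stores 0x0a in the low-order byte:
raw = bytes([0x0a, 0x00, 0x00, 0x00, 0x00, 0x00])
print(raw_to_int(raw))  # 10
```

Reading those same bytes with the wrong assumptions about the raw
format is how 10 turns into 655424.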

Please, please, just use the vendor numbers.  The only thing you could
add would be to keep a history, and warn of *changes* in the value (but
don't say "OH MY GOD YOUR DISK IS ABOUT TO DIE!" unless the scaled value
passes the vendor threshold).
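A rough sketch of that policy, with hypothetical names (Attribute,
assess): report failure only when the vendor-scaled value crosses the
vendor threshold, and separately warn when the raw value changes from
the recorded history:

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    normalized: int   # vendor-scaled current value
    threshold: int    # vendor failure threshold
    raw: int          # decoded raw value

def assess(attr, previous_raw=None):
    """Trust the vendor numbers: 'failing' only when the scaled value
    passes the vendor threshold; otherwise just warn on raw changes."""
    if attr.threshold and attr.normalized <= attr.threshold:
        return "failing"
    if previous_raw is not None and attr.raw != previous_raw:
        return "warning: raw value changed"
    return "ok"

attr = Attribute("Reallocated_Sector_Ct",
                 normalized=100, threshold=36, raw=10)
print(assess(attr, previous_raw=5))  # warning: raw value changed
```

This never guesses what a given raw count "means"; it only surfaces
trends and defers the pass/fail judgment to the vendor threshold.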

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/438136

Title:
  palimpsest bad sectors false positive

To manage notifications about this bug go to:
https://bugs.launchpad.net/libatasmart/+bug/438136/+subscriptions

-- 
ubuntu-bugs mailing list
[email protected]
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs