Marco Peereboom wrote:
> >> >> On Monday 04 May 2009 17:56:43 L. V. Lammert wrote:
> >> >> > What is the best way to do a surface analysis on a disk?
> 
> >> 2009/5/5 Tony Abernethy <t...@servacorp.com>:
> >> > There is, in the e2fsprogs package, something called badblocks.
> 
> > On Thu, May 07, 2009 at 01:10:56AM +0200, ropers wrote:
> >> I also would recommend badblocks(8), but I would recommend
> >>   badblocks -svn
> >> instead of badblocks -sw.
> >>
> >> badblocks -svn also (s)hows its progress as it goes along, but does a
> >> (v)erbose (n)on-destructive read/write test (as opposed to either the
> >> default read-only test or the destructive read/write test). You can
> >> check an entire device with badblocks, or a partition, or a file. The
> >> great thing about using badblocks to check a partition is that it's
> >> filesystem-agnostic. It will dutifully check every bit of its target
> >> partition regardless of what's actually on it. And if you give
> >> badblocks -svn an entire storage device to test, it will not even care
> >> about the actual partition scheme used. Because this read/write test
> >> can trigger the disk's own built-in bad sector relocation, this means
> >> you can even have a disk that you can't read the partition table from,
> >> and running badblocks -svn over it may at least temporarily fix
> >> things. And I've used badblocks -svn e.g. to check old Macintosh
> >> floppies. Who cares that OpenBSD doesn't know much about the
> >> filesystem on those? badblocks does the job anyway.
> 
> >> Oh, and of course it would probably be prudent to do a backup before
> >> read/write tests, even though badblocks is well-established and (with
> >> -n) supposed to be non-destructive. Supposed to... ;-) I've never been
> >> disappointed but YMMV.
> 
> 2009/5/7 Marco Peereboom <sl...@peereboom.us>:
> > You people crack me up.  I have been trying to ignore this post for a
> > while but can't anymore.  Garbage like badblock are from the era that
> > you still could low level format a drive.  Remember those fun days?
> > When you were all excited about your 10MB hard disk?
> >
> > Use dd to read it; if it is somewhat broken the drive will reallocate
> > it.  If it is badly broken the IO will fail and it is time to toss the
> > disk.  Those are about all the flavors you have available.  Running
> > vendor diags is basically a fancier dd.
> 
> Why do you consider badblocks garbage?
OK, I'll take a nibble. (flames invited where I've got anything wrong)

You use OpenBSD where sloppy doesn't quite do what you need done.
This is a world where a false sense of security is not your friend.
"This disk is good because it passed badblocks" is NOT valid.
I've got too many "rescued" disks that will probably keep on working.
Probably: better than 50% odds (but it sounds good).
Depending on lots of probables is really instant death.

IF badblocks passed a disk as clean, and there were good reason to 
believe that the disk was actually clean, and that it would STAY
clean, then it (badblocks) would be a good program.
Unfortunately, there is not much of anything that badblocks, or the
vendors' programs CAN do that is much of an assurance of reliability.
You might get some idea from the reliability of "reconditioned" 
drives versus the reliability of actually new drives. And the vendors
have better tools (if any such better tools actually exist).

WITHOUT going into HW or OS handling of bad sectors, simply rename
files or directories something like BAD_STUFF and NEVER delete 'em.
There are exotic ways of increasing risk by keeping most of the
not-failed-yet neighbors around as supposedly good sectors.
You can do much of that by partitioning to avoid places with a lot
of bad stuff. With the prices and capacities of modern disks, all
of this must assume that you have lots of time and need something to
occupy that time. Watching grass grow is probably more exciting.

For a new disk (one that does not need to go into production soon)
you can run a very long-winded exercise. Zeroing and reading is
probably as effective as, and certainly faster than, cycling through
the 0xAA 0x55 0xFF 0x00 test patterns.
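The zero-and-read exercise (and Marco's "use dd to read it") can be sketched as below. To stay safe and runnable, the sketch targets a throwaway file image; on a real disk with nothing you want to keep, you would substitute the raw device (e.g. /dev/rsd0c on OpenBSD) for the image path — and that, of course, destroys everything on it.

```shell
set -e
img=/tmp/scratch.img    # stand-in for a raw disk device

# Write zeros over the whole target; a failing write would abort here.
dd if=/dev/zero of="$img" bs=64k count=16 2>/dev/null

# Read every block back; dd exits non-zero on an unreadable sector,
# and merely reading may already trigger the drive's own reallocation.
dd if="$img" of=/dev/null bs=64k 2>/dev/null && echo "read pass ok"

# Verify the read-back really is all zeros.
dd if=/dev/zero bs=64k count=16 2>/dev/null | cmp - "$img" && echo "all zeros"

rm -f "$img"
```

A clean pass tells you only that every sector was writable and readable today; per the argument above, it says little about whether the disk will STAY clean.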

There SHOULD be good data forthcoming from the SMART stuff.
BUT, so far I haven't heard noises from that corner, just wisecracks
about vendor diags. Presumably, SHOULD does not imply IS.
IF you have anything resembling money, and do not have lots of 
free time on your hands, the best advice seems to be to replace 
quickly anything that shows any sign of trouble.
(This might be an actual good use of benchmarks ;-)
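For what the SMART corner is worth, the counters to watch are the reallocated and pending sector counts. A minimal sketch: pull them out of smartctl-style attribute output and complain if either is non-zero. The input here is a canned, hypothetical sample (real `smartctl -A` output varies by drive); on a live system you would pipe in `smartctl -A /dev/sd0` from the sysutils/smartmontools port instead.

```shell
# Hypothetical sample lines in the usual smartctl attribute-table shape:
# ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3'

echo "$sample" | awk '
/Reallocated_Sector_Ct|Current_Pending_Sector/ {
    # the raw value is the last field of the attribute line
    if ($NF + 0 > 0)
        printf "warning: %s = %s\n", $2, $NF
}'
```

Any non-zero (and especially growing) value in either counter fits the advice above: replace the disk quickly rather than keep nursing it.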

Reading will reallocate sectors.
The sector after the reallocation will be readable.
The contents of this now-readable sector will be the original contents
only if the drive eventually managed to successfully read those original
contents; otherwise, it seems the drive can fake whatever it wants in
some cases, apparently with NO indication of problems, at least some of
the time. Very hard to be certain at this level (using inferior OSes).

Short answer: AFTER a long and complicated process, there
is no reason to believe that the contents of the now-readable disk
are the original contents that should be on that disk.
My own experience is that by the time there is reason to suspect,
it is odds on that the now readable contents are NOT the originals.

Shorter answer: best to trust people who know more about this than I do.
(and I've got time and have messed with a bunch of broken disks)

> 
> I remember now that we talked about this before over a year ago, when
> I first asked about using badblocks on OpenBSD. Back then I eventually
> surmised that using dd to do the same thing as badblocks -svn would be
> possible but a lot more cumbersome, cf.:
> http://kerneltrap.org/mailarchive/openbsd-misc/2008/4/19/1499524
> 
> Am I/was I mistaken, and if so, where?
> 
> Thanks and regards,
> --ropers
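For what it's worth, the non-destructive read/write pass ropers describes can indeed be approximated with dd: read a chunk, write the same bytes back in place, read again, and compare. The sketch below runs against a file-backed image with made-up paths and sizes; on a real device you would loop over the raw disk in fixed-size chunks, and you would want that backup first. Note that badblocks -svn additionally writes test patterns before restoring the original data; this sketch only rewrites the data in place, which still exercises both the read and write paths.

```shell
set -e
img=/tmp/disk.img
dd if=/dev/zero of="$img" bs=64k count=4 2>/dev/null   # stand-in "device"

bs=65536
blocks=4
i=0
while [ "$i" -lt "$blocks" ]; do
    # read one chunk (a failed read aborts here, like badblocks flagging it)
    dd if="$img" bs="$bs" skip="$i" count=1 of=/tmp/chunk 2>/dev/null
    # write the same bytes back in place -- the "non-destructive" write
    dd if=/tmp/chunk of="$img" bs="$bs" seek="$i" count=1 conv=notrunc 2>/dev/null
    # read again and verify the data survived the round trip
    dd if="$img" bs="$bs" skip="$i" count=1 2>/dev/null | cmp -s - /tmp/chunk
    i=$((i + 1))
done
echo "all $blocks chunks passed"
rm -f "$img" /tmp/chunk
```

This is clearly more cumbersome than a single badblocks -svn invocation, which is ropers' point; whether the result means anything is Tony's.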
