My personal feeling is that it is crazy to do this.  You are portraying
it in static terms, as though there are 200,000,000 bad bytes out of
2,400,000,000, but I say if the disk was manufactured 5 years ago, that
means over 100,000 bytes have gone bad every day.  Would you actually
use this?  My worry would be that today's 200 bad sectors would lie
somewhere in my data.  My approach is that any media with any physical
errors is destroyed and thrown away immediately.  Once you get physical
errors on a device, you are walking on thin ice with heavy boots. -Tom

On Mon, 24 Sep 2001, Piet van Unen wrote:

> Date: Mon, 24 Sep 2001 11:59:49 +0200
> From: Piet van Unen <[EMAIL PROTECTED]>
> To: tom <[EMAIL PROTECTED]>
> Subject: [tomsrtbt] e2fsck and bad sectors
>
> Dear All,
>
> I would like to put Linux on a Samsung hard disk with a lot of bad sectors.
> Testing with tomsrtbt installed on a 100MB partition of the disk showed
> that more than 2.2 GB of the 2.4 GB can be used without problems.
> I could not find a low-level format program to use, so I used Ontrack
> under DOS. That utility marks the bad clusters in the FAT table, and I
> can then use the DOS partition as a clean partition. But the FAT table
> is on the partition!!!
> Under tomsrtbt I used e2fsck -cf. It tells me about a lot of bad sectors
> and says: Unrecoverable error.
> Is there a utility for Linux that works like the Ontrack program, to
> isolate the bad sectors? Then I could put a Linux distribution on the
> disk.
>
> --Piet
>
>

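For reference, the isolation Piet asks about can be sketched with the standard e2fsprogs tools (badblocks marks nothing itself; it only produces a list that e2fsck or mke2fs records in the filesystem's bad-block inode). The device name /dev/hdb1 below is a placeholder for the actual partition:

```shell
# Read-only scan of the partition for unreadable sectors;
# write the list of bad block numbers to a file.
badblocks -o badlist /dev/hdb1

# Feed that list to e2fsck so the blocks are added to the
# ext2 bad-block inode and never allocated to files.
e2fsck -l badlist /dev/hdb1

# Alternatively, have mke2fs run the scan itself when
# creating the filesystem.
mke2fs -c /dev/hdb1
```

One caveat: when passing a badblocks-generated list to e2fsck with -l, the block size badblocks used (its -b option) must match the filesystem's block size, or the wrong blocks get marked. Letting e2fsck -c or mke2fs -c invoke badblocks internally avoids that mismatch.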