Rich Freeman wrote:
> On Wed, Jun 25, 2014 at 9:15 AM, Dale <rdalek1...@gmail.com> wrote:
>> thegeezer wrote:
>>> On 06/25/2014 08:49 AM, Dale wrote:
>>>> thegeezer wrote:
>>> this says there are 104 pending sectors, i.e. bad blocks on the drive
>>> that have not been reallocated yet
>> Wonder why it hasn't?  Isn't it supposed to do that sort of thing itself?
>>
> It can't relocate the sectors until it successfully reads them, or
> until something else writes over them.
>
> However, the last few drives I've had this happen to never really
> relocated things.  If I scrubbed the drives mdadm would overwrite the
> unreadable sectors, which should trigger a relocation, but then a day
> or two later the errors would show up again.  So, the drive firmware
> must be avoiding relocation or something.  Either that or there is a
> large region of the drive that is failing (which would make sense) and
> I was just playing whack-a-mole with the bad sectors.  In any case, if
> the drive is under warranty I've yet to have a complaint returning it
> with a copy of the smartctl output showing the failed test/etc.  With
> advance replacement I can keep the old drive until the new one
> arrives.

I'm going to bet this drive is out of warranty.  I'm pretty sure it's
been over two years since I bought it.

Once I replace that drive, I'll dd the thing and see what it does then. 
It'll either break it or give me a fresh start to play with and see how
long it lasts.
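For anyone following along, the overwrite pass is roughly this (a destructive sketch, not something to paste blindly; TARGET is a placeholder device name, and the guard variable keeps the sketch inert until you deliberately enable it):

```shell
# WARNING: destructive -- a full overwrite wipes everything on the drive.
# TARGET is a placeholder; triple-check it before setting WIPE_OK=yes.
TARGET=/dev/sdX
if [ "${WIPE_OK:-no}" = yes ]; then
    # writing zeros over every sector forces the firmware to either fix
    # each pending sector in place or remap it from the spare pool
    dd if=/dev/zero of="$TARGET" bs=1M status=progress
    # then compare the counts against the earlier smartctl output
    smartctl -A "$TARGET" | grep -i -e pending -e reallocated
fi
```

If the pending count drops to zero and stays there after another long test, the drive may be usable as a scratch disk; if the numbers climb again, it is on its way out.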


>> I usually just run the test manually but I sort of had family stuff
>> going on for the past year, almost a year anyway.  Sort of behind on
>> things although I have been doing my normal updates.
> rc-update add smartd default
>
> I don't know that I even had to configure it - it is set to email
> root@localhost when there is a problem.  I also run mdadm to monitor
> raid.
>
> I don't think anybody makes a monitor for btrfs, though my boot is
> mirrored across all my btrfs drives using mdadm so a drive failure
> should be detected in any case.  I need to check up on that, though -
> I'd like an email if something goes wrong with btrfs storage.
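For reference, smartd's mail behavior is driven by /etc/smartd.conf; a minimal sketch along the lines Rich describes (the self-test schedule and the address are assumptions, not his exact config) might look like:

```
# /etc/smartd.conf -- watch all drives, enable offline data collection
# and attribute autosave, run a short test nightly at 02:00 and a long
# test Saturdays at 03:00, and mail root@localhost on any trouble
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m root@localhost
```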

I'm using lvm here.  I also don't have a mail server set up, which is
why I run them manually.   I usually do it once a month or so, but some
family issues popped up.


>> I ordered a drive.  It should be here tomorrow.  In the meantime, I
>> shut down and re-seated all the cables, power too. I got the test running
>> again but the results are a few hours off yet.  It did pass the short test
>> tho.  I'm not sure that it means much.
> Short test generally doesn't do much - you need the long ones.  I'd be
> shocked if it passed with offline uncorrectable sectors.
>
> And do check on your warranty.  You can migrate all your data to the
> new drive, and then replace the old one as a backup disk.  Either use
> it with raid, or as an offline backup.  If you want to do raid you can
> set up mdadm with a degraded raid1 so that you can copy your data over
> from your old drive, and then when it is replaced you just partition
> the new one, add it to the raid, and watch it rebuild automatically.
>
> Rich
>
>
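The degraded-raid1 migration Rich describes could be sketched like this (device names are placeholders, and the commands are only echoed from a function here so nothing is touched until you drop the echo):

```shell
# Sketch of the migration: new drive first, replacement drive later.
# /dev/sdb1 and /dev/sdc1 are placeholders for your actual partitions.
migrate_plan() {
    # create a one-disk "mirror" with the second slot held open
    echo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
    echo mkfs.ext4 /dev/md0
    # ... mount /dev/md0 and copy the data over from the old drive ...
    # when the warranty replacement arrives, partition it to match and
    # add it; mdadm then rebuilds the mirror automatically
    echo mdadm --add /dev/md0 /dev/sdc1
    # watch the rebuild with: cat /proc/mdstat
}
migrate_plan
```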

I figured the short test wouldn't say much.  I am backing up some of the
stuff tho.  I do have a 750GB drive that was empty.  It won't hold it
all but it is a start.  The test should have been done by now but I guess
the copy process is slowing it down.  I'm getting this so far:

# 1  Extended offline    Self-test routine in progress 70%    16387         -
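(If anyone wants to watch that counter instead of thumb-twiddling: the percent-remaining figure can be scraped out of `smartctl -l selftest` output with a small helper like this. The `progress` function name is made up for the example; pipe the real output into it.)

```shell
# Tiny helper (hypothetical) to pull the percent-remaining figure out of
# `smartctl -l selftest` output, e.g. for a loop that logs progress.
progress() {
    awk '/Self-test routine in progress/ {
        # print whichever field ends in a percent sign
        for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i
    }'
}
# usage: smartctl -l selftest /dev/sdX | progress
```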

< dale twiddles his thumbs >

Thanks much.

Dale

:-)  :-) 
