> > Ryan Coleman wrote:
> >>> Ryan Coleman wrote:
> >
> > Oh, I completely forgot to ask...
> >
> > Does the RAID still operate even though one disk is bad?
> >
> > After all, that is the purpose of RAID-5. stripe, with parity. One
> > fails, the other two (or N) keep right on going...
> >
> > Or, is it a RAID-5 card that you put into operation as a RAID-0 span?
> >
> > If the latter is the case, good luck ;)
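The quoted point about stripe-with-parity can be sketched in a few lines. This is an illustrative toy (not tied to any particular controller or the poster's hardware): RAID-5 stores one parity block per stripe, computed as the XOR of the data blocks, so any single lost block can be rebuilt by XORing the survivors with the parity.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks in one stripe; parity goes to the fourth disk.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing the second disk: rebuild its block from the
# surviving data blocks plus the parity block.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Lose two blocks in the same stripe, though, and the XOR no longer has enough information, which is why a rebuild that keeps failing mid-way (as below) is so dangerous.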
> No, I'm not that stupid. :) At my old job we had the big LaCie drives,
> and when one of the four 250G disks in one failed, they were f*ed. So
> this time I went to replace the drive right away so I wouldn't be in
> that situation.
> When I went to rebuild in the BIOS, it failed at 2% no matter which
> 250G drive I put in to fill the slot.
        I had that happen on a 4-disk RAID-5 (36G each; I forget the
controller). No matter which disk I put in to replace the failed one, it
wouldn't "take". Three drives, exact same model, different production
dates... none took.

        I futzed and futzed and finally decided to declare the cage
bad and think of backout procedures. About 2 hours after I had set
another machine up to take its place, it started giving spurious
errors and fell over.

        I pulled the machine out of the datacenter, cleared out the
RAID config, and went to rebuild with just the 3 remaining drives. It
wouldn't build a fresh RAID-5 from just those 3 disks. After a round of
"which one of these things is not like the other", I found that one of
the disks apparently still worked on its own, but caused heck if I put
another disk in the slot next to it.

        A year later, I finally decided to buy a few more disks off
eBay to see if my final theory was right. Hopefully I win the auction
in 5 days... If the cage really is bad, I previously sourced a new
case/cage, but I decided that even though it's a 4G dual-Xeon system,
I could probably get a faster new system for less.

freebsd-questions@freebsd.org mailing list