> I am running the backups every night, but afterwards, when I unmount the
> RAID array and run fsck.ext3 on it, I get errors every time (bad blocks
> here, some inode errors there, and so on). The next night I can use the
> array just fine, but the day after that I get filesystem errors again.
>

Bad drives?  I don't trust drives until they've run for at least two
weeks after I purchase them - I've had them die in that time.
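
If you want to rule the disks in or out, a long SMART self-test plus a
read-only surface scan on each member disk should shake out a marginal
drive.  A rough sketch, assuming smartmontools is installed and the
first member is /dev/hda (substitute your actual devices):

  smartctl -t long /dev/hda      # kick off the drive's long self-test
  smartctl -l selftest /dev/hda  # read the result once it finishes
  badblocks -sv /dev/hda         # read-only surface scan with progress

Repeat for each disk in the array; if one of them is logging reallocated
or pending sectors, there's your culprit.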
 
> Now I do not really know where to search for the problem. Is Linux's
> software RAID5 implementation buggy? Is it one or more of the IDE disks?
> Or is this normal for large (~400 GB) ext3 partitions?
>

No.  I can't speak to ext3 partitions, but I've run a few XFS partitions
and haven't seen more than 1 drive fail on a 4-disk RAID, and 1 out of
a 99-drive array (fibre channel).  You might want to change the filesystem
to ReiserFS or something else a bit more robust than ext3.
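
If you do switch, it's a quick job once the data is safely elsewhere.
A rough sketch, assuming the array is /dev/md0 and normally mounts at
/backup (mkfs destroys whatever is on the array, so verify your backup
first):

  umount /backup
  cat /proc/mdstat       # confirm the array is clean, not resyncing
  mkreiserfs /dev/md0    # or mkfs.xfs /dev/md0 if you'd rather try XFS
  mount /dev/md0 /backup

The /proc/mdstat check is worth the ten seconds either way - it shows
at a glance whether the array has been resyncing behind your back.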

Are these drives WD, perchance?  If so, I'd suggest looking for some
Samsung or Seagate drives.

Bob
 
