Richard Jones writes:
> Benno Senoner wrote:
> >
> > Hi,
> > does any of you know how long it takes to e2fsck (after an unclean
> > shutdown), say, a soft-RAID5 array with a total size of about 40-50 GB
> > (example: 6 disks of 9 GB each (UW SCSI))?
> > Assume the machine is a PII-300 to PII-400.
> >
> > Assume also that the RAID array is almost full of data (so that e2fsck
> > takes longer).
>
> This is affected by so many different factors
> that it's really impossible for me to give an
> estimate for your machine.
Indeed.
> However, as a guide,
> our machine was:
>
> P-II 233 MHz
> 256 MB RAM
> 6 * UltraDMA drives with measured throughput
> of 16 MBytes/sec
> RAID space: 42 GB after formatting
>
> with the array about 20% full we saw fsck times of
> about 20 minutes, and at 33% full about 30 minutes.
The primary differences between your setup and mine (which took 13
minutes to fsck when 70% full) are (1) mine used SCSI, which has
tagged command queuing, and (2) mine had 10000 RPM disks, which also
improve seek times. It looks as though Benno's setup resembles mine
a bit more closely.
> In both cases,
> mounting a clean filesystem took about 2 mins.
Mounting mine when clean takes 4 seconds. I wonder if you used a 1k
block size for your filesystem. That greatly increases the time to
check the bitmaps upon mounting (though you can turn this off with
mount -o check=none). It also greatly decreases the performance of
the filesystem.
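For anyone wanting to check this on their own array, something like the
following should do it (device and mount-point names here are only
placeholders; adjust to suit):

```shell
# Report the filesystem's block size; a value of 1024 means 1k blocks.
# /dev/md0 is a placeholder for your RAID device.
tune2fs -l /dev/md0 | grep -i 'block size'

# Mount without the bitmap consistency check to avoid the slow mount.
mount -o check=none /dev/md0 /mnt/array

# If you are rebuilding the filesystem anyway, 4k blocks avoid the
# problem entirely and perform much better on a large array.
mke2fs -b 4096 /dev/md0
```

Note that mke2fs destroys existing data, so the last command is only
for when you are creating the filesystem from scratch.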
> Which just goes to show that ext2 is not a suitable
> filesystem for large disk arrays.
Not at all. You just have to be more careful about choosing hardware
and tuning software for larger arrays. As a matter of interest, what
figures does bonnie give on your array? I would guess that the fsck
time is mostly dominated by random seeks. I get 282/sec.
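If you haven't run it before, a typical classic-bonnie invocation looks
something like this (the flags are from the original Bonnie; bonnie++
differs, and the paths are placeholders):

```shell
# -d: directory on the filesystem under test
# -s: file size in MB; use more than your RAM (256 MB here) so the
#     figures reflect the disks rather than the buffer cache
bonnie -d /mnt/array -s 512
```

The random-seeks column at the end of its output is the figure I quoted
above.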
> Roll on journalling
> in 2.3, I say!
That will be nice, but you can use ext2 as-is on large systems if care
is taken at the design and tuning stage.
--Malcolm
--
Malcolm Beattie <[EMAIL PROTECTED]>
Unix Systems Programmer
Oxford University Computing Services