Wolfgang writes:

> > A question: A program uses io.fstrg/iob.fmul to load files in
> > smaller chunks for scanning. The files could be of any size on
> > any media (first of all hard disks). What, theoretically, is the
> > smallest efficient buffer size to use? (I'm thinking *speed* here.)
> > Eg 512 bytes, as a whole sector can be loaded in at once? Or
> > allocation unit size? Or any arbitrary size that best suits my
> > program?
>
> If you're thinking "speed", then the larger the buffer, the
> better - reading the data in small chunks will always cost more
> time. If at all possible use a buffer for the entire file & (scatter)
> read it in.

Yes, that is understood. It is the situations where the whole file cannot
be read at once that I'm thinking about. (Besides, on a multitasking
machine it is probably not very polite to grab huge buffers ;)
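As an illustration of the chunked-scanning pattern under discussion, here is a minimal sketch in portable C. It uses plain stdio (`fopen`/`fread`) rather than io.fstrg/iob.fmul, and the buffer size is just a placeholder constant to experiment with; none of the names below come from the library in question.

```c
#include <stdio.h>
#include <stdlib.h>

/* BUFSIZE is the tunable chunk size under discussion; 4096 is only a
 * placeholder value, not a recommendation. */
#define BUFSIZE 4096

/* Scan a file in fixed-size chunks, counting occurrences of `needle`.
 * Returns the count, or -1 if the file cannot be opened. */
long count_byte(const char *path, unsigned char needle)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    unsigned char buf[BUFSIZE];
    long count = 0;
    size_t got;

    /* Each fread pulls in at most one buffer's worth; the loop ends
     * when the file is exhausted. */
    while ((got = fread(buf, 1, sizeof buf, f)) > 0) {
        for (size_t i = 0; i < got; i++)
            if (buf[i] == needle)
                count++;
    }

    fclose(f);
    return count;
}
```

Timing this loop with different BUFSIZE values against a large file on the target media is probably the quickest way to settle the "smallest efficient size" question empirically.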

> "Sector" (512 bytes) sized buffers don't make that much sense
> IMHO, since the file data doesn't occupy the whole of the first
> sector (there's the file header), so reading the first 512 bytes from
> a file will read from 2 separate sectors.

This brings us to the heart of the question: what would be a sensible size?
First one block of 512 - 64 bytes, and then subsequent blocks of 512 bytes
(or multiples thereof)?
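That scheme can be sketched as a small piece of arithmetic. The 512-byte sector and the 64-byte header figure are taken from the discussion above, not from any documented filesystem constant, so treat both as assumptions:

```c
#include <stddef.h>

#define SECTOR 512
#define HEADER 64   /* bytes of the first sector taken by the file header
                       (assumed figure from the thread, not verified) */

/* Size of the n-th read (n starting at 0) when reading `mult` sectors
 * per chunk: a short first read covers the rest of the header sector,
 * so every later read starts on a sector boundary. */
size_t read_size(int n, int mult)
{
    if (n == 0)
        return SECTOR - HEADER;   /* remainder of the first sector */
    return (size_t)mult * SECTOR; /* whole sectors from then on    */
}
```

With `mult = 8`, for example, the reads would be 448 bytes followed by 4096-byte chunks, each starting exactly on a sector boundary.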

Per
