Wolfgang Lenerz writes:

> I'd still simply grab just as much memory I can use.
> If speed is of the essence, as you said in your requirements, then
> the user will probably also know to let the machine alone (tell him!)
> and not have too many other progs trying to get memory at the
> same time. If not, then speed is not that essential, after all.
>
> So I'd still go for as much memory as I can get and read in the
> entire file.
>
> If that can't be done (not enough space):
>
> Ultimately, it will then be the read operations that slow everything
> down.
>
> Now, considering that iob.fmul & fstrg use D2 to indicate how many
> bytes they should get, and since D2 only can be word sized, you
> can, at most, read $fffff bytes in one go.
>
> If nothing else, I'd use that as my buffer size....

... in which case you'd be in serious trouble... ;)

Actually, it isn't quite that straightforward. I ran a program that scans
files (and does unspeakable things to them, but that is of no concern
here). It works like this: if the whole file fits in the buffer then the
whole file is loaded (using iof.load); otherwise chunks the size of the
available buffer are loaded piece by piece (using iob.fmul) until the
whole file has been processed.
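For illustration, that loop can be sketched in C. This is only a sketch, not the actual program: `process()` is a hypothetical stand-in for the per-chunk work, and plain `fread()` stands in for the QDOS traps iof.load and iob.fmul (when the buffer holds the whole file, the loop degenerates to a single read, which is the iof.load case).

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the per-chunk scanning work. */
static void process(const unsigned char *buf, size_t len) {
    (void)buf;
    (void)len;  /* unspeakable things would happen here */
}

/* Scan one file through a caller-supplied buffer.
 * Returns the number of bytes processed, or -1 on open failure. */
static long scan_file(const char *name, unsigned char *buf, size_t bufsize) {
    FILE *f = fopen(name, "rb");
    long total = 0;
    size_t got;

    if (!f)
        return -1;

    /* If the whole file fits in buf, this is one read (the iof.load
     * case); otherwise it loops buffer-sized chunks (the iob.fmul
     * case) until the file is exhausted. */
    while ((got = fread(buf, 1, bufsize, f)) > 0) {
        process(buf, got);
        total += (long)got;
    }

    fclose(f);
    return total;
}
```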

I ran this program for every file on the hard disk, using increasing
buffer sizes. The table below shows how I fared:


        2^n  size   no.  s     remarks
        --- ----- ----- --- ---------------
         x:   xxx   xxx  xx Primer run
         7:   128   282  81
         8:   256   543  79
         9:   512   722  80
        10:  1024   769  79
        11:  2048  1063  80
        12:  4096   977  82
        13:  8192   620  85
        14: 16384   290  88
        15: 32768   220  90 Actually 32766
        16: 65536   233  ??
        --- ----- ----- --- ---------------
                   5719     Total files
        --- ----- ----- --- ---------------


2^n  -  I only tried buffer sizes from this series; the numbers 7..16
        are the exponents n.

size -  these are the buffer sizes tested.

no.  -  these, incidentally, are the number of files on the hard disk
        that would fit entirely into a buffer of that size but not into
        the next smaller one, e.g. there are 1063 files larger than
        1024 bytes and <= 2048 bytes. 233 files are larger than 65536
        bytes.

s    -  is the number of seconds to scan all the files on the disk with
        the given buffer size.

remark  iob.fmul does not accept sizes greater than $7FFF, so I reduced
        the size to $7FFE to complete this experiment.
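In C terms the clamp comes down to the snippet below. That D2.W is signed (hence the $7FFF ceiling) is stated above; choosing $7FFE rather than $7FFF, i.e. keeping the count even, I take to be for word alignment, but that reading is my assumption.

```c
#include <stddef.h>

/* Largest chunk iob.fmul will take: D2.W is signed, so anything above
 * $7FFF is out; $7FFE also keeps the count even (word alignment --
 * an assumption on my part). */
#define FMUL_MAX 0x7FFE

/* Clamp a desired chunk size to what iob.fmul can actually be asked for. */
size_t clamp_chunk(size_t want) {
    return want > (size_t)FMUL_MAX ? (size_t)FMUL_MAX : want;
}
```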

As far as I know, nothing my program does should be affected by the size
of the buffer, apart from filling it in the first place. So my findings
would seem to indicate that a buffer size of between 256 bytes (!) and 1k
is optimal for this kind of thing. That is strange enough, considering
that iob.fmul is called more frequently the smaller the buffer. What
surprises me is why we're not seeing the benefits of iof.load in this (or
at least I don't). Anyone got a theory?

Per




