Sergei Stenkov asked me:
> Max Nilson wrote:
>
> > this works nicely for high speed data importing. The biggest file I have
> > tested with was 10MB worth of stock item data, and that only took about
> > three or four seconds to walk.
>
> I too believe that memory access is the fastest way, but the problem is
> that I have to run files of up to 1GB, with a norm of about 100MB.
> Surely that would drain the memory in no time?
Not at all! That's the joy and wonder of memory-mapped files. What you are
really doing is reserving a chunk of virtual memory and telling the memory
manager that it can page the file's contents into that memory on demand,
straight from the file you created the mapping with. If RAM gets low, all
the normal memory management happens and parts of your file can be paged
back out again, and you never notice.
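Here's a minimal sketch of what I mean, using the plain Win32 calls from the
Windows unit (the function name and the line-counting walk are just
placeholders for whatever your import parser actually does):

uses Windows, SysUtils;

// Map a whole file read-only and walk it as one flat buffer.
function CountLinesMapped(const FileName: string): Integer;
var
  hFile, hMap: THandle;
  Data: PChar;
  Size, i: DWORD;
begin
  Result := 0;
  hFile := CreateFile(PChar(FileName), GENERIC_READ, FILE_SHARE_READ, nil,
    OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  if hFile = INVALID_HANDLE_VALUE then
    RaiseLastWin32Error;
  try
    // Fine for files under 4GB; you said 1GB tops, so one DWORD will do.
    Size := GetFileSize(hFile, nil);
    // Reserve the virtual address range, backed by the file on disk.
    hMap := CreateFileMapping(hFile, nil, PAGE_READONLY, 0, 0, nil);
    if hMap = 0 then
      RaiseLastWin32Error;
    try
      // Map a view of the whole file; pages fault in from disk as you touch them.
      Data := MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);
      if Data = nil then
        RaiseLastWin32Error;
      try
        if Size > 0 then
          for i := 0 to Size - 1 do
            if Data[i] = #10 then
              Inc(Result);
      finally
        UnmapViewOfFile(Data);
      end;
    finally
      CloseHandle(hMap);
    end;
  finally
    CloseHandle(hFile);
  end;
end;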
You can also use the CreateFile flags to hint at your access pattern on the
file, and the memory manager can do read-ahead and more optimal reuse of RAM
pages based on that hint.
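For example (again only a sketch), the open call in the snippet above would
become:

  hFile := CreateFile(PChar(FileName), GENERIC_READ, FILE_SHARE_READ, nil,
    OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, 0);

FILE_FLAG_SEQUENTIAL_SCAN tells the cache manager you will walk the file
front to back, so it can read ahead of you and recycle pages behind you.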
And this is faster than a block-read loop because you have very tight
locality in your code and all the file reading is handled by the OS, so in
theory your entire loop can sit in the CPU cache and the only slow bit is
the paging-in of chunks of your file, which should be happening ahead of
your reads if the file system is doing its job properly. That's a big ask
on Win9x, but the NT-level code does this very nicely.
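For comparison, the block-read version I mean is the usual kind of loop
below (again a sketch with a made-up name), where every chunk gets copied
into your own buffer before you can even start scanning it, and a real
parser also has to cope with records that straddle chunk boundaries:

function CountLinesBlockRead(const FileName: string): Integer;
var
  F: file;
  Buf: array[0..65535] of Char;
  BytesRead, i: Integer;
begin
  Result := 0;
  AssignFile(F, FileName);
  Reset(F, 1);  // untyped file with a record size of 1 byte
  try
    repeat
      // Each pass copies a chunk into our buffer before we can scan it.
      BlockRead(F, Buf, SizeOf(Buf), BytesRead);
      for i := 0 to BytesRead - 1 do
        if Buf[i] = #10 then
          Inc(Result);
    until BytesRead = 0;
  finally
    CloseFile(F);
  end;
end;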
Cheers, Max.