I realize this isn't the subject, but I just had to ask about this
after reading your reply. I was always under the impression that Windows
did a file validity check whenever you copied or moved a file from one disk
to another. I don't mean to a different directory...as I realize in those
cases the file is usually just being re-mapped, but from one hard disk to
another or to a floppy I thought this was part of the process. Is this not
the case, and if it is, why would you need to do your own byte-by-byte check?
from Robert Meek dba Tangentals Design Copyright 2006
"When I examine myself and my methods of thought, I come to the conclusion
that the gift of Fantasy has meant more to me than my talent for absorbing
positive knowledge!"
Albert Einstein
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Cosmin Prund
Sent: Thursday, March 09, 2006 2:22 AM
To: 'Delphi-Talk Discussion List'
Subject: RE: Thoughts on File Access
My 2 cents (and I do like memory-mapped files):
If you need high-performance RANDOM access to a file, use a memory-mapped
file. That way you'll be making good use of its smart caching system, and
you'll be able to code as if the whole file were in memory at once.
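The random-access case can be sketched like this (Python for illustration; the Win32/Delphi route would go through CreateFileMapping/MapViewOfFile, but the idea of indexing into the mapped file as if it were one big in-memory buffer is the same; the file name and offsets are arbitrary):

```python
import mmap

def map_and_read(path, offsets):
    """Map `path` read-only and fetch the bytes at the given offsets.
    The OS pages data in on demand and caches it for us."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            # Random access: jump straight to arbitrary offsets,
            # no explicit seek/read calls needed.
            return [mm[o] for o in offsets]

# Usage: write a small scratch file, then hop around it at random.
with open("sample.bin", "wb") as f:
    f.write(bytes(range(256)) * 16)   # 4 KiB test file

print(map_and_read("sample.bin", [0, 4095, 1000]))  # → [0, 255, 232]
```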
If you need high-performance SEQUENTIAL access to a file, you might be able
to do better with the normal file-access APIs, as they don't do caching and
you don't need caching. I've got no idea how much overhead TFileStream adds
to this basic setup. Also, I did notice TFileStream is a bit dumb when it
comes to reading small amounts of data in sequence. It does no read-ahead, so
you'll end up implementing your own read-ahead buffer.
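That read-ahead buffer is easy to sketch (Python for illustration; a TFileStream wrapper in Delphi would follow the same shape; the 64 KiB chunk size is an arbitrary choice):

```python
class ReadAheadFile:
    """Minimal read-ahead wrapper: fetch big chunks from the OS,
    serve many small reads out of the in-memory buffer."""

    def __init__(self, path, chunk_size=64 * 1024):
        self._f = open(path, "rb")
        self._chunk_size = chunk_size
        self._buf = b""
        self._pos = 0  # read position inside _buf

    def read(self, n):
        out = bytearray()
        while n > 0:
            if self._pos == len(self._buf):                 # buffer exhausted:
                self._buf = self._f.read(self._chunk_size)  # one big OS read
                self._pos = 0
                if not self._buf:                           # end of file
                    break
            take = min(n, len(self._buf) - self._pos)
            out += self._buf[self._pos:self._pos + take]
            self._pos += take
            n -= take
        return bytes(out)

    def close(self):
        self._f.close()

# Usage: thousands of tiny reads, but only a handful of real OS reads.
with open("seq.bin", "wb") as f:
    f.write(b"abcdefgh" * 1000)

r = ReadAheadFile("seq.bin")
print(r.read(4))  # → b'abcd'  (served from the 64 KiB read-ahead buffer)
print(r.read(4))  # → b'efgh'  (no OS call at all for this one)
r.close()
```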
Further on: if your needs are truly out of the ordinary, you'll need to look
at the advanced flags of CreateFile. You can open a file for reading without
system caching. You'll then be forced to read/write data in multiples of the
system's page size, but the documentation implies this is the fastest
possible way to read data out of a file!
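The discipline that flag imposes can be sketched like this (Python for illustration; note the CreateFile docs state the granularity for FILE_FLAG_NO_BUFFERING is actually the volume's sector size, of which a page is normally a safe multiple; the file is opened without any such flag here so the sketch runs anywhere):

```python
import mmap
import os

PAGE = mmap.PAGESIZE  # typically 4096 bytes

def read_aligned(path, nbytes):
    """Read `nbytes` while only ever issuing page-sized requests at
    page-multiple offsets -- the discipline FILE_FLAG_NO_BUFFERING
    (or O_DIRECT on Linux) would force on you."""
    fd = os.open(path, os.O_RDONLY)
    try:
        data = bytearray()
        while len(data) < nbytes:
            chunk = os.read(fd, PAGE)   # always a full page at a time
            if not chunk:               # end of file
                break
            data += chunk
        return bytes(data[:nbytes])     # trim the final over-read
    finally:
        os.close(fd)

# Usage: the caller asks for 100 bytes, but the OS only ever sees
# whole-page requests.
with open("big.bin", "wb") as f:
    f.write(b"x" * 10000)

print(len(read_aligned("big.bin", 100)))  # → 100
```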
Personal examples:
(1)
I used memory-mapped files in a text-search application because they allow
high-speed, pre-buffered access to a file. I decided to use a memory-mapped
file for sequential access in the hope of getting most of the speed benefits
of doing smart read-ahead, without the trouble of implementing my own thing
and reading files with no system cache.
(2)
I used "no system cache" file access in a routine that tests a floppy
disk. It essentially copies a file to the floppy and then reads the file
back using this method and compares it to the original file, to make sure
nothing strange happened there. I can tell you for sure it reads directly
from the floppy because it makes the floppy spin like mad! If I did normal
reads of a file from a floppy immediately after the file had been written,
Windows would usually serve my file from its cache, not from the disk.
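The compare step of that verify-after-write routine can be sketched like this (Python for illustration; chunked so a huge file never has to fit in memory, and the 64 KiB chunk size is an arbitrary choice):

```python
def files_identical(path_a, path_b, chunk_size=64 * 1024):
    """Compare two files byte for byte, one chunk at a time."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a = fa.read(chunk_size)
            b = fb.read(chunk_size)
            if a != b:          # mismatched data, or one file is shorter
                return False
            if not a:           # both hit end of file together: identical
                return True

# Usage: write a "copy", then verify it against the original.
with open("orig.bin", "wb") as f:
    f.write(b"hello" * 100)
with open("copy.bin", "wb") as f:
    f.write(b"hello" * 100)

print(files_identical("orig.bin", "copy.bin"))  # → True
```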
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:delphi-talk-
> [EMAIL PROTECTED] On Behalf Of Jim Burns
> Sent: Wednesday, March 08, 2006 7:32 PM
> To: Delphi Talk
> Subject: Thoughts on File Access
>
> Hi all,
>
> Just a request for thoughts here. I'm at a decision point and I'd like some
> fresh thoughts on file access methods. Specifically, using standard file
> I/O methods, be it CreateFile/ReadFile/WriteFile in the Win32 API directly
> or in the guise of a TFileStream, as compared to memory-mapped files.
>
> I've used them all, so I'm looking for subtle distinctions here. But once I
> make my decision I need to move forward, so I'd just like to cover the
> bases.
>
>
> I guess the biggest point I'm struggling with is that someone I've read in
> the past suggested they liked memory-mapped files because they could use
> existing memory-related functions rather than file handles. Not sure I see
> this as a significant point.
>
> One point that's been made in the past is that with memory-mapped files you
> don't need to handle the file I/O or manually manage file buffering. But
> then again, managing the mapped view window across the file itself isn't
> much different from managing a file buffer, in my mind.
>
> Richter ("Advanced Windows, 3rd ed.") points out that with memory-mapped
> files one can access huge files, mapping say an 18 EB file into a 32-bit
> address space. But TFileStream is (at D7) at least smart enough to use
> Int64s for such things, so that, despite being a signed value, it leaves
> something like 9,223,372,036,854,775,807 on the positive side. 32-bit
> Windows itself may still be using 32 bits for such things, but even so,
> functions like SetFilePointer provide a low and a high value, effectively
> extending the function's reach. So where's Richter's real advantage for
> memory-mapped files here?
>
> Ignoring such things as sharing across process boundaries, data commitment,
> coherence of multiple views, memory files without physical storage, and
> other advanced concepts -- speaking just in terms of file access and file
> I/O, be it convenience, performance, or whatever -- can anyone make a case
> for memory-mapped files over other methods?
>
>
> TIA,
>
> Jim
>
>
> ------------------------------------------------------------------------
> Jim Burns, <mailto:[EMAIL PROTECTED]>
> Technology Dynamics
> Pearland, Texas USA
> 281 485-0410 / 281 813-6939
>
> __________________________________________________
> Delphi-Talk mailing list -> [email protected]
> http://www.elists.org/mailman/listinfo/delphi-talk