On Mar 2, 2009, at 2:33 PM, Emmanuel Lecharny wrote:
You will create huge buffers. If your files are 2 GB, they will suck up all your memory. Just split the file into small chunks; it works exactly the same way.
Would it? I would think the I/O system would behave like a memory-mapped file... paging data in as it's needed and out as it's done...

A memory-mapped file would take up address space... so on a 32-bit machine, you would have the problem of exhausting address space if you map big 2 GB buffers...

Example:

reading file into buffers
 loop over file - read 64 MB buffers
 send buffer

- each chunk has to be read entirely before it can be sent
- 64 MB of actual memory is in use for the whole phase, and can itself get paged out to disk
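The first approach might look like the following minimal Java sketch; `sendBuffer` is a hypothetical stand-in for whatever the send step actually is (network write, pipe, etc.):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChunkedRead {
    // Placeholder for the "send buffer" step in the example above.
    static void sendBuffer(ByteBuffer buf) {
        // consume buf.remaining() bytes here
    }

    // Read the file in fixed-size chunks. Each chunk occupies real heap
    // memory and must be fully read before it can be sent; that heap
    // memory is itself subject to being paged out by the OS.
    static long readInChunks(Path file, int chunkSize) throws IOException {
        long total = 0;
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(chunkSize);
            int n;
            while ((n = ch.read(buf)) != -1) {
                buf.flip();    // switch from filling to draining
                sendBuffer(buf);
                buf.clear();   // reuse the same chunk-sized buffer
                total += n;
            }
        }
        return total;
    }
}
```

The 64 MB figure from the example would just be `chunkSize = 64 * 1024 * 1024`; only one chunk's worth of heap is ever live at a time.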



reading file using memory-mapped buffers
 loop over file - mapping 64 MB buffers
 send buffer

- address space for 64 MB is used up, but it is purely disk-backed
- if "paging" were to occur, nothing would be written to disk, since the OS is
  using file-backed memory already
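The memory-mapped variant might be sketched as below, again with a hypothetical `sendBuffer`; each `FileChannel.map()` call consumes address space but no heap, and the OS faults file pages in lazily and can drop clean ones without writing to swap:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedSend {
    // Placeholder for the "send buffer" step in the example above.
    static void sendBuffer(MappedByteBuffer buf) {
        // touching buf's contents faults pages in from the file on demand
    }

    // Map the file in fixed-size windows rather than one giant mapping,
    // so only one window's worth of address space is needed at a time
    // (the 32-bit address-space concern from the thread).
    static long mapInWindows(Path file, long windowSize) throws IOException {
        long total = 0;
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            long size = ch.size();
            for (long pos = 0; pos < size; pos += windowSize) {
                long len = Math.min(windowSize, size - pos);
                MappedByteBuffer buf = ch.map(MapMode.READ_ONLY, pos, len);
                sendBuffer(buf);
                total += len;
            }
        }
        return total;
    }
}
```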

..... my argument falls apart if Java doesn't use memory-mapped files for its MappedByteBuffers...
