On Tue, 07 Dec 2010 21:37:51 +0100, Philip Winston <[email protected]> wrote:

One final piece of warning: when you use mmap beyond the extent of your
RAM, you will end up swapping out a lot of data (shared libraries, other
processes) that might be important for the performance of your computer.

It's too bad you can't tell the OS to page back against your own file instead.  That is, if I have 24GB of RAM with 12GB available and I mmap a 100GB file, I'd like it to churn through that 12GB of RAM with my data and leave everyone else alone.  But I guess you don't have that control.

-Philip

You can always munmap() regions you are done with as well, which is effectively paging them back to your own file and keeps the mapping's footprint bounded.
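
For instance, something along these lines keeps residency bounded while scanning a file much larger than RAM (a rough sketch, not from either post; the file name "huge.dat", the 256 MB window size and the checksum loop are all made up):

/* Scan a huge file by mapping one fixed-size window at a time and
 * munmap()'ing it when done, so only the current window stays resident. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define WINDOW (256UL * 1024 * 1024)   /* 256 MB window, page aligned */

int main(void)
{
    int fd = open("huge.dat", O_RDONLY);          /* hypothetical 100 GB file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    uint64_t sum = 0;
    for (off_t off = 0; off < st.st_size; off += (off_t)WINDOW) {
        size_t len = (size_t)((st.st_size - off) < (off_t)WINDOW
                              ? (st.st_size - off) : (off_t)WINDOW);

        /* Map only the current window; the offset is a multiple of the page size. */
        unsigned char *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, off);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        for (size_t i = 0; i < len; i++)          /* touch the data */
            sum += p[i];

        /* Give the pages back before moving on, keeping residency bounded. */
        if (munmap(p, len) < 0) { perror("munmap"); return 1; }
    }

    printf("checksum: %llu\n", (unsigned long long)sum);
    close(fd);
    return 0;
}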

I've been facing similar issues, such as scanning through 500GB HDF5 files with 32GB of available RAM.
That works OK with HDF5, but I've implemented my own memory-management strategy to tell
it which parts to keep in memory and which parts to unload (using random access to the
datasets, of course). It turned out that a "remove the least-recently-used object" strategy is not
necessarily the best one (that is what the OS does with pages); some classification based on the
similarity of the objects to be kept in or discarded from memory seems much more efficient.
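
Very roughly, that kind of policy could look like the following (a hypothetical sketch, not the actual code; the group tag, the byte budget and the load_fn placeholder standing in for the real H5Dread call are all invented for illustration):

/* Objects read from the file carry a "group" tag (e.g. which timestep or grid
 * they belong to). When memory runs low, evict every cached object whose group
 * differs from the one currently being scanned, instead of evicting strictly
 * by least-recent use. */
#include <stdlib.h>

typedef struct {
    int    group;      /* classification tag, e.g. timestep id (hypothetical) */
    size_t nbytes;     /* size of the loaded data */
    void  *data;       /* NULL when the object is not in memory */
} cached_object;

typedef struct {
    cached_object *obj;
    size_t         count;
    size_t         bytes_in_memory;
    size_t         budget;           /* e.g. a few GB, well under physical RAM */
} object_cache;

/* Drop every loaded object that does not belong to the active group. */
static void evict_other_groups(object_cache *c, int active_group)
{
    for (size_t i = 0; i < c->count; i++) {
        cached_object *o = &c->obj[i];
        if (o->data && o->group != active_group) {
            free(o->data);
            o->data = NULL;
            c->bytes_in_memory -= o->nbytes;
        }
    }
}

/* Ensure an object is loaded; load_fn stands in for the real dataset read. */
static void *require(object_cache *c, size_t i, int active_group,
                     void *(*load_fn)(const cached_object *))
{
    cached_object *o = &c->obj[i];
    if (!o->data) {
        if (c->bytes_in_memory + o->nbytes > c->budget)
            evict_other_groups(c, active_group);
        o->data = load_fn(o);
        c->bytes_in_memory += o->nbytes;
    }
    return o->data;
}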

       Werner


--
___________________________________________________________________________
Dr. Werner Benger Visualization Research
Laboratory for Creative Arts and Technology (LCAT)
Center for Computation & Technology at Louisiana State University (CCT/LSU)
211 Johnston Hall, Baton Rouge, Louisiana 70803
Tel.: +1 225 578 4809 Fax.: +1 225 578-5362