Mapping a huge file does not require a corresponding amount of real RAM. The memory-mapped file lab gives an overview of what is involved. The point of memory mapping is that the file is mapped into the virtual address space, which is vast on 64-bit systems. Mapping a 60 GB file is immediate and consumes no real RAM; pages are faulted in only as they are actually touched. You can then use normal J mechanisms to read small portions of that huge file as required.
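As a minimal sketch of the mechanics (the file name is hypothetical, and the jmf verb spellings are as I recall them from the mapped-files lab, so check the lab for your J version):

   require 'jmf'
   NB. map an existing flat file as a literal (byte) noun; nothing is read yet
   JCHAR map_jmf_ 'dat';'bigfile.txt'
   $dat                 NB. shape reports the full file size
   100 {. dat           NB. touches only the pages holding the first 100 bytes
   unmap_jmf_ 'dat'     NB. release the mapping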
On Sat, Jul 14, 2012 at 9:08 AM, Konrad Hinsen <konrad.hin...@fastmail.net> wrote:

> bill lam writes:
>
> > On a cursory browse of that website, it seems that HDF5 is
> > implemented using Java. Is that true? Who will start the Java VM? The
> > C wrapper, or is it started by another process?
>
> HDF5 is written in C. There's a Java wrapper (using JNI), and based on
> that wrapper the HDF Group has written a generic viewer for HDF5 files
> in Java, which is probably what you saw.
>
> Using HDF5 from C or Fortran is a bit cumbersome, but it's a
> straightforward C API, just very big. When possible I prefer to use
> Python for working with HDF5 files, which is very easy.
>
> > BTW I think you cannot process a 60GB file using Jmf unless you also have
> > at least 60GB of RAM available.
>
> Right, and that's why memory-mapping is not a universal solution for
> me. I never read my HDF5 files entirely into memory either. HDF5 lets
> me read arbitrary subarrays, with roughly the same functionality as
> J's "from".
>
> Konrad.
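For comparison on the J side, here is a hedged sketch of what "the same functionality as from" looks like against a mapped file (the file name, element type, and offsets are hypothetical). Selecting with { materializes just the requested elements, and the OS faults in only the pages that back them, much like an HDF5 subarray read:

   require 'jmf'
   NB. map a flat file of 8-byte floats as a vector
   JFL map_jmf_ 'vec';'big.dat'
   (1e6 + i. 1000) { vec      NB. 1000 floats starting at offset 1e6
   unmap_jmf_ 'vec'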