Matthew Brand-2 wrote:
> 
> Has anybody been actively using mapped files greater than the RAM size
> in a past or present project?
> 
> The code below successfully creates a mapped file with size 64GB, much
> larger than my RAM. However, if I want to fill the file/noun with say
> (i.size) or (size $ 'abc') then "out of memory" results because J
> tries to evaluate those in RAM before pushing the result into the
> mapped file.
> 
> Is there a way to overcome this type of problem other than writing
> messy code to break the varname =. i.size problem into smaller pieces?
> 

I don't think so.  The r.h.s. of the expression

varname =. i.size        NB. this line gives error "out of memory" 

has to be evaluated first, and since the result does not fit into RAM,
you get the error message.

The only way to handle this automatically would be to write some
special code that does the breaking up you talk about.

This is reasonable, since what you want is to use the hard disk
automatically as if it were RAM, and it isn't.  The VAS is virtual
exactly because of this, and some segmenting of arrays larger than RAM
into RAM-sized chunks must happen somewhere in the program or OS.
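To make the "breaking up" concrete, here is a minimal sketch in Python's
standard mmap module rather than J/jmf (the mechanism at the OS level is
the same; function and chunk-size names are illustrative, not part of jmf):

```python
import mmap
import struct

def fill_chunked(path, n_items, chunk=2**16):
    """Fill a file-backed mapping with the integers 0..n_items-1
    (8-byte little-endian), one chunk at a time, so at most `chunk`
    items are ever materialized in RAM at once."""
    itemsize = 8
    with open(path, "wb") as f:
        f.truncate(n_items * itemsize)   # sparse file; costs no RAM
    with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as m:
        for start in range(0, n_items, chunk):
            stop = min(start + chunk, n_items)
            m[start * itemsize:stop * itemsize] = struct.pack(
                "<%dq" % (stop - start), *range(start, stop))
```

Each iteration builds only one chunk in memory and writes it through the
mapping; the OS is free to flush and discard the pages behind it.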



> Note 'does not work'
> load 'jmf'
> size =.  <. 2^36   NB. size of file 64GB.
> filename =. jpath '~temp\big.jmf'
> createjmf_jmf_ filename;size
> map_jmf_ 'varname';filename
> varname =. i.size        NB. this line gives error "out of memory"
> unmap_jmf_ 'varname'
> )
> 
> When J creates its temporary variables does it explicitly request that
> the memory is allocated in physical RAM rather than anywhere else in
> the VAS? Is there a configuration option to allow all variables to be
> created anywhere in the VAS rather than only in physical RAM?
> 
> I don't understand how J or any other application can limit itself to
> physical RAM. 
> 
Because of speed.  If huge arrays were used generically, the way most
languages allow, execution speed could degrade seriously (the good old
"swapping" problem).  If, however, you do linear or chunk-based access
and processing, performance is fine.

It is like dealing with the processor cache.  We have a hierarchy of
memories ordered by speed:
L1, L2, L3, then "L4" (RAM) and "L5" (hard disk).
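For concreteness, here is what linear chunk-at-a-time processing of a
mapped file can look like (again a sketch using Python's standard mmap
module, not jmf; the chunk size is arbitrary):

```python
import mmap
import struct

def chunked_sum(path, chunk_items=2**16):
    """Sum the 8-byte integers in a mapped file by streaming over it
    in fixed-size chunks: each chunk is touched once, in order, so the
    OS can page it in and drop it again without thrashing."""
    itemsize = 8
    total = 0
    with open(path, "rb") as f, \
         mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        n_items = len(m) // itemsize
        for start in range(0, n_items, chunk_items):
            stop = min(start + chunk_items, n_items)
            total += sum(struct.unpack(
                "<%dq" % (stop - start),
                m[start * itemsize:stop * itemsize]))
    return total
```

Because the access pattern is strictly sequential, the working set stays
one chunk wide no matter how large the file is.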



> My understanding, which is likely erroneous, is:
> that the OS only shows the application the VAS which has 8 %~ 2^64
> locations of size 8 bytes each;
> that an application makes requests to the OS for working space to be
> allocated;
> that the OS returns a pointer to a working space it reserves in the VAS;
> that the OS then handles, in the background, all of the mapping
> between the VAS segment allocated to the application and the actual
> memory addresses in the physical storage devices that are utilized
> (RAM and hard disk);
> that it is entirely up to the OS whether a given VAS address is mapped
> to a hard disk location or a physical RAM location;
> that the OS may alter the mapping dynamically especially if the
> requested working space does not fit into the RAM all at once, i.e. it
> might use swap files.
> 
> I guess my question boils down to:
> If the above is correct then why does J ever return "out of memory" in
> a 64-bit VAS, or what is incorrect?
> 

My guess is that the malloc needed to do i.size does not care about 
the size of the VAS but only about the size of RAM.
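One way to see the address-space/RAM distinction directly (a hedged
sketch, assuming a filesystem that supports sparse files; the size is a
parameter, not the 64 GB from the original post): a file-backed mapping
reserves address space, not RAM, so mapping a sparse file succeeds even
when it is far larger than physical memory, and pages only cost RAM
once they are actually touched.

```python
import mmap

def map_and_touch(path, nbytes):
    """Create a sparse file of `nbytes`, map it, and touch only the
    first and last byte.  The mapping consumes `nbytes` of address
    space, but only the two touched pages are brought into RAM."""
    with open(path, "wb") as f:
        f.truncate(nbytes)               # sparse: no data blocks written
    with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as m:
        m[0] = 0x41                      # first page faults in
        m[nbytes - 1] = 0x5A             # last page faults in
        return m[0], m[nbytes - 1]
```

A plain in-RAM allocation of the same size would be refused long before
the file-backed mapping is, which matches the behavior described above.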


-- 
View this message in context: 
http://www.nabble.com/Large-mapped-files-and-VAS-again-tp20870863s24193p20894896.html
Sent from the J Programming mailing list archive at Nabble.com.

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
