gary wrote:

> I never had a grass run take more than about 2 Gbytes of DRAM. Isn't
> there a hard limit on the memory used by grass?
Only the limit imposed by the OS. Most modules try to avoid using excessive
amounts of memory: wherever possible, they process data row by row, keeping
only as much in memory as is strictly necessary. Modules which need to
perform non-linear I/O normally have mechanisms to avoid reading the entire
map into memory (e.g. a tile/row cache, or multiple passes). r.proj used to
read the entire area of interest into memory, but the version in 6.3-CVS
uses a tile cache (it estimates the amount of memory required, but this can
be overridden with the memory= option).

> There is an option
> during the compilation process for large files, so I assume the memory
> allocation isn't completely dynamic.

LFS (large file support) is unrelated to memory allocation; it is a
consequence of the historical Unix API using "long" for file offsets, which
limits you to 2 GiB on a 32-bit system. Although recent standards define a
type "off_t" which can be larger than a "long", legacy code may still store
offsets in a "long". To prevent such code from corrupting data, files whose
size cannot fit into a "long" will only be opened if the caller specifically
allows it. The --enable-largefile configure option causes specific libraries
and modules to indicate that large files may be opened.

The ANSI C stdio functions (fseek, ftell) use "long" for file offsets, so
they cannot handle files larger than 2 GiB on a 32-bit system.

-- 
Glynn Clements <[EMAIL PROTECTED]>

_______________________________________________
grassuser mailing list
[email protected]
http://grass.itc.it/mailman/listinfo/grassuser

