List,

A memory question .. on Linux 2.6 I connect with mapfraw to a PDL
byte file with 8000 columns and 300000 rows in order to collect
column statistics, slicing one row at a time and then undef'ing
it. As the loop runs, physical memory grows until it equals the
virtual memory (as seen with 'top'). To limit physical memory
I tried to reconnect every 10000 rows, but that causes a
segfault; it does work for smaller files, though (maybe because two
large mapfraw's exceed max memory due to subroutine scope). Have
I overlooked a way to plow through large PDL files while freeing
memory for data that will no longer be visited? Can files larger
than available RAM be sliced/accessed quickly?
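For reference, the loop I describe above looks roughly like this
(the file name and dims are placeholders for my actual setup):

```perl
use strict;
use warnings;
use PDL;
use PDL::IO::FastRaw;

# Memory-map the raw byte file (8000 columns x 300000 rows).
my ($ncols, $nrows) = (8000, 300000);
my $data = mapfraw('data.raw',
    { Dims => [$ncols, $nrows], Datatype => byte, ReadOnly => 1 });

for my $row (0 .. $nrows - 1) {
    my $slice = $data->slice(":,($row)");   # one row at a time
    # ... accumulate per-column statistics from $slice here ...
    undef $slice;   # drop the slice, hoping the pages get freed
}
```

It is after loops like this that resident memory (as shown by 'top')
keeps climbing even though each $slice is undef'ed.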

You can tell that I am no PDL buff .. I just use a small subset
of its features and have put them in a library that has worked
well for a long time, but now data sizes are growing quickly and
I wonder about size limits ..

Greetings, Niels L


_______________________________________________
Perldl mailing list
[email protected]
http://mailman.jach.hawaii.edu/mailman/listinfo/perldl
