Sorry to go off topic, but I would like to get the
perspective of the very knowledgeable people on
this list.
Background: I am writing a program in the "R" language
(a high-level statistical language, similar to the "S"
language). I dynamically load *.so.0 libraries (written
in C) to do computationally intensive tasks inside R
functions. I do all the memory management myself with
calloc() and free(), and I am absolutely, 100% sure
there are no memory leaks. The R function that calls
the C code is invoked many times over.
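
For concreteness, the C side follows this pattern (a
simplified sketch, not my real code; the names do_work,
x, n, and result are made up, but the calloc()/free()
shape is accurate):

#include <stdlib.h>

/* Built as a shared object, loaded into R with
 * dyn.load(), and called via .C("do_work", ...):
 * every argument is a pointer and the return type
 * is void, as the .C interface requires. */
void do_work(double *x, int *n, double *result)
{
    int i;
    /* scratch space allocated with calloc() ... */
    double *tmp = calloc((size_t) *n, sizeof(double));
    if (tmp == NULL)
        return;                /* allocation failed */

    *result = 0.0;
    for (i = 0; i < *n; i++) {
        tmp[i] = x[i] * x[i];  /* placeholder work */
        *result += tmp[i];
    }

    /* ... and released with free() before returning,
     * so nothing is held on the C side across calls */
    free(tmp);
}

On the R side this is just dyn.load("mywork.so") and
then .C("do_work", as.double(x), as.integer(length(x)),
result = double(1)) inside the function that gets
called repeatedly.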
Problem: When I run the program on an i386 Linux
machine, the process's memory usage grows with the
number of times the function runs within the R program,
and the memory is only released when I exit the R
interpreter.
However, when I run the same program, recompiled for
sparc64 Linux, under a version of R also recompiled for
sparc64 Linux, there is no such growth (or at least it
is minimal compared to i386).
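
For clarity, by "memory" I mean the process's resident
set size. A minimal Linux-only sketch of one way to
print it from inside the C code on each call (print_rss
is a made-up name for this sketch):

#include <stdio.h>
#include <string.h>

/* print the VmRSS line from /proc/self/status
 * (Linux-specific) to show the current resident
 * set size */
static void print_rss(void)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (f == NULL)
        return;
    while (fgets(line, sizeof line, f) != NULL) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            fputs(line, stderr);
            break;
        }
    }
    fclose(f);
}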
Question: Could this be a 32-bit vs. 64-bit thing? (I
did not write any architecture-specific C code; it is
standard ANSI C using arrays and matrices of doubles.)
Is this a difference in how the two ports handle
memory? I know that R itself may be built differently
for the two architectures, but aside from that, is
there something at the OS level that could explain
this?
Again, sorry to go off topic, but I enjoy reading this
list and respect the opinions of its contributors.
Many thanks,
Chris
P.S. Email me personally if this question is too far
off the deep end for the list.