On Sun, Oct 24, 2010 at 12:44 AM, braingateway <[email protected]> wrote:
> I agree with you about the point of using memmap. That is why the
> behavior is so strange to me.

I think it is expected. What kind of behavior were you expecting?

To be clear, if I have a lot of available RAM, I expect memmap arrays to take almost all of it (virtual memory ~ resident memory). Now, if at the same time another process starts taking a lot of memory, I expect the OS to automatically lower the resident memory of the process using memmap.

I did a small experiment on Mac OS X, creating a giant mmap'd array in numpy and at the same time running a small C program using mlock (to lock pages into physical memory). As soon as I lock a big area (where big means most of my physical RAM), the python process dealing with the mmap'd area sees its resident memory decrease. As soon as I kill the C program locking the memory, the resident memory starts increasing again.

cheers,

David

_______________________________________________
NumPy-Discussion mailing list
[email protected]
http://mail.scipy.org/mailman/listinfo/numpy-discussion
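[Editor's note: the memmap half of the experiment described above could be sketched roughly as follows. The file name and array size here are illustrative choices, not from the original message, and the mlock side would be a separate C program calling `mlock(2)`, not shown.]

```python
import os
import numpy as np

# Create a large file-backed array with numpy.memmap. The file name and
# element count are hypothetical; scale n up to a sizable fraction of
# physical RAM to reproduce the effect discussed in the thread.
n = 10_000_000                      # ~80 MB of float64
fname = "bigarray.dat"
a = np.memmap(fname, dtype=np.float64, mode="w+", shape=(n,))

# Touching every page forces the OS to fault them in, so resident memory
# climbs toward the file size while RAM is uncontended.
a[:] = 1.0
a.flush()
print(a.sum())

# The pages are only cached, not pinned: when another process locks or
# demands physical memory (e.g. via mlock), the OS can evict them and the
# resident size of this process drops, as observed above.
del a
os.remove(fname)
```

Watching the process in `top` while the array is being filled, and again while a separate memory-hungry process runs, should show the resident-size behavior described in the message.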
