On Jan 26, 2006, at 4:34 AM, Francesc Altet wrote:

Hi Russel,

On Wednesday, 25 January 2006, at 18:24, Russel Howe wrote:
     I am not sure if this is a numarray problem or a pytables
problem, but this is the simplest program I have been able to create
that illustrates it.  The attached program causes my machine to use
all available memory (1 GB RAM + 1 GB swap) and is eventually killed
by the kernel.  Is there some other call I should make to pytables
to free the memory used by a file?  My application actually
processes many large files in sequence, and after just a few the
machine runs out of memory.
     The problem does not seem to happen for a program that creates a
large number of files.  The files I am working on here are actually
converted from a different format, and the conversion program runs fine.

Well, I think this is probably a leak in PyTables (the VLArray objects
have not been checked very thoroughly for leaks).  I'll look into this
as soon as I get some time.

Thanks for reporting this!


A little more information: I reduced the number of times the vlarray is summed to 4 and ran the test program under valgrind.  The relevant errors seem to be:

==28085== 101294136 bytes in 2005 blocks are possibly lost in loss record 39 of 40
==28085==    at 0x1B9042FC: malloc (in /usr/lib/valgrind/vgpreload_memcheck.so)
==28085==    by 0x1C73528D: test_vltypes_alloc_custom (H5VLARRAY.c:33)
==28085==    by 0x1BFCE92A: H5T_vlen_seq_mem_write (H5Tvlen.c:451)
==28085==    by 0x52BFD1AF: ???
==28085==
==28085==
==28085== 289279580 (288541620 direct, 737960 indirect) bytes in 5782 blocks are definitely lost in loss record 40 of 40
==28085==    at 0x1B9042FC: malloc (in /usr/lib/valgrind/vgpreload_memcheck.so)
==28085==    by 0x1C73528D: test_vltypes_alloc_custom (H5VLARRAY.c:33)
==28085==    by 0x1BFCE92A: H5T_vlen_seq_mem_write (H5Tvlen.c:451)
==28085==    by 0x52BFD1AF: ???
==28085==

I am still trying to figure out why the backtrace cuts off, but using gdb it appears that the call originates in H5VLARRAYread (H5VLARRAY.c, line 501), which is called from _readArray in hdf5Extension.pyx:1951.  I think the leaked data is rdata[i].p, since PyBuffer_FromReadWriteMemory does no memory management.  The documentation for numarray.array indicates that it makes a copy by default, but I haven't figured out when it is safe to free the memory; every attempt I have made causes a segfault in the sum ufunc, which suggests that either the documentation is wrong or the copy is lazy.  Any hints?
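
For reference, this is the sort of ordering I have been trying to get to work (a rough C sketch with placeholder identifiers, not the actual H5VLARRAY/hdf5Extension code): copy each rdata[i].p into memory that the Python side owns first, and only then let HDF5 release the per-element buffers with H5Dvlen_reclaim().

/* Rough sketch only (placeholder identifiers, not the actual
 * hdf5Extension code): read a variable-length dataset, copy each
 * element's data into memory the caller owns, then let HDF5 free
 * the buffers it allocated for every rdata[i].p. */
#include <stdlib.h>
#include <string.h>
#include "hdf5.h"

static herr_t read_and_copy_vlen(hid_t dset, hid_t mtype, hid_t mspace,
                                 hid_t fspace, size_t elsize, char *dest)
{
    /* dest must be large enough for the sum of all rdata[i].len elements. */
    hssize_t nrecords = H5Sget_select_npoints(mspace);
    hvl_t *rdata = (hvl_t *)malloc((size_t)nrecords * sizeof(hvl_t));
    herr_t status;
    hssize_t i;

    if (rdata == NULL)
        return -1;

    /* HDF5 allocates rdata[i].p for every record during the read. */
    status = H5Dread(dset, mtype, mspace, fspace, H5P_DEFAULT, rdata);
    if (status < 0) {
        free(rdata);
        return status;
    }

    /* Copy into memory the caller (ultimately the numarray object) owns. */
    for (i = 0; i < nrecords; i++) {
        memcpy(dest, rdata[i].p, rdata[i].len * elsize);
        dest += rdata[i].len * elsize;
    }

    /* Only now hand the per-element buffers back to HDF5. */
    status = H5Dvlen_reclaim(mtype, mspace, H5P_DEFAULT, rdata);
    free(rdata);
    return status;
}

(In the real code the read presumably goes through the transfer property list that installs the custom allocator shown in the valgrind trace, so that same property list would have to be passed to H5Dvlen_reclaim instead of H5P_DEFAULT; the ordering is the part I am unsure about, given the copy behaviour of numarray.array.)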
Russel




