On 2010-05-27 22:58+0100 Andrew Ross wrote:

> On Thu, May 27, 2010 at 04:39:55PM -0400, Hazen Babcock wrote:
>> Andrew Ross wrote:
>>> On Thu, May 27, 2010 at 03:49:22PM -0400, Hazen Babcock wrote:
>
>>>> Are we sure that we are closing the QApplication properly?
>>>
>>> I think so, although most programs would only create a QApplication
>>> once at the beginning of the program and then delete it at the end,
>>> so resource leaks might not be obvious.
>>
>> I believe that is what we are doing in these tests too.
>>
>>> You could try valgrind with things like --track-fds to see if there
>>> is a leak of file descriptors?
>>
>> Attached; not sure what to make of it, except that we don't close
>> test.png?
>>
>> Let me know if there is anything else I can run that would be
>> informative.
>
> Clearly the output files should be properly closed, and I've now fixed
> that. You could try again to check, although I don't think that is the
> problem. Potentially more interesting are the open pipes.
>
> I notice that most drivers just call fclose( pls->OutFile ), which
> unconditionally closes the output file descriptor. Usually this is
> right, but I would argue that if the output is to stdout then this is
> not the right thing to do. If people agree, I would suggest adding a
> plCloseFile to parallel plOpenFile which checks whether the output is
> stdout before calling fclose.
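For what it is worth, a minimal (untested) sketch of such a plCloseFile,
reusing the pls->OutFile member mentioned above and assuming the usual
PLStream pointer that the drivers already pass around (with stdio.h
already included, as it is in the drivers), might look like this:

    /* Sketch only: close the output file opened by plOpenFile, but
     * leave stdout (and stderr) alone so that a program plotting to
     * standard output keeps a usable descriptor afterwards. */
    void
    plCloseFile( PLStream *pls )
    {
        if ( pls->OutFile == NULL )
            return;

        if ( pls->OutFile != stdout && pls->OutFile != stderr )
            fclose( pls->OutFile );
        else
            fflush( pls->OutFile );  /* deliver any pending output */

        pls->OutFile = NULL;
    }

Drivers would then call plCloseFile( pls ) wherever they currently call
fclose( pls->OutFile ) directly.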
It is good to get this all cleaned up, but this appears to me to be a
long shot. (Of course, the alternative explanation I discuss below
might also be a long shot, so we have to keep an open mind and fix what
we can.) The reason I think it is a long shot is that the valgrind
results specifically showed very few open file descriptors left at
exit, and in any case when x10c exits, doesn't the system reclaim all
resources still held at exit time (e.g., by closing the still-open file
descriptors)? So for this scenario to work, I think you would have to
argue that while x10c is running it consumes a lot of resources (a high
watermark) that are mostly returned to the system before exit time and
completely returned by the exit itself. However, if each run of x10c
simply has a high watermark in some resource that exceeds system
limits, why doesn't the first run show the issue rather than the
~900th?

Anyhow, I think it is more likely that some system resource is being
permanently consumed by each run of x10c (rather than just touched at a
high watermark), so that Hazen's system runs out of that resource by
the ~900th run of x10c.

If my hypothesis above is correct that x10c returns all the resources
it directly uses at exit, then the only ones that could accumulate are
the ones it indirectly consumes by calling on-going system servers such
as X. This interpretation seems to be supported by Hazen finding that
exiting another application (firefox) allowed a hung x10c to complete,
although it then hung again the next time it was called.

Hazen, have you double-checked yet that isGUI is false for your latest
tests? Of course, even if isGUI is false, QApplication may partially or
totally ignore that, so it still may be using up some X resource shared
with firefox. Alternatively, the consumed resource shared with firefox
may not be in X but in some other system server.

Anyhow, the next time you run the test, I suggest you look carefully at
the results of striking the "M" key while in top to sort the processes
by memory usage. If I do that here, the top memory hog is the X server:

  PID USER      PR  NI  VIRT  RES   SHR S %CPU %MEM    TIME+  COMMAND
 3733 root      20   0  650m  225m 6424 S    0 11.2 191:27.66 Xorg

and I would be interested in the equivalent top memory hogs on your
system when running the test. My system has 6GB of virtual memory (swap
partition) and 2GB of real memory, so the above 650M virtual and 225M
resident consumed by Xorg is not an issue on my platform at all.

Alan

__________________________
Alan W. Irwin

Astronomical research affiliation with Department of Physics and
Astronomy, University of Victoria (astrowww.phys.uvic.ca).

Programming affiliations with the FreeEOS equation-of-state
implementation for stellar interiors (freeeos.sf.net); PLplot
scientific plotting software package (plplot.org); the libLASi project
(unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); and
the Linux Brochure Project (lbproject.sf.net).
__________________________

Linux-powered Science
__________________________