I have tried using umem to detect memory leaks with a simple test program 
(first set some environment variables, then run the program, then 
"gcore pid_of_program", then "mdb gcore_xxx"), but it did not find any memory 
leak even though the test program clearly leaks.
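
For reference, the sequence I followed was roughly this (test_program is a 
placeholder name; the environment settings are the ones described in 
umem_debug(3MALLOC)):

    $ LD_PRELOAD=libumem.so.1 UMEM_DEBUG=default UMEM_LOGGING=transaction \
        ./test_program            # run the target under libumem with auditing on
    $ gcore pid_of_program        # dump a core of the live process
    $ mdb core.<pid>              # load the core into mdb
    > ::findleaks                 # report leaked buffers and their allocation stacks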

I also ran into another issue with umem: after the program has run for a 
fairly long time, I use "gcore" to generate a core file and then examine it 
with mdb. The output of "::umalog" seems to show only the most recent memory 
operations; the operations that happened earlier are lost.
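
My guess is that the transaction log is a fixed-size ring buffer, so older 
entries get overwritten once it wraps. If that is right, asking for a bigger 
log at startup should preserve more history; my reading of 
umem_debug(3MALLOC) is that a size can be given like this (16m is just an 
arbitrary example):

    $ UMEM_DEBUG=default UMEM_LOGGING=transaction=16m \
        LD_PRELOAD=libumem.so.1 ./test_program   # 16 MB transaction log instead of the default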

Coming back to the original DTrace issue: I don't want to use the file system 
or a pipe (does a pipe go through the file system underneath?) because under a 
performance test the application performs a huge number of memory operations 
(malloc and free), so the D script will emit a large amount of output (each 
ustack is big). All of this output has to be analyzed by another program, 
since a D script itself cannot store stack information keyed by some value 
such as the returned address. The analysis program stores each allocation as a 
<returned address, stack> pair; when free() is called, the corresponding entry 
is deleted. When the application terminates, whatever is still recorded in the 
analysis program is the leaked memory. A sketch of such a D script is below.
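
For concreteness, the kind of D script I have in mind looks roughly like this 
(a sketch only; a real script would also need to cover calloc, realloc, etc.):

    #!/usr/sbin/dtrace -s
    #pragma D option quiet

    /* remember the requested size on the way into malloc */
    pid$target::malloc:entry
    {
        self->size = arg0;
    }

    /* on return, emit the returned address plus the user stack */
    pid$target::malloc:return
    /self->size/
    {
        printf("A %d %p\n", self->size, (void *)arg1);
        ustack();
        self->size = 0;
    }

    /* emit the freed address so the analyzer can drop its record */
    pid$target::free:entry
    {
        printf("F %p\n", (void *)arg0);
    }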

So the critical issue is how to efficiently pass the output data from the D 
script to the analysis program.
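
To be concrete, the baseline I am trying to improve on is a plain pipe into 
the analyzer (leaks.d is the script above, analyzer a placeholder for the 
analysis program); raising the principal buffer size with -b can reduce drops, 
but the pipe itself is still the bottleneck:

    $ dtrace -b 16m -s leaks.d -p $(pgrep test_program) | ./analyzer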

