Yikes!

I don't think this is the way to debug the problem. Why collect data that
you will never use? Just collect what you are interested in. DTrace
aggregations are excellent for this type of debugging, so you don't
spend the rest of your life going through 15 GB of text data.
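For example, a minimal sketch of an aggregation-based approach (assuming the
pid provider and a target process supplied with -p or -c; the script name is
my own):

```d
/* allocs.d -- run as: dtrace -s allocs.d -p <pid>
 * Instead of tracing every call, aggregate malloc/free counts by the
 * user stack that made them. DTrace keeps the totals in-kernel and
 * prints a compact summary on exit. */
pid$target::malloc:entry
{
        @allocs[ustack()] = count();
}

pid$target::free:entry
{
        @frees[ustack()] = count();
}
```

On exit you get each distinct call stack once, with a count, instead of one
trace record per call.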

You may also want to look at DTrace speculation. Why sift through the
data by hand when you can have DTrace keep only the interesting runs?
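As a sketch of what speculation buys you here: trace each free() into a
speculative buffer, and commit it only when something interesting happens.
The "interesting" condition below, a second free() of the same address, is
just an illustration I've picked; adapt the predicate to whatever your bug
looks like. (Assumes the pid provider; clauses for a probe fire in program
order, and speculation buffers are a finite resource, so every path must
commit or discard.)

```d
/* specfree.d -- run as: dtrace -s specfree.d -p <pid>
 * Speculatively record who called free(); keep the record only if the
 * same address is freed twice without an intervening malloc. */
pid$target::malloc:return
{
        freed[arg1] = 0;        /* arg1 is malloc's return value */
}

pid$target::free:entry
{
        self->spec = speculation();
        speculate(self->spec);
        printf("free(0x%p) by:\n", arg0);
        ustack();
}

pid$target::free:entry
/freed[arg0]/
{
        commit(self->spec);     /* double free: keep the trace */
        self->spec = 0;
}

pid$target::free:entry
/self->spec/
{
        discard(self->spec);    /* first free: throw the trace away */
        self->spec = 0;
        freed[arg0] = 1;
}
```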

There are also nice tools like libumem that can probably help you
find the memory issue you are tracking; see
http://blogs.sun.com/ahl/entry/solaris_10_top_11_20
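Roughly, you preload libumem with its debugging features turned on and then
inspect the process (or a core dump) with mdb. A sketch, with the program
name as a placeholder:

```sh
# Run under libumem with debugging and transaction logging enabled
# (Solaris; ./myprog is a placeholder for your binary).
LD_PRELOAD=libumem.so.1 UMEM_DEBUG=default UMEM_LOGGING=transaction ./myprog

# Later, against a core dump:
#   mdb ./myprog core
#   > ::umem_verify        # check all caches for corruption
#   > <addr>::whatis       # identify what a suspect address belongs to
```

With UMEM_DEBUG set, use-after-free and double-free typically abort the
process right at the bad access, which is exactly the class of bug you
describe.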

Hope this helps

Angelo


On Nov 7, 2007, at 1:45 PM, Pierre-Olivier Gaillard wrote:

> I am investigating a memory problem (access to memory that was
> released previously) that Purify can't analyze.
>
> As I don't know the program I am analyzing, I thought I could generate
> a big trace of all function calls and search through it.
>
> The resulting file might be way too big, though: I did a trace with
> all malloc and free calls and it already takes 155MB. But on the other
> hand, if malloc and free represent 1% of all calls I will get a 15GB
> trace file which is not that much (I have filled 120GB hard drives
> with Linux Trace Toolkit traces and the result was helpful).
>
> Thanks,
>
>  Pierre-Olivier
>
>
>
>
> On Nov 7, 2007 1:28 PM, Michael Schuster <[EMAIL PROTECTED]>  
> wrote:
>> Pierre-Olivier Gaillard wrote:
>>> Thanks a lot. That sounds helpful.
>>> How many probes can I have (on a Sparc)?
>>> Would a couple million be OK?
>>
>> I can't answer that specific question, but I am wondering what you're
>> trying to achieve with the amount of data these probes potentially  
>> accumulate.
>>
>> Michael
>> --
>> Michael Schuster
>> Recursion, n.: see 'Recursion'
>>

_______________________________________________
dtrace-discuss mailing list
[email protected]