The issue is most likely that he's doing printf for every record.
The userland dtrace process is doing this:

   + read kernel buffers from each cpu into user space.
   + process the kernel buffers in user space (in this case, eventually
      doing a printf). If the buffers contain lots of data, he's likely
      not finishing the printfs before the kernel buffers
      overflow, and DTrace reports drops.
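One way to sidestep the per-record printf cost is to let the kernel do the
counting with an aggregation and only print a summary at the end. A minimal
sketch (the XUL module name and use of the pid provider are assumptions
based on the thread, not something from a tested script):

```d
/* Aggregate call counts in the kernel instead of printf'ing every
 * record; aggregations are drained lazily and are far less prone
 * to drops than per-record trace output. */
pid$target:XUL::entry
{
        @calls[probefunc] = count();
}
```

Run with something like `dtrace -s count.d -p <firefox-pid>`; the aggregation
is printed automatically on exit. This loses time order, though, which
matters for the use case below.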

Jim Litchfield
-------------------

Ryan wrote:
I'm trying to collect function calls in Firefox in
time order.

I want just the XUL library, but I must not miss any
functions, since I will be feeding the list to the linker for symbol
reordering.

I collect about 11 GB of function names, but I can't
manage to eliminate drops despite increasing the buffer size and switch
rate, e.g.

grep CPU time.order
__ZL15SelectorMatchesR17RuleProcessorDataP13nsCSSSelectordtrace: 684061 drops on CPU 0
__ZNK19ndtrace: 205307 drops on CPU 0

Should I further increase the buffer size? Any other
suggestions?
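For reference, the knobs in question can be set with pragmas at the top of
the script (or with -x on the command line). A sketch; the 64m and 100hz
values are purely illustrative, not recommendations:

```d
#pragma D option bufsize=64m      /* per-CPU principal buffer size */
#pragma D option switchrate=100hz /* how often userland drains the buffers */
```

Note that a higher switchrate only helps if the consumer can actually keep up
with processing each batch; if per-record printf is the bottleneck, bigger
buffers mostly delay the drops rather than eliminate them.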

In reality, I just want the first invocation of each
function, but I think keeping track of calls would take too much memory in
DTrace: e.g. one associative array for the time order, indexed on timestamp
or similar, plus another array indexed on the function name.
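Recording only the first call per function actually needs just one
associative array used as a seen-flag, since the timestamp can be printed at
the moment of the first hit. A sketch, again assuming the pid provider and
the XUL module name:

```d
/* Print each function once, at its first invocation, with a timestamp
 * so the output is already in first-call order. */
pid$target:XUL::entry
/seen[probefunc] == 0/
{
        seen[probefunc] = 1;
        printf("%d %s\n", timestamp, probefunc);
}
```

The array grows with the number of distinct functions, not the number of
calls, so its footprint should be nowhere near the 11 GB of raw trace
output; and because the predicate fails on every call after the first, the
printf volume collapses too.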

How big could the associative arrays possibly be compared to the buffer sizes 
you're considering? Elfdump could probably give you an upper bound, but I'd 
just try it and then look for alternatives if it fails...

Or, use the tick provider to get the most common functions, then finish off the 
list with the output of elfdump. There's no guarantee that *every* function 
would get called during a run anyway, and by definition anything the profiler 
missed shouldn't impact performance much.
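A sampling version of that suggestion would use the profile provider's
periodic probes to find the hottest functions. A sketch (997 Hz is a
conventional sampling rate chosen to avoid lockstep with clock ticks; arg1
is the user-mode program counter in profile probes; restricting to XUL would
happen in post-processing):

```d
/* Sample the user PC of the target process ~997 times/sec per CPU
 * and count which functions it lands in. */
profile-997
/pid == $target/
{
        @hot[ufunc(arg1)] = count();
}
```

The resulting list is ordered by hotness rather than first-call time, but
for reordering symbols to improve locality that may be closer to what the
linker actually benefits from.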

Regards,
Ryan


_______________________________________________
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org