Hi,
I have used DTrace in the past for diagnosing I/O and filesystem related issues, so I am a bit familiar with DTrace scripts.
Now I want to diagnose performance problems in the Java area, maybe a memory leak, etc.
The hotspot provider is not present. How can I get it?
Does someone have any
BTW, I tried to understand the info on several websites, but it has not become clear to me how to get the dvm or djvm providers.
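For what it's worth, on a JVM that ships the probes (HotSpot on Solaris 10 and later, Java 6+), they appear under the hotspot provider once the JVM is running. A hedged sketch of a leak-hunting one-liner follows; note that the object-alloc probe additionally requires starting the JVM with -XX:+ExtendedDTraceProbes:

```d
/*
 * Sketch: count object allocations by class name via the hotspot
 * provider. arg1/arg2 are the pointer to and length of the class
 * name, which is not NUL-terminated, hence copyin + stringof.
 * Requires java ... -XX:+ExtendedDTraceProbes for object-alloc.
 */
hotspot$target:::object-alloc
{
    @allocs[stringof(copyin(arg1, arg2))] = count();
}
```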
Regards Hans
--
This message posted from opensolaris.org
___
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org
Hi Joel,
Has anyone tried to access member fields of C++ objects from within
DTrace?
Can this be done?
It can be done, but not easily. You'll need to figure out the byte offset and manually pull out the member.
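A minimal sketch of what Adam describes, assuming a made-up member at byte offset 0x10 of a made-up class (the probe spec, offset, and member type are all placeholders; the real offset has to come from the class layout or a disassembly):

```d
/*
 * Sketch: pull an int member out of a C++ object by hand.
 * arg0 of the entry probe on a non-static member function is the
 * hidden `this` pointer; copyin() fetches the bytes from the
 * traced process's address space.
 */
pid$target::*SomeClass*SomeMethod*:entry
{
    printf("member = %d\n",
        *(int *)copyin(arg0 + 0x10, sizeof (int)));
}
```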
Adam
--
Adam Leventhal, Fishworks
What is the alternative then?
A USDT probe for every kind of field access?
Thanks, Joel
On Sep 9, 2009, at 6:38 PM, Adam Leventhal wrote:
It can be done, but not easily. You'll need to figure out the byte offset and manually pull out the member.
---
fastest mac firefox!
What is the hex number after the method, e.g. +0x32f below?
It matches arg0 and I suspect it's the C++ this pointer, aka the hidden first argument.
I would like to confirm, though.
Thanks, Joel
---
XUL`PresShell::ProcessReflowCommands(int)+0x32f
On Wed, Sep 9, 2009 at 1:42 PM, Joel Reymont <joe...@gmail.com> wrote:
What is the hex number after the method, e.g. +0x32f below?
It matches arg0 and I suspect it's the C++ this pointer, aka the hidden first argument.
It matches arg0 to which probe? These look like function offsets.
Chad
On Wed, Sep 09, 2009 at 06:42:22PM +0100, Joel Reymont wrote:
What is the hex number after the method, e.g. +0x32f below?
It matches arg0 and I suspect it's the C++ this pointer, aka the hidden first argument.
No, it's far too small for that. It's the instruction-pointer offset inside the function.
On Sep 9, 2009, at 6:56 PM, Chad Mynhier wrote:
It matches arg0 to which probe? These look like function offsets.
The pid$target::*:entry one.
Joel Reymont wrote:
What is the hex number after the method, e.g. +0x32f below?
The entries in a stack trace are the addresses of the instructions that are being executed, expressed as a symbolic address plus an offset. The +0x32f you mention is the offset.
If you disassemble the object code, you can see which instruction lies at that offset.
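The frames Joel quoted are the form ustack() prints; a minimal sketch that would produce that kind of output (the probe spec is just an example):

```d
/*
 * Sketch: print the user stack on entry to a function; each frame
 * is rendered as module`symbol+offset, e.g.
 * XUL`PresShell::ProcessReflowCommands(int)+0x32f.
 */
pid$target::*ProcessReflowCommands*:entry
{
    ustack();
}
```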
What is the alternative then?
As I said: find the byte offsets for the members you're interested in.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
This is another variation, a simpler one...
./d firefox-bin menu-construct.d startup.d stop-js.d
dtrace -Zqw -x dynvarsize=64m -x evaltime=preinit -p 26656 -s menu-construct.d -s startup.d -s stop-js.d -s sigcont.d
elapsed: 6.4350621ms
cpu: 3.4218030ms
count: 20 times
Total:
On Sep 9, 2009, at 12:47 PM, Joel Reymont wrote:
I'm trying to time a function within Firefox with a pid$target probe [2], e.g.
dtrace -Zqw -x dynvarsize=64m -x evaltime=preinit -p 13954 -s menu-construct.d -s sigcont.d
elapsed: 12.7942481ms
cpu: 7.7911194ms
count: 40 times
I added a couple of static probes to Firefox to measure actual work done. I could have used a pid$target probe with a function name, but the work is done within an if statement, which is where I placed the static probes.
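For reference, USDT probes of this kind start from a provider definition in a .d file; a hedged sketch (the provider and probe names here are made up, and the matching DTRACE_PROBE macros go at the chosen spots inside the if statement in the application source):

```d
/*
 * Sketch: hypothetical USDT provider definition (e.g. menuwork.d).
 * Running dtrace -h on this file generates a header with the
 * macros the application calls to fire the probes.
 */
provider menuwork {
    probe work__start();
    probe work__done();
};
```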
I'm wondering about my use, though. Is the following significantly more efficient than pid$target::FunName:entry and return?
What about the expense of matching on the function name?
You are saying that the kernel trap overwhelms that, right?
On Sep 9, 2009, at 10:20 PM, Chad Mynhier wrote:
No, it's not significantly more efficient. Both function entry/return probes and USDT probes involve replacing an instruction with a trap.
To be more explicit, my concerns are about the effect of function-name entry/return probes vs. static probes on elapsed time, as measured by timestamp.
I'm viewing all my timings as a percentage of total elapsed time, and I'm wondering if there's less overhead in using static probes in this case.
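One way to express that measurement, assuming a pair of hypothetical USDT probes named work-start and work-done under a made-up menuwork provider:

```d
/*
 * Sketch: accumulate wall-clock time between hypothetical
 * work-start and work-done USDT probes. The pid$target
 * FunName:entry/return version is the same script with the
 * probe names swapped.
 */
menuwork$target:::work-start
{
    self->ts = timestamp;
}

menuwork$target:::work-done
/self->ts/
{
    @elapsed = sum(timestamp - self->ts);
    self->ts = 0;
}
```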
On Sep 9, 2009, at 2:02 PM, Joel Reymont wrote:
James,
On Sep 9, 2009, at 9:58 PM, James McIlree wrote:
I think the extra startup time you are seeing is related to dtrace processing the newly loaded libraries.
Can this be worked around by using evaltime=postinit or, perhaps,
On Sep 9, 2009, at 2:10 PM, Joel Reymont wrote:
I'm wondering about my use, though. Is the following significantly more efficient than pid$target::FunName:entry and return?
The cost of the actual probe firing is the same.
The *eval-time* cost of finding out which probes exist and instrumenting them is what differs.
James,
I thought preinit was the standard evaltime. Am I wrong?
Is it different on Snow Leopard?
Thanks, Joel
You can pay this cost all at once (let dtrace do all the work up front, with the standard evaltime), or in small pieces as individual libraries are loaded.
On Sep 9, 2009, at 3:18 PM, Joel Reymont wrote:
James,
I thought preinit was the standard evaltime. Am I wrong?
Is it different on Snow Leopard?
I believe it is postinit on Snow Leopard.
James M