> prstat -mL is giving a by-thread summary, which is going to be better
> than either prstat -m, pfilestat, or anything else that tries to
> represent multiple-thread info by-process.
Brendan, thanks for the quick reply. Are you suggesting that a single-threaded
application should show closer numbers in the two tools? If so, then I'm still
curious about the differences I'm seeing. For example, here are
(representative) snapshots of both commands for the same (single-threaded)
process:
prstat -mL -p 10997 5
=====================

   PID USERNAME  USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
 10997 40011      14 6.1 0.0 0.0 0.0 0.0  79 1.1 730  19  3K   0 postgres/1

[Note: LAT = 1.1% here.]
./pfilestat 10997
=================

     STATE   FDNUM      Time  Filename
      read       4        0%  /zones/dbzone05/<snip>
      read      17        0%  /zones/dbzone05/<snip>
     write      20        0%  /zones/dbzone05/<snip>
      read      19        0%  /zones/dbzone05/<snip>
      read      23        0%  /zones/dbzone05/<snip>
     write      19        1%  /zones/dbzone05/<snip>
   sleep-w       0        2%
   waitcpu       0        4%
   running       0       13%
     sleep       0       76%
[snip]

Total event time (ms): 4999   Total Mbytes/sec: 1

[Note: waitcpu = 4% here.]
I'm fine with the answer that they are simply measuring different things. I'm
just curious what accounts for the difference between prstat's 1.1% LAT and
pfilestat's 4% waitcpu.
Again, I'm not really complaining. I think these scripts are going to be a
tremendous help in getting started. I'm just concerned that I'm missing
something about the internal workings of DTrace. For example, something like
"the sched:::* probes are done in a different context than the application and
therefore the numbers don't match".
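For reference, here is a minimal D sketch of how run-queue wait could be
measured with the sched provider. This is my own illustration of the general
technique, not pfilestat's actual logic; the probe names and arguments are the
real sched provider ones, but everything else is an assumption:

```d
#!/usr/sbin/dtrace -qs
/*
 * SKETCH ONLY -- not the pfilestat implementation.
 * Sum the time each LWP of target PID $1 spends on the run queue
 * (enqueue -> on-cpu), roughly the quantity prstat -mL shows as LAT.
 */

sched:::enqueue
/args[1]->pr_pid == $1/
{
	/*
	 * enqueue fires in the context of the thread doing the
	 * enqueuing, so key the start time by the target LWP id
	 * rather than using self-> variables.
	 */
	qstart[args[0]->pr_lwpid] = timestamp;
}

sched:::on-cpu
/pid == $1 && qstart[curlwpsinfo->pr_lwpid]/
{
	@lat["run-queue wait (ns)"] =
	    sum(timestamp - qstart[curlwpsinfo->pr_lwpid]);
	qstart[curlwpsinfo->pr_lwpid] = 0;
}

tick-5sec
{
	printa(@lat);
	clear(@lat);
}
```

If one tool brackets the interval this way (enqueue to on-cpu) while the other
counts, say, all off-cpu time in a runnable state, the two can legitimately
disagree over a 5-second sample, quite apart from microstate accounting being
done by the kernel rather than by DTrace probes.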
Bob
--
This message posted from opensolaris.org
_______________________________________________
dtrace-discuss mailing list
[email protected]