Hi Peter,

> 1) Section 2 talks about args[0] and args[1], yet the examples use
> arg0 and arg1. This may be just cosmetic, but it might be worth
> being consistent. Also, should the description be specific about
> what data type is given for args[0] and args[1], or is that implicit
> by saying they are program counter values?

Oops. I shouldn't refer to the typed argument array here as the
arguments are not presented through it. That should be arg0 and arg1.
Thanks.
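
To make that concrete, here's a rough sketch of how I'd expect the two
untyped args to be consumed (an assumption for illustration only: arg0
carries the kernel PC and arg1 the user-land PC at the time of the
overflow, with whichever side doesn't apply being zero; the aggregation
names are arbitrary):

cpc:::BU_fill_req_missed_L2-all-0x7-10000
/arg0 != 0/
{
        /* assumed: overflow taken in the kernel, arg0 is the kernel PC */
        @kern[func(arg0)] = count();
}

cpc:::BU_fill_req_missed_L2-all-0x7-10000
/arg1 != 0/
{
        /* assumed: overflow taken in user-land, arg1 is the user PC */
        @user[ufunc(arg1)] = count();
}

Splitting the clauses on whichever arg is non-zero keeps kernel and
user attribution separate without needing the typed args[] array.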

> 2) Example 3 (and hence 4) sounds unclear to me given the discussion
> last week. It still leaves open the interpretation that the L2
> cache misses being counted are all caused by the executable
> "brendan". Given the discussion last week, it would be clearer to
> describe it as being a sampling of what the brendan executable was
> doing each time the L2 cache miss counter hit the target. It might be
> useful to add a second clause to the example 3 script to count events
> that happened when some other executable was executing. This makes
> the "brendan" counts a more effective drill-down into the total set of
> L2 cache miss events. Did I understand last week's discussion correctly?
>
> cpc:::BU_fill_req_missed_L2-all-0x7-10000
> /execname == "brendan"/
> {
>         @[ufunc(arg1)] = count();
> }
>
> cpc:::BU_fill_req_missed_L2-all-0x7-10000
> /execname != "brendan"/
> {
>         @["OtherExecutable"] = count();
> }

I didn't alter these examples because I added a paragraph in section
"B1 - Probe Format" which contains an example to cover this off. I
explicitly mention the fact that the events may not all be generated
by the executable. Also bear in mind that this document is for
architecture review and it's a different thing to the user guide,
where I'll be a lot more verbose.

> 3) It might also be helpful to have an example that keeps a running
> total of some performance counter, and then periodically samples that
> counter during some other event of interest. I.e., we could use the
> dtrace tools to keep a running count of L2 cache misses, and then wake
> up every several msec and sample both who is running and what the
> current counts are. (In such an application, we would just be using
> dtrace as a quick way of enabling and disabling the specific counters
> we want to track.)

The user guide chapter will have more and different examples. I'll
have a play around with that idea, but I'm not sure how useful it is
to correlate values that we've been counting with a piece of data
such as the current onproc thread. Still, you never know till you've
tried it.

Thanks.

Jon.

>
> Peter
>
> Jon Haslam wrote:
>>> Tracing Fans,
>>>
>>> I know it's been a long time in coming but the CPU Performance
>>> Counter (CPC) provider is almost here! The code is currently in
>>> for review and a proposed architecture document is attached here
>>> for review.
>>
>> Many thanks to all those that gave me feedback on this proposal.
>> A revised version is attached which we'll hopefully submit shortly.
>> The additions I've done to the original are really just to try and
>> be a bit more verbose about the behaviour of the provider. For
>> those that are interested, additions were made to Sections
>> "B1 - Probe Format" and "B3 - Probe Availability". Also the
>> default minimum overflow rate has been lowered from 10000 to
>> 5000.
>>
>> If any of the changes make you violently ill, please let me know.
>>
>> Jon.

_______________________________________________
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org