On Nov 7, 2007 11:24 PM, Neelam <[EMAIL PROTECTED]> wrote:

> Currently I am running DTrace on a microbenchmark, just to validate the
> numbers before moving to more complex workloads. The microbenchmark has
> a single mutex lock. When I introduced some delay to make the critical
> section bigger, the difference between the gethrtime() measurement and
> DTrace was significantly reduced.
>
> Also, can someone explain more about aggregations? I have a couple of
> them in my bigger scripts, and I use various strings to differentiate
> the different numbers.
>
> Thanks for the help,
> Neelam
>
When you do something like:

  @foo["this case here"] = aggfunc1(args1);
  @foo["case two here"] = aggfunc2(args2);

DTrace has to do a substantial amount of work *each time these lines are
run* to hash the string into the correct aggregation bucket.  A better
approach is:


  @case1 = aggfunc1(args1);
  @case2 = aggfunc2(args2);

...
END {
   printa("Case one: %@d\n", @case1);
   printa("Case two: %@d\n", @case2);
}

Which removes all of the string processing.
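As a complete, minimal sketch of the named-aggregation pattern (the
syscall probes and the @reads/@writes names here are hypothetical
stand-ins for aggfunc1/aggfunc2 and your own probes):

  #!/usr/sbin/dtrace -s
  /*
   * Each case gets its own named aggregation, so the probe clauses
   * never hash a string key -- the aggregation is resolved at
   * compile time, not per firing.
   */
  syscall::read:entry
  {
          @reads = count();
  }

  syscall::write:entry
  {
          @writes = count();
  }

  END
  {
          printa("reads:  %@d\n", @reads);
          printa("writes: %@d\n", @writes);
  }

The trade-off is that the cases must be known when you write the
script; string keys are still the right tool when the set of keys is
only known at runtime (e.g. keyed on execname or probefunc).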


Cheers,
- jonathan
_______________________________________________
dtrace-discuss mailing list
[email protected]