Hi

I'm trying to write a DTrace script (or a set of scripts) that will
adapt what it traces over time, based on previously traced data.
For example, my DTrace script may start by collecting some basic
statistics about a set of events, storing them in an aggregation keyed
by event type, and after a while, I would like to start collecting more
detailed information only about the event types that occur most
frequently.  There are various reasons why I would like this kind of
dynamic behavior:

- Collecting the detailed information may be more expensive, as it
 requires copying large data structures from the program's address
 space.  Thus, I do not want to collect it for *all* event types.

- My DTrace script can affect what information is made available by the
 traced program, by writing data into the program's address space and
 asking it to collect detailed information only about the events of
 interest.

- I may want to stop the program and attach a debugger only when a
 particular event type occurs.

[Note that I cannot simply stop the program and run it again with a
different script, as event types tend to change from run to run.]

Now, while I can easily use an aggregation to collect the basic
information, and throw out unimportant event types using trunc(),
DTrace does not allow a script to access this data and use it in its
predicates.  Note that in my case, even checking whether a key is
present in an aggregation would be sufficient -- I do not need to
access the actual aggregated values.
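
For concreteness, a minimal sketch of the first stage I have in mind
(the provider, probe, and argument names are placeholders standing in
for whatever the real provider exposes):

```d
/*
 * Sketch only: "myprog$target:::event" stands in for whatever probe
 * actually fires per event, and arg0 for the event-type identifier.
 */
myprog$target:::event
{
	@counts[arg0] = count();	/* basic statistics per event type */
}

tick-10s
{
	trunc(@counts, 5);	/* keep only the most frequent types */
	printa(@counts);	/* ...but the script itself cannot read these keys back */
}
```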

My questions:

- Is there any way for a DTrace script to test whether a particular key
  belongs to an aggregation?

- If not, do you have any other ideas for how I can achieve this kind
  of dynamically adapting tracing behavior?  Here are two ideas I have
  in mind that I would love to get your feedback on:

  - Use some kind of feedback loop: after truncating the aggregation,
    print it, and trace the "write" system calls that implement the
    printouts to "catch" the printed keys and store them in global
    variables used by the script.

    This option looks quite complicated to me, and I have no idea
    whether it can actually work, especially since I'm running the
    script using the Java DTrace API, so printouts are actually
    processed by the Java program...
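
    A very rough sketch of what I mean, assuming the consumer's output
    can be intercepted at all (the predicate and the parsing step are
    the parts I'm unsure about):

```d
/*
 * Speculative sketch: intercept the consumer's own printa() output as
 * it passes through write(2).  "java" is assumed to be the consumer
 * process here; parsing the keys back out of the buffer is the
 * unsolved part.
 */
syscall::write:entry
/execname == "java"/
{
	this->buf = copyin(arg1, arg2);
	/* ...somehow parse event-type keys out of this->buf and set
	 * interesting[key] = 1 in a global associative array... */
}
```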

  - Generate another script that runs in parallel and collects
    information about the event types of interest: with this option, we
    print the truncated aggregation (or scan it using the Java DTrace
    API), generate another script that traces only the events we're
    interested in, and run the latter in parallel with the first script
    (which keeps aggregating the basic statistics for all event types).

    I think this may work, but I'm worried about one issue: having two
    scripts with permissions to run destructive operations operating on
    the same process.  Will DTrace allow two scripts, both enabling
    identical pid<x> probes, to run in parallel with permissions to run
    destructive operations?  (The two destructive actions that I need
    are copyout() and stop().)
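
    The generated second script might look something like this, with
    the interesting event type baked in as a literal by the generator
    (the probe function name and the value 42 are placeholders):

```d
/*
 * Sketch of a generated script: the generator would fill in the event
 * type after scanning the truncated aggregation; "handle_event" is a
 * placeholder function name in the traced program.
 */
#pragma D option destructive

pid$target::handle_event:entry
/arg0 == 42/	/* generated: the event type of interest */
{
	/* the expensive, detailed collection goes only here */
	printf("detailed: type=%d\n", arg0);
	stop();		/* destructive: halt the process for the debugger */
}
```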

Thanks,
Yossi


_______________________________________________
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org
