I'm not sure I understand what is being asked here, but I'll take a shot...

Note that it is virtually impossible to write a piece of software that is
guaranteed to have sufficient space to buffer a given amount of data when
the rate and size of the data flow are unknown. This is one of the
robustness features of DTrace - it's smart enough to know that, and smart
enough to let the user know when data cannot be buffered.

Yes, buffers are allocated per CPU. There are several buffer types,
depending on the dtrace invocation. Minimally, principal buffers are
allocated per CPU when a DTrace consumer (dtrace(1M)) is executed. Read:
http://wikis.sun.com/display/DTrace/Buffers+and+Buffering
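
For example, a script can size the per-CPU principal buffers up front. A
minimal sketch (the 4m value is an illustrative guess; the right size
depends entirely on your probe rate):

    #pragma D option bufsize=4m

    /* printf() records land in the per-CPU principal buffers. */
    syscall::read:entry
    {
            printf("%s called read()\n", execname);
    }

The same setting can be passed on the command line as dtrace -b 4m.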

The "self->read" describes a thread local variable, one of several variable
types available in DTrace. It defines the variable scope - each kernel thread
that's on the CPU when the  probe(s) fires will have it's own copy of a
"self->" variable. 

There is only one kernel dispatcher, not one per CPU. There are per-CPU run
queues managed by the dispatcher.
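
For example, this one-liner sketch shows those probes firing in thread
context across every CPU (illustrative only):

    /* Count on-cpu events per CPU. */
    sched:::on-cpu
    {
            @oncpu[cpu] = count();
    }

sched:::on-cpu and sched:::off-cpu fire in the context of the thread being
switched onto or off of the CPU, not in some per-CPU dispatcher thread.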

As for running a DTrace script for hours/days/weeks, I have never been
down that road. It is theoretically possible, of course, and seems a good
use case for speculative buffers or a ring buffer policy.
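
With a ring buffer policy the principal buffers simply wrap, keeping only
the most recent records, so a long-running script can't outgrow them
(sizes below are illustrative):

    # dtrace -x bufpolicy=ring -b 8m -s script.d

or, equivalently, inside the script itself:

    #pragma D option bufpolicy=ring
    #pragma D option bufsize=8m

Note that with the ring policy the buffer contents are only consumed when
the consumer exits, which suits flight-recorder style tracing.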

We cannot guarantee it will execute without errors ("dynamic variable
drops", etc.).
We can guarantee you'll know when errors occur.
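
Dynamic variable drops specifically can usually be mitigated by growing
the dynamic variable space and by zeroing self-> variables when you're
done with them. A sketch (16m is an illustrative guess, not a
recommendation):

    #pragma D option dynvarsize=16m

When drops do occur, dtrace(1M) reports them on stderr, e.g.
"dtrace: 103 dynamic variable drops", so they never pass silently.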

How can such guarantees be made with a dynamic tool like dtrace?
Does your customer know up-front how much data will be traced/processed/
consumed, and at what rate?

Read this:
http://blogs.oracle.com/bmc/resource/dtrace_tips.pdf

Thanks
/jim

On Jul 1, 2011, at 9:30 AM, Scott Shurr wrote:

> Hello,
> I have a customer who has some dtrace questions.  I am guessing that someone 
> knows the answer to these, so I am asking here.  Here are the questions:
> 
> **********
> In this document, we describe our assumptions about how DTrace uses its 
> memory. Most assumptions derive from [1]. We want these assumptions to be 
> validated by a DTrace expert from Oracle. This validation is necessary to 
> give us confidence that DTrace can execute for a long period of time (on 
> the order of weeks) alongside our software, without introducing errors due 
> to e.g. “dynamic variable drops”. In addition, we describe a problem we 
> are experiencing with our DTrace script, for which we would like your support.
> 
> [1] Sun Microsystems, Inc., “Solaris Dynamic Tracing Guide”, September 2008.
> Quotes from Solaris Dynamic Tracing Guide [1], with interpretation:
> •    “Each time the variable self->read is referenced in your D program, the 
> data object referenced is the one associated with the operating system thread 
> that was executing when the corresponding DTrace probe fired.”
> o    Interpretation: Per CPU there is a dispatcher with its own thread, 
> which executes when the sched:::on-cpu and sched:::off-cpu probes fire.
> •    “At that time, the ring buffer is consumed and processed. dtrace 
> processes each ring buffer in CPU order. Within a CPU's buffer, trace records 
> will be displayed in order from oldest to youngest.”
> o    Interpretation: There is a principal buffer per CPU.
> 
> 1) Impact on Business
> We have a number of assumptions that we would like to verify about DTrace.
> 
> 2) What is the OS version and the kernel patch level of the system?
> SunOS nlvdhe321 5.10 Generic_141444-09 sun4v sparc SUNW,T5240
> 
> 3) What is the Firmware level of the system?
> SP firmware 3.0.10.2.b
> SP firmware build number: 56134
> SP firmware date: Tue May 25 13:02:56 PDT 2010
> SP filesystem version: 0.1.22 
> **********
> Thanks
> 
> Scott Shurr| Solaris and Network Domain, Global Systems Support
> Email: scott.sh...@oracle.com
> Phone: 781-442-1352
> Oracle Global Customer Services
> 
> Log, update, and monitor your Service Request online using My Oracle Support
> 
> 

_______________________________________________
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org
