On 05/05/2010 03:00, Darren Reed wrote:
On 2/05/10 02:32 AM, Alan Maguire wrote:
As per the subject line, the updated spec is attached for
the above fasttrack. In the interim, the project team have
worked offline with Kacheong and Darren to address their
concerns. We have also incorporated SACK, PMTU and
congestion window info into the tcpsinfo_t at Nico's
suggestion. Thanks!

Thanks, Alan, that looks great.

I do have one question though (but only indirectly about this case)...

When systems are under high load...
- will dtrace buffers be expanded to meet demand?
I'm not 100% sure of the details regarding how buffers
work in DTrace, but here's my understanding.

Buffers are needed for data recording, but the size of the
buffers required depends on what sorts of data recording
actions the user is doing. If the user is printf()ing large
amounts of packet data on a per-packet basis for example
(I suspect this is the case you have in mind?), the principal
buffer could indeed fill up before that data is displayed
(with printf(), as I understand it, the issue is the interval
between recording the data and reading it out for display).
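
For illustration, something like the following per-segment script is
the kind of heavy data-recording consumer I have in mind (this assumes
the tcp:::send probe with the ipinfo_t/tcpinfo_t argument layout from
the proposed spec, i.e. args[2] and args[4] respectively):

  tcp:::send
  {
          /* one record per outbound segment - heavy on buffer space */
          printf("%s:%u -> %s:%u  %u bytes\n",
              args[2]->ip_saddr, args[4]->tcp_sport,
              args[2]->ip_daddr, args[4]->tcp_dport,
              args[2]->ip_plength);
  }

Run against a busy interface, every segment generates a record in the
principal buffer, so this is the sort of script where drops can show up.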

There are multiple buffering policies - "switch" (the default),
"fill" and "ring". See

http://wikis.sun.com/display/DTrace/Buffers+and+Buffering

...for more details on each.

If the user expects buffer exhaustion will be an issue, they
can set the buffer size explicitly with the bufsize option.
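
For example, the buffer size can be bumped either on the command line
or from within the script itself (16m and script.d here are just
placeholders for illustration):

  # dtrace -b 16m -s script.d

or

  #pragma D option bufsize=16m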

- will performance be slowed to ensure there is no loss with respect to dtrace probes?
As I understand it, the DTrace probes will always fire,
but if buffers are full (in the default "switch" buffering policy case)
data drops are reported as something like the following:

 dtrace: 11 drops on CPU 0

For the "fill" policy, tracing is halted when the buffer is full.
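
If the "fill" policy is wanted, it can be selected in the usual way,
either on the command line or in the script:

  # dtrace -x bufpolicy=fill -s script.d

  #pragma D option bufpolicy=fill
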
- is there any scope for being able to communicate that dtrace has missed a packet (or two), should that happen if all of the buffers are full? (This is only important for TCP)
Data drops are counted on a per-CPU basis for the default "switch"
buffering policy, and the user is notified when such drops occur.

One thing I should probably note is that while it's certainly possible
to trace TCP data on a per-segment basis, I've found that I rarely end
up using scripts that are heavy on data-recording actions like this
in practice. Most of the use cases I've found involve asking questions
about how TCP activity relates to other system abstractions (e.g.
which processes or zones are sending a lot of TCP traffic?) or
they involve aggregating data reasonably coarsely (e.g. what is
the average connection/first-byte latency by host? What is the mean
round-trip time from transmission to ACK, averaged per connection? etc.).
In such cases, drops are rarely if ever encountered.
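
To give a flavour of that aggregating style, a connection-latency-by-host
script might look something like the following (again assuming the
connect-request/connect-established probes and the cs_cid/tcps_raddr
member names as proposed in the spec):

  tcp:::connect-request
  {
          /* note when the connection attempt started, keyed by conn id */
          ts[args[1]->cs_cid] = timestamp;
  }

  tcp:::connect-established
  /ts[args[1]->cs_cid]/
  {
          /* average time-to-establishment per remote host */
          @latency[args[3]->tcps_raddr] =
              avg(timestamp - ts[args[1]->cs_cid]);
          ts[args[1]->cs_cid] = 0;
  }

Each connection contributes a single small aggregation record rather than
per-segment output, which is why drops rarely arise with this style.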

Thanks!

Alan

Darren

