It depends on the number of activities that DTrace will be recording.
If it's a small number, the load will be light; if a large number, the load
will be heavier. With a large number you'd want to look at tuning aggsize and
aggrate. It also depends on how many CPUs there are.
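For reference, both knobs can be set from inside the script with pragmas
(the values below are only illustrative, not recommendations):

    /* Hypothetical tuning: enlarge the per-CPU aggregation buffer and
       slow down how often userland DTrace reads it out of the kernel. */
    #pragma D option aggsize=4m
    #pragma D option aggrate=5sec

A larger aggsize reduces the chance of aggregation drops; a slower aggrate
means fewer trips into the kernel at the cost of staler data.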

You might reduce the load by having separate @r... and @w... aggregations
for the read and write cases. That avoids a key that is essentially of no
use, and as a further win you'll have more effective space available for
storing aggregation data. The trade-off is that you're now doubling the
number of extractions from the kernel by the userland DTrace process, as
well as doubling the kernel memory consumption.
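As a sketch of what that split might look like (I don't have the original
script in front of me, so this uses the generic io provider rather than
whatever probes it actually uses):

    /* Instead of one aggregation keyed by direction, e.g.
       @[args[0]->b_flags & B_READ ? "read" : "write"] = sum(...),
       keep two unkeyed aggregations: */

    io:::start
    /args[0]->b_flags & B_READ/
    {
            @r = sum(args[0]->b_bcount);
    }

    io:::start
    /!(args[0]->b_flags & B_READ)/
    {
            @w = sum(args[0]->b_bcount);
    }

Each unkeyed aggregation is a single counter per CPU, which is what buys
back the space the string key was consuming.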

Jim Litchfield
============
Roman Naumenko wrote:
Works!

Is there much overhead in having this script run from time to time on a loaded storage server? It's for 10-20 targets on 1GigE interfaces working against an 8-disk RAID-10 array.

--
Roman

_______________________________________________
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org