No, I'm not thinking about whether the numbers are good or bad for now. Because
of that little bug in the first script, I'm just trying to figure out whether
the numbers are OK. ;-)
Like Max said, the sum of all the I/O times can be greater than the tracing
period. The only problem was the two days of the first script,
Marcelo Leal wrote:
Hello all...
Thanks a lot for the answers! I think the problem is almost fixed. Every
dtrace documentation says to use predicates to guarantee the relation between
the start/done probes... Max was the only one paying attention reading the
docs. ;-)
Actually, this
I think (us) means microseconds. There is a division by 1000 in the source
code...
Leal
[http://www.eall.com.br/blog]
--
This message posted from opensolaris.org
___
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org
Hi Marcelo,
Marcelo Leal wrote:
I think (us) is microseconds. There is one division by 1000 on the source
code...
Oops. You're right. I did not see that. (That might explain
the 4-8 nanosecond I/Os, which I did think seemed pretty fast.
They are actually 4-8 microseconds.) So, you want
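For what it's worth, the unit conversion being discussed is easy to see in a
minimal script. DTrace's timestamp variable counts nanoseconds, so dividing the
delta by 1000 before quantizing makes the buckets microseconds, which is why
the header says (us). A sketch, not the exact script from the thread (and note
that self-> matches start/done per thread, which the rest of this thread shows
is not safe for an NFS server):

- cut here ---
#!/usr/sbin/dtrace -s

nfsv3:::op-read-start
{
        self->ts = timestamp;   /* nanoseconds */
}

nfsv3:::op-read-done
/self->ts/
{
        /* nanosecond delta / 1000 = microseconds, hence "(us)" */
        @["read (us)"] = quantize((timestamp - self->ts) / 1000);
        self->ts = 0;
}
- cut here ---

So a value of 4-8 in the (us) distribution really is a 4-8 microsecond I/O,
not 4-8 nanoseconds.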
Hi Marcelo,
Marcelo Leal wrote:
OK, but is that a bug, or is it supposed to work like that?
Can we not use DTrace on multiprocessor systems?
Sorry, but I don't get it...
I don't consider this a bug. I think it depends on what you are trying
to measure.
The script you are using measures
Sorry, but I do not agree.
We are talking about an NFSv3 provider, not about how many CPUs there are on
the system. I do not have the knowledge to discuss the implementation details
with you, but from a user's point of view, I think those numbers don't make
sense. If the fact that the
No bug here - we can absolutely use DTrace on MP systems,
reliably and with confidence.
The script output shows some nasty outliers for a small percentage
of the reads and writes happening on the server. Time to take a closer
look at the IO subsystem. I'd start with iostat -znx 1, and see what
Hello Jim!
Actually, I can repeat it... every time I ran a D script to collect some data
I got some (how do you call it? nasty :) values. Look:
Fri Dec 5 10:19:32 BRST 2008
Fri Dec 5 10:29:34 BRST 2008
NFSv3 read/write distributions (us):
read
value  ------------- Distribution ------------- count
Hmmm. Something is certainly wrong. 11 writes at 137k - 275k seconds
(which is where your 1.5M seconds sum is coming from) is bogus.
What version of Solaris is this ('uname -a' and 'cat /etc/release')?
You're running this on an NFS server, right (not client)?
Is this a benchmark? I ask because
Hello Jim, this is not a benchmark. I changed the filenames for privacy...
This is an NFS server, yes.
# uname -a
SunOS test 5.11 snv_89 i86pc i386 i86pc
# cat /etc/release
Solaris Express Community Edition snv_89 X86
Copyright 2008 Sun Microsystems, Inc. All
Also (I meant to ask) - are you having performance problems, or
just monitoring with the NFS provider scripts?
Thanks,
/jim
Hi,
I have looked at the script, and there is no correspondence between
start and done.
So, I am not sure how this script is supposed to work.
I think there should be a predicate in the done probes...
The way the script is written, it assumes that for any start, the done
that fires after it is
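A sketch of the fix Max is describing: key the start timestamp on the NFS
transaction id from the provider's arguments, and add a predicate so a done
probe only fires for a start we actually recorded. This is written from the
thread's description (assuming the nfsv3 provider's args[1]->noi_xid field),
not copied from the original script:

- cut here ---
#!/usr/sbin/dtrace -s

nfsv3:::op-read-start, nfsv3:::op-write-start
{
        start[args[1]->noi_xid] = timestamp;
}

/* predicate: only match a done probe to a start we saw */
nfsv3:::op-read-done, nfsv3:::op-write-done
/start[args[1]->noi_xid] != 0/
{
        this->elapsed = timestamp - start[args[1]->noi_xid];
        @rw[probename == "op-read-done" ? "read" : "write"] =
            quantize(this->elapsed / 1000);     /* ns -> us */
        start[args[1]->noi_xid] = 0;
}
- cut here ---

Without the predicate, a done probe with no recorded start subtracts from
zero, producing huge bogus latencies like the 137k - 275k second outliers.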
Oops, that would be a nice test, but something I cannot do. ;-)
[http://www.eall.com.br/blog]
Leal.
Some kind of both... ;-)
I was investigating a possible performance problem, and I'm not sure whether
it is in the NFS server or not.
So I was faced with those weird numbers. I think one thing is not related to
the other, but we need to fix whatever the problem is with the script or the
provider, to
D'oh! Good spot, Max - feeling sheepish that I missed that.
Marcelo - add the predicate to the done probes as per Max's
message, and let's see where that takes us.
Thanks,
/jim
On Tue, Dec 09, 2008 at 05:04:49PM +0100, [EMAIL PROTECTED] wrote:
Hi,
I have looked at the script, and there is no correspondence between
start and done.
So, I am not sure how this script is supposed to work.
I think there should be a predicate in the done probes...
The way the script is
Hello,
Are you referring to nfsv3rwsnoop.d?
The TIME(us) value from that script is not a latency measurement, it's just a
time stamp.
If you're referring to a different script, let us know specifically which
script.
Sorry, when I wrote latency, I assumed that you would know
Hello Jim,
- cut here ---
Thu Dec 4 19:08:39 BRST 2008
Thu Dec 4 19:18:02 BRST 2008
- cut here ---
NFSv3 read/write distributions (us):
read
value  ------------- Distribution ------------- count
2 |
36 hours... ;-))
Leal.
Are you referring to nfsv3rwsnoop.d?
The TIME(us) value from that script is not a latency measurement,
it's just a time stamp.
If you're referring to a different script, let us know specifically
which script.
/jim
Marcelo Leal wrote:
Hello there,
Ten minutes of trace (latency), using the