Re: [dtrace-discuss] How to dig deeper

2008-12-05 Thread przemolicc
On Fri, Dec 05, 2008 at 05:40:19AM -0800, Hans-Peter wrote:
 Ok, thanks a lot so far.
 
 We have planned down time for remounting the filesystems this weekend.
 We will see what happens.

You can remount it online:

mount -o remount,forcedirectio /mountpoint
mount -o remount,noforcedirectio /mountpoint
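
To verify that the remount took effect (a quick check, assuming a standard
Solaris /etc/mnttab), look for the option in the mount table:

grep forcedirectio /etc/mnttab

Each filesystem's entry lists its active mount options, so the mount point
should show forcedirectio after the first command and lose it after the
second.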

Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/

___
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org


Re: [dtrace-discuss] lint and statically defined tracing for user applications

2008-12-05 Thread David Bustos
Quoth David Bustos on Mon, Dec 01, 2008 at 02:20:45PM -0800:
 I added some static probes to my daemon, but lint complains:
 
   warning: name used but not defined: __dtrace_configd___import__request__bad 
 in file_object.c(2134) (E_NAME_USED_NOT_DEF2)
 
 Is there a standard way to fix this?

Apparently not.  Let me propose one.

I changed my provider definition file from

-- configd_sdt.d 
provider configd {
probe import__request__bad(string);
};
-

to

-- configd_sdt.c 
#ifndef lint

provider configd {
probe import__request__bad(string);
};

#else

/* ARGSUSED */
void
__dtrace_configd___import__request__bad(unsigned long a)
{
}

#endif
-

In the Makefile, I added -C to the dtrace -G options and configd_sdt.c
to the lint arguments.  Note that lint requires the .c extension
rather than the .d extension.
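
For reference, a sketch of the build steps this implies (hypothetical file
and object names, not the actual configd makefile):

# -C makes dtrace -G run the provider file through cpp, so the
# #ifndef lint guard above takes effect.
dtrace -G -C -s configd_sdt.c -o configd_sdt.o file_object.o

# lint sees the stub function instead of the provider block.
lint file_object.c configd_sdt.c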


David
___
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org


Re: [dtrace-discuss] Is the nfs dtrace script right (from nfsv3 provider wiki)?

2008-12-05 Thread Jim Mauro
Are you referring to nfsv3rwsnoop.d?

The TIME(us) value from that script is not a latency measurement,
it's just a time stamp.

If you're referring to a different script, let us know specifically
which script.
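
If what you want is actual latency, the start/done probe pairs can be
timed directly. A minimal, untested sketch (assuming your build has the
nfsv3 provider):

#!/usr/sbin/dtrace -s

/*
 * Time NFSv3 reads and writes from op-*-start to op-*-done, keyed
 * by transaction id, and report a latency distribution in ns.
 */
nfsv3:::op-read-start,
nfsv3:::op-write-start
{
        start[args[1]->noi_xid] = timestamp;
}

nfsv3:::op-read-done,
nfsv3:::op-write-done
/start[args[1]->noi_xid] != 0/
{
        @[probename] = quantize(timestamp - start[args[1]->noi_xid]);
        start[args[1]->noi_xid] = 0;
}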

/jim


Marcelo Leal wrote:
 Hello there,
  Ten minutes of trace (latency), using the nfs dtrace script from the
 nfsv3 provider wiki page, I got total numbers (us) like:
  131175486635
   ???

  thanks!
   
___
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org


[dtrace-discuss] Trying to identify writer and/or reason for iowrite.

2008-12-05 Thread Robert Alatalo
Hello,

Running the iotop dtrace script, we see many entries like

  UID    PID   PPID CMD        DEVICE  MAJ MIN D        BYTES
    0      3      0 fsflush    md3      85   3 W          512
...
...
    0      0      0 sched      ssd22   118 176 W     80538112
    0      0      0 sched      ssd18   118 144 W     80585728
    0      0      0 sched      ssd24   118 192 W     80730624
    0      0      0 sched      ssd19   118 152 W     80762880
    0      0      0 sched      ssd23   118 184 W     80764928
    0      0      0 sched      ssd25   118 200 W     80965632
    0      0      0 sched      ssd20   118 160 W     81029120
    0      0      0 sched      ssd21   118 168 W     81132032

In the iostat we see things like

device    r/s    w/s   kr/s    kw/s  wait actv  svc_t  %w  %b
...
...
ssd18     0.0  157.2    0.0 17914.5   9.2 13.3  142.9  63  75
ssd19     0.0  161.4    0.0 17887.7   9.6 13.7  144.6  65  76
ssd20     0.0  166.4    0.0 17922.0   8.3 12.7  126.1  58  74
ssd21     0.0  157.6    0.0 17880.0   9.3 13.4  144.1  64  75
ssd22     0.0  153.3    0.0 17867.7   9.2 13.4  147.3  63  75
ssd23     0.0  154.3    0.0 17905.6   8.8 13.0  141.5  61  74
ssd24     0.0  160.4    2.1 17915.4   9.2 13.4  141.3  63  75
ssd25     0.0  160.7    0.0 17934.5   9.7 13.8  145.8  66  76


Can anyone suggest a different script, or help modify the iotop script as
included in the toolkit so that it either prints stack traces or, better
yet, prints stack traces only when %busy goes above some threshold, or
some other way of limiting the data to go through?

thanks in advance,
Robert
___
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org


Re: [dtrace-discuss] Trying to identify writer and/or reason for iowrite.

2008-12-05 Thread Jim Mauro
The problem you're running into is that disk IO operations tend to occur
asynchronously to the thread that initiated the IO, so when the IO
provider probe fires, execname shows the process name for PID 0.
This is not uncommon when chasing disk and network IOs. You
need to capture the write further up the stack.

The easiest method for determining which process(es) are writing is
to use either the syscall provider, or the fsinfo provider (depending
on which release of Solaris you're running, fsinfo may not be there).
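
For example, if fsinfo is present, a minimal (untested) sketch that
attributes writes at the filesystem layer, where the calling thread is
still on CPU:

#!/usr/sbin/dtrace -s

/* Count writes per process and pathname at the VFS layer. */
fsinfo:::write
{
        @[execname, pid, args[0]->fi_pathname] = count();
}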

Use the syscall provider and see which system calls are being used to
generate disk writes - it's probably write(2), but it may be any of:

nv98> dtrace -l -P syscall | grep write
76295    syscall                               write entry
76296    syscall                               write return
76497    syscall                              writev entry
76498    syscall                              writev return
76597    syscall                              pwrite entry
76598    syscall                              pwrite return
76691    syscall                            pwrite64 entry
76692    syscall                            pwrite64 return

#!/usr/sbin/dtrace -s

syscall::write:entry,
syscall::writev:entry,
syscall::pwrite:entry,
syscall::pwrite64:entry
{
        @[pid, probefunc] = count();
}

Once you have the correct system call and process name(s), fine-tune
the DTrace script and grab a user stack:

#!/usr/sbin/dtrace -s

syscall::write:entry
/ pid == PID_OF_INTEREST /
{
        @[ustack()] = count();
}

The above assumes it's all write(2) system calls.
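
As an aside (a variant, not required): the $target macro variable lets
you attach to a running process instead of editing the PID into the
script; run it as dtrace -s script.d -p PID_OF_INTEREST.

#!/usr/sbin/dtrace -s

/* Same aggregation; $target is the PID passed via -p. */
syscall::write:entry
/pid == $target/
{
        @[ustack()] = count();
}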

You can determine which files by grabbing arg0 when the
probe fires. Depending on which release of Solaris you're
running, you can use the fds array to get the file path:

#!/usr/sbin/dtrace -s

syscall::write:entry
{
        @[execname, fds[arg0].fi_pathname] = count();
}


If your version of Solaris is older and does not have fds available,
just track arg0 (the file descriptor), and run pfiles on the process to
map the file descriptor to the file.
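
In that case, a minimal sketch that keys on the raw descriptor (untested;
same caveats as above):

#!/usr/sbin/dtrace -s

/*
 * No fds[] array: aggregate on the file descriptor (arg0), then map
 * fd to pathname afterwards with pfiles on the reported PIDs.
 */
syscall::write:entry
{
        @[execname, pid, arg0] = count();
}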

HTH,
/jim



Robert Alatalo wrote:
 Hello,

   Running the iotop dtrace script, we see many entries like
 [... iotop and iostat output trimmed; see the original message above ...]

 thanks in advance,
 Robert
___
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org


Re: [dtrace-discuss] lint and statically defined tracing for user applications

2008-12-05 Thread Adam Leventhal
Hey David,

Nice trick. Sounds like a good RFE: have dtrace -G generate some
lint-friendly output.

Adam

On Fri, Dec 05, 2008 at 10:49:40AM -0800, David Bustos wrote:
 [... quoted message trimmed; see David's mail earlier in this thread ...]

-- 
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
___
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org