Re: [dtrace-discuss] dtrace performance overhead

2009-06-03 Thread tester
Jim, thanks. Here is a snapshot from vmstat 5 output during the failure. I am not sure how this is going to be formatted on the forum.
 kthr      memory            page                    disk           faults      cpu
 r b w   swap  free  re  mf pi po fr de sr rm s0 s1 s2   in   sy   cs us sy id
 0 0

Re: [dtrace-discuss] dtrace performance overhead

2009-06-03 Thread Jim Mauro
D'oh! Disregard that last question (address space) - my brain was thinking thread create failures - it's not applicable to fork failures. My bad. The system memory and swap space health checks still apply, as well as process count - grab some "sar -v 1 60" samples. /jim Jim Mauro wrote:

Re: [dtrace-discuss] dtrace performance overhead

2009-06-03 Thread Jim Mauro
"not enough space" indicates an errno 28 ENOSPC, which isn't listed is the fork man page under ERRORS. Are you sure it's fork(2) that's failing? It may be errno 12, ENOMEM. So what does a general memory health profile of the system look like? Lots of free memory? Plenty of swap space? How about

Re: [dtrace-discuss] dtrace performance overhead

2009-06-03 Thread tester
Thanks Jim. Will use this during the next testing window.

Re: [dtrace-discuss] dtrace performance overhead

2009-06-03 Thread Jim Mauro
Try this:

#!/usr/sbin/dtrace -s

#pragma D option quiet

extern int errno;

syscall::forkall:return,
syscall::vfork:return,
syscall::forksys:return,
syscall::fork1:return
/ arg0 == -1 || arg1 == -1 /
{
        printf("FORK FAILED, errno: %d, arg0: %d, arg1: %d\n", errno, arg0, arg1);
}
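
(A brief usage note, not in Jim's message: saved under an assumed name such as forkfail.d, the script above can be run directly.)

  # forkfail.d is a hypothetical filename for the script above
  dtrace -s forkfail.d
  # or make it executable and let the #!/usr/sbin/dtrace -s line do the work
  chmod +x forkfail.d && ./forkfail.d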

Re: [dtrace-discuss] dtrace performance overhead

2009-06-03 Thread tester
Hi Jim, The app software doesn't produce an errno in its logs (bad software, although from a leading vendor; I think they inherited it), but an error string says "not enough space". I tried grepping some of the header files but could not find a match. /var/adm/messages: that's the first thing I lo

Re: [dtrace-discuss] dtrace performance overhead

2009-06-03 Thread tester
Michael, Thanks. I think that's what the script from wiki.sun.com (specopen.d) does. Did I misinterpret your suggestion? Thanks

Re: [dtrace-discuss] dtrace performance overhead

2009-06-03 Thread Jim Mauro
Hi (ummm, Tester?) - First and foremost, what's the errno on the fork failure? 99% of the time, the errno information is enough to figure out why forks are failing. Second, make sure you look in /var/adm/messages - if fork is failing because of a system resource issue, you'll often get a syslog

Re: [dtrace-discuss] dtrace performance overhead

2009-06-03 Thread Michael Schuster
On 06/02/09 18:30, tester wrote: Jim, Thanks. You are right, I was using specopen.d, but looking for fork errors instead of open. I did not know that the probe has to fire before the predicate gets evaluated. The 40% increase in load while dtracing now makes sense. I would like to see the code

Re: [dtrace-discuss] dtrace performance overhead

2009-06-02 Thread tester
Jim, Thanks. You are right, I was using specopen.d, but looking for fork errors instead of open. I did not know that the probe has to fire before the predicate gets evaluated. The 40% increase in load while dtracing now makes sense. I would like to see the code path during a fork failure (and

Re: [dtrace-discuss] dtrace performance overhead

2009-06-02 Thread Jim Mauro
Ah, OK - I think I get it. tester wrote: "counting system call process during this interval: Dtrace came on top ioctl dtrace 10609". Got it. DTrace executed 10,609 system calls during your sampling period, more than any other process. I often filter dtrace out in a predicate; / execname !=
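
(Jim's predicate is cut off here; it presumably continues along the lines of execname != "dtrace". A minimal sketch of a per-process syscall counter filtered that way, my illustration rather than Jim's script:)

  #!/usr/sbin/dtrace -s

  #pragma D option quiet

  syscall:::entry
  / execname != "dtrace" /
  {
          /* count system calls per process name and syscall, excluding dtrace itself */
          @calls[execname, probefunc] = count();
  }

  tick-10sec
  {
          /* exit after ten seconds; the aggregation is printed on exit */
          exit(0);
  }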

Re: [dtrace-discuss] dtrace performance overhead

2009-06-02 Thread Jim Mauro
I'm sorry, but I am unable to parse this. What is the question here? Thanks, /jim tester wrote: "counting system call process during this interval: Dtrace came on top ioctl dtrace 10609 I am sure if that is from the speculative dtrace script or the script used to count the system calls. Th

Re: [dtrace-discuss] dtrace performance overhead

2009-06-02 Thread tester
Counting system calls per process during this interval, dtrace came out on top: ioctl dtrace 10609. I am not sure if that is from the speculative dtrace script or from the script used to count the system calls. Thanks

Re: [dtrace-discuss] dtrace performance overhead

2009-06-02 Thread Jim Mauro
Which example are you using, specopen.d (the script that instruments every fbt probe)? Please post or be more precise about which script you're using. If you're using specopen.d, then you're enabling on the order of 30,000 probes. That's going to add up, even at the very reasonable cost of ab
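
(An aside, not from Jim's message: dtrace's list mode offers one way to check how many probes a script would enable, assuming the wiki script is saved locally as specopen.d.)

  # list, without enabling, the probes specopen.d would match, then count them
  dtrace -l -s specopen.d | wc -l

  # for comparison, the total number of fbt probes on this system
  dtrace -l -n 'fbt:::' | wc -l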

[dtrace-discuss] dtrace performance overhead

2009-06-02 Thread tester
Hi, On a T5220, when using speculative tracing there is a significant increase in system load. I am using the examples from http://wikis.sun.com/display/DTrace/Speculative+Tracing. The system call traced is fork instead of open64. Can that script cause such a load? The system itself withou
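
(For reference, a minimal sketch of what the wiki's specopen.d pattern looks like when pointed at fork rather than open64. The fork probe names follow Jim's fork script elsewhere in the thread and the failure test follows the wiki example; treat this as an illustration under those assumptions, not the exact script the poster ran.)

  #!/usr/sbin/dtrace -Fs

  syscall::forksys:entry,
  syscall::fork1:entry
  {
          /* start a new speculation for this thread */
          self->spec = speculation();
          speculate(self->spec);
          printf("fork attempt by %s (pid %d)", execname, pid);
  }

  fbt:::
  / self->spec /
  {
          /* speculatively trace the kernel code path taken during the fork */
          speculate(self->spec);
  }

  syscall::forksys:return,
  syscall::fork1:return
  / self->spec /
  {
          /* record the errno in the speculative buffer as well */
          speculate(self->spec);
          trace(errno);
  }

  syscall::forksys:return,
  syscall::fork1:return
  / self->spec && errno != 0 /
  {
          /* the fork failed: commit (keep) the speculative trace data */
          commit(self->spec);
          self->spec = 0;
  }

  syscall::forksys:return,
  syscall::fork1:return
  / self->spec && errno == 0 /
  {
          /* the fork succeeded: throw the speculative trace data away */
          discard(self->spec);
          self->spec = 0;
  }

Note that the fbt::: clause enables every kernel function entry and return probe (on the order of 30,000 probes on a typical system), which is the overhead Jim describes above; the predicate only decides whether the firing is recorded, not whether the probe fires.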