Re: [dtrace-discuss] output of ustack only shows the library call but no program call which called the library
Try increasing ustackframes; I think the default is 20, so you won't see the real entry point when the user stack is deeper than that. (Your first stack below is exactly 20 frames, which is why it stops inside libdtrace.so.1 instead of reaching the program's own frames.)

From: dtrace-discuss-boun...@opensolaris.org [mailto:dtrace-discuss-boun...@opensolaris.org] On Behalf Of Qihua Wu
Sent: May 4, 2009 9:59
To: dtrace-discuss@opensolaris.org
Subject: [dtrace-discuss] output of ustack only shows the library call but no program call which called the library

After I run dtrace -n 'syscall::brk:entry {ustack()}', I found that some of the output shows functions ONLY from libraries. But a library is not runnable by itself.

  0  59753  brk:entry
              libc.so.1`_brk_unlocked+0xa
              libc.so.1`sbrk+0x20
              libc.so.1`_morecore+0x12b
              libc.so.1`_malloc_unlocked+0x10d
              libc.so.1`malloc+0x2d
              libproc.so.1`optimize_symtab+0x6c
              libproc.so.1`Pbuild_file_symtab+0x787
              libproc.so.1`build_map_symtab+0x36
              libproc.so.1`Pxlookup_by_name+0x6d
              libproc.so.1`Plookup_by_name+0x1c
              libproc.so.1`ps_pglobal_lookup+0x1a
              librtld_db.so.1`_rd_reset64+0x10f
              librtld_db.so.1`rd_reset+0x58
              librtld_db.so.1`rd_new+0x6d
              libproc.so.1`Prd_agent+0x4f
              libproc.so.1`Pxlookup_by_addr+0x55
              libproc.so.1`Plookup_by_addr+0xc
              libdtrace.so.1`dt_print_ustack+0x16b
              libdtrace.so.1`dt_consume_cpu+0x3ae
              libdtrace.so.1`dt_consume_begin+0x92

=== The library is not runnable by itself, so why do all the functions come from libraries?

  0  59753  brk:entry
              libc.so.1`_brk_unlocked+0xa
              libc.so.1`sbrk+0x20
              libc.so.1`_morecore+0x27
              libc.so.1`_malloc_unlocked+0x10d
              libc.so.1`malloc+0x2d
              libproc.so.1`optimize_symtab+0x6c
              libproc.so.1`Pbuild_file_symtab+0x787
              libproc.so.1`build_map_symtab+0x36
              libproc.so.1`Pxlookup_by_addr+0x70
              libproc.so.1`Plookup_by_addr+0xc
              libdtrace.so.1`dt_print_ustack+0x16b
              libdtrace.so.1`dt_consume_cpu+0x3ae
              libdtrace.so.1`dt_consume_begin+0x92
              libdtrace.so.1`dtrace_consume+0x114
              libdtrace.so.1`dtrace_work+0xaa
              dtrace`main+0x902
              dtrace`_start+0x6c

== This one is easier to understand: dtrace is a process calling into some libraries, so it makes sense to have dtrace`_start+0x6c as the outermost function.
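For reference, a minimal sketch of raising the frame limit: ustackframes can be set on the dtrace command line or via a pragma in a D script (the value 100 here is arbitrary, chosen only for illustration):

  # On the command line:
  dtrace -x ustackframes=100 -n 'syscall::brk:entry { ustack(); }'

  # Or at the top of a D script:
  #pragma D option ustackframes=100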
Re: [dtrace-discuss] dtrace : isolating the slow write
You may try this:

Renil Thomas wrote:

Currently, the D script used here is as below:

#!/usr/sbin/dtrace -s

syscall::write:entry
/pid == $1 && tid == 3/
{
    self->timestamp_ = timestamp;
    self->traceme = 1;
}

syscall::write:return
/self->timestamp_ && pid == $1 && tid == 3/
{
    self->traceme = 0;
    @data["write"] = quantize(timestamp - self->timestamp_);
    @dataavg["write"] = avg(timestamp - self->timestamp_);
    self->need_attention = (timestamp - self->timestamp_ > 100) ? 1 : 0;
}

syscall::write:return
/self->need_attention/
{
    self->need_attention = 0;
    printf("Caught:");
    stack();
    ustack();
    /* or something else... */
}

alloccgblk:entry
/self->traceme/
{
    self->timestamp_alloccgblk = timestamp;
}

alloccgblk:return
/self->traceme && self->timestamp_alloccgblk/
{
    @data["alloccgblk"] = quantize(timestamp - self->timestamp_alloccgblk);
    @dataavg["alloccgblk"] = avg(timestamp - self->timestamp_alloccgblk);
    self->timestamp_alloccgblk = 0;
}

mapsearch:entry
/self->traceme/
{
    self->timestamp_mapsearch = timestamp;
}

mapsearch:return
/self->traceme && self->timestamp_mapsearch/
{
    @data["mapsearch"] = quantize(timestamp - self->timestamp_mapsearch);
    @dataavg["mapsearch"] = avg(timestamp - self->timestamp_mapsearch);
    self->timestamp_mapsearch = 0;
}

tick-60sec
{
    printa(@dataavg);
    printa(@data);
    exit(0);
}
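For completeness, the script takes the target process ID as its first macro argument ($1); assuming it were saved as write.d (a hypothetical file name), the invocation would look like:

  # dtrace -s write.d <pid>

Note that the tid == 3 predicate restricts the write tracing to a single thread of the target process.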
Re: [dtrace-discuss] How to dig deeper
It seems VOP_RWLOCK takes a long time: ufs_rwlock starts at line 12 of your trace but does not end until line 15611. Is there another application that keeps reading/writing on your system?

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Hans-Peter
Sent: December 3, 2008 21:41
To: dtrace-discuss@opensolaris.org
Subject: Re: [dtrace-discuss] How to dig deeper

Hello all,

I added a clause to my script:

sysinfo:::
/self->traceme == 1 && pid == $1/
{
    trace(execname);
    printf("sysinfo: timestamp : %d ", timestamp);
}

A subsequent trace created a file of about 19,000 lines. I loaded it into Excel to be able to subtract timestamps, etc. The longest jump in timestamp is between the first pair of savectx and restorectx calls, at line 70; I count 50 savectx/restorectx calls in my trace. But the actual physical write, as indicated by sysinfo, comes almost at the end of the file, directly after the ufs_write and fop_write calls end (line 18746):

 24  <- tsd_agent_get              oracle  timestamp 1795469946839100
 24  <- ufs_lockfs_top_vop_return  oracle  timestamp 1795469946841500
 24  <- ufs_lockfs_top_vop_return  oracle  timestamp 1795469946844300
 24  <- ufs_lockfs_end             oracle  timestamp 1795469946846700
 24  <- ufs_write                  oracle  timestamp 1795469946849600
 24  <- fop_write                  oracle  timestamp 1795469946853500  57,365,700
 24   | pwrite:syswrite            oracle  sysinfo timestamp 1795469946856800
 24   | pwrite:writech             oracle  sysinfo timestamp 1795469946860200

So it seems the actual write is not the bottleneck. I attached a zip file with the Excel document. Am I right in thinking that this is more an OS issue than a storage issue?

Regards,
Hans-Peter
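One way to dig further on the VOP_RWLOCK theory: a minimal, untested sketch (assuming ufs_rwlock is visible to the fbt provider on your kernel) that quantizes the time every process spends inside ufs_rwlock, which should reveal any competing readers/writers on the same file system:

#!/usr/sbin/dtrace -s
/*
 * Sketch: measure how long each process spends in ufs_rwlock
 * (the UFS implementation behind VOP_RWLOCK). Long times for
 * another execname would point at a competing reader/writer.
 */
fbt::ufs_rwlock:entry
{
    self->ts = timestamp;
}

fbt::ufs_rwlock:return
/self->ts/
{
    @locktime[execname] = quantize(timestamp - self->ts);
    self->ts = 0;
}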