No, this X4600 M1 runs Solaris 10 Update 3 with no local zones:
# uname -a
SunOS nygeqdbxpc2p1 5.10 Generic_127112-02 i86pc i386 i86pc

# cat /etc/release
                       Solaris 10 11/06 s10x_u3wos_10 X86
           Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                            Assembled 14 November 2006

# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared

# df -kl
Filesystem                      kbytes    used    avail capacity  Mounted on
/dev/md/dsk/d0                 8266719 2715437  5468615    34%    /
/devices                             0       0        0     0%    /devices
ctfs                                 0       0        0     0%    /system/contract
proc                                 0       0        0     0%    /proc
mnttab                               0       0        0     0%    /etc/mnttab
swap                          61579940     776 61579164     1%    /etc/svc/volatile
objfs                                0       0        0     0%    /system/object
/usr/lib/libc/libc_hwcap2.so.1 8266719 2715437  5468615    34%    /lib/libc.so.1
fd                                   0       0        0     0%    /dev/fd
swap                          62419216  840052 61579164     2%    /tmp
swap                          61579188      24 61579164     1%    /var/run
/dev/md/dsk/d51                 986735    1060   926471     1%    /export/home/dbxpt3
/export/home/dbxpt3             986735    1060   926471     1%    /home/dbxpt3
/dev/md/dsk/d52               10326524 1743828  8479431    18%    /export/data/sor-prod
/dev/md/dsk/d50               61962204  353899 60988683     1%    /export/data/dbxpt3
/export/data/dbxpt3           61962204  353899 60988683     1%    /data/dbxpt3

# raidctl
RAID Volume  RAID     RAID     Disk
Volume Type  Status   Disk     Status
------------------------------------------------------
c3t0d0  IM   OK       c3t0d0   OK
                      c3t1d0   OK
c3t2d0  IM   OK       c3t2d0   OK
                      c3t3d0   OK

Fri Dec 19 10:07:37 2008
     cpu
 us sy wt id
  6  4  0 90
                    extended device statistics
    r/s    w/s   kr/s   kw/s  wait   actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0   0.0   18.3    0.0    0.0   0 100 md/d30
    0.0    0.0    0.0    0.0   0.0   18.3    0.0    0.0   0 100 md/d40
    0.0    0.0    0.0    0.0   0.0   18.3    0.0    0.0   0 100 md/d50
    0.0    0.0    0.0    0.0  17.3    1.0    0.0    0.0 100 100 c3t2d0

Fri Dec 19 10:07:42 2008
     cpu
 us sy wt id
  4  3  0 93
                    extended device statistics
    r/s    w/s   kr/s   kw/s  wait   actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0   0.0   20.0    0.0    0.0   0 100 md/d30
    0.0    0.0    0.0    0.0   0.0   20.0    0.0    0.0   0 100 md/d40
    0.0    0.0    0.0    0.0   0.0   20.0    0.0    0.0   0 100 md/d50
    0.0    0.0    0.0    0.0  19.0    1.0    0.0    0.0 100 100 c3t2d0

Fri Dec 19 10:07:47 2008
     cpu
 us sy wt id
  4  3  0 93
                    extended device statistics
    r/s    w/s   kr/s   kw/s  wait   actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0   0.0   20.5    0.0    0.0   0 100 md/d30
    0.0    0.0    0.0    0.0   0.0   20.5    0.0    0.0   0 100 md/d40
    0.0    0.0    0.0    0.0   0.0   20.5    0.0    0.0   0 100 md/d50
    0.0    0.0    0.0    0.0  19.5    1.0    0.0    0.0 100 100 c3t2d0

Fri Dec 19 10:07:52 2008
     cpu
 us sy wt id
  5  3  0 91
                    extended device statistics
    r/s    w/s   kr/s   kw/s  wait   actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0   0.0   21.0    0.0    0.0   0 100 md/d30
    0.0    0.0    0.0    0.0   0.0   21.0    0.0    0.0   0 100 md/d40
    0.0    0.0    0.0    0.0   0.0   21.0    0.0    0.0   0 100 md/d50
    0.0    0.0    0.0    0.0  20.0    1.0    0.0    0.0 100 100 c3t2d0

Fri Dec 19 10:07:57 2008
     cpu
 us sy wt id
  4  3  0 92
                    extended device statistics
    r/s    w/s   kr/s   kw/s  wait   actv wsvc_t asvc_t  %w  %b device
    0.0   12.8    0.0  184.7   0.0    3.6    0.0  278.6   0  19 md/d30
    0.0   12.8    0.0  184.7   0.0    3.4    0.0  268.3   0  19 md/d40
    0.0    6.4    0.0  108.3   0.0    3.4    0.0  536.7   0  19 md/d50
    0.0   12.8    0.0  184.7   2.9    0.5  229.7   38.6  15  19 c3t2d0

Fri Dec 19 10:08:02 2008
     cpu
 us sy wt id
  4  3  0 93
                    extended device statistics
    r/s    w/s   kr/s   kw/s  wait   actv wsvc_t asvc_t  %w  %b device
    0.0    0.2    0.0    4.2   0.0    0.0    0.0    7.3   0   0 md/d30
    0.0    0.2    0.0    4.2   0.0    0.0    0.0    7.3   0   0 md/d40
    0.0    0.2    0.0    4.2   0.0    0.0    0.0    7.4   0   0 md/d50
    0.0    0.2    0.0    4.2   0.0    0.0    0.0    7.3   0   0 c3t2d0

I will collect mpstat and vmstat statistics to see whether there is any
unusual activity on the system.

James Yang

---
This communication may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this communication
in error) please notify the sender immediately and destroy this
communication. Any unauthorized copying, disclosure or distribution of the
material in this communication is strictly forbidden.

Deutsche Bank does not render legal or tax advice, and the information
contained in this communication should not be regarded as such.
Jim Mauro <james.ma...@sun.com>
Sent by: james.ma...@sun.com
12/20/08 09:57 PM

To: Jianhua Yang/db/db...@dbamericas
cc: dtrace-discuss@opensolaris.org
Subject: Re: [dtrace-discuss] disk utilization is over 200%

Hmm...the iostat data does not make sense. Consistent non-zero values
(12/13) in the wait queue, with an IO rate of zero (no reads, no writes).

Is this data from a virtualization environment? An LDOM, a Xen domU,
a VMware virtual machine, or a Solaris Zone?

Thanks,
/jim

Jianhua Yang wrote:
>
> Hello Jim,
>
> Please see the attached output file. (See attached file: io.out.20081218)
>
> I ran iosnoop -eo; the script hung until iostat dropped below 100% busy.
>
> Thanks,
>
> James Yang
> Global Unix Support, IES, GTO
> Deutsche Bank US
> Phone: 201-593-1360
> Email : jianhua.y...@db.com
> Pager : 1-800-946-4646 PIN# 6105618
> CR: NYC_UNIX_ES_US_UNIX_SUPPORT
> http://dcsupport.ies.gto.intranet.db.com/
>
> Jim Mauro <james.ma...@sun.com>
> Sent by: james.ma...@sun.com
> 12/17/08 07:02 PM
>
> To: Jianhua Yang/db/db...@dbamericas
> cc: dtrace-discuss@opensolaris.org
> Subject: Re: [dtrace-discuss] disk utilization is over 200%
>
> This is all very odd....iostat is historically extremely reliable.
> I've never observed stats like that before - zero reads and writes
> with a non-zero value in the wait queue (forget utilization when
> it comes to disk - it's a useless metric).
>
> IO rates per process are best measured at the VOP layer.
> Depending on what version of Solaris you're running, you
> can use the fsinfo provider (fsinfo::fop_read:entry,
> fsinfo::fop_write:entry). If you don't have the fsinfo
> provider, instrument the syscall layer to track reads and writes.
>
> Can we get another sample, using "iostat -zxnd 1 20"?
>
> Does the application recover from the hang, or does it
> remain hung and require kill/restart?
>
> Thanks,
> /jim
>
> Jianhua Yang wrote:
> >
> > Hello,
> >
> > I use Brendan's sysperfstat script to see the overall system
> > performance and found that the disk utilization is over 100:
> >
> >           ------ Utilisation ------   ------ Saturation ------
> >     Time   %CPU  %Mem  %Disk   %Net    CPU  Mem    Disk  Net
> > 15:51:38  14.52 15.01 200.00  24.42   0.00 0.00   83.53 0.00
> > 15:51:42  11.37 15.01 200.00  25.48   0.00 0.00   88.43 0.00
> > 15:51:45  11.01 15.01 200.00  12.02   0.00 0.00   95.03 0.00
> > 15:51:48  13.80 15.01 200.00  24.87   0.00 0.00   98.86 0.00
> > 15:51:51   9.44 15.01 200.00  17.02   0.00 0.00  102.64 0.00
> > 15:51:54   9.49 15.01 164.59   9.10   0.00 0.00   83.75 0.00
> > 15:51:57  16.58 15.01   2.83  20.46   0.00 0.00    0.00 0.00
> >
> > How can I fix this? Is there a new version of this script?
> > My system is an X4600-M1 with hardware RAID:
> >   disks 0+1 = OS disk   =  72 GB = d0
> >   disks 2+3 = apps data = 146 GB = d2, SVM soft partition with one
> >               UFS file system active
> >
> > At that time, iostat showed strange output:
> >
> >      cpu
> >  us sy wt id
> >  13  9  0 78
> >                     extended device statistics
> >     r/s    w/s   kr/s   kw/s  wait   actv wsvc_t asvc_t  %w  %b device
> >     0.0    0.0    0.0    0.0   0.0  335.0    0.0    0.0   0 100 md/d30
> >     0.0    0.0    0.0    0.0   0.0  335.0    0.0    0.0   0 100 md/d40
> >     0.0    0.0    0.0    0.0   0.0  335.0    0.0    0.0   0 100 md/d52
> >     0.0    0.0    0.0    0.0 334.0    1.0    0.0    0.0 100 100 c3t2d0
> >      cpu
> >  us sy wt id
> >  10  5  0 85
> >                     extended device statistics
> >     r/s    w/s   kr/s   kw/s  wait   actv wsvc_t asvc_t  %w  %b device
> >     0.0    0.0    0.0    0.0   0.0  335.0    0.0    0.0   0 100 md/d30
> >     0.0    0.0    0.0    0.0   0.0  335.0    0.0    0.0   0 100 md/d40
> >     0.0    0.0    0.0    0.0   0.0  335.0    0.0    0.0   0 100 md/d52
> >     0.0    0.0    0.0    0.0 334.0    1.0    0.0    0.0 100 100 c3t2d0
> >
> > kr/s & kw/s show 0, but wait is 334, and at this point the
> > application always hangs.
> >
> > # dtrace -n 'io:::start { @files[pid, execname, args[2]->fi_pathname]
> >     = sum(args[0]->b_bcount); } tick-5sec { exit(); }'
> > dtrace: description 'io:::start ' matched 7 probes
> > CPU     ID                    FUNCTION:NAME
> >   8  49675                       :tick-5sec
> >
> > 16189 nTrade
> >   /export/data/dbxpt3/logs/ledgers/arinapt3.NTRPT3-MOCA.trans_outmsg.ledger
> >   32768
> > 25456 pt_chmod /export/data/dbxpt3/logs/NTRPT3-MOCA.log 32768
> >     3 fsflush  <none> 38912
> > 25418 pt_chmod /export/data/dbxpt3/logs/NTRPT3-MOCA.log 49152
> > 21372 tail     /export/data/dbxpt3/logs/NTRPT3-MOCA.log 65536
> > 16189 nTrade
> >   /export/data/dbxpt3/logs/ledgers/arinapt3.NTRPT3-MOCA.trans_exerep.ledger
> >   81920
> > 16189 nTrade   /export/data/dbxpt3/logs/ntrade.imbalances.log 114688
> > 25419 iostat   /export/data/dbxpt3/logs/NTRPT3-MOCA.log 114688
> >  8018 tail     /export/data/dbxpt3/logs/NTRPT3-MOCA.log 131072
> > 24915 tail     /export/data/dbxpt3/logs/NTRPT3-MOCA.log 147456
> > 16189 nTrade   <none> 207872
> > 20900 tail
> >   /export/data/dbxpt3/logs/NTRPT3-MOCA.log 270336
> >     0 sched    <none> 782336
> > 16189 nTrade   /export/data/dbxpt3/logs/NTRPT3-MOCA.log 2162688
> >
> > The write rate is about 10 MB/s. Did the above dtrace script show
> > the real IO going on at that time?
> > Is there a way to find how many IOs are generated by each process,
> > and how many IOs are sitting in the IO wait queue?
> > Is there a way to find out the disk RPM besides checking the
> > physical drive?
> >
> > Thanks,
> >
> > James Yang
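[Editor's note: Jim's suggestion in the thread above, measuring per-process IO rates at the VOP layer with the fsinfo provider, can be sketched as a small D script. This is an illustrative sketch, not from the thread, and assumes a Solaris release that ships the fsinfo provider; for the fsinfo read and write probes, arg1 carries the byte count.]

```d
#!/usr/sbin/dtrace -s
/*
 * Sketch (not from the thread): per-process read/write byte counts
 * measured at the VOP layer via the fsinfo provider. For the read
 * and write probes, arg1 is the number of bytes moved.
 */
fsinfo:::read,
fsinfo:::write
{
        @bytes[pid, execname, probename] = sum(arg1);
}

tick-10sec
{
        /* pid, command, read/write, bytes in the last 10 seconds */
        printa("%8d %-16s %-6s %@16d\n", @bytes);
        trunc(@bytes);
}
```

On older releases without fsinfo, the same aggregation can be done from syscall::read:entry and syscall::write:entry, as Jim notes.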
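[Editor's note: on James's question of how many IOs sit in the wait queue, one rough approach with the stable io provider is to count io:::start against io:::done per device. A sketch, not from the thread; the device name c3t2d0 is taken from the iostat output above, and the running count covers both queued and in-flight IOs rather than the wait queue alone.]

```d
#!/usr/sbin/dtrace -s
/*
 * Sketch: approximate IOs outstanding per device by counting
 * io:::start (IO handed to the driver) against io:::done (IO
 * completed). args[1] is the devinfo_t; dev_statname matches
 * the device names iostat reports.
 */
io:::start
{
        outstanding[args[1]->dev_statname]++;
}

io:::done
/outstanding[args[1]->dev_statname] > 0/
{
        outstanding[args[1]->dev_statname]--;
}

tick-5sec
{
        printf("%Y c3t2d0 outstanding: %d\n", walltimestamp,
            outstanding["c3t2d0"]);
}
```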
_______________________________________________
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org