On Apr 18, 2008, at 1:03 PM, Kleyson Rios wrote:
> Hi przemol,
>
> Below is the plockstat output for malloc and for libumem. Both show many
> locks. Why didn't I get fewer locks after switching to libumem?
>

You're looking at mutex hold statistics, which don't mean a lot
(unless contention is caused by long hold times).

The important thing for multi-threaded performance is *contention*
(spinning and blocking). Those are the statistics you should be
looking at.
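
For example, a minimal sketch of collecting only the contention events for
30 seconds (substitute your java pid for <pid>):

    # watch contention events (spins and blocks) only, with 10-frame stacks
    plockstat -C -e 30 -s 10 -p <pid>

If you ran with -A (as przemol suggested below), the output should already
contain "Mutex spin" and "Mutex block" sections alongside the "Mutex hold"
sections you pasted.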

Both malloc and libumem use locks to protect their state; libumem
just uses many more locks, in order to reduce contention.

Cheers,
- jonathan

> ********** Plockstat using malloc (many many locks):
>
> Mutex hold
>
> Count     nsec Lock                         Caller
> -------------------------------------------------------------------------------
> 4036    14391 libc.so.1`libc_malloc_lock   libjava.so`JNU_GetStringPlatformChars+0x290
> 2118     7385 libc.so.1`libc_malloc_lock   libjava.so`JNU_ReleaseStringPlatformChars+0x18
> 3174     4700 libc.so.1`libc_malloc_lock   libjvm.so`__1cCosGmalloc6FI_pv_+0x29
>  181    63407 libc.so.1`_uberdata+0x40     libjvm.so`__1cCosRpd_suspend_thread6FpnGThread_i_i_+0x46
> 3170     3407 libc.so.1`libc_malloc_lock   libjvm.so`__1cCosEfree6Fpv_v_+0x18
>  588    12443 libc.so.1`libc_malloc_lock   libnet.so`Java_java_net_SocketOutputStream_socketWrite0+0x7f
>  172    37572 libc.so.1`_uberdata+0x40     libjvm.so`__1cCosQpd_resume_thread6FpnGThread__i_+0x1f
>  176    26701 libc.so.1`libc_malloc_lock   libjvm.so`__1cCosGmalloc6FI_pv_+0x29
>  596     7124 libc.so.1`libc_malloc_lock   libnet.so`Java_java_net_SocketOutputStream_socketWrite0+0x1e0
>  450     7254 libc.so.1`libc_malloc_lock   0x858b2167
> (...)
>
>
> ********** Plockstat using libumem (many many locks too):
>
> Mutex hold
>
> Count     nsec Lock                         Caller
> -------------------------------------------------------------------------------
>    4  1450455 libumem.so.1`umem_cache_lock libumem.so.1`umem_cache_applyall+0x51
>  100    46388 libc.so.1`_uberdata+0x40     libjvm.so`__1cCosRpd_suspend_thread6FpnGThread_i_i_+0x46
>  100    23226 libc.so.1`_uberdata+0x40     libjvm.so`__1cCosQpd_resume_thread6FpnGThread__i_+0x1f
>  486     4314 0x807b680                    libumem.so.1`umem_cache_alloc+0xc1
>  488     4236 0x807b680                    libumem.so.1`umem_cache_free+0x194
>  356     4859 0x807b300                    libumem.so.1`umem_cache_alloc+0xc1
>  150    11499 0x8073030                    libumem.so.1`vmem_xfree+0xfe
>  115    14473 0x8073030                    libumem.so.1`vmem_alloc+0x13c
>  297     5374 0x807ad80                    libumem.so.1`umem_cache_alloc+0xc1
>  362     4258 0x807b300                    libumem.so.1`umem_cache_free+0x194
>  297     4304 0x807ad80                    libumem.so.1`umem_cache_free+0x194
>   48    21805 0x807b6c0                    libumem.so.1`umem_cache_free+0x194
>  150     6635 libumem.so.1`vmem0+0x30      libumem.so.1`vmem_alloc+0x126
> (...)
>
>
>
> Mutex hold
>
> -------------------------------------------------------------------------------
> Count     nsec Lock                         Caller
>  543    13110 0x807b680                    libumem.so.1`umem_cache_alloc+0xc1
>
>      nsec ---- Time Distribution --- count Stack
>      8192 |@@@@@@@@@@@@            |   289 libumem.so.1`umem_cache_alloc+0xc1
>     16384 |@@@@@@@@@               |   224 libumem.so.1`umem_alloc+0x3f
>     32768 |@                       |    27 libumem.so.1`malloc+0x23
>     65536 |                        |     3 libjava.so`JNU_GetStringPlatformChars+0x290
>                                            0x20ac
> -------------------------------------------------------------------------------
> Count     nsec Lock                         Caller
>   78    89901 libc.so.1`_uberdata+0x40     libjvm.so`__1cCosRpd_suspend_thread6FpnGThread_i_i_+0x46
>
>      nsec ---- Time Distribution --- count Stack
>     65536 |@@@@@@@@@@@@@@@@@       |    57 libc.so.1`fork_lock_exit+0x2f
>    131072 |@@@@@                   |    19 libc.so.1`_thrp_suspend+0x22c
>    262144 |                        |     1 libc.so.1`thr_suspend+0x1a
>    524288 |                        |     1 libjvm.so`__1cCosRpd_suspend_thread6FpnGThread_i_i_+0x46
>                                            libjvm.so`__1cGThreadNdo_vm_suspend6M_i_+0x42
>                                            libjvm.so`__1cGThreadKvm_suspend6M_i_+0x33
>                                            libjvm.so`__1cUThreadSafepointStateXexamine_state_of_thread6Mi_v_+0x101
>                                            libjvm.so`__1cUSafepointSynchronizeFbegin6F_v_+0x12b
>                                            libjvm.so`__1cIVMThreadEloop6M_v_+0x1b4
>
> (...)
>
>
> Regards.
> Kleyson Rios.
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On behalf of
> [EMAIL PROTECTED]
> Sent: Friday, April 18, 2008 06:49
> To: [email protected]
> Subject: Re: [dtrace-discuss] Process in LCK / SLP (Please)
>
> On Wed, Apr 16, 2008 at 12:35:53PM -0300, Kleyson Rios wrote:
>> Hi,
>>
>> I really need help.
>>
>> How can I identify why my processes are 100% in LCK and SLP?
>
> Could you please use 'plockstat' and show us the output?
> E.g.: plockstat -e 30 -s 10 -A -p <pid of java proc>
>
> We had such a case in the past. I don't think it is the same reason, but it
> might give you some ideas.
> We had a lot of locks. By using plockstat it turned out that those locks
> came from malloc. Using mtmalloc or libumem reduced the number of locks,
> but it was still a huge number (and it was eating our CPU). By using dtrace
> it turned out that those locks came from threads which were ... supposed to
> be killed. Our programmers found a bug in the application (threads that
> were no longer needed were still alive, and all of them were spinning in an
> endless loop); they changed the code and ... all the locks went away.
>
> As Jim said: start by watching who is calling malloc (and from which
> thread(s)).
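>
> For example, a minimal DTrace sketch along those lines (assuming the pid
> provider is available; substitute the java process id for <pid>) could be:
>
>   # count malloc() calls per thread in the target process,
>   # keeping a short user stack for each call site
>   dtrace -n 'pid$target::malloc:entry { @[tid, ustack(5)] = count(); }' -p <pid>
>
> That should show which thread(s) and call paths do most of the allocating.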
>
> Regards
> przemol
>
> --
> http://przemol.blogspot.com/

--------------------------------------------------------------------------
Jonathan Adams, Sun Microsystems, ZFS Team    http://blogs.sun.com/jwadams

_______________________________________________
dtrace-discuss mailing list
[email protected]
