You may want to cross-post to a Java alias, but I've been down this
road before.

Java will call into malloc() for buffers for network reads and writes
that are larger than 2k bytes (the 2k figure is from memory, and I think
it was a 1.5 JVM). A large number of malloc() calls, and the resulting
contention on locks in the library, are due to the application doing
network writes larger than 2k.
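If you want to check whether your JVM is actually doing this, a rough
DTrace sketch along these lines (untested, and the 2048-byte cutoff is
just my recollection of the threshold) will show the size distribution
of its malloc() calls:

#!/usr/sbin/dtrace -s

/*
 * Rough sketch: size distribution of the JVM's malloc() calls.
 * Run as: dtrace -s malloc_sizes.d -p <jvm-pid>
 * The 2048 cutoff is only my recollection of the threshold.
 */

pid$target:libc.so.1:malloc:entry
{
    @sizes["malloc size (bytes)"] = quantize(arg0);
}

pid$target:libc.so.1:malloc:entry
/arg0 > 2048/
{
    @large["malloc calls larger than 2k"] = count();
}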

Newer JVMs (1.6) may improve this, but I'm not sure. There's also an
alternative set of classes and methods, NIO, which can also help
(although I've heard tell that NIO brings other problems along with it;
I can't speak from personal experience).

At this point, I think you need to consult with Java experts to
determine what options you have for allocating network I/O buffers from
the Java heap, versus the JVM's current behavior of dropping back to
malloc() for them.
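In the meantime, DTrace can at least show which call paths are driving
the large allocations. Another untested sketch; note that jstack() only
decodes Java frames on JVMs with DTrace support (1.6), and falls back
to raw addresses otherwise:

#!/usr/sbin/dtrace -s

/*
 * Rough sketch: which (Java) stacks are behind the large mallocs?
 * Run as: dtrace -s large_malloc_stacks.d -p <jvm-pid>
 */

pid$target:libc.so.1:malloc:entry
/arg0 > 2048/
{
    @[jstack(20)] = count();
}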

The other option, of course, is determining whether the code can be
changed to use buffers smaller than 2k.
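Before changing any code, it's worth measuring how large the writes
actually are. This sketch probes the same libc entry points that show
up in your stacks below; arg2 is the byte count for both (keep in mind
that _write includes non-socket writes too):

#!/usr/sbin/dtrace -s

/*
 * Rough sketch: size distribution of writes, by libc entry point.
 * Run as: dtrace -s write_sizes.d -p <jvm-pid>
 */

pid$target:libc.so.1:_so_send:entry,
pid$target:libc.so.1:_write:entry
{
    @[probefunc] = quantize(arg2);
}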

Thanks,
/jim




Kleyson Rios wrote:
> OK Jonathan,
>
> I understand.
>
> So, looking in the right place now, I can see few locks and sometimes no
> locks (just Mutex Hold). But I still have many threads at 100% LCK.
>
> If I don't have a lot of locks, where is my problem?
>
> Running rickey c weisner's script I get:
>
> (...)
>     25736
>               libc.so.1`_so_send+0x15
>               libjvm.so`__1cDhpiEsend6Fipcii_i_+0x67
>               libjvm.so`JVM_Send+0x32
>               libnet.so`Java_java_net_SocketOutputStream_socketWrite0+0x131
>               0xc3c098d3
>                10
>     25736
>               0xc3d2a33a
>                14
>     25736
>               libc.so.1`_write+0x15
>               libjvm.so`__1cDhpiFwrite6FipkvI_I_+0x5d
>               libjvm.so`JVM_Write+0x30
>               libjava.so`0xc8f7c04b
>                16
>     25736
>               libc.so.1`stat64+0x15
>                21
>     25736
>               libc.so.1`_write+0x15
>               libjvm.so`__1cDhpiFwrite6FipkvI_I_+0x5d
>               libjvm.so`JVM_Write+0x30
>               libjava.so`0xc8f80ce9
>                76
>   java                       25736  kernel-level lock              1
>   java                       25736  shuttle                        6
>   java                       25736  preempted                      7
>   java                       25736  user-level lock                511
>   java                       25736  condition variable             748
>
>  
> Best regards,
>  
> ------------------------------------------------------------------
>  
> Kleyson Rios.
> Technical Support Management
> Support Analyst / Team Leader
>  
>
> -----Original Message-----
> From: Jonathan Adams [mailto:[EMAIL PROTECTED]]
> Sent: Friday, April 18, 2008 15:40
> To: Kleyson Rios
> Cc: [email protected]
> Subject: Re: [dtrace-discuss] RES: Process in LCK / SLP (Please)
>
>
> On Apr 18, 2008, at 1:03 PM, Kleyson Rios wrote:
>   
>> Hi przemol,
>>
>> Below is the output of plockstat for malloc and libumem. Both show many
>> locks. Why didn't I get fewer locks after changing to libumem?
>>
>>     
>
> You're looking at mutex hold statistics, which don't mean a lot
> (unless contention is caused by long hold times).
>
> The important thing for multi-threaded performance is *contention*
> (spinning and blocking). Those are the statistics you should be
> looking at.
>
> Both malloc and libumem use locks to protect their state; libumem
> just uses many locks, in order to reduce contention.
>
> Cheers,
> - jonathan
>
>
>
>