There is no way to scale across CPUs manually or by setting any parameters.
Maybe I just couldn't manage it, but as far as I know and have observed,
regardless of what you do at the application level, the JVM decides on the
threading and scheduling policy.

Note: I'm using MINA 1.1.7 in a heavily loaded communication setup, on
hardware similar to Zigor's, and I see the same CPU scheduling behaviour in
my top output as in Zigor's.
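For what it's worth, MINA's filter chain does let you hand IoHandler events
off the I/O thread to a thread pool (the ExecutorFilter), which is what lets
the per-message work spread across cores. A minimal stdlib-only sketch of
that idea, where route() is a hypothetical stand-in for the single-threaded
'routing logic' step:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRouting {

    // Hypothetical stand-in for the per-message 'routing logic' work
    // that is otherwise pinned to a single thread.
    static int route(int message) {
        return message * 2;
    }

    // Fan messages out to a pool sized to the machine's CPU count, so the
    // routing work can occupy more than one core instead of maxing out one.
    static List<Integer> routeAll(List<Integer> messages) {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try {
            List<Callable<Integer>> tasks = new ArrayList<>();
            for (Integer m : messages) {
                tasks.add(() -> route(m));
            }
            // invokeAll preserves submission order, so results line up
            // with the input messages.
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                results.add(f.get());
            }
            return results;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(routeAll(Arrays.asList(1, 2, 3)));
    }
}
```

The same pattern, applied inside the handler (or via an executor filter in
the chain), is what would pull the load off the single hot CPU.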


-----Original Message-----
From: Zigor Salvador [mailto:zigorsalva...@gmail.com] 
Sent: Monday, October 24, 2011 3:34 PM
To: users@mina.apache.org
Subject: Re: Performance (CPU use)

Yes, I have just realized I won't improve performance until my 'routing
logic' code path is multithreaded.   :-/

Your suggestion to check CPU utilization was really helpful, indeed.

Thanks,

Zigor.

On 24 Oct 2011, at 14:07, Emmanuel Lecharny wrote:

> On 10/24/11 2:01 PM, Zigor Salvador wrote:
>> Hi,
>> 
>> I'm back with some more results.
>> 
>> This is the typical load I'm seeing in the system when communication
>> throughput with MINA is maxed out.
> 
> Strange. It seems like you only have one CPU working at 100% (Cpu9), as
> if it does all the work.
>> 
>> I notice I'm only using around 10% of the available processing power.
>> 
>> Zigor.
>> 
>> top - 13:56:05 up  2:49,  2 users,  load average: 0.28, 0.12, 0.14
>> Tasks: 196 total,   2 running, 194 sleeping,   0 stopped,   0 zombie
>> Cpu0  :  0.3%us,  0.0%sy,  0.0%ni, 98.3%id,  0.0%wa,  0.0%hi,  1.3%si,  0.0%st
>> Cpu1  :  4.5%us,  4.2%sy,  0.0%ni, 91.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu3  :  3.5%us,  4.1%sy,  0.0%ni, 92.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu6  :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu7  :  0.3%us,  0.3%sy,  0.0%ni, 99.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu8  :  0.7%us,  0.0%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu9  : 92.7%us,  5.0%sy,  0.0%ni,  1.0%id,  0.0%wa,  0.0%hi,  1.3%si,  0.0%st
>> Cpu10 :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu11 :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu12 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu13 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu14 : 14.0%us,  8.5%sy,  0.0%ni, 77.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Cpu15 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Mem:  24677232k total,  1123960k used, 23553272k free,    26644k buffers
>> Swap: 25101524k total,        0k used, 25101524k free,   285016k cached
>> 
>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>  3015 zigor     20   0 16.3g 171m 9808 S  9.2  0.7  17:28.95 java
>>    33 root      20   0     0    0    0 S  0.0  0.0   0:01.04 kworker/9:0
>>   178 root      20   0     0    0    0 S  0.0  0.0   0:00.09 kworker/6:1
>>  1571 zigor     20   0 27380 3068  868 S  0.0  0.0   0:03.34 dbus-daemon
>>  1691 zigor     20   0  198m  12m 9500 S  0.0  0.1   0:00.97 metacity
>>  1766 zigor     20   0  374m  19m  10m S  0.0  0.1   0:01.84 unity-panel-ser
>>  3456 zigor     20   0 21568 1532 1084 R  0.0  0.0   0:01.60 top
>>     1 root      20   0 24184 2276 1344 S  0.0  0.0   0:02.34 init
>>     2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
>>     3 root      20   0     0    0    0 S  0.0  0.0   0:00.20 ksoftirqd/0
>>     6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
>>     7 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/1
>>     8 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kworker/1:0
>>     9 root      20   0     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/1
>>    10 root      20   0     0    0    0 S  0.0  0.0   0:01.82 kworker/0:1
>>    11 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/2
> 
> 
> -- 
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
> 
