Hi Benoît,

    Here is the output of the ulimit -a command:

     core file size          (blocks, -c) 0
     data seg size           (kbytes, -d) unlimited
     scheduling priority             (-e) 0
     file size               (blocks, -f) unlimited
     pending signals                 (-i) 7678
     max locked memory       (kbytes, -l) 32
     max memory size         (kbytes, -m) 819400
     open files                      (-n) 1024
     pipe size            (512 bytes, -p) 8
     POSIX message queues     (bytes, -q) 819200
     real-time priority              (-r) 0
     stack size              (kbytes, -s) 8192
     cpu time               (seconds, -t) unlimited
     max user processes              (-u) 7678
     virtual memory          (kbytes, -v) 1613040
     file locks                      (-x) unlimited
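
(A rough back-of-the-envelope under these limits, assuming the JVM
reserves one fixed-size stack per native thread: the virtual memory
cap of 1613040 kB is about 1.5 GB of address space for the whole
process. With JAVA_MAX_MEM=512M reserved for the heap:

     1613040 kB - 524288 kB (heap)  ~= 1088752 kB remaining
     1088752 kB / 1024 kB per stack ~= ~1060 threads
     1088752 kB /  512 kB per stack ~= ~2120 threads

Since permgen, the code cache, and native libraries also come out of
that remainder, an OOM around 1300-1400 live threads is in the
expected ballpark. The default stack size depends on the platform and
VM, so these are estimates, not a confirmed diagnosis.)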

Regards,
Rajesh


Benoît Clouet wrote:
> 
> It would be useful if you could send us the result of a ulimit -a  
> command run under the account your Java process is launched with.  
> As the error message suggests, the problem might have something to  
> do with the number of processes the user is allowed to launch.
> 
> Benoît
> 
> On 19 Aug 08, at 17:02, RKalaria <[EMAIL PROTECTED]> wrote:
> 
>>
>> Hi,
>>
>>   I have a server with Apache ServiceMix installed in the following
>> environment:
>>          OS                         = SUSE Linux 10.3
>>          Java version               = 1.5.0_12
>>          ServiceMix version         = 3.2.2 (using ActiveMQ 5.0.1)
>>          servicemix.corePoolSize    = 60
>>          servicemix.maximumPoolSize = 100
>>          JVM configuration          = JAVA_MIN_MEM=128M, JAVA_MAX_MEM=512M
>>
>>    I was load testing with 60 parallel requests per hit. During the
>> third such hit (each hit comprising 60 parallel requests), the
>> server starts throwing OutOfMemoryError with an "unable to create
>> new native thread" message (see the full stack trace below), yet
>> memory consumption never reaches the maximum allocated. So the
>> OutOfMemoryError does not seem to be caused by running out of
>> available heap memory.
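>>
>> (To separate the JVM/OS thread ceiling from ServiceMix itself, a
>> minimal sketch along these lines could measure how many native
>> threads the JVM can create under the same limits; ThreadCeiling is
>> a hypothetical test class, not part of ServiceMix:)
>>
>>     // ThreadCeiling.java: start parked daemon threads until the JVM
>>     // throws "unable to create new native thread", then report how
>>     // many were created.
>>     public class ThreadCeiling {
>>         public static void main(String[] args) {
>>             int count = 0;
>>             try {
>>                 while (true) {
>>                     Thread t = new Thread(new Runnable() {
>>                         public void run() {
>>                             try {
>>                                 Thread.sleep(Long.MAX_VALUE); // park forever
>>                             } catch (InterruptedException ignored) {
>>                             }
>>                         }
>>                     });
>>                     t.setDaemon(true); // so the JVM can exit afterwards
>>                     t.start();
>>                     count++;
>>                 }
>>             } catch (OutOfMemoryError e) {
>>                 System.out.println("Created " + count + " threads before: " + e);
>>             }
>>         }
>>     }
>>
>> (Running it with the same flags as the server, e.g.
>> java -Xss512k -Xmx512m ThreadCeiling, should show whether the
>> ceiling moves when the stack size changes.)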
>>
>> Exception in thread "Timer-3" java.lang.OutOfMemoryError
>>        at java.util.zip.ZipFile.open(Native Method)
>>        at java.util.zip.ZipFile.<init>(ZipFile.java:203)
>>        at java.util.zip.ZipFile.<init>(ZipFile.java:234)
>>        at org.apache.servicemix.jbi.framework.AutoDeploymentService.isAvailable(AutoDeploymentService.java:711)
>>        at org.apache.servicemix.jbi.framework.AutoDeploymentService.monitorDirectory(AutoDeploymentService.java:655)
>>        at org.apache.servicemix.jbi.framework.AutoDeploymentService.access$800(AutoDeploymentService.java:62)
>>        at org.apache.servicemix.jbi.framework.AutoDeploymentService$1.run(AutoDeploymentService.java:628)
>>        at java.util.TimerThread.mainLoop(Timer.java:512)
>>        at java.util.TimerThread.run(Timer.java:462)
>> Exception in thread "ActiveMQ Transport Initiator: /192.168.2.80:56524" java.lang.OutOfMemoryError: unable to create new native thread
>>        at java.lang.Thread.start0(Native Method)
>>        at java.lang.Thread.start(Thread.java:574)
>>        at org.apache.activemq.transport.TransportThreadSupport.doStart(TransportThreadSupport.java:43)
>>        at org.apache.activemq.transport.tcp.TcpTransport.doStart(TcpTransport.java:382)
>>        at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:50)
>>        at org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:57)
>>    ---------------
>>
>>  I have captured two samples of the thread counts between hits:
>>
>> 1st sample
>>   Before first hit : Live threads:   269  Peak:   286  Daemon threads: 131  Total started:   375
>>   After first hit  : Live threads:   836  Peak:   838  Daemon threads: 314  Total started: 1,009
>>   After second hit : Live threads: 1,147  Peak: 1,152  Daemon threads: 494  Total started: 1,408
>>   During third hit : Live threads: 1,435  Peak: 1,437  Daemon threads: 661  Total started: 1,760
>>   (the OOM errors begin during this third hit)
>>
>> 2nd sample
>>   Before first hit : Live threads:   278  Peak:   292  Daemon threads: 135  Total started:   350
>>   After first hit  : Live threads:   825  Peak:   829  Daemon threads: 313  Total started:   973
>>   After second hit : Live threads: 1,138  Peak: 1,138  Daemon threads: 499  Total started: 1,347
>>   During third hit : Live threads: 1,431  Peak: 1,437  Daemon threads: 674  Total started: 1,719
>>   (the OOM errors begin during this third hit)
>>
>>    From this it appears that the errors start whenever the number
>> of live threads exceeds roughly 1300. Each hit also leaves the live
>> thread count about 300 higher than the previous one (e.g. 836 ->
>> 1,147 -> 1,435 in the first sample), so threads from earlier hits
>> are apparently never reclaimed. I have also tried reducing the
>> thread stack size to 512k, but the result was the same.
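>>
>> (An assumption worth checking: if the reduced 512k stack size had
>> actually taken effect, roughly twice as many threads should fit in
>> the same address space, so an unchanged ceiling may mean either
>> that the -Xss flag never reaches the JVM through the startup
>> script, or that a different limit, such as the per-user process
>> limit shown by ulimit -u, is the one that binds. Checking the
>> running JVM's command line, e.g. with ps, would rule out the first
>> case.)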
>>
>>   I have the following questions:
>>   a. After each hit the number of live threads only increases,
>> never decreases. Is this expected, or is it an issue?
>>   b. Is the OOM caused by JVM memory tuning, or is it related to
>> the number of threads (the total number of threads that a single
>> OS process can handle)?
>>
>>   Please help; this is really blocking us from moving ahead.
>>
>> Regards,
>> Rajesh Kalaria
>>
> 
> 

