Thanks a lot again, Mark. Actually, I made a mistake in capturing the correct thread dump from my server while it was not accepting any further requests.
We make blocking network I/O calls, which block the threads. I can see that the threads go into the parked state while waiting to read from the input stream. The system stops responding once 200 concurrent requests have been fired. The situation obviously improves after some time, once request processing finishes. I think it can be improved by implementing non-blocking asynchronous calls in my Java servlet code, which should free up my threads. Please provide your further inputs.

Best Regards,
Saurav

On Mon, Dec 3, 2018 at 10:26 PM Mark Thomas <ma...@apache.org> wrote:

> On 03/12/2018 15:26, Saurav Sarkar wrote:
> > Thanks a lot Mark for the reply.
> >
> > Please bear with me for my follow up questions :)
> >
> > Does the park state (in visual vm) depicts the connection is idle and
> > waiting for requests ?
>
> There is no direct correlation between thread and connection. A thread
> is only assigned to a connection when there is a request on that
> connection to process. Once the request has been processed the thread
> returns to the pool.
>
> The park state (as shown below) means the thread is idle in the pool
> waiting to be assigned to a connection with a request to process.
>
> > I see all threads reaching to this stage and my tomcat stops accepting
> > any further requests
>
> Then there is something wrong in your system but it isn't related to the
> size of Tomcat's thread pool.
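[Editor's note: as a container-free sketch of the hand-off pattern behind the Servlet 3.0+ async API (`ServletRequest.startAsync()` / `AsyncContext.complete()`) that Saurav is considering above. The pool sizes, names, and the simulated backend read are illustrative assumptions, not part of the original thread.]

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncHandoffDemo {

    // Stand-in for Tomcat's http-nio-exec pool (bounded by maxThreads)
    static final ExecutorService requestPool = Executors.newFixedThreadPool(2);
    // Separate pool that absorbs the blocking backend read
    static final ExecutorService ioPool = Executors.newCachedThreadPool();

    // Blocking style: a request-pool thread is occupied for the whole read
    static String blockingHandler() throws Exception {
        return requestPool.submit(AsyncHandoffDemo::slowBackendRead).get();
    }

    // Async style: the request-pool thread only schedules the read and
    // returns immediately; the response is completed later from the I/O
    // pool, analogous to calling AsyncContext.complete() in a servlet
    static CompletableFuture<String> asyncHandler() {
        CompletableFuture<String> response = new CompletableFuture<>();
        requestPool.execute(() ->
                ioPool.execute(() -> response.complete(slowBackendRead())));
        return response;
    }

    // Simulated blocking network I/O call (hypothetical backend)
    static String slowBackendRead() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "payload";
    }
}
```

In the async variant the request thread is free to pick up another request while the read is still in flight, which is the "free up my threads" effect described above; note it only helps if the blocking work really moves off the request pool (or becomes genuinely non-blocking I/O).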
> > > "http-nio-0.0.0.0-8080-exec-1357" - Thread t@80536 > > java.lang.Thread.State: WAITING > > at jdk.internal.misc.Unsafe.park(Native Method) > > - parking to wait for <b136c19> (a > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > > at sun.misc.Unsafe.park(Unsafe.java:1079) > > at > java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > > at > > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > > at > > > java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > > at > org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:103) > > at > org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:31) > > at > > > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) > > at > > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) > > at > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > > at > > > org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) > > at java.lang.Thread.run(Thread.java:836) > > > > Also one general question : Isn't the persistent connection mechanism > > counter productive with nio handling ? > > No. > > > Because i will be never able to achieve high throughput if persistent > > connections are established. > > Also incorrect. > > > Only way for me to achieve is to increase the number of threads. > > Given you have a large number of idle threads, that statement does not > seem logical. > > > We have 8G instances for 200 threads. I don't know how many threads we > can > > scale up to. > > That is highly application dependent. I've seen apps that can choke a > server with 8G RAM and just 5 concurrent requests and apps that are > barely loading a server with 1G RAM and over 1500 concurrent requests. 
>
> There is something else going wrong in your system if the system freezes
> with Tomcat threads in the idle state.
>
> What other components are there between the clients and Tomcat (proxies,
> firewalls, etc.)?
>
> If you provide a complete thread dump for when the system is hung we can
> try and provide additional pointers.
>
> Mark
>
> > Best Regards,
> > Saurav
> >
> > On Mon, Dec 3, 2018 at 4:14 PM Mark Thomas <ma...@apache.org> wrote:
> >
> >> On 03/12/2018 09:24, Saurav Sarkar wrote:
> >>> Hi All,
> >>>
> >>> I want to know the connector's protocol which is being used in my
> >>> tomcat 8 container and clear the behaviour of request handling
> >>>
> >>> We have a cloud foundry based application running on java build pack.
> >>>
> >>> Below is the connector settings in server.xml
> >>>
> >>> <Connector port="${http.port}"
> >>>            bindOnInit="false"
> >>>            compression="on"
> >>>            compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json"
> >>>            allowTrace="false"
> >>>            address="${connector.address}"
> >>>            maxHttpHeaderSize="8192"
> >>>            maxThreads="200"
> >>>            server="tomcat" />
> >>>
> >>> It does not show any connector details.
> >>>
> >>> My thread dumps shows http-nio-exec threads and reaches to maximum of
> >>> 200 threads.
> >>>
> >>> Does that mean Nio connector is used ?
> >>
> >> Yes.
> >>
> >>> But i am not able to address more than 200 threads . I understand that
> >>> if Nio connector is used then maxThreads values be ignored and i can
> >>> at least accept more requests.
> >>
> >> maxThreads is not ignored in your configuration.
> >>
> >> That configuration will support a maximum of 200 concurrent requests and
> >> 10000 concurrent connections.
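[Editor's note: for reference, a connector that states these limits explicitly might look like the following. This is a sketch based on the server.xml quoted above; the `protocol` value is the explicit NIO implementation class, and `maxConnections="10000"` is the NIO default Tomcat 8 applies when the attribute is omitted.]

```xml
<Connector port="${http.port}"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           maxConnections="10000"
           bindOnInit="false"
           address="${connector.address}"
           maxHttpHeaderSize="8192"
           server="tomcat" />
```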
> >>
> >> Note that with HTTP keep-alive, connections are often idle (not currently
> >> processing a request), so concurrent connections > concurrent requests.
> >>
> >> Mark
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> >> For additional commands, e-mail: users-h...@tomcat.apache.org
> >>
> >