Re: Tomcat misuse of Servlet 3.0's asynchronous support

2017-09-19 Thread Yasser Zamani


On 9/13/2017 10:25 PM, Yasser Zamani wrote:
> 
> 
> On 9/13/2017 9:49 PM, Mark Thomas wrote:
>> On 05/09/2017 19:56, Yasser Zamani wrote:
>>> Thanks a lot Mark!
>>>
>>> Yes, I knew these, and I had previously tested that a Tomcat with 400 max
>>> threads is "scalability-wise equal" to a Tomcat with 200 max threads plus
>>> Servlet 3's async API and an application thread pool of size 200.
>>>
>>> However, so far I thought Oracle's docs were like standards and Tomcat
>>> had to satisfy them :)
>>
>> Tomcat implements the Servlet, JSP, UEL, EL and JASPIC specifications.
>>
>> The document you refer to is not part of those specs and, as I said, it
>> is misleading at best.
>>
>>>> That does increase scalability
>>>> because rather than having a bunch of threads waiting for these
>>>> non-blocking operations to complete, those threads can do useful work.
>>>
>>> But Tomcat blocks another thread from the container's thread pool to
>>> wait or lock on that non-blocking operation's response!
>>
>> As I said, if the async API is used to move a blocking operation from
>> one thread to another, that won't improve scalability.
>>
>> You are only going to improve scalability if you move non-blocking
>> operations from the Servlet.service() method (which has to block waiting
>> for the non-blocking operation to complete) to the async API.
>> Essentially, if you leave it in the service() method you have one thread
>> allocated to each non-blocking operation.
>>
>> If the Servlet async API is used, the non-blocking operation is started
>> and the container thread continues to complete the service method. The
>> container thread is now free to do other useful work and the
>> non-blocking operation isn't using any thread at all - and won't until
>> the operation completes at which point it will require a thread to
>> perform the dispatch back to the container and then to process that
>> dispatch.
>>
> 
> :S I apologize; I know I'm failing to understand some points again.
> Please excuse me for bothering you.
> 
> The non-blocking operation isn't using any thread at all?! So which 
> thread executes my Runnable passed to startAsync? In my initial 
> mail, I saw Tomcat pass it to its thread pool. If I define my 
> non-blocking operation (that Runnable) as 
> System.out.println(Thread.currentThread().getName()), I'll see 
> "http-nio-exec-XX", which means Tomcat's thread pool has allocated a 
> thread for my Runnable non-blocking operation. However, maybe you mean a 
> real I/O wait as the non-blocking operation.
> 
> Let's forget scalability. Could you please describe an example business 
> scenario, operational and functional, which the Servlet async API 
> improves or resolves in a way that could not be achieved with NIO 
> and increased maxThreads and maxConnections?
> 

Thank you, Mark. I wish to close and conclude this thread as below:

To whom it may concern: I found an excellent article at [1] which nicely 
describes the difference between increasing pool sizes and using 
Servlet 3's async API. It says:

> We have solved the problem of the HTTP thread pool exhaustion, but the number 
> of required threads to handle the requests has not improved: we are just 
> spawning background threads to handle the requests. In terms of simultaneous 
> running thread count this should be equivalent to simply increase the HTTP 
> thread pool size: under heavy load the system will not scale.
> 
> Effective usage of Asynchronous Servlets
> In order to demonstrate the powerful features offered by asynchronous 
> servlets, we will implement the following use case:
> 
> * There is a file which size is 100 bytes that may be streamed to remote 
> clients
> 
> * We will have a background thread pool with a predefined number of threads 
> that will be responsible to stream the file to remote clients
> 
> * The HTTP threads will handle incoming requests and immediately pass them to 
> the background thread pool
> 
> * The background threads will send chunks of 10 bytes to the remote clients 
> in a round robin fashion

Best Regards,
Yasser.

[1] http://www.byteslounge.com/tutorials/asynchronous-servlets-in-java

>> In this case a container thread is only required up to the point where
>> the non-blocking operation starts and from the point where it
>> completes. While the non-blocking operation is in progress, the
>> container thread is free to do other useful work.
>>
>> Mark
>>
>>

Re: Tomcat misuse of Servlet 3.0's asynchronous support

2017-09-13 Thread Yasser Zamani


On 9/13/2017 9:49 PM, Mark Thomas wrote:
> On 05/09/2017 19:56, Yasser Zamani wrote:
>> Thanks a lot Mark!
>>
>> Yes, I knew these, and I had previously tested that a Tomcat with 400 max
>> threads is "scalability-wise equal" to a Tomcat with 200 max threads plus
>> Servlet 3's async API and an application thread pool of size 200.
>>
>> However, so far I thought Oracle's docs were like standards and Tomcat
>> had to satisfy them :)
> 
> Tomcat implements the Servlet, JSP, UEL, EL and JASPIC specifications.
> 
> The document you refer to is not part of those specs and, as I said, it
> is misleading at best.
> 
>>> That does increase scalability
>>> because rather than having a bunch of threads waiting for these
>>> non-blocking operations to complete, those threads can do useful work.
>>
>> But Tomcat blocks another thread from the container's thread pool to
>> wait or lock on that non-blocking operation's response!
> 
> As I said, if the async API is used to move a blocking operation from
> one thread to another, that won't improve scalability.
> 
> You are only going to improve scalability if you move non-blocking
> operations from the Servlet.service() method (which has to block waiting
> for the non-blocking operation to complete) to the async API.
> Essentially, if you leave it in the service() method you have one thread
> allocated to each non-blocking operation.
> 
> If the Servlet async API is used, the non-blocking operation is started
> and the container thread continues to complete the service method. The
> container thread is now free to do other useful work and the
> non-blocking operation isn't using any thread at all - and won't until
> the operation completes at which point it will require a thread to
> perform the dispatch back to the container and then to process that
> dispatch.
> 

:S I apologize; I know I'm failing to understand some points again.
Please excuse me for bothering you.

The non-blocking operation isn't using any thread at all?! So which 
thread executes my Runnable passed to startAsync? In my initial 
mail, I saw Tomcat pass it to its thread pool. If I define my 
non-blocking operation (that Runnable) as 
System.out.println(Thread.currentThread().getName()), I'll see 
"http-nio-exec-XX", which means Tomcat's thread pool has allocated a 
thread for my Runnable non-blocking operation. However, maybe you mean a 
real I/O wait as the non-blocking operation.
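The observation above can be reproduced without Tomcat. A minimal plain-Java sketch (the pool and the thread name "demo-exec-1" are made up for illustration): a Runnable handed to a pool runs on the pool's own named thread, just as a task passed to AsyncContext.start(...) reports a container thread name like "http-nio-exec-XX".

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A Runnable submitted to a pool executes on the pool's own named thread,
// not on the caller's thread.
public class WhoRunsIt {
    static String runAndGetThreadName() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                1, r -> new Thread(r, "demo-exec-1")); // named like a container thread
        try {
            // The lambda runs later, on the pool thread, not on the caller's thread.
            return pool.submit(() -> Thread.currentThread().getName()).get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAndGetThreadName()); // prints demo-exec-1
    }
}
```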

Let's forget scalability. Could you please describe an example business 
scenario, operational and functional, which the Servlet async API 
improves or resolves in a way that could not be achieved with NIO 
and increased maxThreads and maxConnections?

> In this case a container thread is only required up to the point where
> the non-blocking operation starts and from the point where it
> completes. While the non-blocking operation is in progress, the
> container thread is free to do other useful work.
> 
> Mark
> 
> 
>> so I do
>> not agree that "because those threads can do useful work" implies "does
>> increase scalability". I think Servlet 3's async API here may increase
>> scalability if and only if the released thread also releases some
>> resources that other threads may be blocked on, and if and only
>> if the new thread does not lock more resources than the original one.
>> **Actually, as I understand it, using Servlet 3's async API, compared with
>> Tomcat's NIO with a greater maxThreads, has no gain beyond
>> what I wrote above, plus preventing deadlocks. wdyt?**
>>
>> On 9/5/2017 11:57 AM, Mark Thomas wrote:
>>> On 03/09/17 09:01, Yasser Zamani wrote:
>>>> Hi there,
>>>>
>>>> At [1] we read:
>>>>
>>>>>  Web containers in application servers normally use a server thread
>>>>>  per client request. Under heavy load conditions, containers need a
>>>>>  large amount of threads to serve all the client requests.
>>>>>  Scalability limitations include running out of memory or
>>>>>  *exhausting the pool of container threads*. To create scalable web
>>>>>  applications, you must ensure that no threads associated with a
>>>>>  request are sitting idle, so *the container can use them to
>>>>>  process new requests*. Asynchronous processing refers to
>>>>>  *assigning these blocking operations to a new thread and returning
>>>>>  the thread associated with the 

Re: Tomcat misuse of Servlet 3.0's asynchronous support

2017-09-13 Thread Yasser Zamani


On 9/5/2017 11:26 PM, Yasser Zamani wrote:
> Thanks a lot Mark!
> 
> Yes, I knew these, and I had previously tested that a Tomcat with 400 max 
> threads is "scalability-wise equal" to a Tomcat with 200 max threads plus 
> Servlet 3's async API and an application thread pool of size 200.
> 
> However, so far I thought Oracle's docs were like standards and Tomcat 
> had to satisfy them :)
> 
>> That does increase scalability
>> because rather than having a bunch of threads waiting for these
>> non-blocking operations to complete, those threads can do useful work.
> 
> But Tomcat blocks another thread from the container's thread pool to 
> wait or lock on that non-blocking operation's response! So I do 
> not agree that "because those threads can do useful work" implies "does 
> increase scalability". I think Servlet 3's async API here may increase 
> scalability if and only if the released thread also releases some 
> resources that other threads may be blocked on, and if and only 
> if the new thread does not lock more resources than the original one. 
> **Actually, as I understand it, using Servlet 3's async API, compared with 
> Tomcat's NIO with a greater maxThreads, has no gain beyond 
> what I wrote above, plus preventing deadlocks. wdyt?**
> 

Have you seen this, please? Until now, I have concluded that with Tomcat's 
configurable maxThreads, maxConnections and NIO, we don't need Servlet 
3's async API anymore, as we can instead simply increase them. Right?
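The conclusion above can be illustrated with a toy model in plain Java (no Tomcat; the pool sizes and 100 ms "blocking operation" are made up): with 8 blocking tasks, a single pool of 4 threads and a "handoff" arrangement of 4 dispatching threads plus 4 worker threads both need at least two 100 ms waves, so moving the blocking work to another pool buys nothing by itself.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy model: handing blocking work from one pool to another does not beat
// simply running it on one (equally large) pool.
public class HandoffModel {
    static final int TASKS = 8, POOL = 4;

    static void block() { // stand-in for a blocking operation
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    // Variant 1: blocking work runs directly on the (enlarged) container pool.
    static long directMillis() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(POOL);
        long t0 = System.nanoTime();
        for (int i = 0; i < TASKS; i++) pool.execute(HandoffModel::block);
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return (System.nanoTime() - t0) / 1_000_000;
    }

    // Variant 2: "container" threads only dispatch; "workers" do the blocking.
    static long handoffMillis() throws InterruptedException {
        ExecutorService container = Executors.newFixedThreadPool(POOL);
        ExecutorService workers = Executors.newFixedThreadPool(POOL);
        long t0 = System.nanoTime();
        for (int i = 0; i < TASKS; i++)
            container.execute(() -> workers.execute(HandoffModel::block));
        container.shutdown();
        container.awaitTermination(10, TimeUnit.SECONDS);
        workers.shutdown();
        workers.awaitTermination(10, TimeUnit.SECONDS);
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("direct=" + directMillis() + "ms handoff=" + handoffMillis() + "ms");
    }
}
```

Both variants take roughly 200 ms: the wall-clock cost is set by how many threads actually wait on the blocking operation, not by which pool they live in.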

> On 9/5/2017 11:57 AM, Mark Thomas wrote:
>> On 03/09/17 09:01, Yasser Zamani wrote:
>>> Hi there,
>>>
>>> At [1] we read:
>>>
>>>>     Web containers in application servers normally use a server thread
>>>>     per client request. Under heavy load conditions, containers need a
>>>>     large amount of threads to serve all the client requests.
>>>>     Scalability limitations include running out of memory or
>>>>     *exhausting the pool of container threads*. To create scalable web
>>>>     applications, you must ensure that no threads associated with a
>>>>     request are sitting idle, so *the container can use them to
>>>>     process new requests*. Asynchronous processing refers to
>>>>     *assigning these blocking operations to a new thread and returning
>>>>     the thread associated with the request immediately to the 
>>>> container*.
>>>>
>>> I could not achieve this scalability in tomcat via calling
>>> `javax.servlet.AsyncContext.start(Runnable)`! I investigated the cause
>>> and found it at [2]:
>>>
>>>  public synchronized void asyncRun(Runnable runnable) {
>>>     ...
>>>  processor.execute(runnable);
>>>
>>> I mean `processor.execute(runnable)` uses the same thread pool whose
>>> duty is also to process new requests! Such usage makes things worse!
>>> i.e. not only does it not make the thread pool freer to process new
>>> requests, but it also adds overhead via thread switching!
>>>
>>> I think Tomcat must use another thread pool for such blocking operations
>>> and keep current thread pool free for new requests; It's the philosophy
>>> of Servlet 3.0's asynchronous support according to Oracle's
>>> documentation. wdyt?
>>
>> I think this is a good question that highlights a lot of
>> misunderstanding in this area. The quote above is misleading at best.
>>
>> There is no way that moving a blocking operation from the container
>> thread pool to some other thread will increase scalability any more than
>> simply increasing the size of the container thread pool.
>>
>> Consider the following:
>>
>> - If the system is not at capacity then scalability can be increased by
>>   increasing the size of the container thread pool
>>
>> - If the system is at capacity, the container thread pool will need to
>>   be reduced to create capacity for these 'other' blocking threads.
>>
>> - If too many resources are allocated to these 'other' blocking threads
>>   then scalability will be reduced because there will be idle 'other'
>>   blocking threads that could be doing useful work elsewhere such as
>>   processing incoming requests.
>>
>> - If too few resources are allocated to these 'other' blocking  threads
>>   then scalability will be reduced because a bottleneck will have been
>>   introduced.
>>

Re: BIO: Async servlet with it's own thread pool; but get connection refused!

2017-09-12 Thread Yasser Zamani
Ouch! maxConnections! You're right. I failed to take it into account 
correctly. Actually, I had confused connections with threads (because of 
maxThreads/maxConnections) and thought they were released when their 
thread was released :[ whereas a connection is kept alive until the 
response, and the thread pool is shared among all of them to service 
them on demand.
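For reference, the three limits discussed in this thread all live on Tomcat's Connector element. A sketch matching the (BIO,20,20,10) setup tested here (the port, connectionTimeout and redirectPort values are illustrative, not taken from this thread):

```xml
<!-- BIO connector with the limits used in this thread's test:
     20 request-processing threads, at most 20 open connections,
     and an accept backlog of 10. Connections beyond 20+10 are refused. -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11Protocol"
           maxThreads="20" maxConnections="20" acceptCount="10"
           connectionTimeout="20000" redirectPort="8443" />
```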

I'm really sorry that I bothered you a lot!

Thanks a ton for your replies!

Regards.

On 9/12/2017 1:51 PM, Mark Thomas wrote:
> On 12/09/17 10:00, Yasser Zamani wrote:
>>
>>
>> On 9/12/2017 1:17 AM, Mark Thomas wrote:
>>> On 07/09/17 23:30, Yasser Zamani wrote:
>>>> Thanks for your attention.
>>>>
>>>> Now I downloaded a fresh apache-tomcat-7.0.81-windows-x64 and changed
>>>> its connector in the same way (BIO,20,20,10). I get the same result,
>>>> fortunately :)
>>>>
>>>> OUTPUT:
>>>>
>>>> Using CATALINA_BASE:
>>>> "C:\Users\user\.IntelliJIdea2016.3\system\tomcat\Unnamed_Async-Servlet-Example_2"
>>>> Using CATALINA_HOME:
>>>> "C:\Users\user\Downloads\apache-tomcat-7.0.81-windows-x64-IJ\apache-tomcat-7.0.81"
>>>> Using CATALINA_TMPDIR:
>>>> "C:\Users\user\Downloads\apache-tomcat-7.0.81-windows-x64-IJ\apache-tomcat-7.0.81\temp"
>>>> Using JRE_HOME:"E:\jdk1.7.0_79"
>>>> INFO: Server version:Apache Tomcat/7.0.81
>>>> INFO: Server built:  Aug 11 2017 10:21:27 UTC
>>>> INFO: Server number: 7.0.81.0
>>>> INFO: OS Name:   Windows 8.1
>>>> INFO: OS Version:6.3
>>>> INFO: Architecture:  amd64
>>>> INFO: Java Home: E:\jdk1.7.0_79\jre
>>>> INFO: JVM Version:   1.7.0_79-b15
>>>> INFO: JVM Vendor:Oracle Corporation
>>>> INFO: CATALINA_BASE:
>>>> C:\Users\user\.IntelliJIdea2016.3\system\tomcat\Unnamed_Async-Servlet-Example_2
>>>>
>>>> Container MAX used threads: 10
>>>
>>> I see similar results.
>>>
>>> There looks to be things going on either in JMeter or at the network
>>> level I don't understand. I had to resort to drawing it out to get my
>>> head around what is happening.
>>>
>>
>> Sorry for bothering you,
>>
>> To examine whether things were going on in JMeter or at the network
>> level, I tested the same config (BIO,20,20,10) on Jetty. All 70 requests
>> returned successfully and the response time was ~20 seconds for all, as I expected.
>
> I'm fairly sure Jetty uses NIO, not BIO which would explain the
> differences you are observing.
>
>> Then to make myself sure, I tested same servlet but a sync one (removed
>> my own thread pool and asyncStart etc) on Jetty. Average response time
>> increased to 95s, as I expected.
>>
>>> The first 20 requests (10 seconds) are accepted and processed by Tomcat.
>>> The 21st request is accepted but then the acceptor blocks waiting for
>>> the connection count to reduce below 20 before proceeding.
>>>
>>
>> You have forgotten.
>
> No I haven't.
>
>> My configuration is maxThreads=maxConnections=20 (not
>> 10) and acceptCount=10. As it prints "Container MAX used threads: 10" so
>> it never reaches maxThreads. So why acceptor blocks ?!
>
> After the first 20 requests you have 20 connections so you hit the
> maxConnection limit. That is why the acceptor blocks and subsequent
> requests go into the accept queue.
>
>> I think although
>> I have an async servlet and my own thread pool, and although the
>> time-consuming processing (Thread.sleep) is inside my own thread pool,
>> Tomcat's container thread wrongly does not return to the thread pool
>
> Incorrect. The container thread does return to the container thread
> pool. It does so almost immediately. Given that there are 0.5 seconds
> between requests and that the time taken to process an incoming request,
> dispatch the request to your thread pool and return the container thread
> to the container thread pool is almost certainly less than 0.5 it is
> very likely that there is never more than one container thread active at
> any one point.
>
>> and fails to satisfy Servlet 3's async API!
>
> Also incorrect.
>
>>> Requests 22 to 31 are placed in the accept queue. We are now 15.5s into
>>> the test and the first request accepted won't finish processing for
>>> another 4.5 seconds.
>>>
>>> Requests 32 to 40 are dropped since the request queue is full.

Re: BIO: Async servlet with it's own thread pool; but get connection refused!

2017-09-12 Thread Yasser Zamani


On 9/12/2017 1:17 AM, Mark Thomas wrote:
> On 07/09/17 23:30, Yasser Zamani wrote:
>> Thanks for your attention.
>>
>> Now I downloaded a fresh apache-tomcat-7.0.81-windows-x64 and changed
>> its connector in the same way (BIO,20,20,10). I get the same result, fortunately :)
>>
>> OUTPUT:
>>
>> Using CATALINA_BASE:
>> "C:\Users\user\.IntelliJIdea2016.3\system\tomcat\Unnamed_Async-Servlet-Example_2"
>> Using CATALINA_HOME:
>> "C:\Users\user\Downloads\apache-tomcat-7.0.81-windows-x64-IJ\apache-tomcat-7.0.81"
>> Using CATALINA_TMPDIR:
>> "C:\Users\user\Downloads\apache-tomcat-7.0.81-windows-x64-IJ\apache-tomcat-7.0.81\temp"
>> Using JRE_HOME:"E:\jdk1.7.0_79"
>> INFO: Server version:Apache Tomcat/7.0.81
>> INFO: Server built:  Aug 11 2017 10:21:27 UTC
>> INFO: Server number: 7.0.81.0
>> INFO: OS Name:   Windows 8.1
>> INFO: OS Version:6.3
>> INFO: Architecture:  amd64
>> INFO: Java Home: E:\jdk1.7.0_79\jre
>> INFO: JVM Version:   1.7.0_79-b15
>> INFO: JVM Vendor:Oracle Corporation
>> INFO: CATALINA_BASE:
>> C:\Users\user\.IntelliJIdea2016.3\system\tomcat\Unnamed_Async-Servlet-Example_2
>>
>> Container MAX used threads: 10
>
> I see similar results.
>
> There looks to be things going on either in JMeter or at the network
> level I don't understand. I had to resort to drawing it out to get my
> head around what is happening.
>

Sorry for bothering you,

To examine whether things were going on in JMeter or at the network 
level, I tested the same config (BIO,20,20,10) on Jetty. All 70 requests 
returned successfully and the response time was ~20 seconds for all, as 
I expected.

Then, to make sure, I tested the same servlet but a synchronous one 
(removed my own thread pool, startAsync, etc.) on Jetty. The average 
response time increased to 95s, as I expected.

> The first 20 requests (10 seconds) are accepted and processed by Tomcat.
> The 21st request is accepted but then the acceptor blocks waiting for
> the connection count to reduce below 20 before proceeding.
>

You have forgotten. My configuration is maxThreads=maxConnections=20 (not 
10) and acceptCount=10. As it prints "Container MAX used threads: 10", 
it never reaches maxThreads. So why does the acceptor block?! I think 
although I have an async servlet and my own thread pool, and although 
the time-consuming processing (Thread.sleep) is inside my own thread 
pool, Tomcat's container thread wrongly does not return to the thread 
pool and fails to satisfy Servlet 3's async API!

> Requests 22 to 31 are placed in the accept queue. We are now 15.5s into
> the test and the first request accepted won't finish processing for
> another 4.5 seconds.
>
> Requests 32 to 40 are dropped since the request queue is full. We are
> now 20s into the test and the first request is about to complete
> processing. Oddly, JMeter doesn't report these as failed until some 35
> seconds later.
>
> Request 1 completes. This allows request 21 to proceed. The acceptor
> takes a connection from the accept queue (this appears to be FIFO).
> Request 41 enters the accept queue.
>
> This continues until request 10 completes, 30 starts processing and 50
> enters the accept queue.
>
> Next 11 completes, 41 starts processing and 51 enters the accept queue.
> This continues until 20 completes, 50 starts processing and 60 enters
> the accept queue.
>
> At this point there are 20 threads processing, 10 in the accept queue
> and no thread due to complete for another 10s.
>
> I'd expected requests 61 to 70 to be rejected. However, 65 to 70 are
> processed. It looks like there is some sort of timeout for acceptance or
> rejection in the accept queue.
>
> That explains the rejected requests.
>

I'm not smart enough to follow your analysis :) I just understand that 
when my servlet is completely async, I should not have any rejected 
requests (especially under such low load), like the Jetty result. As I 
said, the max container threads used by my app is 10 (half of the 
initial size 20), so why do I see rejected requests while the thread 
pool has 10 free threads to accept new requests?!
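Mark's accounting above can be modelled in a few lines of plain Java (a toy model, not Tomcat's actual acceptor): a Semaphore plays maxConnections and a bounded queue plays acceptCount, so if 40 connections arrive before any request completes, the capacity of 20 + 10 means the remaining 10 are refused regardless of how many threads are idle.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Semaphore;

// Toy model of the BIO connector limits discussed above: maxConnections
// permits in a Semaphore, an acceptCount-sized backlog, refusal beyond both.
public class AcceptModel {
    static int accepted, queued, rejected;

    public static void main(String[] args) {
        final int maxConnections = 20, acceptCount = 10, offered = 40;
        Semaphore connections = new Semaphore(maxConnections);
        BlockingQueue<Integer> backlog = new ArrayBlockingQueue<>(acceptCount);

        // 40 connections arrive while no request completes.
        for (int req = 1; req <= offered; req++) {
            if (connections.tryAcquire()) accepted++; // free connection slot
            else if (backlog.offer(req)) queued++;    // wait in accept queue
            else rejected++;                          // "connection refused"
        }
        System.out.println(accepted + "/" + queued + "/" + rejected); // 20/10/10
    }
}
```

The point of the model: refusal is driven by the connection count, not the thread count, which is why a mostly idle thread pool coexists with refused connections.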

> The other question is why maxThreads is reported as it is.
>
> The answer is that the thread pool never grows beyond its initial size
> of 10. A request comes in, it is processed by a container thread,
> dispatched to an async thread and then the container thread is returned
> to the pool to await the next request. Tomcat is able to do this because
> the container doesn't perform any I/O on the connection once it enters
> async mode until it is dispatched back

Re: BIO: Async servlet with it's own thread pool; but get connection refused!

2017-09-07 Thread Yasser Zamani


On 9/7/2017 12:15 PM, Guang Chao wrote:
> On Thu, Sep 7, 2017 at 3:59 AM, Yasser Zamani 
> wrote:
>
>> Hi there,
>>
>> I'm studying Servlet 3's async API using tomcat. I see following strange
>> behavior from tomcat in a very simple test app!
>>
>> I have following JMeter test plan:
>> Number of threads (users): 700
>> Ramp-Up period (in seconds): 23
>> Loop count: 1
>>
>> So JMeter generates 30.43 requests per second, i.e. 304.3 requests per 10
>> seconds. I'm planning to fill Tomcat's BIO pool and accept buffer :)
>>
>> I have an always-async test servlet which, on destroy, prints the
>> container's max used thread count. It prints 187 for me, which is lower
>> than 200 (Tomcat's default pool size), so I should not get any "connection
>> refused" but I do!
>>
>> I have simulated a blocking operation by a sleep of 10 seconds. When my
>> servlet gets a request, it quickly starts an async context and adds further
>> processing to my own thread pool (the container thread comes back to the
>> pool quickly, right?). My own thread pool size is 310, which is greater
>> than 304.3 (requests in 10 seconds), so it never fills.
>>
>> I've tested several times. Tomcat successfully returns all requests
>> below the 326th but fails 102 requests from the 326th to the 700th with
>> "connection refused" and a few with "connection reset".
>>
>> Why?! My own thread pool does the jobs and Tomcat's pool is free (my
>> servlet uses 187 threads of tomcat at max).
>>
>> Thanks in advance!
>>
>> JMETER RESULT of RESPONSE TIMES:
>> Max: 60 seconds (lower than the Tomcat and asyncContext timeouts)
>> MIN: 10 seconds
>> AVG: 37 seconds
>> ERR: 15%
>>
>> CONFIGURATIONS:
>>
>> Server.xml
>> > connectionTimeout="12"
>> redirectPort="7743" />
>>
>> Async.java
>>
>> package com.sstr.example;
>>
>> import javax.servlet.*;
>> import javax.servlet.annotation.WebInitParam;
>> import javax.servlet.annotation.WebServlet;
>> import javax.servlet.http.HttpServlet;
>> import javax.servlet.http.HttpServletRequest;
>> import javax.servlet.http.HttpServletResponse;
>> import java.io.IOException;
>> import java.util.concurrent.ExecutorService;
>> import java.util.concurrent.Executors;
>> import java.util.concurrent.ThreadFactory;
>>
>> @WebServlet(
>>  name = "async",
>>  value = {"/async"},
>>  asyncSupported = true,
>>  initParams = {
>>  @WebInitParam(name = "JobPoolSize", value = "310")
>>  }
>> )
>> public class Async extends HttpServlet {
>>
>>  public final int REQUEST_TIMEOUT = 12;
>>  private ExecutorService exe;
>>
>>  @Override
>>  public void init() throws ServletException {
>>  int size = Integer.parseInt(getInitParameter("JobPoolSize"));
>>  exe = Executors.newFixedThreadPool(
>>  size,
>>  new ThreadFactory() {
>>  @Override
>>  public Thread newThread(Runnable r) {
>>  return new Thread(r, "Async Processor");
>>  }
>>  }
>>  );
>>  }
>>
>>  @Override
>>  protected void doGet(HttpServletRequest req, HttpServletResponse
>> resp) throws ServletException, IOException {
>>  final AsyncContext context = req.startAsync();
>>  context.setTimeout(REQUEST_TIMEOUT);
>>  exe.execute(new ContextExecution(context,
>> Thread.currentThread().getName()));
>>  }
>>
>
> I'm not 100% sure, but it seems the doGet method code here is not correct.

Maybe you mean I should call startAsync() inside the servlet's own 
thread pool rather than on Tomcat's thread?

>
>
>>
>>  @Override
>>  public void destroy() {
>>  System.out.println("Container MAX used threads: " + threadCount);
>>  exe.shutdown();
>>  }
>>
>>  int threadCount = 0;
>>  class ContextExecution implements Runnable {
>>
>>  final AsyncContext context;
>>  final String containerThreadName;
>>
>>  public ContextExecution(AsyncContext context, String
>> containerThreadName) {
>>   

Re: BIO: Async servlet with it's own thread pool; but get connection refused!

2017-09-07 Thread Yasser Zamani
Thanks for your attention.

Now I downloaded a fresh apache-tomcat-7.0.81-windows-x64 and changed 
its connector in the same way (BIO,20,20,10). I get the same result, fortunately :)

OUTPUT:

Using CATALINA_BASE: 
"C:\Users\user\.IntelliJIdea2016.3\system\tomcat\Unnamed_Async-Servlet-Example_2"
Using CATALINA_HOME: 
"C:\Users\user\Downloads\apache-tomcat-7.0.81-windows-x64-IJ\apache-tomcat-7.0.81"
Using CATALINA_TMPDIR: 
"C:\Users\user\Downloads\apache-tomcat-7.0.81-windows-x64-IJ\apache-tomcat-7.0.81\temp"
Using JRE_HOME:"E:\jdk1.7.0_79"
INFO: Server version:Apache Tomcat/7.0.81
INFO: Server built:  Aug 11 2017 10:21:27 UTC
INFO: Server number: 7.0.81.0
INFO: OS Name:   Windows 8.1
INFO: OS Version:6.3
INFO: Architecture:  amd64
INFO: Java Home: E:\jdk1.7.0_79\jre
INFO: JVM Version:   1.7.0_79-b15
INFO: JVM Vendor:Oracle Corporation
INFO: CATALINA_BASE: 
C:\Users\user\.IntelliJIdea2016.3\system\tomcat\Unnamed_Async-Servlet-Example_2

Container MAX used threads: 10

Sincerely Yours,
Yasser.

On 9/8/2017 2:30 AM, Mark Thomas wrote:
> On 07/09/17 22:22, Yasser Zamani wrote:
>> At first thanks a lot for your reply!
>>
>> On 9/7/2017 1:43 PM, Mark Thomas wrote:
>>> On 06/09/17 20:59, Yasser Zamani wrote:
>>>> Hi there,
>>>>
>>>> I'm studying Servlet 3's async API using tomcat. I see following strange
>>>> behavior from tomcat in a very simple test app!
>>>
>>> You are also using the BIO connector which, since it is blocking,
>>> doesn't offer any benefits when using async. You'd be better off with
>>> the NIO connector.
>>
>> Yes, I know, but currently it's not important to me. I am studying
>> Servlet 3's async API and BIO keeps it simpler to study and focus only
>> on it (with NIO I cannot tell whether something is due to Servlet 3's
>> async API or to Tomcat's NIO).
>>
>>>
>>> You haven't told us which Tomcat version you are using. Since you are
>>> using BIO that narrows it down a bit to 7.0.x or 8.0.x but that is still
>>> a lot of possibilities.
>>
>> It's 7.0.47
>
> You are unlikely to get much interest on this list until you upgrade to
> the latest stable 7.0.x (or 8.0.x). So much has changed in the ~4 years
> since 7.0.47 that there isn't much value in investigating this. If you
> see the same or similar issues with 7.0.81, that would be more interesting.
>
> Mark
>
>
>>
>>>
>>> Neither have you told us what operating system you are using. My
>>> experience of JMeter under load, particularly on Windows, is that you
>>> see strange behaviour and it can be hard to figure out the interaction
>>> of JMeter, the OS network stack and Tomcat.
>>>
>>> You also haven't told us what hardware this test is running on.
>>> Particularly the number of cores available.
>>>
>>
>> OS Name  Microsoft Windows 8.1 Enterprise
>> System Type  x64-based PC
>> ProcessorIntel(R) Core(TM) i3-4130 CPU @ 3.40GHz, 3400 Mhz, 2 Core(s),
>> 4 Logical Processor(s)
>>
>>
>>>> I have following JMeter test plan:
>>>> Number of threads (users): 700
>>>> Ramp-Up period (in seconds): 23
>>>> Loop count: 1
>>>>
>>>> So JMeter generates 30.43 requests per second and 304.3 requests per 10
>>>> seconds. I'm planning to full tomcat's BIO pool and accept buffer :)
>>>>
>>>> I have an always async test servlet which on destroy, I print tomcat
>>>> container max used threads count. It prints 187 for me which is lower
>>>> than 200 (tomcat default pool size) so I should not get any "connection
>>>> refuse" but I get!
>>>
>>> There are all sorts of possible reasons for that. I'd suggest scaling
>>> down the test. Limit Tomcat to 20 threads. Reduce the load similarly.
>>> Increase the sleep time. You want to ensure that the only limit you are
>>> hitting is the one you are trying to hit.
>>>
>>
>> Previously I also tested very low loads, but again, as you suggested, I
>> tested the following low-load configuration and got even worse results!!
>> Tomcat successfully returns requests below the 29th but fails 25
>> requests from the 29th to the 70th. However, this time all failures are
>> "connection refused" and there aren't any "connection reset", whilst the
>> program output is "Container MAX used threads: 10"!!
>>
>> CONF

Re: BIO: Async servlet with it's own thread pool; but get connection refused!

2017-09-07 Thread Yasser Zamani
At first thanks a lot for your reply!

On 9/7/2017 1:43 PM, Mark Thomas wrote:
> On 06/09/17 20:59, Yasser Zamani wrote:
>> Hi there,
>>
>> I'm studying Servlet 3's async API using tomcat. I see following strange
>> behavior from tomcat in a very simple test app!
>
> You are also using the BIO connector which, since it is blocking,
> doesn't offer any benefits when using async. You'd be better off with
> the NIO connector.

Yes, I know, but currently it's not important to me. I am studying 
Servlet 3's async API and BIO keeps it simpler to study and focus only 
on it (with NIO I cannot tell whether something is due to Servlet 3's 
async API or to Tomcat's NIO).

>
> You haven't told us which Tomcat version you are using. Since you are
> using BIO that narrows it down a bit to 7.0.x or 8.0.x but that is still
> a lot of possibilities.

It's 7.0.47

>
> Neither have you told us what operating system you are using. My
> experience of JMeter under load, particularly on Windows, is that you
> see strange behaviour and it can be hard to figure out the interaction
> of JMeter, the OS network stack and Tomcat.
>
> You also haven't told us what hardware this test is running on.
> Particularly the number of cores available.
>

OS Name Microsoft Windows 8.1 Enterprise
System Type x64-based PC
Processor   Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz, 3400 Mhz, 2 Core(s), 
4 Logical Processor(s)


>> I have following JMeter test plan:
>> Number of threads (users): 700
>> Ramp-Up period (in seconds): 23
>> Loop count: 1
>>
>> So JMeter generates 30.43 requests per second, i.e. 304.3 requests per 10
>> seconds. I'm planning to fill Tomcat's BIO pool and accept buffer :)
>>
>> I have an always async test servlet which on destroy, I print tomcat
>> container max used threads count. It prints 187 for me which is lower
>> than 200 (tomcat default pool size) so I should not get any "connection
>> refuse" but I get!
>
> There are all sorts of possible reasons for that. I'd suggest scaling
> down the test. Limit Tomcat to 20 threads. Reduce the load similarly.
> Increase the sleep time. You want to ensure that the only limit you are
> hitting is the one you are trying to hit.
>

I had previously tested very low loads as well, but I tested the 
following low-load configuration as you suggested and got even worse 
results!! Tomcat successfully returns from the requests below the 29th 
but fails 25 of the requests from the 29th to the 70th. However, this 
time all failures are "connection refused" and there are no "connection 
reset" errors, while the program output is "Container MAX used threads: 10"!!

CONFIGURATION #2:

Server.xml


JMeter
Number of threads (users): 70
Ramp-Up period (in seconds): 35 (40 requests per 20 seconds)
Loop count: 1

My async servlet
Async Sleep Time: 20 seconds (ensures 40 concurrent requests)
Its own thread pool size: 41 (greater than 40, so it never fills up)

JMETER RESULT of RESPONSE TIMES #2:
Max: 38 seconds (lower than the Tomcat and asyncContext timeouts)
MIN: 20 seconds
AVG: 18 seconds (skewed low by the failed requests)
ERR: 36%

OUTPUT:

Container MAX used threads: 10

Thanks in advance!

> Mark
>
>
>> I have simulated a blocking operation by a sleep for 10 seconds. When my
>> servlet gets a request, it quickly starts an async and add further
>> processing to my own thread pool (container thread comes back to pool
>> quickly, right). My own thread pool size is 310 which is greater than
>> 304.3 (requests in 10 seconds) so never full.
>>
>> I've tested several times. Tomcat successfully returns from all requests
>> below 326th but fails 102 requests from 326th to 700th with "connection
>> refuse" and afew with "connection reset".
>>
>> Why?! My own thread pool does the jobs and Tomcat's pool is free (my
>> servlet uses 187 threads of tomcat at max).
>>
>> Thanks in advance!
>>
>> JMETER RESULT of RESPONSE TIMES:
>> Max: 60 seconds (lower then tomcat and asyncContext timeout)
>> MIN: 10 seconds
>> AVG: 37 seconds
>> ERR: 15%
>>
>> CONFIGURATIONS:
>>
>> Server.xml
>> > connectionTimeout="12"
>> redirectPort="7743" />
>>
>> Async.java
>>
>> package com.sstr.example;
>>
>> import javax.servlet.*;
>> import javax.servlet.annotation.WebInitParam;
>> import javax.servlet.annotation.WebServlet;
>> import javax.servlet.http.HttpServlet;
>> import javax.servlet.http.HttpServletRequest;
>> import javax.servlet.http.HttpServletResponse;
>> import java.io.IOExceptio

BIO: Async servlet with its own thread pool; but getting connection refused!

2017-09-06 Thread Yasser Zamani
Hi there,

I'm studying Servlet 3's async API using Tomcat, and I see the 
following strange behavior from Tomcat in a very simple test app!

I have following JMeter test plan:
Number of threads (users): 700
Ramp-Up period (in seconds): 23
Loop count: 1

So JMeter generates 30.43 requests per second, i.e. 304.3 requests per 
10 seconds. I'm planning to fill Tomcat's BIO pool and accept buffer :)

I have an always-async test servlet which, on destroy, prints the 
maximum number of container threads used. It prints 187, which is lower 
than 200 (Tomcat's default pool size), so I should not get any 
"connection refused" errors, but I do!

I have simulated a blocking operation with a 10-second sleep. When my 
servlet gets a request, it quickly starts an async context and hands 
further processing to my own thread pool (so the container thread goes 
back to its pool quickly, right?). My own thread pool size is 310, which 
is greater than 304.3 (requests per 10 seconds), so it never fills up.

I've tested several times. Tomcat successfully returns from all requests 
below the 326th but fails 102 requests from the 326th to the 700th with 
"connection refused" and a few with "connection reset".

Why?! My own thread pool does the jobs and Tomcat's pool stays free (my 
servlet uses at most 187 of Tomcat's threads).

Thanks in advance!

JMETER RESULT of RESPONSE TIMES:
Max: 60 seconds (lower than the Tomcat and asyncContext timeouts)
MIN: 10 seconds
AVG: 37 seconds
ERR: 15%

CONFIGURATIONS:

Server.xml


Async.java

package com.sstr.example;

import javax.servlet.*;
import javax.servlet.annotation.WebInitParam;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

@WebServlet(
        name = "async",
        value = {"/async"},
        asyncSupported = true,
        initParams = {
                @WebInitParam(name = "JobPoolSize", value = "310")
        }
)
public class Async extends HttpServlet {

    // Async timeout in milliseconds (120 seconds, above the longest observed response)
    public final int REQUEST_TIMEOUT = 120000;
    private ExecutorService exe;

    @Override
    public void init() throws ServletException {
        int size = Integer.parseInt(getInitParameter("JobPoolSize"));
        exe = Executors.newFixedThreadPool(
                size,
                new ThreadFactory() {
                    @Override
                    public Thread newThread(Runnable r) {
                        return new Thread(r, "Async Processor");
                    }
                }
        );
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        final AsyncContext context = req.startAsync();
        context.setTimeout(REQUEST_TIMEOUT);
        // Hand the blocking work to the application's own pool so the
        // container thread returns to Tomcat's pool immediately.
        exe.execute(new ContextExecution(context, Thread.currentThread().getName()));
    }

    @Override
    public void destroy() {
        System.out.println("Container MAX used threads: " + threadCount);
        exe.shutdown();
    }

    // Rough high-water mark of container threads seen; updated without
    // synchronization, which is good enough for this experiment.
    int threadCount = 0;

    class ContextExecution implements Runnable {

        final AsyncContext context;
        final String containerThreadName;

        public ContextExecution(AsyncContext context, String containerThreadName) {
            this.context = context;
            this.containerThreadName = containerThreadName;
        }

        @Override
        public void run() {
            try {
                // Container thread names end in "-<n>"; track the highest number seen.
                int threadNumber = Integer.parseInt(containerThreadName.substring(
                        containerThreadName.lastIndexOf('-') + 1));
                if (threadNumber > threadCount) {
                    threadCount = threadNumber;
                }

                // Simulate a time-consuming (blocking) task: sleep 10 seconds
                Thread.sleep(10 * 1000);

                ServletResponse resp = context.getResponse();
                if (resp != null) {
                    resp.getWriter().write("Ok");
                }

                context.complete();
            } catch (Exception e) {
                // Handle ?
            }
        }
    }
}

OUTPUT:

Container MAX used threads: 187


Re: Tomcat misuse of Servlet 3.0's asynchronous support

2017-09-05 Thread Yasser Zamani
Thanks a lot, Mark!

Yes, I knew this, and I had previously tested that a Tomcat with 400 max 
threads is "equal in scalability" to a Tomcat with 200 max threads 
combined with Servlet 3's async API and an application thread pool of size 200.

However, until now I thought Oracle's docs were like standards and 
Tomcat had to satisfy them :)

> That does increase scalability
> because rather than having a bunch of threads waiting for these
> non-blocking operations to complete, those threads can do useful work.

But Tomcat blocks another thread from the container's thread pool to 
wait or lock on that non-blocking operation's response! So I do not 
agree that "because those threads can do useful work" it "does increase 
scalability". I think Servlet 3's async API can increase scalability 
here if and only if the released thread also releases resources that 
other threads are blocked on, and if and only if the new thread does 
not lock more resources than the original one. 
**Actually, as I understand it, using Servlet 3's async API, compared 
with Tomcat's NIO with a larger max threads setting, has no gain beyond 
what I wrote above, plus preventing deadlocks. wdyt?**
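This hand-off argument can be made concrete with a minimal plain-Java sketch (the class name, pool sizes, and timings here are mine, not from the thread): when blocking work is merely handed from a "container" pool to an "application" pool, the number of tasks that can block concurrently is capped by the application pool alone, so the hand-off gains nothing over one suitably sized pool.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class BlockingHandoffDemo {
    static final AtomicInteger inFlight = new AtomicInteger();
    static final AtomicInteger maxInFlight = new AtomicInteger();

    // Stand-in for the 10-second blocking operation (shortened to 50 ms).
    static void blockingWork() throws InterruptedException {
        int now = inFlight.incrementAndGet();
        maxInFlight.accumulateAndGet(now, Math::max);
        Thread.sleep(50);
        inFlight.decrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        // The "container" pool hands every request straight to an "application" pool.
        ExecutorService container = Executors.newFixedThreadPool(2);
        ExecutorService app = Executors.newFixedThreadPool(2);
        CountDownLatch done = new CountDownLatch(20);
        for (int i = 0; i < 20; i++) {
            container.execute(() -> app.execute(() -> {
                try {
                    blockingWork();
                } catch (InterruptedException ignored) {
                } finally {
                    done.countDown();
                }
            }));
        }
        done.await();
        // At most 2 tasks (the application pool's size) ever block at once:
        // the hand-off freed the container threads but bought no more
        // concurrency than a single 4-thread pool blocking directly would.
        System.out.println("max concurrent blocking tasks: " + maxInFlight.get());
        container.shutdown();
        app.shutdown();
    }
}
```

The same 4 threads arranged as one pool of 4 would allow 4 concurrent blocking tasks, which is exactly Mark's point: splitting the pool cannot beat simply enlarging it.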

On 9/5/2017 11:57 AM, Mark Thomas wrote:
> On 03/09/17 09:01, Yasser Zamani wrote:
>> Hi there,
>>
>> At [1] we read:
>>
>>> Web containers in application servers normally use a server thread
>>> per client request. Under heavy load conditions, containers need a
>>> large amount of threads to serve all the client requests.
>>> Scalability limitations include running out of memory or
>>> *exhausting the pool of container threads*. To create scalable web
>>> applications, you must ensure that no threads associated with a
>>> request are sitting idle, so *the container can use them to
>>> process new requests*. Asynchronous processing refers to
>>> *assigning these blocking operations to a new thread and returning
>>> the thread associated with the request immediately to the container*.
>>>
>> I could not achieve this scalability in tomcat via calling
>> `javax.servlet.AsyncContext.start(Runnable)`! I investigated the cause
>> and found it at [2]:
>>
>>  public synchronized void asyncRun(Runnable runnable) {
>> ...
>>  processor.execute(runnable);
>>
>> I mean `processor.execute(runnable)` uses same thread pool which it's
>> also it's duty to process new requests! Such usage made things worse!
>> i.e. not only does not make thread pool more free to process new
>> requests, but also has an overhead via thread switching!
>>
>> I think Tomcat must use another thread pool for such blocking operations
>> and keep current thread pool free for new requests; It's the philosophy
>> of Servlet 3.0's asynchronous support according to Oracle's
>> documentation. wdyt?
>
> I think this is a good question that highlights a lot of
> misunderstanding in this area. The quote above is misleading at best.
>
> There is no way that moving a blocking operation from the container
> thread pool to some other thread will increase scalability any more then
> simply increasing the size of the container thread pool.
>
> Consider the following:
>
> - If the system is not at capacity then scalability can be increased by
>   increasing the size of the container thread pool
>
> - If the system as at capacity, the container thread pool will need to
>   be reduced to create capacity for these 'other' blocking threads.
>
> - If too many resources are allocated to these 'other' blocking threads
>   then scalability will be reduced because there will be idle 'other'
>   blocking threads that could be doing useful work elsewhere such as
>   processing incoming requests.
>
> - If too few resources are allocated to these 'other' blocking  threads
>   then scalability will be reduced because a bottleneck will have been
>   introduced.
>
> - The 'right' level of resources to allocate to these 'other' blocking
>   threads will vary over time.
>
> - Rather than try and solve the complex problem of balancing resources
>   across multiple thread pools, it is far simpler to use a single thread
>   pool, as Tomcat does.
>
>
> Servlet 3 async can only increase scalability where the Servlet needs to
> perform a genuinely non-blocking operation. Prior to the availability of
> the async API, the Servlet thread would have to block until the
> non-blocking operation completed. That does increase scalability
> because rather than having a bunch of threads waiting for these
> non-blocking operations to complete, those threads can do useful work.
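Mark's description of a genuinely non-blocking operation can be sketched in plain Java; here a single ScheduledExecutorService thread is a hypothetical stand-in for Tomcat's selector, and the scheduled callback plays the role of the dispatch back to the container:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NonBlockingDemo {
    static final AtomicInteger completed = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        // One timer thread stands in for the selector: while 1000 simulated
        // I/O waits are pending, no request is holding a worker thread.
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(1000);
        for (int i = 0; i < 1000; i++) {
            // Start the "non-blocking operation" and return immediately; the
            // callback is the dispatch that finally needs a thread again.
            timer.schedule(() -> {
                completed.incrementAndGet();
                done.countDown();
            }, 100, TimeUnit.MILLISECONDS);
        }
        done.await();
        System.out.println("completed: " + completed.get()); // prints "completed: 1000"
        timer.shutdown();
    }
}
```

Whether this helps in practice depends on a real non-blocking primitive (NIO, an async client) existing behind the scenes; a Thread.sleep moved to another pool, as in the servlet earlier in the thread, is not such a primitive.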

Tomcat misuse of Servlet 3.0's asynchronous support

2017-09-03 Thread Yasser Zamani
Hi there,

At [1] we read:

> Web containers in application servers normally use a server thread
> per client request. Under heavy load conditions, containers need a
> large amount of threads to serve all the client requests.
> Scalability limitations include running out of memory or
> *exhausting the pool of container threads*. To create scalable web
> applications, you must ensure that no threads associated with a
> request are sitting idle, so *the container can use them to
> process new requests*. Asynchronous processing refers to
> *assigning these blocking operations to a new thread and returning
> the thread associated with the request immediately to the container*.
>
I could not achieve this scalability in Tomcat by calling 
`javax.servlet.AsyncContext.start(Runnable)`! I investigated the cause 
and found it at [2]:

 public synchronized void asyncRun(Runnable runnable) {
...
 processor.execute(runnable);

I mean `processor.execute(runnable)` uses the same thread pool whose 
duty is also to process new requests! Such usage makes things worse: 
not only does it fail to free the thread pool to process new requests, 
it also adds thread-switching overhead!

I think Tomcat should use another thread pool for such blocking operations 
and keep the current thread pool free for new requests; that is the 
philosophy of Servlet 3.0's asynchronous support according to Oracle's 
documentation. wdyt?

[1] https://docs.oracle.com/javaee/7/tutorial/servlets012.htm
[2] 
https://github.com/apache/tomcat/blob/trunk/java/org/apache/coyote/AsyncStateMachine.java#L451

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org