On Fri, 9 Apr 2021 at 18:12, Peter Chamberlain <peter.chamberl...@htk.co.uk>
wrote:

>
>
> On Fri, 9 Apr 2021, 14:10 Christopher Schultz, <
> ch...@christopherschultz.net> wrote:
>
>> Peter,
>>
>> On 4/9/21 06:53, Peter Chamberlain wrote:
>> > Hello,
>> > I've been trying to understand the behaviour of Tomcat when handling
>> > internal redirects. I'm testing with Tomcat 9.0.38 on JDK 8
>> > (1.8.0_265). My main test cases have been two forwards to the same
>> > servlet followed by a response, or two redirects to the same servlet
>> > followed by a response. The servlet is as follows:
>> >
>> > @WebServlet(loadOnStartup = 1, value = "/")
>> > public class ConnectorLimitServlet extends HttpServlet {
>> >
>> >   @Override
>> >   protected void doGet(HttpServletRequest req, HttpServletResponse resp)
>> >       throws IOException, ServletException {
>> >     int number = Integer.parseInt(req.getParameter("number"));
>> >     // Fake some work done at each stage of processing
>> >     try { Thread.sleep(500); } catch (InterruptedException e) {}
>> >     resp.setContentType("text/plain");
>> >     if (number <= 1) {
>> >       resp.getWriter().write("Finished " + req.getServletPath());
>> >       return;
>> >     }
>> >     switch (req.getServletPath()) {
>> >       case "/redirect":
>> >         resp.sendRedirect(new URL(req.getScheme() + "://" +
>> >             req.getServerName() + ":" + req.getServerPort() +
>> >             req.getRequestURI() + "?number=" + (number - 1)).toString());
>> >         return;
>> >       case "/forward":
>> >         final String forwardAddress = "/forward?number=" + (number - 1);
>> >         getServletContext().getRequestDispatcher(forwardAddress)
>> >             .forward(req, resp);
>> >     }
>> >   }
>> > }
>> >
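>> > For reference, a test run just hits the servlet directly, e.g.
>> > (assuming the app is deployed at the root context on port 8080):
>> >
>> >   http://localhost:8080/forward?number=3   -> two internal forwards, then "Finished /forward"
>> >   http://localhost:8080/redirect?number=3  -> two 302 redirects, then "Finished /redirect"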
>> >
>> > It seems that under high load (1000 threads in JMeter), Tomcat will
>> > refuse some connections with the nio2 connector but not with nio;
>> > further, these failures happen considerably earlier than the
>> > configuration page would suggest. The configuration implies that if
>> > acceptCount is high enough for the number of connections, they will
>> > be queued before reaching the processing threads, so a small number
>> > of processing threads can be fed from a queue of connections, and
>> > connections shouldn't be refused until connectionTimeout is reached.
>> > But that is not what occurs; in fact, acceptCount seems to have very
>> > little effect.
>>
>> Are you testing on localhost, or over a real network connection? If a
>> real network, what kind of network? How many JMeter instances vs Tomcat
>> instances?
>>
>>
> Localhost on Windows, although similar behaviour has been seen across
> the network on Linux. This was an attempt to replicate a live issue with
> a minimal code approach.
>
>> > In short, my questions are:
>> > Why is the nio2 connector type worse at this than the nio type?
>>
>> Let's table that for now.
>>
>> > Why are connections refused before acceptCount is reached, or
>> > connectionTimeout is reached?
>>
>> How are you measuring the size of the OS's TCP connection queue? What
>> makes you think that the OS has allocated exactly acceptCount entries in
>> the TCP connection queue? What makes you think acceptCount has been
>> reached? Or not yet reached?
>>
>> What do you think connectionTimeout does, and when do you think it
>> applies?
>>
>>
>>
> I was attempting to use netstat for the queue. To be honest, I found it
> almost impossible, so I was trying to gauge it mostly from the JMeter
> results. I found that it was important to leave a gap between tests, as
> otherwise they were more likely to fail.
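>
> For what it's worth, on Linux the listen queue can be inspected directly
> (for sockets in LISTEN state, ss reports the current accept-queue depth
> in Recv-Q and the configured backlog in Send-Q), whereas Windows netstat
> only reports connection states, which may be why it was so hard to
> gauge. Assuming the default 8080 connector port:
>
>   ss -ltn 'sport = :8080'            (Linux)
>   netstat -an | findstr :8080        (Windows)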
>
> I was just reading the configuration, and it sounded like acceptCount
> connections would be queued, beyond maxThreads, until connectionTimeout
> expired, but it seems connections were refused before then. From Mark's
> response it sounds like acceptCount is more of a hint than a precise
> value, and may not be used at all; there are also likely other factors
> outside these settings that affect these sorts of cases.
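>
> For concreteness, these settings are Connector attributes in server.xml;
> an illustrative sketch (the values here are made up, not a
> recommendation):
>
>   <Connector port="8080"
>              protocol="org.apache.coyote.http11.Http11Nio2Protocol"
>              maxThreads="200"
>              acceptCount="1000"
>              connectionTimeout="20000" />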
>
>> > I'm guessing that each forward or redirect effectively counts as an
>> > extra connection, as removing the redirects and multiplying the number
>> > of JMeter threads suggests that is the case; am I correct here?
>>
>> A redirect will cause one connection to be terminated (at least
>> logically) and a new connection established. Assuming you are using
>> KeepAlives from JMeter, the same underlying TCP connection will likely
>> be used for the first and second requests. acceptCount probably doesn't
>> apply, since the connection has definitely been established.
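>>
>> Roughly, the exchange on the wire looks like this (a sketch assuming
>> keep-alive; host, port, and status line are illustrative), with both
>> requests typically riding the same TCP connection:
>>
>>    GET /redirect?number=2 HTTP/1.1
>>
>>    HTTP/1.1 302 Found
>>    Location: http://localhost:8080/redirect?number=1
>>
>>    GET /redirect?number=1 HTTP/1.1    <- reuses the established connection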
>>
>> For a "forward", the connection is definitely maintained. The client is
>> unaware of the fact that it is being sent back through the
>> request-processing pipeline as if there were a new request being made.
>> At this point, acceptCount, connectionTimeout, and everything else
>> you've been talking about is no longer an issue because the connection
>> has been accepted and request-processing has begun.
>>
>>
> I expect the issue I was seeing wasn't necessarily related to forwarding
> or redirecting, but more to the extra sleep time and context switching.
> It wasn't exactly consistent, though, so it's hard to say.
>
> > Also, I feel like it would help if there were better documentation
>> > around the differences between nio2 and nio, as, for example, the
>> > connector comparison part makes them sound almost the same.
>>
>> The differences are mostly in the uses of the underlying Java APIs. If
>> you are familiar with the differences between NIO and NIO2 in Java, then
>> the differences between the connectors will be self-evident. If you are
>> unfamiliar with those differences, listing them won't help very much.
>>
>> NIO is significantly different from BIO (blocking I/O) and therefore
>> requires a very different I/O model. NIO and NIO2 are much more similar
>> to each other. When NIO2 was introduced, it looked as though NIO had
>> been a stepping-stone between BIO and NIO2, and that NIO2 would
>> definitely be the way to go in the future, as the APIs were cleaner and
>> generally offered the best performance. The Java VM has since been
>> re-implementing NIO to bring some of those performance improvements
>> "back" from NIO2, so the difference is becoming less important; it
>> pretty much comes down to API usage at this point.
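>>
>> As a minimal sketch of that API difference in plain Java (nothing
>> Tomcat-specific; imports from java.net and java.nio.channels assumed):
>> NIO polls for readiness and then performs the accept itself, while NIO2
>> hands the accept to the runtime, which calls back on completion:
>>
>>     // NIO: a thread drives a Selector and does the accept itself
>>     Selector selector = Selector.open();
>>     ServerSocketChannel nio = ServerSocketChannel.open()
>>             .bind(new InetSocketAddress(8080));
>>     nio.configureBlocking(false);
>>     nio.register(selector, SelectionKey.OP_ACCEPT);
>>     selector.select();                    // block until an accept is ready
>>     SocketChannel client = nio.accept();  // then perform it ourselves
>>
>>     // NIO2: the accept completes asynchronously via a CompletionHandler
>>     AsynchronousServerSocketChannel nio2 = AsynchronousServerSocketChannel
>>             .open().bind(new InetSocketAddress(8081));
>>     nio2.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
>>         public void completed(AsynchronousSocketChannel ch, Void att) {
>>             nio2.accept(null, this);      // re-arm for the next connection
>>         }
>>         public void failed(Throwable exc, Void att) { }
>>     });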
>>
>> Hope that helps,
>> -chris
>>
>
> I think I'm much clearer on this in general now. I just wanted to check
> there wasn't some magic setting I was missing, but it sounds like this
> is expected behaviour in certain cases (when requests greatly exceed
> maxThreads). Knowing this, we can factor it in better.
>
> Thanks, Peter.
>
I've been investigating this some more, as I'm not convinced nio2 isn't
behaving strangely in this case. I think there may have been some sort of
regression, as nio2 is much less likely to refuse connections in Tomcat
9.0.13 than in 9.0.14. I'm wondering if it has something to do with this
changelog entry:

         Avoid using a dedicated thread for accept on the NIO2 connector,
it is always less efficient. (remm)

and whether it is hitting some sort of accept-thread starvation case when
it is fully loaded. In Tomcat 9.0.13 I can hit a maxThreads=200 nio2
connector with 5000 JMeter threads and not experience a refused
connection, but in 9.0.14 I can't reach 1000 without refused connections.
It doesn't seem to be related to forwards or redirects either: if I just
sleep for 1500 milliseconds on every servlet run, without redirecting or
forwarding, it behaves the same.
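
To illustrate what I mean, here is a purely hypothetical sketch (not
Tomcat's actual code; handleSlowRequest is a made-up placeholder, and
imports from java.net, java.nio.channels, and java.util.concurrent are
assumed). If the accept is re-armed from the completion handler itself,
and the handlers run on the same pool that serves requests, then a
saturated pool delays the next accept, the kernel backlog fills, and the
OS starts refusing connections:

    AsynchronousChannelGroup group = AsynchronousChannelGroup
            .withFixedThreadPool(200, Executors.defaultThreadFactory()); // like maxThreads=200
    AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel
            .open(group).bind(new InetSocketAddress(8080), 100);         // backlog of 100
    server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
        public void completed(AsynchronousSocketChannel ch, Void att) {
            server.accept(null, this);  // only re-armed once a pool thread runs this
            handleSlowRequest(ch);      // if all pool threads sit here sleeping,
                                        // accepts stall and refusals follow
        }
        public void failed(Throwable exc, Void att) { }
    });
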
We've been using nio2 exclusively in our Tomcats for some time, as we hit
an issue with nio in the past (I can't remember what it was; it has
likely been fixed by now), so I guess we're more likely to notice this
sort of thing.

Best regards, Peter
