Ayub,

On 5/29/20 20:23, Ayub Khan wrote:
> Chris,
>
> You might want (2) and (3) to have their own, independent
> connector and thread pool, just to be safe. You don't want a
> connection in (1) to stall because a loopback connection can't be
> made to (2)/(3). Meanwhile, it's sitting there making no progress
> but also consuming a connection+thread.
>
> There is only one connector per Tomcat, where all the applications
> receive requests; the applications do not have an independent
> connector and thread pool each. How do we configure an independent
> connector and thread pool per application per Tomcat instance?
> Below is the current connector config in each Tomcat instance:

You can't allocate a connector to a particular web application -- at
least not in the way that you think.

What you have to do is use different port numbers. Users will never
use them, though. But since you have nginx (finally! A reason to have
it!), you can map /app1 to port 8080 and /app2 to port 8081 and /app3
to port 8083 or whatever you want.

Internal loopback connections will either have to go through nginx
(which I wouldn't recommend) or know the correct port numbers to use
(which I *do* recommend).
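
As a rough sketch of what that could look like -- the port numbers,
executor names, and thread counts below are placeholders, not a
recommendation -- server.xml would grow a second connector, each one
tied to its own <Executor>:

  <Executor name="portalPool"   namePrefix="portal-exec-"
            maxThreads="200" minSpareThreads="10"/>
  <Executor name="servicesPool" namePrefix="services-exec-"
            maxThreads="400" minSpareThreads="25"/>

  <Connector port="8080" executor="portalPool"
             protocol="org.apache.coyote.http11.Http11NioProtocol"
             connectionTimeout="20000" URIEncoding="UTF-8"
             redirectPort="8443"/>
  <Connector port="8081" executor="servicesPool"
             protocol="org.apache.coyote.http11.Http11NioProtocol"
             connectionTimeout="20000" URIEncoding="UTF-8"
             redirectPort="8443"/>

and nginx would route by context path, something like:

  location /app1/ { proxy_pass http://127.0.0.1:8080; }
  location /app2/ { proxy_pass http://127.0.0.1:8081; }
  location /app3/ { proxy_pass http://127.0.0.1:8081; }

Note that every connector still serves all of the deployed web
applications; the separation only comes from which port the clients
(and your loopback calls) actually use.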

-chris

> <Connector port="8080"
>            protocol="org.apache.coyote.http11.Http11NioProtocol"
>            connectionTimeout="20000"
>            URIEncoding="UTF-8"
>            redirectPort="8443" />
>
>
>
> On Fri, May 29, 2020 at 9:05 PM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 5/28/20 17:25, Ayub Khan wrote:
>>>> Nginx is being used for image caching and converting https to
>>>> http requests before hitting tomcat.
> So you encrypt between the ALB and your app server nodes? That's
> fine, though nginx probably won't offer any performance improvement
> for images (unless it's really caching dynamically-generated images
> from your application) or TLS termination.
>
>>>> The behavior I am noticing is that the application first throws
>>>> "Broken pipe" client-abort exceptions on random API calls,
>>>> followed by socket timeouts and then database connection leak
>>>> errors. This happens only during high load.
>
> If you are leaking connections, that's going to be an application
> resource-management problem. Definitely solve that, but it
> shouldn't affect anything with Tomcat connections and/or threads.
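>
> The usual fix is try-with-resources everywhere a connection is
> opened, so it goes back to the pool on every code path. A minimal,
> hypothetical sketch (the DataSource, table, and column names are
> made up):
>
>   import java.sql.*;
>   import java.util.*;
>   import javax.sql.DataSource;
>
>   List<Long> findIds(DataSource ds, String name) throws SQLException {
>       List<Long> ids = new ArrayList<>();
>       try (Connection con = ds.getConnection();
>            PreparedStatement ps = con.prepareStatement(
>                    "SELECT id FROM example_table WHERE name = ?")) {
>           ps.setString(1, name);
>           try (ResultSet rs = ps.executeQuery()) {
>               while (rs.next()) {
>                   ids.add(rs.getLong("id"));
>               }
>           }
>       } // connection, statement, and result set all close here,
>         // even when an exception is thrown
>       return ids;
>   }
>
> Most pools (Tomcat JDBC, HikariCP, etc.) can also log or abandon
> connections held too long, which helps find the leaking call sites.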
>
>>>> During normal traffic, the open-file count for the Tomcat
>>>> process goes up and down but stays below 500. However, during
>>>> high traffic, when I track the open files for each Tomcat
>>>> process, as soon as the count goes above 10k that Tomcat
>>>> instance stops serving requests.
>
> Any other errors shown in the logs? Like OutOfMemoryError (for
> e.g. open files)?
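>
> If you want to watch the descriptor count from inside the JVM
> rather than with lsof, a small sketch is below. It assumes a
> HotSpot-style JVM on Linux (the cast to the com.sun.management
> interface is not portable):
>
>   import java.lang.management.ManagementFactory;
>   import com.sun.management.UnixOperatingSystemMXBean;
>
>   public class FdWatch {
>       public static void main(String[] args) {
>           // Cast is safe on OpenJDK/Oracle JVMs running on Unix.
>           UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
>                   ManagementFactory.getOperatingSystemMXBean();
>           System.out.println("open fds = "
>                   + os.getOpenFileDescriptorCount()
>                   + " / max = " + os.getMaxFileDescriptorCount());
>       }
>   }
>
> Logging that number periodically would show whether the climb
> tracks your traffic spikes; lsof can then tell you whether the
> descriptors are sockets, files, or something else.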
>
>>>> If the open-file count goes beyond 5k, it is certain that it
>>>> will never come back below 500; at that point we need to
>>>> restart Tomcat.
>>>>
>>>>
>>>> There are three application installed on each tomcat
>>>> instance,
>>>>
>>>> 1) portal: the portal calls (2) and (3) using localhost; should
>>>> we change this to use a DNS name instead of localhost calls?
>>>>
>>>> 2) Services for the portal
>>>> 3) Services for the portal and mobile clients
>
> Are they all sharing the same connector / thread pool?
>
> You might want (2) and (3) to have their own, independent
> connector and thread pool, just to be safe. You don't want a
> connection in (1) to stall because a loopback connection can't be
> made to (2)/(3). Meanwhile, it's sitting there making no progress
> but also consuming a connection+thread.
>
> -chris
>
>>>> On Thu, May 28, 2020 at 4:50 PM Christopher Schultz <
>>>> ch...@christopherschultz.net> wrote:
>>>>
>>>> Ayub,
>>>>
>>>> On 5/27/20 19:43, Ayub Khan wrote:
>>>>>>> If we have 18 core CPU and 100GB RAM. What value can I
>>>>>>> set for maxConnections ?
>>>> Your CPU and RAM really have nothing to do with it. It's more
>>>> about your usage profile.
>>>>
>>>> For example, if you are serving small static files, you can
>>>> serve a million requests a minute on a Raspberry Pi, many of
>>>> them concurrently.
>>>>
>>>> But if you are performing fluid dynamic simulations with
>>>> each request, you will obviously need more horsepower to
>>>> service a single request, let alone thousands of concurrent
>>>> requests.
>>>>
>>>> If you have tons of CPU and memory to spare, feel free to
>>>> crank up the max connections. The default is 10000, which is
>>>> fairly high. At some point, you will run out of connection
>>>> allocation space in the OS's TCP/IP stack, so that is really
>>>> your upper limit. You simply cannot have more than the OS
>>>> will allow. See https://stackoverflow.com/a/2332756/276232
>>>> for some information about that.
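>>>>
>>>> If you do raise it, the knobs live on the Connector. A sketch
>>>> with illustrative numbers only (not a recommendation):
>>>>
>>>>   <Connector port="8080"
>>>>              protocol="org.apache.coyote.http11.Http11NioProtocol"
>>>>              maxThreads="400"
>>>>              maxConnections="20000"
>>>>              acceptCount="1000"
>>>>              connectionTimeout="20000"
>>>>              URIEncoding="UTF-8"
>>>>              redirectPort="8443" />
>>>>
>>>> maxThreads caps how many requests are processed concurrently,
>>>> maxConnections caps how many sockets Tomcat will keep open at
>>>> once, and acceptCount is the OS backlog used once that limit is
>>>> reached.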
>>>>
>>>> Once you adjust your settings, perform a load-test. You may
>>>> find that adding more resources actually slows things down.
>>>>
>>>>>>> Want to make sure we are utilizing the hardware to its
>>>>>>> max capacity. Is there any Tomcat config which, if
>>>>>>> enabled, could help serve more requests per Tomcat
>>>>>>> instance?
>>>>
>>>> Not really. Improving performance usually comes down to tuning
>>>> the application to make the requests take less time to
>>>> process. Tomcat is rarely the source of performance problems
>>>> (but /sometimes/ is, and it's usually a bug that can be
>>>> fixed).
>>>>
>>>> You can improve throughput somewhat by pipelining requests.
>>>> That means HTTP keepalive for direct connections (but with a
>>>> small timeout; you don't want clients who aren't making any
>>>> follow-up requests to waste your resources waiting for a
>>>> keep-alive timeout to close a connection). For proxy
>>>> connections (e.g. from nginx), you'll want those connections
>>>> to remain open as long as possible to avoid the
>>>> re-negotiation of TCP and possibly TLS handshakes. Using
>>>> HTTP/2 can be helpful for performance, at the cost of some
>>>> CPU on the back-end to perform the complicated connection
>>>> management that h2 requires.
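>>>>
>>>> Both of those keep-alive behaviors are Connector attributes; a
>>>> sketch with made-up values:
>>>>
>>>>   <Connector port="8080"
>>>>              protocol="org.apache.coyote.http11.Http11NioProtocol"
>>>>              keepAliveTimeout="5000"
>>>>              maxKeepAliveRequests="1000"
>>>>              connectionTimeout="20000"
>>>>              URIEncoding="UTF-8"
>>>>              redirectPort="8443" />
>>>>
>>>> On the nginx side, persistent upstream connections need an
>>>> upstream block with keepalive plus HTTP/1.1 to the backend,
>>>> roughly:
>>>>
>>>>   upstream tomcat {
>>>>       server 127.0.0.1:8080;
>>>>       keepalive 64;
>>>>   }
>>>>
>>>>   location / {
>>>>       proxy_pass http://tomcat;
>>>>       proxy_http_version 1.1;
>>>>       proxy_set_header Connection "";
>>>>   }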
>>>>
>>>> Eliminating useless buffering is often very helpful. That's
>>>> why I asked about nginx. What are you using it for, other
>>>> than as a barrier between the load-balancer and your Tomcat
>>>> instances? If you remove nginx, I suspect you'll see a
>>>> measurable performance increase. This isn't a knock against
>>>> nginx: you'd see a performance improvement by removing *any*
>>>> reverse-proxy that isn't providing any value. But you haven't
>>>> said anything about why it's there in the first place, so I
>>>> don't know if it /is/ providing any value to you.
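>>>>
>>>> (If nginx stays, its buffering is at least tunable; for example,
>>>> proxy_buffering can be disabled per location:
>>>>
>>>>   location / {
>>>>       proxy_pass http://127.0.0.1:8080;
>>>>       proxy_buffering off;
>>>>   }
>>>>
>>>> Whether that helps or hurts depends on your response sizes and
>>>> client speeds, so measure it.)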
>>>>
>>>>>>> The current setup is able to handle most of the load,
>>>>>>> however there are predictable times where there is an
>>>>>>> avalanche of requests and thinking how to handle it
>>>>>>> gracefully.
>>>>
>>>> You are using AWS: use auto-scaling. That's what it's for.
>>>>
>>>> -chris
>>>>
>>>>>>> On Wed, May 27, 2020 at 5:38 PM Christopher Schultz <
>>>>>>> ch...@christopherschultz.net> wrote:
>>>>>>>
>>>>>>> Ayub,
>>>>>>>
>>>>>>> On 5/27/20 09:26, Ayub Khan wrote:
>>>>>>>>>> Previously I was using the HTTP/1.1 connector;
>>>>>>>>>> recently I changed to NIO2 to see how it performs.
>>>>>>>>>> I read that NIO2 is non-blocking, so I am trying to
>>>>>>>>>> check how this works.
>>>>>>>
>>>>>>> Both NIO and NIO2 are non-blocking. They use different
>>>>>>> strategies for certain things. Anything but the "BIO"
>>>>>>> connector will be non-blocking for most operations.
>>>>>>> The default is NIO.
>>>>>>>
>>>>>>>>>> Which connector protocol do you recommend, and what is
>>>>>>>>>> the best configuration for the connector?
>>>>>>> This depends on your environment, usage profile, etc.
>>>>>>> Note that non-blocking IO means more CPU usage: there
>>>>>>> is a trade-off.
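>>>>>>>
>>>>>>> If you want to go back to the default NIO implementation,
>>>>>>> it is just the protocol attribute, e.g.:
>>>>>>>
>>>>>>>   <Connector port="8080"
>>>>>>>              protocol="org.apache.coyote.http11.Http11NioProtocol"
>>>>>>>              connectionTimeout="20000"
>>>>>>>              URIEncoding="UTF-8"
>>>>>>>              redirectPort="8443" />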
>>>>>>>
>>>>>>>>>> Which stable version of tomcat would you
>>>>>>>>>> recommend ?
>>>>>>>
>>>>>>> Always the latest, of course. Tomcat 8.0 is
>>>>>>> unsupported, replaced by Tomcat 8.5. Tomcat 9.0 is
>>>>>>> stable and probably the best version if you are looking
>>>>>>> to upgrade. Both Tomcat 8.5 and 9.0 are continuing to
>>>>>>> get regular updates. But definitely move away from
>>>>>>> 8.0.
>>>>>>>
>>>>>>>>>> Are there any Ubuntu-specific configs for Tomcat?
>>>>>>> No. There is nothing particular special about Ubuntu.
>>>>>>> Linux is one of the most well-performing platforms for
>>>>>>> the JVM. I wouldn't recommend switching platforms.
>>>>>>>
>>>>>>> Why are you using nginx? You already have
>>>>>>> load-balancing happening in the ALB. Inserting another
>>>>>>> layer of proxying is probably just adding another
>>>>>>> buffer to the mix. I'd remove nginx if it's not
>>>>>>> providing any specific, measurable benefit.
>>>>>>>
>>>>>>>>>> We are using the OkHttp client library to call a REST
>>>>>>>>>> API, and the stack trace shows the failure at the API
>>>>>>>>>> call. The API being called is running on the same
>>>>>>>>>> Tomcat instance (different context), using a localhost
>>>>>>>>>> URL. This does not happen when the number of requests
>>>>>>>>>> is lower.
>>>>>>>
>>>>>>> Your Tomcat server is calling this REST API? Or your
>>>>>>> server is serving those API requests? If your service
>>>>>>> is calling itself, then you have to make sure you have
>>>>>>> double-capacity: every incoming request will cause a
>>>>>>> loopback request to your own service.
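>>>>>>>
>>>>>>> If the loopback calls stay, give that OkHttp client explicit
>>>>>>> timeouts and a bounded pool, and always close the Response,
>>>>>>> so a slow self-call cannot pin threads or leak connections.
>>>>>>> A hypothetical sketch (URL, pool size, and timeouts are made
>>>>>>> up; assumes OkHttp 3.x):
>>>>>>>
>>>>>>>   import java.util.concurrent.TimeUnit;
>>>>>>>   import okhttp3.*;
>>>>>>>
>>>>>>>   // Build ONE shared client per application, not per request.
>>>>>>>   OkHttpClient client = new OkHttpClient.Builder()
>>>>>>>           .connectTimeout(2, TimeUnit.SECONDS)
>>>>>>>           .readTimeout(10, TimeUnit.SECONDS)
>>>>>>>           .connectionPool(new ConnectionPool(50, 5, TimeUnit.MINUTES))
>>>>>>>           .build();
>>>>>>>
>>>>>>>   Request req = new Request.Builder()
>>>>>>>           .url("http://localhost:8080/services/some-endpoint")
>>>>>>>           .build();
>>>>>>>   try (Response resp = client.newCall(req).execute()) {
>>>>>>>       String body = resp.body().string();
>>>>>>>       // Closing the Response returns the connection to the pool.
>>>>>>>   }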
>>>>>>>
>>>>>>> Other than the timeouts, are you able to handle the
>>>>>>> load with your existing infrastructure? Sometimes, the
>>>>>>> solution is simply to throw more hardware at the
>>>>>>> problem.
>>>>>>>
>>>>>>> -chris
>>>>>>>
>>>>>>>>>> On Wed, May 27, 2020 at 11:48 AM Mark Thomas
>>>>>>>>>> <ma...@apache.org> wrote:
>>>>>>>>>>
>>>>>>>>>>> On 26/05/2020 23:28, Ayub Khan wrote:
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> During high load I am seeing the below error in the
>>>>>>>>>>>> Tomcat logs:
>>>>>>>>>>>>
>>>>>>>>>>>> java.util.concurrent.ExecutionException:
>>>>>>>>>>>>     java.net.SocketTimeoutException: timeout
>>>>>>>>>>>
>>>>>>>>>>> And the rest of that stack trace? It is hard
>>>>>>>>>>> to provide advice without context. We need to
>>>>>>>>>>> know what is timing out when trying to do
>>>>>>>>>>> what.
>>>>>>>>>>>
>>>>>>>>>>>> We have 4 c5.18xlarge VMs running Tomcat 8 behind
>>>>>>>>>>>> an AWS Application Load Balancer. We are seeing
>>>>>>>>>>>> socket timeouts during peak hours. What should the
>>>>>>>>>>>> Tomcat configuration be if we get 60,000 to 70,000
>>>>>>>>>>>> requests per minute on average?
>>>>>>>>>>>>
>>>>>>>>>>>> Tomcat 8.0.32 on Ubuntu 16.04.5 LTS
>>>>>>>>>>>
>>>>>>>>>>> Tomcat 8.0.x is no longer supported.
>>>>>>>>>>>
>>>>>>>>>>>> Below is the java version:
>>>>>>>>>>>>
>>>>>>>>>>>> java version "1.8.0_181"
>>>>>>>>>>>> Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
>>>>>>>>>>>> Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13,
>>>>>>>>>>>> mixed mode)
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Below is the server.xml connector
>>>>>>>>>>>> configuration:
>>>>>>>>>>>>
>>>>>>>>>>>> <Connector port="8080"
>>>>>>>>>>>> protocol="org.apache.coyote.http11.Http11Nio2Protocol"
>>>>>>>>>>>
>>>>>>>>>>> Why NIO2?
>>>>>>>>>>>
>>>>>>>>>>> Mark
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> connectionTimeout="20000"
>>>>>>>>>>>>
>>>>>>>>>>>> URIEncoding="UTF-8" redirectPort="8443" />
>>>>>>>>>>>>
>>>>>>>>>>>> We have 4 c5.18xlarge VMs, and each VM has an nginx
>>>>>>>>>>>> and a Tomcat instance running. All 4 VMs are
>>>>>>>>>>>> connected to the AWS Application Load Balancer.
>>>>>>>>>>>>
>>>>>>>>>>>> I tried to add maxConnections=50000 but this does
>>>>>>>>>>>> not seem to have any effect, and I still saw the
>>>>>>>>>>>> timeouts.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks and Regards Ayub
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>
>>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
