Re: tomcat 8.5 config for ecommerce site, seeing request timeouts

2020-06-04 Thread Ayub Khan
Chris,

Sure, I will use DNS and try to change the service calls.

Also, once in a while we see the error below and Tomcat stops serving
requests. Could you please let me know the cause of this issue?

org.apache.tomcat.util.net.NioEndpoint$Acceptor run
SEVERE: Socket accept failed
java.nio.channels.ClosedChannelException
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:235)
    at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:682)
    at java.lang.Thread.run(Thread.java:748)

On Thu, 4 Jun 2020, 19:47 Christopher Schultz, 
wrote:

>
> Ayub,
>
> On 6/4/20 11:05, Ayub Khan wrote:
> > Christopher Schultz wrote:
> >> There's no particular reason why a request to node1:/app1 needs
> >> to have its loopback request call node1:/app2, is there? Can
> >> node1:/app1 call node2:/app2?
> >
> >
> > Yes, we can do that, but then we would have to use the DNS URLs, and
> > won't this cause network latency compared to a localhost call?
>
> DNS lookups are cheap and cached. Connecting to "localhost"
> technically performs a DNS lookup, too. Once the DNS resolver has
> "node1" in its cache, it'll be just as fast as looking-up "localhost".
>
> > I have not changed the maxThreads config on each of the
> > connectors. If I have to customize it, how do I decide what value to
> > use for maxThreads?
> The number of threads you allocate has more to do with your
> application than anything else. I've seen applications (on hardware)
> that can handle thousands of simultaneous threads. Others I've seen
> will fall over if more than 4 or 5 requests come in simultaneously.
>
> So you'll need to load-test your application to be sure what the right
> numbers are.
>
> Remember that if your application is database-heavy, then the number
> of connections to the database will soon become a bottleneck. There's
> no sense accepting connections from 100 users if every request
> requires a database connection and you can only handle 20 connections
> to the database.
>
> - -chris

Re: tomcat 8.5 config for ecommerce site, seeing request timeouts

2020-06-04 Thread Ayub Khan
Mark,


  There's no particular reason why a request to node1:/app1 needs
to have its loopback request call node1:/app2, is there? Can
node1:/app1 call node2:/app2?


Yes, we can do that, but then we would have to use the DNS URLs, and won't
this cause network latency compared to a localhost call?

I have not changed the maxThreads config on each of the connectors. If I
have to customize it, how do I decide what value to use for maxThreads?




On Mon, Jun 1, 2020 at 10:24 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:

>
> Ayub,
>
> On 6/1/20 11:12, Ayub Khan wrote:
> > Chris,
> >
> > As you described I have added two new connectors in server.xml and
> > using nginx to redirect requests to different connector ports.
> > Also configured nginx to route traffic of each app on a different
> > connector port of tomcat
> >
> > In config of each app I am using port specific for the app which
> > is being called localhost:8081 for app2 and localhost:8082 for app
> > 3
> >
> > So now in config of app1 we call app2 using localhost:8081/app2 and
> >  localhost:8082/app3
>
> Perfect.
>
> > Could you explain the benefit of using this type of config? Will
> > this be useful to not block requests for each app?
>
> This ensures that requests for /app1 do not starve the thread pool for
> requests to /app2. Imagine that you have a single connector, single
> thread pool, etc. for two apps: /app1 and /app2 and there is only a
> SINGLE thread in the pool available, and that each request to /app1
> makes a call to /app2. Here's what happens:
>
> 1. Client requests /app1
> 2. /app1 makes connection to /app2
> 3. Request to /app2 stalls waiting for a thread to become available
> (it's already allocated to the request from #1 above)
>
> You basically have a deadlock, here, because /app1 isn't going to
> give-up its thread, and the thread for the request to /app2 will not
> be allocated until the request to /app1 gives up that thread.
>
> Now, nobody runs their application server with a SINGLE thread in the
> pool, but this is instructive: it means that deadlock CAN occur.
>
> Let's take a more reasonable situation: you have 100 threads in the pool.
>
> Let's say that you get REALLY unlucky and the following series of
> events occurs:
>
> 1. 100 requests come in simultaneously for /app1. All requests are
> allocated a thread from the thread pool for these requests before
> anything else happens. Note that the thread pool is currently
> completely exhausted with requests to /app1.
> 2. All 100 threads from /app1 make requests to /app2. Now you have 100
> threads in deadlock similar to the contrived SINGLE thread situation
> above.
>
> Sure, it's unlikely, but it CAN happen, especially if requests to
> /app1 are mostly waiting on requests to /app2 to complete: you can
> very easily run out of threads in a high-load situation. And a
> high-load situation is EXACTLY what you reported.
>
> Let's take the example of separate thread pools per application. Same
> number of total threads, except that 50 are in one pool for /app1 and
> the other 50 are in the pool for /app2. Here's the series of events:
>
> 1. 100 requests come in simultaneously for /app1. 50 requests are
> allocated a thread from the thread pool for these requests before
> anything else happens. The other 50 requests are queued waiting on a
> request-processing thread to become available. Note that the thread
> pool for /app1 is currently completely exhausted with requests to /app1.
> 2. 50 threads from /app1 make requests to /app2. All 50 requests get
> request-processing threads allocated, perform their work, and complete.
> 3. The 50 queued requests from step #1 above are now allocated
> request-processing threads and proceed to make requests to /app2
> 4. 50 threads (the second batch) from /app1 make requests to /app2.
> All 50 requests get request-processing threads allocated, perform
> their work, and complete.
>
> Here, you have avoided any possibility of deadlock.
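
A minimal sketch of what this split can look like in server.xml: ports 8081
and 8082 are the ones already used in this thread for app2 and app3, while
the 8080 connector for app1 and all of the maxThreads values are
illustrative assumptions, not numbers from the discussion.

    <!-- Sketch only: one connector (and thread pool) per application.   -->
    <!-- nginx routes each app's traffic to its own port, as described   -->
    <!-- earlier in the thread.                                          -->
    <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="50" connectionTimeout="20000"/> <!-- app1 (assumed port) -->
    <Connector port="8081" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="50" connectionTimeout="20000"/> <!-- app2 -->
    <Connector port="8082" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="50" connectionTimeout="20000"/> <!-- app3 -->

With this layout, requests stuck inside /app1 can only exhaust the 8080
pool; the 8081 and 8082 pools remain free to serve the loopback calls, which
is exactly the deadlock-avoidance property described above.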
>
> Personally, I'd further decouple these services so that they are
> running (possibly) on other servers, with different load-balancers,
> etc. There's no particular reason why a request to node1:/app1 needs
> to have its loopback request call node1:/app2, is there? Can
> node1:/app1 call node2:/app2? If so, you should let it happen. It will
> make your overall service more robust. If not, you should fix things
> so it CAN be done.
>
> You might also want to make sure that you do the same thing for any
> database connections you might use, although holding a database
> connection open while making a REST API call might be considered a Bad
> Idea.
>
> Hope that helps,
> - -chris
>
> > On Mon, 1 Jun 2020, 16:27 Christopher Schultz,
> 
> > wrote:
> >
> > Ayub,
> >
> > On 5/31/20 09:20, Ayub Khan wrote:
>  On single tomcat instance how to map each app to different
>  port number?
> >
> You'd have to use multiple <Connector> elements, which 

Re: tomcat 8.5 config for ecommerce site, seeing request timeouts

2020-06-04 Thread Ayub Khan
Mark,


  There's no particular reason why a request to node1:/app1 needs
to have its loopback request call node1:/app2, is there? Can
node1:/app1 call node2:/app2?


Yes, we can do that, but then we would have to use the DNS URLs, and won't
this cause network latency compared to a localhost call?

I have not changed the maxThreads config on each of the connectors. If I
have to customize it, how do I decide what value to use for maxThreads?

Also, once in a while we see the error below and Tomcat stops serving
requests. Could you please let me know the cause of this issue?

org.apache.tomcat.util.net.NioEndpoint$Acceptor run
SEVERE: Socket accept failed
java.nio.channels.ClosedChannelException
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:235)
    at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:682)
    at java.lang.Thread.run(Thread.java:748)


Re: tomcat 8.5 config for ecommerce site, seeing request timeouts

2020-06-04 Thread Christopher Schultz

Ayub,

On 6/4/20 11:05, Ayub Khan wrote:
> Christopher Schultz wrote:
>> There's no particular reason why a request to node1:/app1 needs
>> to have its loopback request call node1:/app2, is there? Can
>> node1:/app1 call node2:/app2?
>
>
> Yes, we can do that, but then we would have to use the DNS URLs, and
> won't this cause network latency compared to a localhost call?

DNS lookups are cheap and cached. Connecting to "localhost"
technically performs a DNS lookup, too. Once the DNS resolver has
"node1" in its cache, it'll be just as fast as looking-up "localhost".

> I have not changed the maxThreads config on each of the
> connectors. If I have to customize it, how do I decide what value to
> use for maxThreads?
The number of threads you allocate has more to do with your
application than anything else. I've seen applications (on hardware)
that can handle thousands of simultaneous threads. Others I've seen
will fall over if more than 4 or 5 requests come in simultaneously.

So you'll need to load-test your application to be sure what the right
numbers are.

Remember that if your application is database-heavy, then the number
of connections to the database will soon become a bottleneck. There's
no sense accepting connections from 100 users if every request
requires a database connection and you can only handle 20 connections
to the database.
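
As a rough illustration of that ratio (every value below is an assumption
made for the example, not a recommendation), the connector ceiling and the
JDBC pool ceiling can be kept in the same ballpark and then tuned together
under load testing:

    <!-- server.xml: don't accept far more work than the database can serve -->
    <Connector port="8081"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="40"/>

    <!-- context.xml: hypothetical Tomcat JDBC pool capped near the same size -->
    <Resource name="jdbc/appDB" auth="Container" type="javax.sql.DataSource"
              factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
              maxActive="20" maxIdle="10" maxWait="10000"
              driverClassName="com.mysql.cj.jdbc.Driver"
              url="jdbc:mysql://dbhost:3306/app"
              username="app" password="changeit"/>

If load testing shows the application spends most of its time waiting on the
pool, raising maxThreads alone will not help; the two limits have to move
together.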

- -chris


Re: tomcat 9.0 doesn't load the ECDSA keystore. (ver # 9.0.24)

2020-06-04 Thread Christopher Schultz

Madhan,

On 6/3/20 21:08, Madhan Raj wrote:
> OS - CentOS 7.6.1810( Core)
>
> Below connector doesn't load my EC keystore whereas it works with
> RSA. Any insights, please.

When you say "doesn't load", what do you mean? Possible reasonable
responses are:

1. I can only complete a handshake with RSA cert, not ECDSA cert
2. Error message (please post)
3. JVM crashes
4. OS crashes
5. Universe ends (possible, but unlikely to be reproducible)

> this is my connector tag in server.xml:
>
> <Connector SSLEnabled="true" URIEncoding="UTF-8" maxThreads="200" port="443"
>   scheme="https" secure="true"
>   protocol="org.apache.coyote.http11.Http11NioProtocol"
>   sslImplementationName="org.apache.tomcat.util.net.jsse.JSSEImplementation"
>   disableUploadTimeout="true" enableLookups="false" maxHttpHeaderSize="8192"
>   minSpareThreads="25">
>   <SSLHostConfig certificateVerification="none" sessionTimeout="1800"
>     protocols="TLSv1,TLSv1.1,TLSv1.2,TLSv1.3"
>     ciphers="ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:AES256-SHA:DHE-DSS-AES256-SHA:AES128-SHA:DHE-RSA-AES128-SHA"
>     sessionCacheSize="1">
>     <Certificate
>       certificateKeystoreFile="/usr/local/platform/.security/tomcat-ECDSA/certs/tomcat-ECDSA.keystore"
>       certificateKeystorePassword="8o8yeAH2qSJbJ2sn"
>       certificateKeystoreType="PKCS12" type="EC"/>
>   </SSLHostConfig>
> </Connector>
>
> tomcat start up command used: /home/tomcat/tomcat -user tomcat
>   -home /usr/local/thirdparty/java/j2sdk
>   -pidfile /usr/local/thirdparty/jakarta-tomcat/conf/tomcat.pid
>   -procname /home/tomcat/tomcat
>   -outfile /usr/local/thirdparty/jakarta-tomcat/logs/catalina.out -errfile &1
>   -Djdk.tls.ephemeralDHKeySize=2048
>   -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
>   -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
>   -Djava.util.logging.config.file=/usr/local/thirdparty/jakarta-tomcat/conf/logging.properties
>   -agentlib:jdwp=transport=dt_socket,address=localhost:8000,server=y,suspend=n
>   -XX:+UseParallelGC -XX:GCTimeRatio=99 -XX:MaxGCPauseMillis=80
>   -Xmx1824m -Xms256m
>   -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
>   -cp /usr/local/thirdparty/jakarta-tomcat/bin/bootstrap.jar:/usr/local/thirdparty/jakarta-tomcat/bin/tomcat-juli.jar
>   -Djava.security.policy==/usr/local/thirdparty/jakarta-tomcat/conf/catalina.policy
>   -Dcatalina.base=/usr/local/thirdparty/jakarta-tomcat
>   -Dcatalina.home=/usr/local/thirdparty/jakarta-tomcat
>   -Djava.io.tmpdir=/usr/local/thirdparty/jakarta-tomcat/temp
>   org.apache.catalina.startup.Bootstrap start'
>
> JAVA_OPTS= -Djava.library.path=$LD_LIBRARY_PATH
>   -Djavax.net.ssl.sessionCacheSize=1
>   -Djavax.net.ssl.trustStore=/usr/local/platform/.security/tomcat/trust-certs/tomcat-trust.keystore
>   -Djavax.net.ssl.trustStorePassword=$TRUST_STORE_PASSWORD
>   -XX:ErrorFile=$CATALINA_HOME/logs/diagnostic-info.jvm-crash.%p.tomcat.txt
>   -Dsun.zip.disableMemoryMapping=true
>   -XX:OnOutOfMemoryError=/home/tomcat/tomcat_diagnostics.sh
>   -XX:OnError=/home/tomcat/tomcat_diagnostics.sh $TOMCAT_JAVA_OPTS
>
> Also, can I have both RSA and ECDSA in a single keystore? Will that
> work in Tomcat 9?

Yes. You have to use two <Certificate> elements, each with a different
"type" and "certificateKeyAlias".

> it used to work with Tomcat 7

It still works with Tomcat 9.
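
For example, here is a sketch of a single PKCS12 keystore holding both keys
under different aliases; the paths, aliases and password are placeholders,
not taken from your configuration:

    <SSLHostConfig protocols="TLSv1.2,TLSv1.3">
        <!-- Both entries read the same keystore; each selects its own alias. -->
        <Certificate certificateKeystoreFile="conf/tomcat.keystore"
                     certificateKeystorePassword="changeit"
                     certificateKeystoreType="PKCS12"
                     certificateKeyAlias="rsa-alias" type="RSA"/>
        <Certificate certificateKeystoreFile="conf/tomcat.keystore"
                     certificateKeystorePassword="changeit"
                     certificateKeystoreType="PKCS12"
                     certificateKeyAlias="ec-alias" type="EC"/>
    </SSLHostConfig>

Clients that negotiate an ECDSA cipher suite are served the EC certificate;
everything else falls back to the RSA one.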

- -chris



Tomcat DB Connection pool timeBetweenEvictionRunsMillis

2020-06-04 Thread Sanders, Steve
Hi all,

Tomcat Version - 8.5.55
OS - OL7

I'm working with an application team that wishes to set the
timeBetweenEvictionRunsMillis setting of their database connection pool to a
very low value - 20ms. According to the documentation
(https://tomcat.apache.org/tomcat-8.5-doc/jdbc-pool.html), this setting should
never be set below 1000ms, so I'm wary of allowing this. Their claim is that
application performance is poor when it is set to our default of 2000ms. To me
this indicates a bug in the application itself: it is not closing connections
after it is done with them, so requests have to wait for an idle connection to
be created by the pool, causing the performance lag.

I'm curious if this advice in the documentation is given based on performance 
concerns or if there is some other underlying issue that could present itself 
if this setting is below 1000 ms?

Thanks!


Re: Tomcat DB Connection pool timeBetweenEvictionRunsMillis

2020-06-04 Thread Christopher Schultz

Steve,

On 6/4/20 16:59, Sanders, Steve wrote:
> I'm working with an application team that wishes to set the
> timeBetweenEvictionRunsMillis setting of their database connection
> pool to a very low setting - 20ms.
Hah!

Sorry. Continue...

> According to the documentation
> (https://tomcat.apache.org/tomcat-8.5-doc/jdbc-pool.html), this
> setting should never be set below 1000ms, so I'm wary of allowing
> this. Their claim is that application performance is poor when set
> to our default of 2000ms. To me this indicates that there is a bug
> in the application itself not closing connections after it is done
> with them and have to wait for an idle connection to be created by
> the pool, causing the performance lag.

My experience suggests that you are right.

> I'm curious if this advice in the documentation is given based on
> performance concerns or if there is some other underlying issue
> that could present itself if this setting is below 1000 ms?

With a 20ms eviction run, you are going to be thrashing the CPU with
that thread. Sure, the CPU can handle many context-switches in a short
amount of time, but evicting lost database connections isn't something
that really should be done /aggressively/ IMO.
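
For comparison, here is a sketch of a pool definition that stays at or above
the documented 1000ms floor; all values are illustrative, and the
removeAbandoned/logAbandoned attributes are one way to confirm whether the
application really is failing to close its connections:

    <Resource name="jdbc/appDB" auth="Container" type="javax.sql.DataSource"
              factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
              maxActive="20" minIdle="5" initialSize="5"
              timeBetweenEvictionRunsMillis="5000"
              minEvictableIdleTimeMillis="60000"
              testWhileIdle="true" validationQuery="SELECT 1"
              removeAbandoned="true" removeAbandonedTimeout="60"
              logAbandoned="true"
              driverClassName="org.postgresql.Driver"
              url="jdbc:postgresql://dbhost:5432/app"
              username="app" password="changeit"/>

If logAbandoned starts printing stack traces of borrowers that never return
their connections, that points at the application bug rather than at the
eviction interval.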

- -chris



Re: tomcat 8.5 config for ecommerce site, seeing request timeouts

2020-06-04 Thread Christopher Schultz

Ayub,

On 6/4/20 12:52, Ayub Khan wrote:
> Sure I will use DNS and try to change the service calls.
>
> Also once in a while we see the below error and tomcat stops
> serving requests. Could you please let me know the cause of this
> issue ?
>
> org.apache.tomcat.util.net.NioEndpoint$Acceptor run
> SEVERE: Socket accept failed
> java.nio.channels.ClosedChannelException
>     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:235)
>     at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:682)
>     at java.lang.Thread.run(Thread.java:748)

You've reached the limit of my knowledge of Java IO, sockets, and
channels, here.

It's very possible that accept() has pulled (de-queued) a connection
from a client who has disappeared after some (short?) timeout. Is it
fatal, or does it just fill up your logs?

- -chris


Re: tomcat 9.0 doesn't load the ECDSA keystore. (ver # 9.0.24)

2020-06-04 Thread logo
Madhan,


> On 04.06.2020 at 18:41, Christopher Schultz wrote:
> 
> 
> Madhan,
> 
> On 6/3/20 21:08, Madhan Raj wrote:
>> OS - CentOS 7.6.1810( Core)
>> 
>> Below connector doesn't load my EC keystore whereas it works with
>> RSA . Any insights please .

Try updating to the latest version and check the change log: in 9.0.31, support
for EC keys was updated, so that may already solve this. I had problems using
unencrypted EC keys in Tomcat 8.5.50 with JSSE connectors - although with
PEM-encoded cert files (fixed in 8.5.51) - but yours may be a similar problem.

Regards

Peter


Re: tomcat 8.5 config for ecommerce site, seeing request timeouts

2020-06-04 Thread Ayub Khan
Chris,

It is fatal and keeps throwing the error. I have to restart Tomcat as the
APIs stop responding and return 502.





On Fri, Jun 5, 2020 at 12:37 AM Christopher Schultz <
ch...@christopherschultz.net> wrote:

>
> Ayub,
>
> On 6/4/20 12:52, Ayub Khan wrote:
> > Sure I will use DNS and try to change the service calls.
> >
> > Also once in a while we see the below error and tomcat stops
> > serving requests. Could you please let me know the cause of this
> > issue ?
> >
> > org.apache.tomcat.util.net.NioEndpoint$Acceptor run
> > SEVERE: Socket accept failed
> > java.nio.channels.ClosedChannelException
> >     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:235)
> >     at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:682)
> >     at java.lang.Thread.run(Thread.java:748)
>
> You've reached the limit of my knowledge of Java IO, sockets, and
> channels, here.
>
> It's very possible that accept() has pulled (de-queued) a connection
> from a client who has disappeared after some (short?) timeout. Is it
> fatal, or does it just fill up your logs?
>
> - -chris
>