Re: Wildcard certificates
I would have the opposite feeling. I would not want a Java process parked out on the internet. Not saying you're wrong, just my personal feeling. Maybe things have shifted in a different direction over the years. I do agree that something like that would be helpful to other tomcat admins. Would you consider putting it on GitHub? Thanks, J

On Wed, Apr 17, 2019 at 9:18 AM John Dale wrote:
> I have a really nice process that works great with certbot. Single
> command to renew all of my certs and I'm finished.
>
> I get some peace of mind having a Java process guarding the front
> door. Seems to be more impervious to overflows. What am I missing?
>
> I think what I have might be easily developed into something to help
> other Tomcat users.
>
> On 4/17/19, TurboChargedDad . wrote:
> > We terminated SSL above the tomcat layer using NGINX or Apache to avoid
> > the complexities that come with managing a JKS. I want to hear all I can
> > on this subject.
>
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
Re: Wildcard certificates
Multi-tenant or single tenant system? On Wed, Apr 17, 2019 at 8:54 AM Sean Dawson wrote: > Thanks for the replies - I'm willing to use NGINX to handle this for us - > can you point me to a good page on that? > > > On Wed, Apr 17, 2019 at 9:46 AM John Larsen > wrote: > > > We do the same - via mod_jk we utilize apache httpd to handle the SSL. > > Keeps things simple and works well. > > John Larsen > > > > On Wed, Apr 17, 2019 at 7:44 AM TurboChargedDad . > > > wrote: > > > > > We terminated SSL above the tomcat layer using NGINX or Apache to > avoid > > > the complexities that come with managing a JKS. I want to hear all I > can > > > on this subject. > > > > > >
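To sketch the NGINX approach being asked about: nginx terminates TLS and reverse proxies to Tomcat's plain HTTP connector, so no JKS is involved. This is a minimal sketch, not a tested config; the server name, certificate paths, and backend port (8101, borrowing the per-user port scheme described elsewhere in this thread) are placeholders.

```nginx
server {
    listen 443 ssl;
    server_name site1.example.com;

    # PEM cert/key as issued by e.g. certbot; no keystore conversion needed
    ssl_certificate     /etc/ssl/certs/site1.example.com.fullchain.pem;
    ssl_certificate_key /etc/ssl/private/site1.example.com.key.pem;

    location / {
        # Tomcat HTTP connector for this tenant
        proxy_pass http://127.0.0.1:8101;
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

On the Tomcat side, a RemoteIpValve (or the connector's proxy attributes) keeps redirects and request.isSecure() honest behind the proxy.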
Re: Wildcard certificates
We terminated SSL above the tomcat layer using NGINX or Apache to avoid the complexities that come with managing a JKS. I want to hear all I can on this subject.
Re: Parallel Tomcat Instances On Same Server
The way I have done it in the past is to separate each tomcat instance by a local user on the machine. I use linux so I have no idea if this would work on windoze. This was done to separate powers and isolate permissions. I am actually looking for critique of this setup as well, so please feel free to blast away.

Example: Let's say I have 4 websites: site1.com, site2.com, site3.com, site4.com.

I have:
An NGINX proxy in front of the apache servers that sits in a public segment.
A tomcat server fronted by NGINX to terminate SSL that sits in a private segment.
Tomcat is installed in /opt/company/tomcat-8.5 and a symlink exists: /opt/company/tomcat-latest --> /opt/company/tomcat-8.5

Systemd requires a unit file: /usr/lib/systemd/system/tomcat8@.service

# Systemd unit file for tomcat instances.
#
# To create clones of this service:
# 1. systemctl enable tomcat@name.service
# 2. create catalina.base directory structure in /var/lib/tomcats/name
# /usr/lib/systemd/system/tomcatN.service

[Unit]
Description=Apache Tomcat 8
After=network.target

[Service]
Type=simple
User=%I
Group=%I
# Run ExecStartPre with root permissions
PermissionsStartOnly=true
ExecStartPre=-/usr/bin/mkdir /var/run/tomcat8
#ExecStartPre=/opt/company/utility/tomcat8/pre-run.sh
ExecStartPre=/usr/bin/chown -R root:tomcat8r /var/run/tomcat8
ExecStartPre=/usr/bin/chmod 770 /var/run/tomcat8
Environment="NAME=%I"
EnvironmentFile=/etc/sysconfig/tomcat8@%I
#ExecStart=/opt/company/tomcat8/bin/catalina.sh start
ExecStart=/opt/company/tomcat8/bin/startup.sh
ExecStop=/opt/company/tomcat8/bin/shutdown.sh
RemainAfterExit=yes
#User=%I
#Group=%I

[Install]
WantedBy=multi-user.target

Tomcat is set up as a service using the following service file:

# Service-specific configuration file for tomcat8. This will be sourced by
# the systemd script after the global configuration file
# /etc/sysconfig/tomcat8@userNN, thus allowing values to be overridden in
# a per-service manner. (NN being the numerical value for the specified
# user, 01-99)
#
# NEVER change the systemd unit file itself. To change values for all
# services make your changes in /etc/sysconfig/tomcat8@userNN.
#
# To change values for a specific service make your edits here.
# To create a new service a config file must exist for the user in
# /etc/sysconfig/tomcat8@userNN. All of the tomcat environment variables will be
# handled inside that config file for that user. When calling systemctl, systemd
# will look up the specified config file based on the username passed to it.
# Start the new service by executing: systemctl start tomcat8\@user99, replacing
# user99 with the appropriate user.
# Make the service start at boot time by executing the following command:
# systemctl enable tomcat8\@user99, again replacing user99 with the
# appropriate user.

TOMCAT_CFG_LOADED=1

# Run tomcat under the Java Security Manager
SECURITY_MANAGER="false"

# Where your java installation lives
JAVA_HOME="/opt/company/java-1.8"

# Where your tomcat installation lives
CATALINA_BASE="/home/user01/website"
CATALINA_HOME="/opt/company/tomcat8"
#JASPER_HOME=""
CATALINA_TMPDIR="/home/user01/website/temp"

# You can pass some parameters to java here if you wish to
JAVA_OPTS="-Xms2048m -Xmx2048m -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -Dspring.profiles.active=development"

# Use JAVA_OPTS to set java.library.path for libtcnative.so
#JAVA_OPTS="-Djava.library.path=/usr/lib"

# What user should run tomcat
TOMCAT_USER="user01"
TOMCAT_LOG="/home/user01/website/logs/catalina.out"

# You can change your tomcat locale here
#LANG="en_US"

# Run tomcat under the Java Security Manager
#SECURITY_MANAGER="false"

# Time to wait in seconds, before killing process
#SHUTDOWN_WAIT="30"

# Whether to annoy the user with "attempting to shut down" messages or not
#SHUTDOWN_VERBOSE="true"

# Set the TOMCAT_PID location
CATALINA_PID="/var/run/tomcat8/tomcat8-user01.pid"

# Connector port is 8080 for this tomcat8 instance
#CONNECTOR_PORT="8080"

# If you wish to further customize your tomcat environment,
# put your own definitions here
# (i.e. LD_LIBRARY_PATH for some jdbc drivers)
#CLASSPATH=""
# The above will not work without making changes to the base tomcat startup scripts.

A user is created for each site:
site1.com = user01
site2.com = user02
site3.com = user03
site4.com = user04

A sysconfig file is created for each user:
/etc/sysconfig/tomcat8@user01
/etc/sysconfig/tomcat8@user02
/etc/sysconfig/tomcat8@user03
/etc/sysconfig/tomcat8@user04

The tomcat configs for each website are stored in /home/user01/website/conf, as an example. Each user is assigned their own unique port using a scheme. Example:
user01 = 8101
user02 = 8102
user03 = 8103
user04 = 8104
and so on.

I have run into some challenges that I have not been able to explain, which is another reason I am posting this again for more eyes to be on it. Hope that helps.

On Fri, Feb 22, 2019 at 12:26 AM Jerry Malcolm wrote: > I need a bit of
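The port scheme above (user01 = 8101, user02 = 8102, ...) is easy to derive rather than track by hand, e.g. when generating the per-user sysconfig files. A small sketch; the function name is mine, and it assumes the two-digit user suffix convention shown above:

```shell
# Derive a tenant's connector port from its username, per the
# user01 -> 8101, user02 -> 8102 scheme described above.
tenant_port() {
    suffix="${1#user}"             # "user03" -> "03"
    echo $((8100 + 10#${suffix}))  # force base 10 so "08"/"09" don't parse as octal
}
```

The `10#` prefix matters: without it, bash treats "08" as an invalid octal literal.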
Host manager / manager access.
Java 8
Tomcat 8.5.20

Hello, I am trying to understand how to get the host manager / manager access working from somewhere other than localhost. I have tried all the various methods out there on the web to no avail. I keep getting the 403 access denied message. I am at a total loss at this point. Thanks in advance. I hope this is readable, as it's hard to tell what it's going to look like in this gmail editor.

I have tried creating the following files:

$CATALINA_BASE/conf/server.xml
$CATALINA_BASE/conf/Catalina/localhost/magager.xml
$CATALINA_BASE/webapps/host-manager/WEB-INF/context.xml
$CATALINA_BASE/webapps/manager/WEB-INF/context.xml
$CATALINA_BASE/conf/tomcat-users.xml

<tomcat-users xmlns="http://tomcat.apache.org/xml"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
              version="1.0">
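For what it's worth, on Tomcat 8.5 the usual cause of a 403 from anywhere other than localhost is the RemoteAddrValve shipped in each manager app's META-INF/context.xml (note: META-INF, not WEB-INF), which by default only admits loopback addresses. A sketch of a widened valve; the 192.168.1.x range is an assumed example admin subnet, substitute your own:

```xml
<!-- webapps/manager/META-INF/context.xml (Tomcat 8.5) -->
<Context antiResourceLocking="false" privileged="true">
  <!-- Default valve allows only localhost; extend the allow regex
       to admit a trusted subnet (192.168.1.x here is a placeholder). -->
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1|192\.168\.1\.\d+" />
</Context>
```

The same edit applies to webapps/host-manager/META-INF/context.xml for the host-manager app.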
Re: Understanding tomcat + apache and I/O
It should also be noted that if I bounce one of the larger instances, everyone suffers during the time it takes to start up. The connection counts rise in the same way, although I am not sure at this time if there is an actual outage experienced by anyone. I will have to do some testing to determine that.

On Wed, Nov 29, 2017 at 2:16 PM, TurboChargedDad . <linuxhpc...@gmail.com> wrote:

> >> So now all you have to do is upgrade to Tomcat 8.0 or, even better,
> >> Tomcat 8.5 :)
>
> That's the plan but it's kind of like pulling teeth.
>
> >> Can you expand on the "weirdness"? I see you have some more details
> >> below but I think you could be more specific.
>
> Let's say that there are 12 users on a given system, all running a tomcat
> server that has SSL terminated on the same host: user01, user02, user03 and
> so on all the way to user12. Each user has their own /home/userNN
> directory. Each user has their own environment file in
> /etc/sysconfig/tomcat7@userNN. Each of those files contains the
> various settings that are required for each user: CATALINA_HOME, Java path,
> PID, etc. Each user starts its own JVM in a work directory in their home
> directory.
>
> Now imagine that user10's application starts to experience a database
> issue and the app stops responding. It used to be true that everyone
> would stop responding because the AJP connectors were BIO. Then the HTTP
> connections would stack up across the board. The stacking of the HTTP
> connections was expected given the situation. Eventually the reverse proxy
> servers would die from running out of memory if we didn't get the outage
> under control quickly enough.
>
> Now that we have switched, we have had 2 outages. In both cases the only
> tenants impacted from a performance perspective were the tenants
> experiencing the failures. No other alarms were detected during these
> outages for any other tenants. Something odd does happen, however.
> The Apache HTTP connections rise for everyone along with the offending
> site.
>
> Please see the shared graph.
>
> https://photos.app.goo.gl/ZzEgpQUdbv9L84X82
>
> This is calculated by doing a netstat and grepping for EST, then httpd,
> then the AJP port that would have connections passed back to it:
> (sudo -tt /bin/netstat -ntp | grep EST | grep httpd | grep ':8125' | wc -l)
>
> tcp  0  0  127.0.0.1:37014  127.0.0.1:8125  ESTABLISHED  5529/httpd
> tcp  0  0  127.0.0.1:40630  127.0.0.1:8125  ESTABLISHED  29638/httpd
> tcp  0  0  127.0.0.1:40172  127.0.0.1:8125  ESTABLISHED  28592/httpd
> tcp  0  0  127.0.0.1:36842  127.0.0.1:8125  ESTABLISHED  5529/httpd
> tcp  0  0  127.0.0.1:40616  127.0.0.1:8125  ESTABLISHED  29640/httpd
> tcp  0  0  127.0.0.1:37314  127.0.0.1:8125  ESTABLISHED  20267/httpd
> tcp  0  0  127.0.0.1:39436  127.0.0.1:8125  ESTABLISHED  29577/httpd
> tcp  0  0  127.0.0.1:39180  127.0.0.1:8125  ESTABLISHED  25280/httpd
> tcp  0  0  127.0.0.1:40490  127.0.0.1:8125  ESTABLISHED  29577/httpd
> tcp  0  0  127.0.0.1:39330  127.0.0.1:8125  ESTABLISHED  29633/httpd
> tcp  0  0  127.0.0.1:40628  127.0.0.1:8125  ESTABLISHED  29631/httpd
> tcp  0  0  127.0.0.1:39278  127.0.0.1:8125  ESTABLISHED  28799/httpd
> tcp  0  0  127.0.0.1:39354  127.0.0.1:8125  ESTABLISHED  29637/httpd
> tcp  0  0  127.0.0.1:39686  127.0.0.1:8125  ESTABLISHED  29575/httpd
> tcp  0  0  127.0.0.1:37002  127.0.0.1:8125  ESTABLISHED  8354/httpd
> tcp  0  0  127.0.0.1:39292  127.0.0.1:8125  ESTABLISHED  29574/httpd
> tcp  0  0  127.0.0.1:39752  127.0.0.1:8125  ESTABLISHED  29631/httpd
> tcp  0  0  127.0.0.1:41450  127.0.0.1:8125  ESTABLISHED  29574/httpd
> tcp  0  0  127.0.0.1:37328  127.0.0.1:8125  ESTABLISHED  20266/httpd
> tcp  0  0  127.0.0.1:39726  127.0.0.1:8125  ESTABLISHED  28799/httpd
>
> It is the example above that determines the connection counts for each
> tenant.
>
> I cannot for the life of me understand how or why this is happening. The
> only rise in connections should be detected in the offending application,
> right?
> > I can't say beyond a shadow of a doubt that the AJP connector threads > aren't being wonky. I am having trouble getting JMX to tell me that > information through zabbix. > > > Thoughts? > > Thank
Re: Understanding tomcat + apache and I/O
>> So now all you have to do is upgrade to Tomcat 8.0 or, even better,
>> Tomcat 8.5 :)

That's the plan but it's kind of like pulling teeth.

>> Can you expand on the "weirdness"? I see you have some more details
>> below but I think you could be more specific.

Let's say that there are 12 users on a given system, all running a tomcat server that has SSL terminated on the same host: user01, user02, user03 and so on all the way to user12. Each user has their own /home/userNN directory. Each user has their own environment file in /etc/sysconfig/tomcat7@userNN. Each of those files contains the various settings that are required for each user: CATALINA_HOME, Java path, PID, etc. Each user starts its own JVM in a work directory in their home directory.

Now imagine that user10's application starts to experience a database issue and the app stops responding. It used to be true that everyone would stop responding because the AJP connectors were BIO. Then the HTTP connections would stack up across the board. The stacking of the HTTP connections was expected given the situation. Eventually the reverse proxy servers would die from running out of memory if we didn't get the outage under control quickly enough.

Now that we have switched, we have had 2 outages. In both cases the only tenants impacted from a performance perspective were the tenants experiencing the failures. No other alarms were detected during these outages for any other tenants. Something odd does happen, however. The Apache HTTP connections rise for everyone along with the offending site.

Please see the shared graph.

https://photos.app.goo.gl/ZzEgpQUdbv9L84X82

This is calculated by doing a netstat and grepping for EST, then httpd, then the AJP port that would have connections passed back to it.
(sudo -tt /bin/netstat -ntp | grep EST | grep httpd | grep ':8125' | wc -l)

tcp  0  0  127.0.0.1:37014  127.0.0.1:8125  ESTABLISHED  5529/httpd
tcp  0  0  127.0.0.1:40630  127.0.0.1:8125  ESTABLISHED  29638/httpd
tcp  0  0  127.0.0.1:40172  127.0.0.1:8125  ESTABLISHED  28592/httpd
tcp  0  0  127.0.0.1:36842  127.0.0.1:8125  ESTABLISHED  5529/httpd
tcp  0  0  127.0.0.1:40616  127.0.0.1:8125  ESTABLISHED  29640/httpd
tcp  0  0  127.0.0.1:37314  127.0.0.1:8125  ESTABLISHED  20267/httpd
tcp  0  0  127.0.0.1:39436  127.0.0.1:8125  ESTABLISHED  29577/httpd
tcp  0  0  127.0.0.1:39180  127.0.0.1:8125  ESTABLISHED  25280/httpd
tcp  0  0  127.0.0.1:40490  127.0.0.1:8125  ESTABLISHED  29577/httpd
tcp  0  0  127.0.0.1:39330  127.0.0.1:8125  ESTABLISHED  29633/httpd
tcp  0  0  127.0.0.1:40628  127.0.0.1:8125  ESTABLISHED  29631/httpd
tcp  0  0  127.0.0.1:39278  127.0.0.1:8125  ESTABLISHED  28799/httpd
tcp  0  0  127.0.0.1:39354  127.0.0.1:8125  ESTABLISHED  29637/httpd
tcp  0  0  127.0.0.1:39686  127.0.0.1:8125  ESTABLISHED  29575/httpd
tcp  0  0  127.0.0.1:37002  127.0.0.1:8125  ESTABLISHED  8354/httpd
tcp  0  0  127.0.0.1:39292  127.0.0.1:8125  ESTABLISHED  29574/httpd
tcp  0  0  127.0.0.1:39752  127.0.0.1:8125  ESTABLISHED  29631/httpd
tcp  0  0  127.0.0.1:41450  127.0.0.1:8125  ESTABLISHED  29574/httpd
tcp  0  0  127.0.0.1:37328  127.0.0.1:8125  ESTABLISHED  20266/httpd
tcp  0  0  127.0.0.1:39726  127.0.0.1:8125  ESTABLISHED  28799/httpd

It is the example above that determines the connection counts for each tenant.

I cannot for the life of me understand how or why this is happening. The only rise in connections should be detected in the offending application, right?

I can't say beyond a shadow of a doubt that the AJP connector threads aren't being wonky. I am having trouble getting JMX to tell me that information through zabbix.

Thoughts? Thanks in advance.

On Wed, Nov 29, 2017 at 8:51 AM, Christopher Schultz <ch...@christopherschultz.net> wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Big Papa,
>
> On 11/29/17 12:06 AM, TurboChargedDad . wrote:
> > So..
Thank you for those help me understand the NIO vs BIO in > > tomcat 7.. > > So now all you have to do is upgrade to Tomcat 8.0 or, even better, > Tomcat 8.5 :) > > > I made those changes things have improved quite a bit. I am still > > experiencing some weirdness that I have tried to understand but > > can't get a handle on it. > > Can you expand on the "weirdness"? I see you have some more details > below but I think you could be more specific. > > > Quick overview.. --Proxies-- Apache Proxies (2) - The end user > > terminates SSL at the pr
Understanding tomcat + apache and I/O
(Sorry, didn't mean to send; please check this email for additional info.)

So.. Thank you to those who helped me understand NIO vs BIO in tomcat 7. I made those changes and things have improved quite a bit. I am still experiencing some weirdness that I have tried to understand but can't get a handle on.

Quick overview..

--Proxies--
Apache Proxies (2) - The end user terminates SSL at the proxy/edge.
The proxies use HTTPS/SSL to reverse proxy back to the tomcat server.
--/Proxies--

PXY1 & 2 configs for prefork mode:

StartServers 30
MinSpareServers 15
MaxSpareServers 30
ServerLimit 400
MaxClients 400
MaxRequestsPerChild 4000

--Tomcat server-- (1)
Apache terminates SSL over the top of Tomcat on the same server.
Reverse proxies to the tomcat server using NIO AJP connectors.
--/Tomcat server--

Tomcat apache prefork mode config:

StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 800
MaxClients 800
MaxRequestsPerChild 4000

A typical vhost config for a given tenant would look like this:

ServerAdmin ad...@company.com
ServerName somewhere.somedomain.com
ProxyPass / ajp://localhost:8126/ retry=3
DirectoryIndex index.php index.html index.htm
# if not specified, the global error log is used
ErrorLog "|/usr/sbin/rotatelogs /home/someuser/website/logs/somewhere.somedomain.com-error_log_%Y%m%d 86400"
CustomLog "|/usr/sbin/rotatelogs /home/someuser/website/logs/somewhere.somedomain.com-access_log_%Y%m%d 86400" combined
# log IP addresses
HostnameLookups Off
UseCanonicalName Off
ServerSignature off
SSLEngine on
SSLCertificateFile /etc/ssl/ssl.crt/somewhere.somedomain.com.crt
# Server Private Key:
SSLCertificateKeyFile /etc/ssl/ssl.key/somewhere.somedomain.com.key
SSLCertificateChainFile /etc/ssl/ssl.crt/somewhere.somedomain-chain.com.crt

Typical tomcat connector thread config:

We are operating a multi-tenant environment. As of right now, we have somewhere around 20 tomcat instances on a large machine, of which only a handful are "busy".
It used to be that when any one of them experienced a blocking issue, every one of them went down. All of their AJP connector threads would rise until tomcat was unresponsive. So far that appears, for the most part, to be addressed... However... When an issue is experienced, the site(s) experiencing the issue(s) going down doesn't seem to bring down any of the other sites. (w00t! w00t!) But the httpd connections for each site all still climb together. (Please see attached graph.) Again, no outage is experienced, but the connections climbing together is demonstrated by the graph attached to this message.

That graph is from zabbix, using a custom metric that checks every 3 mins. It does the following for each virtual host / tomcat instance. For user25:

UserParameter=somewebsite.constats,sudo -tt /bin/netstat -ntp | grep EST | grep httpd | grep ':8125' | wc -l
UserParameter=somewebsite2.constats,sudo -tt /bin/netstat -ntp | grep EST | grep httpd | grep ':8126' | wc -l

So there is virtually no way they can be getting mixed up. Not to mention that there are a few that do not experience a rise in connections.

Thoughts? Anyone? Thanks in advance.
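The grep chain in those UserParameter lines can be factored into a function and exercised against captured netstat output, which makes it easier to sanity-check that per-port counts can't bleed into each other. A sketch; the function name is mine:

```shell
# Count ESTABLISHED httpd connections whose destination is the given
# local AJP port; reads `netstat -ntp` style output on stdin.
# Mirrors: netstat -ntp | grep EST | grep httpd | grep ':PORT' | wc -l
count_ajp_conns() {
    grep 'ESTABLISHED' | grep "127\.0\.0\.1:$1 " | grep -c 'httpd'
}
```

Running it against a single saved snapshot (netstat -ntp > snap.txt, then count_ajp_conns 8125 < snap.txt) compares all tenants from the same instant, ruling out sampling skew between zabbix polls.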
Understanding tomcat + apache I/O
So.. Thank you to those who helped me understand NIO vs BIO in tomcat 7. I made those changes and things have improved quite a bit. I am still experiencing some weirdness that I have tried to understand but can't get a handle on.

Quick overview..

--Proxies--
Apache Proxies (2) - The end user terminates SSL at the proxy/edge.
The proxies use HTTPS/SSL to reverse proxy back to the tomcat server.
--/Proxies--

PXY1 & 2 configs for prefork mode:

StartServers 30
MinSpareServers 15
MaxSpareServers 30
ServerLimit 400
MaxClients 400
MaxRequestsPerChild 4000

--Tomcat server-- (1)
Apache terminates SSL over the top of Tomcat on the same server.
Reverse proxies to the tomcat server using NIO AJP connectors.
--/Tomcat server--

Tomcat apache prefork mode config:

StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 800
MaxClients 800
MaxRequestsPerChild 4000

A typical vhost config for a given tenant would look like this:

ServerAdmin ad...@company.com
ServerName somewhere.somedomain.com
ProxyPass / ajp://localhost:8326/ retry=3
DirectoryIndex index.php index.html index.htm
# if not specified, the global error log is used
ErrorLog "|/usr/sbin/rotatelogs /home/someuser/website/logs/somewhere.somedomain.com-error_log_%Y%m%d 86400"
CustomLog "|/usr/sbin/rotatelogs /home/someuser/website/logs/somewhere.somedomain.com-access_log_%Y%m%d 86400" combined
# log IP addresses
HostnameLookups Off
UseCanonicalName Off
ServerSignature off
SSLEngine on
SSLCertificateFile /etc/ssl/ssl.crt/somewhere.somedomain.com.crt
# Server Private Key:
SSLCertificateKeyFile /etc/ssl/ssl.key/somewhere.somedomain.com.key
SSLCertificateChainFile /etc/ssl/ssl.crt/somewhere.somedomain-chain.com.crt

We are operating a multi-tenant environment. As of right now, we have somewhere around 20 tomcat instances on a large machine, of which only a handful are "busy". It used to be that when any one of them experienced a blocking issue, every one of them went down.
All of their AJP connector threads would rise until tomcat was unresponsive. So far that appears, for the most part, to be addressed... However... When an issue is experienced, the site(s) experiencing the issue(s) going down doesn't seem to bring down any of the other sites. (w00t! w00t!) But the httpd connections for each site all still climb together. (Please see attached graph.)
Re: AJP connection pool issue bug?
I missed some of these messages before.. I apologize. Can I send these to you privately. On Wed, Oct 4, 2017 at 4:01 PM, Christopher Schultz < ch...@christopherschultz.net> wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > TCD, > > On 10/4/17 3:45 PM, TurboChargedDad . wrote: > > Perhaps I am not wording my question correctly. > > Can you confirm that the connection-pool exhaustion appears to be > happening on the AJP client (httpd/mod_proxy_ajp) and NOT on the > server (Tomcat/AJP)? > > If so, the problem will likely not improve by switching-over to an > NIO-based connector on the Tomcat side. > > Having said that, the real problem is likely to be simple arithmetic. > Remember this expression: > > Ctc = Nhttpd * Cworkers > > Ctc = Connections Tomcat should be prepared to accept (e.g. Connector > maxConnections) > > Nhttpd = # of httpd servers > Cworkers = total # of connections in httpd connection pool for all > workers(!!) > > Imagine the following scenario: > > Nhttpd = 2 > Cworker = 200 > Ntomcat = 2 > > On httpd server A, we have a connection pool with 200 connections. If > Tomcat A goes down, all 200 connections will go to Tomcat B. If that > happens to both proxies (Tomcat A stops responding), then both proxies > will send all 200 connections to Tomcat B. That means that Tomcat B > needs to be able to support 400 connections, not 200. > > Let's say you now have 5 workers (1 for each application). Each worker > gets its own connection pool, and each connection pool has 200 workers > in it. Now, we have a situation where each httpd instance actually has > 1000 (potential) connections in the connection pool, and if Tomcat A > goes down, Tomcat B must be able to handle 2000 connections (1000 from > httpd A and 1000 from httpd B). > > At some point, you can't provision enough threads to handle all of > those connections. 
> > The solution (bringing this back around again) is to use NIO, because > you can handle a LOT more connections with a lower number of threads. > NIO doesn't allow you to handle more *concurrent* traffic (in fact, it > makes performance a tiny bit worse than BIO), but it will allow you to > have TONS of idle connections that aren't "wasting" request-processing > threads that are just waiting for another actual request to come > across the wire. > > > As a test I changed the following line in one of the many tomcat > > instances running on the server and bounced it. Old New > protocol="org.apache.coyote.ajp.AjpNioProtocol" redirectPort="8443" > > maxThreads="300" /> > > Yep, that's how to do it. > > > As the docs state I am able to verify that it did in fact switch > > over to NIO. > > > > INFO: Starting ProtocolHandler ["ajp-nio-9335"] > > Good. Now you can handle many idle connections with the same number of > threads. > > > Will running NIO and BIO on the same box have a negative impact? > > No. > > > I am thinking they should all be switched to NIO, this was just a > > test to see if I was understanding what I was reading. > I would recommend NIO in all cases. > > > That being said I suspect there are going to be far more tweaks > > that needs to be applied as there are none to date. > > Hopefully not. A recent Tomcat (which you don't actually have) with a > stock configuration should be fairly well-configured to handle a great > deal of traffic without falling-over. > > > I also know that the HTTPD server is running in prefork mode. > That will pose some other issues for you, mostly the ability to handle > bursts of high concurrency from your clients. You can consider it > out-of-scope for this discussion, though. What we want to do for you > is stop httpd+Tomcat from freaking out and getting stopped-up with > even a small number of users. 
> > > Which I think leaves me with no control over how many connections > > can be handed back from apache on a site by site basis. > > Probably not on a site-by-site basis, but you can adjust the > connection-pool size on a per-worker basis. For prefork it MUST BE > connection_pool_size=1 (the default for prefork httpd) and for > "worker" and similarly-threaded MPMs the default should be fine to use. > > > Really having hard time explaining to others how BIO could have > > caused the connection pool for another use to become exhausted. > Well... > > If one of your Tomcats locks-up (database is dead; might want to check > to see how the application is accessing that... infinite timeouts can > be a real killer, here), it can tie-up connections from > mod_proxy_ajp's connection p
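Chris's capacity arithmetic above (Ctc = Nhttpd * Cworkers) is worth working through with the thread's own example numbers, since it explains why a single stuck tenant can overwhelm a surviving Tomcat. A quick sketch:

```shell
# Capacity arithmetic from the reply above: if Tomcat A dies, every
# proxy's whole connection pool can land on Tomcat B.
n_httpd=2          # httpd proxy servers
n_workers=5        # AJP workers per httpd (one per application)
pool_size=200      # connections per worker pool

conns_per_httpd=$((n_workers * pool_size))  # potential conns each proxy holds
ctc=$((n_httpd * conns_per_httpd))          # conns one surviving Tomcat must accept

echo "per-proxy pool total: $conns_per_httpd"
echo "surviving Tomcat needs: $ctc"
```

With the thread's numbers this gives 1000 potential connections per proxy and 2000 on the surviving Tomcat, matching the figures in Chris's message; the takeaway is that maxConnections must be sized for the failover case, not the steady state.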
Re: AJP connection pool issue bug?
Perhaps I am not wording my question correctly. Today we have...

[Proxy 1] | [Proxy 2] ---> [Apache ---> tomcat1]
(HTTPS)     (HTTPS)        (HTTPS) --> (AJP) -->

So we send the information from the proxies over https to the instance running the tomcat server. The SSL is terminated by Apache/HTTPD and handed back to tomcat over AJP. Tomcat doesn't handle SSL in any way. It can't; it's not configured to do so. Is that how you understand the question I asked?

As a test I changed the following line in one of the many tomcat instances running on the server and bounced it. Old New

As the docs state, I am able to verify that it did in fact switch over to NIO.

INFO: Starting ProtocolHandler ["ajp-nio-9335"]

Will running NIO and BIO on the same box have a negative impact? I am thinking they should all be switched to NIO; this was just a test to see if I was understanding what I was reading. That being said, I suspect there are going to be far more tweaks that need to be applied, as there are none to date. I also know that the HTTPD server is running in prefork mode, which I think leaves me with no control over how many connections can be handed back from apache on a site-by-site basis. Really having a hard time explaining to others how BIO could have caused the connection pool for another user to become exhausted.

Thanks, TCD

On Wed, Oct 4, 2017 at 1:31 PM, Mark Thomas <ma...@apache.org> wrote:

> On 04/10/17 19:26, TurboChargedDad . wrote:
> > My initial reads about BIO vs NIO seems to involve terminating SSL at the
> > tomcat instance. Which we do not do. Am I running off into the weeds with
> > that?
>
> Yes. The NIO AJP connector is a drop in replacement for the BIO AJP
> connector.
>
> https://tomcat.apache.org/tomcat-7.0-doc/config/ajp.html#Standard_Implementations
>
> Look for the protocol attribute.
>
> Mark
>
> > Thanks,
> > TCD
> >
> > On Wed, Oct 4, 2017 at 9:17 AM, Mark Thomas <ma...@apache.org> wrote:
> >
> >> On 04/10/17 13:51, TurboChargedDad .
wrote: > >>> Hello all.. > >>> I am going to do my best to describe my problem. Hopefully someone > will > >>> have some sort of insight. > >>> > >>> Tomcat 7.0.41 (working on updating that) > >>> Java 1.6 (Working on getting this updated to the latest minor release) > >>> RHEL Linux > >>> > >>> I inherited an opti-tenant setup. Individual user accounts on the > system > >>> each have their own Tomcat instance, each is started using sysinit. > This > >>> is done to keep each website in its own permissible world so one > website > >>> can't interfere with a others data. > >>> > >>> There are two load balanced apache proxies at the edge that point to > one > >>> Tomcat server (I know I know but again I inherited this) > >>> > >>> Apache lays over the top of tomcat to terminate SSL and uses AJP to > >>> proxypass to each tomcat instance based on the users assigned port. > >>> > >>> Things have run fine for years (so I am being told anyway) until > >> recently. > >>> Let me give an example of an outage. > >>> > >>> User1, user2 and user3 all use unique databases on a shared database > >>> server, SQL server 10. > >>> > >>> User 4 runs on a windows jboss server and also has a database on shared > >>> database server 10. > >>> > >>> Users 5-50 all run in the mentioned Linux server using tomcat and have > >>> databases on *other* various shared databases servers but have nothing > to > >>> do with database server 10. > >>> > >>> User 4 had a stored proc go wild on database server 10 basically > knocking > >>> it offline. > >>> > >>> Now one would expect sites 1-4 to experience interruption of service > >>> because they use a shared DBMS platform. However. > >>> > >>> Every single site goes down. I monitor the connections for each site > >> with a > >>> custom tool. When this happens, the connections start stacking up > across > >>> all the components. 
(Proxies all the way through the stack) > >>> Looking at the AJP connection pool threads for user 9 shows that user > has > >>> exhausted their AJP connection pool threads. They are maxed out at 300 > >> yet > >>> that user doesn't have high activity at all. The CPU load, memory usage > >> and > >>> traffic for everything exc
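The "Old"/"New" connector lines in TCD's test were stripped by the list archiver. Based on the attributes that did survive elsewhere in the thread (redirectPort="8443", maxThreads="300") and the logged port (ajp-nio-9335), the server.xml change likely looked something like the following; the "old" line is an assumption, using the Tomcat 7 default AJP/1.3 protocol:

```xml
<!-- Old: BIO AJP connector (assumed Tomcat 7 default) -->
<Connector port="9335" protocol="AJP/1.3"
           redirectPort="8443" maxThreads="300" />

<!-- New: drop-in NIO replacement, per Mark's reply -->
<Connector port="9335" protocol="org.apache.coyote.ajp.AjpNioProtocol"
           redirectPort="8443" maxThreads="300" />
```

As Mark notes, only the protocol attribute changes; the rest of the connector configuration carries over unchanged.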
Re: AJP connection pool issue bug?
My initial reading about BIO vs NIO seems to involve terminating SSL at the Tomcat instance, which we do not do. Am I running off into the weeds with that? Thanks, TCD

On Wed, Oct 4, 2017 at 9:17 AM, Mark Thomas <ma...@apache.org> wrote:
> On 04/10/17 13:51, TurboChargedDad . wrote:
> > Hello all. I am going to do my best to describe my problem. Hopefully someone will have some sort of insight.
> >
> > Tomcat 7.0.41 (working on updating that)
> > Java 1.6 (working on getting this updated to the latest minor release)
> > RHEL Linux
> >
> > I inherited a multi-tenant setup. Individual user accounts on the system each have their own Tomcat instance, each started using sysinit. This is done to keep each website in its own permissible world so one website can't interfere with another's data.
> >
> > There are two load-balanced Apache proxies at the edge that point to one Tomcat server (I know, I know, but again, I inherited this).
> >
> > Apache lays over the top of Tomcat to terminate SSL and uses AJP to ProxyPass to each Tomcat instance based on the user's assigned port.
> >
> > Things have run fine for years (so I am told, anyway) until recently. Let me give an example of an outage.
> >
> > User1, user2, and user3 all use unique databases on a shared database server, SQL server 10.
> >
> > User4 runs on a Windows JBoss server and also has a database on shared database server 10.
> >
> > Users 5-50 all run on the mentioned Linux server using Tomcat and have databases on *other* shared database servers, but have nothing to do with database server 10.
> >
> > User4 had a stored proc go wild on database server 10, basically knocking it offline.
> >
> > Now one would expect sites 1-4 to experience interruption of service because they use a shared DBMS platform. However, every single site goes down. I monitor the connections for each site with a custom tool. When this happens, the connections start stacking up across all the components (proxies all the way through the stack).
> >
> > Looking at the AJP connection pool threads for user 9 shows that user has exhausted their AJP connection pool threads. They are maxed out at 300, yet that user doesn't have high activity at all. The CPU load, memory usage, and traffic for everything except SQL server 10 are stable during this outage. The proxies consume more and more memory the longer the outage lasts, but that's expected as the connection counts stack up into the thousands. After a short time, all the sites' Apache/SSL termination layers start throwing AJP timeout errors. Shortly after that, the edge proxies naturally start throwing timeout errors of their own.
> >
> > I am only watching user 9, using a tool that gives me insight into what's going on via JMX metrics, but I suspect that once I get all the others instrumented I will see the same thing: maxed-out AJP connection pools.
> >
> > Aren't those supposed to be unique per user/JVM? Am I missing something in the docs?
> >
> > Any assistance from the tomcat gods is much appreciated.
>
> TL;DR - Try switching to the NIO AJP connector on Tomcat.
>
> Take a look at this session I just uploaded from TomcatCon London last week. You probably want to start around 35:00 and the topic of thread exhaustion.
>
> HTH,
>
> Mark
>
> P.S. The other sessions we have are on the way. I plan to update the site and post links once I have them all uploaded.
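Mark's suggestion above, switching the AJP connector from the blocking (BIO) implementation to NIO, is a one-line change in each instance's server.xml. A minimal sketch, assuming the stock AJP port 8009 (the port and maxThreads values here are illustrative, not taken from the thread):

```xml
<!-- Default blocking AJP connector: each open connection from httpd
     pins a Tomcat thread for the connection's whole lifetime -->
<!-- <Connector port="8009" protocol="AJP/1.3" maxThreads="300" /> -->

<!-- NIO AJP connector: a thread is only tied up while a request is
     actually being processed, so idle keep-alive connections from
     mod_jk / mod_proxy_ajp no longer exhaust the pool -->
<Connector port="8009"
           protocol="org.apache.coyote.ajp.AjpNioProtocol"
           maxThreads="300"
           connectionTimeout="60000" />
```

This matters for the outage described above because with BIO, connections held open by a stalled backend (the wedged database) keep consuming threads even while idle; NIO releases them between requests.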
AJP connection pool issue bug?
Hello all. I am going to do my best to describe my problem. Hopefully someone will have some sort of insight.

Tomcat 7.0.41 (working on updating that)
Java 1.6 (working on getting this updated to the latest minor release)
RHEL Linux

I inherited a multi-tenant setup. Individual user accounts on the system each have their own Tomcat instance, each started using sysinit. This is done to keep each website in its own permissible world so one website can't interfere with another's data.

There are two load-balanced Apache proxies at the edge that point to one Tomcat server (I know, I know, but again, I inherited this).

Apache lays over the top of Tomcat to terminate SSL and uses AJP to ProxyPass to each Tomcat instance based on the user's assigned port.

Things have run fine for years (so I am told, anyway) until recently. Let me give an example of an outage.

User1, user2, and user3 all use unique databases on a shared database server, SQL server 10.

User4 runs on a Windows JBoss server and also has a database on shared database server 10.

Users 5-50 all run on the mentioned Linux server using Tomcat and have databases on *other* shared database servers, but have nothing to do with database server 10.

User4 had a stored proc go wild on database server 10, basically knocking it offline.

Now one would expect sites 1-4 to experience interruption of service because they use a shared DBMS platform. However, every single site goes down. I monitor the connections for each site with a custom tool. When this happens, the connections start stacking up across all the components (proxies all the way through the stack).

Looking at the AJP connection pool threads for user 9 shows that user has exhausted their AJP connection pool threads. They are maxed out at 300, yet that user doesn't have high activity at all. The CPU load, memory usage, and traffic for everything except SQL server 10 are stable during this outage. The proxies consume more and more memory the longer the outage lasts, but that's expected as the connection counts stack up into the thousands. After a short time, all the sites' Apache/SSL termination layers start throwing AJP timeout errors. Shortly after that, the edge proxies naturally start throwing timeout errors of their own.

I am only watching user 9, using a tool that gives me insight into what's going on via JMX metrics, but I suspect that once I get all the others instrumented I will see the same thing: maxed-out AJP connection pools.

Aren't those supposed to be unique per user/JVM? Am I missing something in the docs?

Any assistance from the tomcat gods is much appreciated. Thanks in advance. TCD
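The per-tenant proxying described here (Apache terminating SSL, then forwarding over AJP to each user's assigned port) would look roughly like this with mod_proxy_ajp. The host name and port are hypothetical, not taken from the thread:

```apache
# Hypothetical tenant vhost: user 9's Tomcat instance listens on AJP port 8109
<VirtualHost *:443>
    ServerName user9.example.com
    SSLEngine on

    # One backend per tenant; timeout= bounds how long httpd waits on a
    # stuck backend before surfacing the AJP timeout errors described above
    ProxyPass        / ajp://localhost:8109/ timeout=60
    ProxyPassReverse / ajp://localhost:8109/
</VirtualHost>
```

A consequence of this layout is that each httpd worker holds its own AJP connection to the tenant's Tomcat, which is why connection counts stack up at every layer once the backend stalls.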
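The pool metric being watched here (busy AJP threads against maxThreads) can also be read directly over JMX rather than through a custom tool. A sketch, assuming the code runs inside the Tomcat JVM (for a remote instance you would connect through `JMXConnectorFactory` instead); the connector name `ajp-bio-8009` is a hypothetical example and must match your instance:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class AjpPoolCheck {
    public static void main(String[] args) throws Exception {
        // Query the local platform MBeanServer; inside Tomcat the
        // Catalina MBeans are registered here.
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        // Hypothetical ThreadPool name -- adjust to the connector name
        // your instance actually registers.
        ObjectName pool =
            new ObjectName("Catalina:type=ThreadPool,name=\"ajp-bio-8009\"");

        if (mbs.isRegistered(pool)) {
            Object busy = mbs.getAttribute(pool, "currentThreadsBusy");
            Object max  = mbs.getAttribute(pool, "maxThreads");
            System.out.println("AJP busy/max: " + busy + "/" + max);
        } else {
            // Running standalone (outside Tomcat) there is no such MBean.
            System.out.println("ThreadPool MBean not registered "
                + "(run inside Tomcat, or connect via remote JMX)");
        }
    }
}
```

Polling this per instance would confirm whether the other tenants' pools are also maxed out during an outage, without instrumenting each one by hand.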