RE: Tomcat 9.0.65 suspected memory leak

2023-02-10 Thread Chen Levy
Thanks Mark, the workaround seems to be working
Chen

> -Original Message-
> From: Mark Thomas 
> Sent: Thursday, February 9, 2023 12:41
> To: users@tomcat.apache.org
> Subject: Re: Tomcat 9.0.65 suspected memory leak
> 
> On 09/02/2023 13:25, Mark Thomas wrote:
> > On 09/02/2023 13:04, Mark Thomas wrote:
> >> On 04/02/2023 22:06, Chen Levy wrote:
> >>
> >>> Mark, I believe a change in Tomcat 9.0.65 causes it to accumulate
> >>> open connections:
> >>> I took a fresh Tomcat, unzipped and modified server.xml with only
> >>> the
> >>> following:
> >>> 1. Changed port 8080 to port 80
> >>> 2. Changed port 8443 to port 443
> >>> 3. Uncommented the nio connector and added the snippet
> >>>      <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
> >>>      <SSLHostConfig>
> >>>          <Certificate certificateKeystoreFile="conf/tomcat_noroot.p12"
> >>>                       certificateKeyAlias="..."
> >>>                       certificateKeystorePassword="..."
> >>>                       certificateKeystoreType="PKCS12"/>
> >>>      </SSLHostConfig>
> >>>
> >>> I used Chrome to call the default index.html with Wireshark in the
> >>> middle:
> >>> With 9.0.63 - 20 seconds after the last data frame, came a GOAWAY
> >>> from the server.
> >>> With 9.0.65 - No GOAWAY was sent, and the server and client kept
> >>> ACKing each other.
> >>>
> >>> Tomcat 9.0.71 and 10.1.5 behaved similarly - no GOAWAY was sent.
> >>>
> >>> Test was conducted with:
> >>> Wireshark Version 4.0.3 (v4.0.3-0-gc552f74cdc23)
> >>> Chrome Version 109.0.5414.120
> >>> JDK 17.0.6+10
> >>> Windows 11
> >>
> >> Thanks for the reproduction details. I'll take a look now.
> >
> > A quick workaround is to configure useAsyncIO="false" on the Connector.
> 
> Fixed for the next round of releases.
> 
> Mark
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
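[Editor's note] For anyone applying the workaround without digging through the thread: useAsyncIO is set on the HTTP/2-enabled Connector in server.xml. A minimal sketch only; the port, protocol class and keystore values below are placeholders, not taken from the thread:

```xml
<!-- Workaround from this thread: disable the async IO path on the Connector.
     Keystore values are placeholders. -->
<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           useAsyncIO="false">
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateKeystoreFile="conf/keystore.p12"
                     certificateKeystorePassword="changeit"
                     certificateKeystoreType="PKCS12" />
    </SSLHostConfig>
</Connector>
```

Per Mark's follow-up, the underlying bug is fixed in the releases after 9.0.71/10.1.5, so the attribute can be dropped again after upgrading.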



RE: Tomcat 9.0.65 suspected memory leak

2023-02-04 Thread Chen Levy

> -Original Message-
> From: Mark Thomas 
> Sent: Monday, September 19, 2022 13:02
> To: users@tomcat.apache.org
> Subject: Re: Tomcat 9.0.65 suspected memory leak
> 
> On 15/09/2022 14:11, Chen Levy wrote:
> > Hello Experts
> >
> > We’ve recently upgraded some of our production servers to Tomcat
> > 9.0.65; every upgraded server crashed with java.lang.OutOfMemoryError
> > within an hour or so under load.
> >
> > The exact same setup (same application, Linux kernel, Java version
> > etc.) with Tomcat 9.0.63 does not exhibit this issue.
> >
> > A heap-dump through MAT gave the following leak suspect (leak report
> > attached):
> >
> > “
> >
> > 14,364 instances of
> > "org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper", loaded by
> > "java.net.URLClassLoader @ 0x6be257090" occupy 4,489,221,944 (91.95%)
> bytes.
> >
> > These instances are referenced from one instance of
> > "java.util.concurrent.ConcurrentHashMap$Node[]", loaded by " > class loader>", which occupies 590,736 (0.01%) bytes.
> >
> > Keywords
> >
> >      org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper
> >
> >      java.net.URLClassLoader @ 0x6be257090
> >
> >      java.util.concurrent.ConcurrentHashMap$Node[]
> >
> > “
> >
> > Please let me know if I should provide additional information
> 
> That looks like 14k current connections which isn't unreasonable for a
> Tomcat instance under load.
> 
> There are connector related changes between 9.0.63 and 9.0.65 but nothing
> that is obviously related to the issue you are seeing.
> 
> At this point there isn't enough information to differentiate between:
> - a regression introduced in Tomcat between 9.0.63 and 9.0.65
> - a change in Tomcat between 9.0.63 and 9.0.65 that exposed a bug in the
>   deployed web application
> - a change in Tomcat between 9.0.63 and 9.0.65 that triggered an
>   increase in memory usage sufficient to trigger an OOME in your
>   environment
> 
> What we would need to investigate this further is a test case that
> demonstrates a leak. It doesn't have to trigger an OOME - it just has to
> demonstrate the JVM retaining references to objects you'd expect to have
> been eligible for GC. If you can reduce it to a single request even better.
> 
> Mark


Mark, I believe a change in Tomcat 9.0.65 causes it to accumulate open 
connections:
I took a fresh Tomcat, unzipped and modified server.xml with only the following:
1. Changed port 8080 to port 80
2. Changed port 8443 to port 443
3. Uncommented the nio connector and added the snippet
   <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
   <SSLHostConfig>
       <Certificate certificateKeystoreFile="conf/tomcat_noroot.p12"
                    certificateKeyAlias="..."
                    certificateKeystorePassword="..."
                    certificateKeystoreType="PKCS12"/>
   </SSLHostConfig>
I used Chrome to call the default index.html with Wireshark in the middle:
With 9.0.63 - 20 seconds after the last data frame, came a GOAWAY from the 
server.
With 9.0.65 - No GOAWAY was sent, and the server and client kept ACKing each 
other.

Tomcat 9.0.71 and 10.1.5 behaved similarly - no GOAWAY was sent.

Test was conducted with:
Wireshark Version 4.0.3 (v4.0.3-0-gc552f74cdc23)
Chrome Version 109.0.5414.120
JDK 17.0.6+10
Windows 11

Chen


Tomcat 9.0.65 suspected memory leak

2022-09-15 Thread Chen Levy
Hello Experts

We've recently upgraded some of our production servers to Tomcat 9.0.65; every 
upgraded server crashed with java.lang.OutOfMemoryError within an hour or so 
under load.
The exact same setup (same application, Linux kernel, Java version etc.) with 
Tomcat 9.0.63 does not exhibit this issue.

A heap-dump through MAT gave the following leak suspect (leak report attached):

"
14,364 instances of "org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper", 
loaded by "java.net.URLClassLoader @ 0x6be257090" occupy 4,489,221,944 (91.95%) 
bytes.

These instances are referenced from one instance of 
"java.util.concurrent.ConcurrentHashMap$Node[]", loaded by "<system class 
loader>", which occupies 590,736 (0.01%) bytes.

Keywords

org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper
java.net.URLClassLoader @ 0x6be257090
java.util.concurrent.ConcurrentHashMap$Node[]
"

Please let me know if I should provide additional information

Java: OpenJDK Runtime Environment Temurin-17.0.4+8 (build 17.0.4+8)
Linux: 4.14.281-212.502.amzn2.aarch64
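[Editor's note] The MAT figures above (14k NioSocketWrapper instances, all held via one ConcurrentHashMap$Node[]) match the shape of a per-connection registry whose entries are only removed when the close path runs. As an illustration only, not Tomcat's actual code; the names here are invented for the sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of the leak shape MAT reports: per-connection wrappers held
// in a ConcurrentHashMap that is only trimmed on close. If the close path
// is skipped (e.g. no GOAWAY, no timeout handling), the map only grows.
public class ConnectionRegistry {
    static class SocketWrapper {
        final byte[] state = new byte[16 * 1024]; // stand-in for per-connection buffers
    }

    private final Map<Integer, SocketWrapper> connections = new ConcurrentHashMap<>();

    void onAccept(int socketId) {
        connections.put(socketId, new SocketWrapper());
    }

    // If this never runs, every wrapper stays strongly reachable.
    void onClose(int socketId) {
        connections.remove(socketId);
    }

    int live() {
        return connections.size();
    }

    public static void main(String[] args) {
        ConnectionRegistry registry = new ConnectionRegistry();
        for (int i = 0; i < 14_364; i++) {   // the instance count from the MAT report
            registry.onAccept(i);
        }
        System.out.println("retained wrappers: " + registry.live());
    }
}
```

Whether the close path runs at all is exactly what the GOAWAY packet captures in this thread distinguish between 9.0.63 and 9.0.65.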








Thanks
Chen

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org

Tomcat on Amazon's Graviton2

2021-02-10 Thread Chen Levy
Hello Everyone

We've been running our Staging environment on ARM64 based servers (Amazon EC2 
T4g) for a while now, and plan to use similar machines to replace our 
Production environment's x86-64 based servers in the near future.
All of our servers are based on Amazon Linux 2 with AdoptOpenJDK 15.0.2 and 
Tomcat 9.
Measured with our application, performance between x86-64 and ARM64 with two 
similarly sized machines (e.g. T3.large vs T4g.large) is up by an average of 
22% on the T4g's.
These numbers are largely academic, and will stay that way until we actually 
deploy a few machines in Production.
The ARM64 machines are also cheaper by about 20%, so if all goes well, there 
aren't real downsides to the move.

I read this correspondence about a year ago:
https://marc.info/?l=tomcat-user&w=2&r=1&s=arm64&q=b

I'd like to ask the community if anyone has production experience with Tomcat 
and ARM64/Graviton2, and whether there are specific issues I should beware of.

Thanks
Chen




RE: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-10-01 Thread Chen Levy
> -Original Message-
> From: Mark Thomas 
> Sent: Tuesday, October 1, 2019 17:43
> To: users@tomcat.apache.org
> Subject: Re: Tomcat 9.0.24/9.0.26 suspected memory leak
> 
> Found it.
> 
> HTTP/2 on NIO is affected.
> HTTP/2 on APR/native is not affected.
> 
> Need to check on NIO2 but I suspect it is affected.
> 
> Patch to follow shortly.
> 
> Mark


Good, here's some more corroborating info:
Mark I followed your suggestion to test without HTTP/2, and one of my servers 
(v9.0.26) has been running without it for a day now, showing no memory 
accumulation
I do not use APR/Native

Chen

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-09-27 Thread Chen Levy
-Original Message-
From: Mark Thomas  
Sent: Friday, September 27, 2019 15:34
To: users@tomcat.apache.org
Subject: Re: Tomcat 9.0.24/9.0.26 suspected memory leak

On 27/09/2019 16:34, Chen Levy wrote:
> On 26/09/2019 18:22, Chen Levy wrote:



>> The HashMap referenced in the report appears to be "waitingProcessors" 
>> inside AbstractProtocol, which contains 262K entries.
> 
> OK. Those are asynchronous Servlets that are still in async mode.



> * I do not employ async servlets in my application

OK. Do you use WebSocket? There is a code path to add Processors to the 
waitingProcessors Map for WebSocket as well.

Mark


No, no WebSocket either; just plain old Servlets, Filters and the occasional JSP

Chen
-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-09-27 Thread Chen Levy


-Original Message-
From: Mark Thomas  
Sent: Thursday, September 26, 2019 15:50
To: users@tomcat.apache.org
Subject: Re: Tomcat 9.0.24/9.0.26 suspected memory leak

On 26/09/2019 18:22, Chen Levy wrote:
> Hello Experts
> 
> Several of my production servers were recently upgraded from Tomcat 9.0.14 to 
> 9.0.24; immediately after the upgrade the servers started accumulating memory 
> in a steady trend that was not observed before. In addition, CPU utilization 
> that used to hover around 2% now sits at 8%.
> For now the servers are still serving but I suspect they'll become 
> unresponsive in a few hours.
> I loaded a heap dump from one of the servers into MAT and received the 
> following Leak Suspect:
> 
> One instance of "org.apache.coyote.http11.Http11NioProtocol" loaded by 
> "java.net.URLClassLoader @ 0x503f02c40" occupies 9,282,972,608 (96.88%) 
> bytes. The memory is accumulated in one instance of 
> "java.util.concurrent.ConcurrentHashMap$Node[]" loaded by " loader>".
> 
> The HashMap referenced in the report appears to be "waitingProcessors" inside 
> AbstractProtocol, which contains 262K entries.

OK. Those are asynchronous Servlets that are still in async mode.

While it is possible for an application to deliberately get itself into a state 
like this (infinite async timeouts and don't complete/dispatch the async 
requests) given that it doesn't happen with 9.0.14 but does with 9.0.24 (and 
.26) that suggests a Tomcat bug.

> The same issue was reproduced using v9.0.26 as well
> 
> Please let me know whether I should provide additional information

Can you do a binary search to determine which Tomcat 9.0.x release this problem 
was introduced in?

How easily can you reproduce this? Do you have something approaching a test 
case we could use to repeat the issue?

Meanwhile, I'll take a look at the changelog and see if anything jumps out as a 
possible cause.

Thanks,

Mark


> 
> Current setup of the production servers:
> AdoptOpenJDK (build 11.0.3+7)
> Amazon Linux 2
> 
> <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
>            maxHttpHeaderSize="16384"
>            maxThreads="500" minSpareThreads="25"
>            enableLookups="false" disableUploadTimeout="true"
>            connectionTimeout="1"
>            compression="on"
>            SSLEnabled="true" scheme="https" secure="true">
>     <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
>                      keepAliveTimeout="2"
>                      overheadDataThreadhold="0"/>
>     <SSLHostConfig>
>         <Certificate certificateKeystoreFile="..."
>                      certificateKeyAlias="tomcat"
>                      certificateKeystorePassword=""
>                      certificateKeystoreType="PKCS12"/>
>     </SSLHostConfig>
> </Connector>
> 
> 
> 
> Thanks
> Chen
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 


Thanks for the attention Mark, here are some additional information and answers:
* Once the memory was completely consumed, the servers stopped responding with 
CPU stuck at 100%
* I do not employ async servlets in my application
* I cannot do a binary search for a version because of this change: 
https://github.com/apache/tomcat/commit/c16d9d810a1f64cd768ff33058936cf8907e3117
which caused another memory leak and server failure between v9.0.16 and v9.0.21 
and was fixed in v9.0.24 (as far as I know)
* This is easily reproduced with the traffic in my farm, and all the servers 
suffer the same; in a development environment it's trickier, so currently I 
don't have a test case

Thanks
Chen
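[Editor's note] For anyone trying to build the test case Mark asks for, a common pattern is to watch a suspect object through a WeakReference: if it survives repeated GC nudges after all strong references should be gone, something is still retaining it. A generic sketch with invented names, not tied to Tomcat internals:

```java
import java.lang.ref.WeakReference;

// Sketch of a standalone leak demonstration: report whether an object you
// expect to be GC-eligible has actually been collected.
public class RetentionCheck {

    // Nudges the collector a few times and reports whether the referent
    // went away. A false result suggests a strong reference still exists.
    static boolean isCollected(WeakReference<?> ref) {
        for (int i = 0; i < 10 && ref.get() != null; i++) {
            System.gc();
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return ref.get() == null;
    }

    public static void main(String[] args) {
        byte[] held = new byte[1024];                       // strong reference kept
        WeakReference<byte[]> suspect = new WeakReference<>(held);
        System.out.println("still retained: " + !isCollected(suspect));
        System.out.println("holder length:  " + held.length); // keeps 'held' reachable
    }
}
```

In a real test you would take the WeakReference to the object a request created (a wrapper, a processor), finish the request, and then check it was collected.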


Tomcat 9.0.24/9.0.26 suspected memory leak

2019-09-26 Thread Chen Levy
Hello Experts

Several of my production servers were recently upgraded from Tomcat 9.0.14 to 
9.0.24; immediately after the upgrade the servers started accumulating memory 
in a steady trend that was not observed before. In addition, CPU utilization 
that used to hover around 2% now sits at 8%.
For now the servers are still serving but I suspect they'll become unresponsive 
in a few hours.
I loaded a heap dump from one of the servers into MAT and received the 
following Leak Suspect:

One instance of "org.apache.coyote.http11.Http11NioProtocol" loaded by 
"java.net.URLClassLoader @ 0x503f02c40" occupies 9,282,972,608 (96.88%) bytes. 
The memory is accumulated in one instance of 
"java.util.concurrent.ConcurrentHashMap$Node[]" loaded by "".

The HashMap referenced in the report appears to be "waitingProcessors" inside 
AbstractProtocol, which contains 262K entries.

The same issue was reproduced using v9.0.26 as well

Please let me know whether I should provide additional information

Current setup of the production servers:
AdoptOpenJDK (build 11.0.3+7) 
Amazon Linux 2
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxHttpHeaderSize="16384"
           maxThreads="500" minSpareThreads="25"
           enableLookups="false" disableUploadTimeout="true"
           connectionTimeout="1"
           compression="on"
           SSLEnabled="true" scheme="https" secure="true">
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
                     keepAliveTimeout="2"
                     overheadDataThreadhold="0"/>
    <SSLHostConfig>
        <Certificate certificateKeystoreFile="..."
                     certificateKeyAlias="tomcat"
                     certificateKeystorePassword=""
                     certificateKeystoreType="PKCS12"/>
    </SSLHostConfig>
</Connector>
Thanks
Chen

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



HTTP2 Connection Timeout

2019-08-05 Thread Chen Levy
Hello Experts

Several of my production servers were recently upgraded from Tomcat 9.0.14 to 
9.0.21; immediately after the upgrade the servers started accumulating memory 
and open-files (on Linux) in a steady trend that was not observed before.
After a couple of days (without reaching the memory or open-files limit and 
without throwing "OutOfMemoryError: Java heap space" or "IOException: Too many 
open files") the servers became unresponsive: any HTTPS request timed-out while 
HTTP requests continued to work correctly.
Restarting the servers resolved the symptoms but the behavior persists and a 
restart is necessary every couple of days.
I loaded a heap dump from an unresponsive server into MAT and received the 
following Leak Suspect:

105,871 instances of "org.apache.coyote.http2.Stream", loaded by 
"java.net.URLClassLoader..." occupy 7,581,549,904 (80.68%) bytes.
These instances are referenced from one instance of 
"java.util.concurrent.ConcurrentHashMap$Node[]", loaded by ""

The HashMap referenced in the report is "connections" inside ConnectionHandler.
I suspect that these objects accumulate as clients may not close their 
connections correctly; regardless, I'd expect Tomcat to close the connections 
upon timeout.
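[Editor's note] The expected behavior described here, that idle HTTP/2 connections get closed once they exceed keepAliveTimeout and then drop out of the handler's "connections" map, can be pictured as a reaper over last-activity timestamps. A schematic sketch only, not Tomcat's implementation; names are invented:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Schematic of the expected server behavior: connections idle longer than
// keepAliveTimeout are closed and removed from the connection map. The
// symptom reported above is that this removal never happened in 9.0.21,
// so the map only ever grew.
public class IdleReaper {
    private final Map<String, Long> lastActivity = new ConcurrentHashMap<>();
    private final long keepAliveTimeoutMillis;

    IdleReaper(long keepAliveTimeoutMillis) {
        this.keepAliveTimeoutMillis = keepAliveTimeoutMillis;
    }

    // Record activity on a connection.
    void touch(String connectionId, long nowMillis) {
        lastActivity.put(connectionId, nowMillis);
    }

    // Drop (i.e. "close") every connection idle past the timeout;
    // returns how many were closed.
    int reap(long nowMillis) {
        int closed = 0;
        Iterator<Map.Entry<String, Long>> it = lastActivity.entrySet().iterator();
        while (it.hasNext()) {
            if (nowMillis - it.next().getValue() > keepAliveTimeoutMillis) {
                it.remove();
                closed++;
            }
        }
        return closed;
    }

    int open() {
        return lastActivity.size();
    }

    public static void main(String[] args) {
        IdleReaper reaper = new IdleReaper(20_000);   // 20s, as in the test below
        reaper.touch("conn-1", 0);
        reaper.touch("conn-2", 15_000);
        System.out.println("closed: " + reaper.reap(30_000)); // conn-1 idle 30s
        System.out.println("open:   " + reaper.open());
    }
}
```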
With keepAliveTimeout="2" defined on UpgradeProtocol, I tested one simple 
HTTP2 connection's persistence on Chrome's net-internals.
With 9.0.14 I can see the following at 20 seconds (as expected):
...
t=7065701 [st=   64]HTTP2_SESSION_UPDATE_RECV_WINDOW
--> delta = 6894
--> window_size = 15728640
t=7085708 [st=20071]HTTP2_SESSION_PING
--> is_ack = false
--> type = "received"
--> unique_id = 2
t=7085708 [st=20071]HTTP2_SESSION_PING
--> is_ack = true
--> type = "sent"
--> unique_id = 2
t=7085708 [st=20071]HTTP2_SESSION_CLOSE
--> description = "Connection closed"
--> net_error = -100 (ERR_CONNECTION_CLOSED)
t=7085708 [st=20071]HTTP2_SESSION_POOL_REMOVE_SESSION
t=7085708 [st=20071] -HTTP2_SESSION

With 9.0.21 the connection does not close, even after several minutes.
I believe the change in behavior stems from the following commit: 
https://github.com/apache/tomcat/commit/c16d9d810a1f64cd768ff33058936cf8907e3117
though I may be doing something wrong.

Please let me know whether I have misconfigured, misunderstood, misdiagnosed, 
misbehaved or mis-something-else, and whether I should provide additional 
information

Current setup of the production servers:
AdoptOpenJDK (build 11.0.3+7) 
Amazon Linux 2








Thanks
Chen

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Excessive number of tomcat-users.xml open files

2018-11-27 Thread Chen Levy
Hello everyone

I have two Tomcat servers running under load, on two EC2 instances with 
amazon-linux-2 and JRE 10.0.2, one server is 9.0.12 and the other 9.0.13
The servers share the exact same application, configuration and load; however, 
the 9.0.13 server keeps hundreds of open files of the following form (obtained 
using 'lsof'):
https-jss 30403 30515 ec2-user 1874r REG 259,1 216492934 /opt/apache-tomcat-9.0.13/conf/tomcat-users.xml
http-nio- 30403 30485 ec2-user 1751r REG 259,1 216492934 /opt/apache-tomcat-9.0.13/conf/tomcat-users.xml

These open files do not exist on the 9.0.12 server
These servers deploy a single application with the default content of the 
webapps folder removed

It might be related to the following recent bug fix: 
https://bz.apache.org/bugzilla/show_bug.cgi?id=58590

Has anyone else seen a similar behavior?

Thanks
Chen



RE: log4j2 configuration in tomcat 8.5.5

2016-09-21 Thread Chen Levy
Bill,

From: Mark Thomas
Sent: Wednesday, September 21, 2016 17:58
To: Tomcat Users List
Subject: Re: log4j2 configuration in tomcat 8.5.5

On 21/09/2016 22:49, Bill Phillips wrote:
> My team has elected me to upgrade Tomcat from 7.X to 8.5.5+ on our very
> old and very large web application.
> 
> Under Tomcat 7.x, this application uses the tomcat-juli-adapters to replace
> JULI with log4j1.16. Worked like a charm for years.
> 
> 
> Under Tomcat 8.5.5, I'm getting exceptions such as:
>    java.lang.NoClassDefFoundError: org/apache/juli/WebappProperties
> 
> 
> Googling this led me to the discovery that log4j1.x support via
> tomcat-juli-adapters was
> discontinued in Tomcat 8.5, as described here:
> 
> https://bz.apache.org/bugzilla/show_bug.cgi?id=58588
> 
> 
> My understanding is that JULI can be replaced by log4j2 without the
> adapters,
> but I cannot find a description on how to do this.
> 
> 
> Can anyone describe to me how this is done?

https://logging.apache.org/log4j/log4j-2.6.1/log4j-jul/index.html

should point you in the right direction.

Mark


> 
> 
> It is no longer covered in the user documentation as it was on earlier
> versions:
> 
> https://tomcat.apache.org/tomcat-8.5-doc/logging.html
> 
> 
> 
> Thanks,
> 
> Bill
> 

You can read about how I configured my tomcat 8.5.4 to use log4j2 here:
http://mail-archives.apache.org/mod_mbox/tomcat-users/201607.mbox/%3CBAY406-EAS165A578E6D90447E4CC9B0B96010%40phx.gbl%3E
Chen


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



High CPU Consumption with Tomcat 8.5.4 NIO and SSL

2016-08-30 Thread Chen Levy
Hello All

Since Tomcat 8.0.x development is about to be stopped, we’ve decided to switch 
to 8.5.4, but have since been facing the following issue:
A few hours after startup, the Java process starts consuming 100% CPU, and 
while Tomcat is still responsive, it is very slow. This is consistent, been 
going on for a couple of weeks and is only resolved each time by restarting 
Tomcat.

I’ve tried a few things:
Reverted back to Tomcat 8.0.33 -> problem solved
Switched from NIO to NIO2 -> problem solved, but other issues appeared with 
connections ending prematurely, so not a valid option
Switched to the new connector configuration scheme -> problem persists

Setup: 
Current Amazon Linux, Tomcat 8.5.4, Java 1.8.0_92-b14, no APR

  






Analyzing the threads, there’s one which takes all of the CPU time:

"https-jsse-nio-8443-ClientPoller-0" daemon prio=5 RUNNABLE
sun.misc.Unsafe.unpark(Native Method)
java.util.concurrent.locks.LockSupport.unpark(LockSupport.java:141)

java.util.concurrent.locks.AbstractQueuedSynchronizer.unparkSuccessor(AbstractQueuedSynchronizer.java:662)

java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1264)
java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)

java.util.concurrent.LinkedBlockingQueue.signalNotEmpty(LinkedBlockingQueue.java:176)

java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:430)
org.apache.tomcat.util.threads.TaskQueue.offer(TaskQueue.java:74)
org.apache.tomcat.util.threads.TaskQueue.offer(TaskQueue.java:31)

java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1361)

org.apache.tomcat.util.threads.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:163)

org.apache.tomcat.util.threads.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:141)

org.apache.tomcat.util.net.AbstractEndpoint.processSocket(AbstractEndpoint.java:828)

org.apache.tomcat.util.net.NioEndpoint$Poller.processKey(NioEndpoint.java:850)
org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:826)
java.lang.Thread.run(Thread.java:745)


This is similar to an issue on a Windows machine reported here:
http://marc.info/?l=tomcat-user&m=147082413626214&w=2

I’ll appreciate any insight or advice
Thanks in advance
Chen



RE: Tomcat 8.5.4 and Log4j2

2016-07-28 Thread Chen Levy
From: Mark Thomas
Sent: Thursday, July 28, 2016 15:32
To: Tomcat Users List
Subject: Re: Tomcat 8.5.4 and Log4j2

On 28/07/2016 20:09, Chen Levy wrote:
> Hello all
> 
> I’ve been using Tomcat 8.0.X with Log4j2, both for Tomcat logging and for my 
> applicative logs, for a long time now.
> It was done using the following jars:
> extras/tomcat-juli.jar
> extras/tomcat-juli-adapters.jar jars
> 
> I’m in the process of upgrading to Tomcat 8.5.4 and according to 
> https://bz.apache.org/bugzilla/show_bug.cgi?id=58588 these jars are no longer 
> distributed.
> 
> I followed the instructions in 
> http://logging.apache.org/log4j/2.0/log4j-jul/index.html and performed the 
> following:
> 1. Added  -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager 
>  to the java command
> 2. Added  log4j-jul-2.6.2.jar  to the classpath
> 
> Other than adding these through setenv.bat, I did not modify anything with 
> the distribution (apache-tomcat-8.5.4-windows-x64.zip)
> 
> Now, when invoking startup.bat, Tomcat starts and exits immediately, without 
> any console or log output

use:
catalina.bat run

to start and report the error message.

Mark


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Thanks for the hint Mark
There was a ClassNotFoundException where log4j-jul required log4j-api on the 
classpath, then log4j-core and finally disruptor.jar.
So I moved these jars from Tomcat’s /lib folder, where I used to place them in 
v8.0, to /bin folder, next to tomcat-juli.jar; and added them all to the 
classpath.

I’m not sure if that is the right way, but it’s working. I’m pasting the content 
of my setenv.bat for those who may encounter this issue in the future (I’m 
using a setenv.sh file as well if anyone is interested):


rem Set the classpath for Log4j2
set "CLASSPATH=%CLASSPATH%;%CATALINA_HOME%\bin\log4j-jul-2.6.2.jar;%CATALINA_HOME%\bin\log4j-api-2.6.2.jar;%CATALINA_HOME%\bin\log4j-core-2.6.2.jar;%CATALINA_HOME%\bin\disruptor-3.3.4.jar"

rem Log4j2 configuration
set CATALINA_OPTS=%CATALINA_OPTS% -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Dlog4j.configurationFile=file://%CATALINA_HOME%\conf\log4j2.xml
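[Editor's note] A setenv.sh equivalent of the above for Linux users might look like the following sketch; the jar names and versions are assumed to match the .bat, and the CATALINA_HOME default is a placeholder:

```shell
#!/bin/sh
# Sketch of a setenv.sh mirroring the setenv.bat above. Adjust jar versions
# and CATALINA_HOME to your installation.
CATALINA_HOME="${CATALINA_HOME:-/opt/tomcat}"

# Put the log4j2 jars next to tomcat-juli.jar on the bootstrap classpath
CLASSPATH="$CLASSPATH:$CATALINA_HOME/bin/log4j-jul-2.6.2.jar:$CATALINA_HOME/bin/log4j-api-2.6.2.jar:$CATALINA_HOME/bin/log4j-core-2.6.2.jar:$CATALINA_HOME/bin/disruptor-3.3.4.jar"

# Route java.util.logging through log4j2
CATALINA_OPTS="$CATALINA_OPTS -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Dlog4j.configurationFile=file://$CATALINA_HOME/conf/log4j2.xml"
```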


Please let me know if there’s another, better way to do it
Thanks
Chen


Tomcat 8.5.4 and Log4j2

2016-07-28 Thread Chen Levy
Hello all

I’ve been using Tomcat 8.0.X with Log4j2, both for Tomcat logging and for my 
applicative logs, for a long time now.
It was done using the following jars:
extras/tomcat-juli.jar
extras/tomcat-juli-adapters.jar jars

I’m in the process of upgrading to Tomcat 8.5.4 and according to 
https://bz.apache.org/bugzilla/show_bug.cgi?id=58588 these jars are no longer 
distributed.

I followed the instructions in 
http://logging.apache.org/log4j/2.0/log4j-jul/index.html and performed the 
following:
1. Added  -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager  
to the java command
2. Added  log4j-jul-2.6.2.jar  to the classpath

Other than adding these through setenv.bat, I did not modify anything with the 
distribution (apache-tomcat-8.5.4-windows-x64.zip)

Now, when invoking startup.bat, Tomcat starts and exits immediately, without 
any console or log output

Please advise

Thanks in advance
Chen