Default Application questions

2020-10-26 Thread jonmcalexander
I'm doing some documentation cleanup. When was the balancer app, if it ever 
existed, removed from the out-of-the-box (OOTB) Tomcat applications? And was 
there ever a separate webdav app, or just the WebdavServlet class?

Thanks,


Dream * Excel * Explore * Inspire
Jon McAlexander
Infrastructure Engineer
Asst Vice President

Middleware Product Engineering
Enterprise CIO | Platform Services | Middleware | Infrastructure Solutions

8080 Cobblestone Rd | Urbandale, IA 50322
MAC: F4469-010
Tel 515-988-2508 | Cell 515-988-2508

jonmcalexan...@wellsfargo.com





RE: Weirdest Tomcat Behavior Ever?

2020-10-26 Thread Eric Robinson
> On 26/10/2020 10:26, Mark Thomas wrote:
> > On 24/10/2020 01:32, Eric Robinson wrote:
> >
> > 
> >
>  -Original Message-
>  From: Mark Thomas 
> >
> > 
> >
>  The failed request:
>  - Completes in ~6ms
> >>>
> >>> I think we've seen the failed requests take as much as 50ms.
> >
> > Ack. That is still orders of magnitude smaller than the timeout and
> > consistent with generation time of some of the larger responses.
> >
> > I wouldn't say it confirms any of my previous conclusions, but it
> > doesn't invalidate them either.
> >
>  Follow-up questions:
>  - JVM
>    - Vendor?
>    - OS package or direct from Vendor?
> 
> >>>
> >>> jdk-8u221-linux-x64.tar.gz downloaded from the Oracle web site.
> >
> > OK. That is post Java 8u202 so it should be a paid for, commercially
> > supported version of Java 8.
> >
> > The latest Java 8 release from Oracle is 8u271.
> >
> > The latest Java 8 release from AdoptOpenJDK is 8u272.
> >
> > I don't think we are quite at this point yet but what is your view on
> > updating to the latest Java 8 JDK (from either Oracle or AdoptOpenJDK)?
> >
>  - Tomcat
>    - OS package, 3rd-party package or direct from ASF?
> 
> >>>
> >>> tomcat.noarch  7.0.76-6.el7 from CentOS base repository
> >>>
> >>
> >> Drat, slight correction. I now recall that although we initially installed 
> >> 7.0.76
> from the CentOS repo, the application vendor made us lower the version to
> 7.0.72, and I DO NOT know where we got that. However, it has not changed
> since October-ish of 2018.
> >
> > I've reviewed the 7.0.72 to 7.0.76 changelog and I don't see any
> > relevant changes.
> >
>  - Config
>    - Any changes at all around the time the problems started? I'm
>  thinking OS updates, VM restarted etc?
> 
> >>>
> >>> server.xml has not changed since 4/20/20, which was well before the
> >>> problem manifested, and all the other files in the conf folder are
> >>> even older than that. We're seeing this symptom on both production
> >>> servers. One of them was rebooted a week ago, but the other has been
> >>> up continuously for
> >>> 258 days.
> >
> > OK. That rules a few things out which is good but it does make the
> > trigger for this issue even more mysterious.
> >
> > Any changes in the Nginx configuration in the relevant timescale?
> >

The last change to the nginx config files was on 8/21. The first report of 
problems from the users in question was on 9/16. There is another set of users 
on a different tomcat instance who reported issues around 8/26, five days after 
the nginx config change, so it seems unlikely to be related. Also, I can't imagine 
what nginx could be sending that would induce the upstream tomcat to behave 
this way.

> > Any updates to the application in the relevant timescale?
> >

Their application was patched to a newer version on 6/5.

> > Any features users started using that hadn't been used before in that
> > timescale?

That one I couldn't answer, as we are only the hosting facility and we are not 
in the loop when it comes to the users' workflow, but it seems unlikely given 
the nature of their business.

> >
> > 
> >
>  Recommendations:
>  - Switch back to the BIO connector if you haven't already. It has fewer
>    moving parts than NIO so it is simpler to debug.
>  - Add "%b" to the access log pattern for Tomcat's access log valve to
>    record the number of body (excluding headers) bytes Tomcat believes
> it
>    has written to the response.
> 
> 
>  Next steps:
>  - Wait for the issue to re-occur after the recommended changes above
> and
>    depending on what is recorded in the access log for %b for a failed
>    request, shift the focus accordingly.
>  - Answers to the additional questions would be nice but the access log
>    %b value for a failed request is the key piece of information required
>    at this point.
> 
> >>>
> >>> Good news! I enabled that parameter a few days ago and we have
> >>> already caught some instances of the problem occurring.
> >
> > Excellent!
> >
> >>> Here is the logging format...
> >>>
> >>> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
> >>>prefix="localhost_access." suffix=".log" pattern="%h
> >>> %l %D %u %t %{JSESSIONID}c %{cookie}i %r %s %b %S %q" />
> >>>
> >>> Due to some sensitive content in the HTTP requests below, I have
> >>> globally replaced certain words and addresses with random-ish
> >>> strings, but I don't think I've done anything to impact the issue.
> >>>
> >>> Following is an example from Wednesday.
> >>>
> >>> This is a request being sent from the nginx proxy to the first of 2
> >>> upstream servers, 10.51.14.46
> >>>
> >>> 2020/10/21 15:51:22 [error] 39268#39268: *842342531 upstream
> >>> prematurely closed connection while reading response header from
> upstream, client:
> >>> 99.88.77.66, server: redacted.domain.com, request: "GET
> >>> /sandhut/jsp/catalog/xml/getWidgets.jsp?eDate=2020-10-

Re: Weirdest Tomcat Behavior Ever?

2020-10-26 Thread Konstantin Kolinko
Tue, 27 Oct 2020 at 00:07, Eric Robinson :
>
> > On 26/10/2020 10:26, Mark Thomas wrote:
> > > On 24/10/2020 01:32, Eric Robinson wrote:
> > >
> > > At this point I'd echo Konstantin's recommendation to add the
> > > following system property:
> > > org.apache.catalina.connector.RECYCLE_FACADES=true
> > >
> > > You'd normally do this in $CATALINA_HOME/bin/setenv.sh (creating that
> > > file if necessary) with a line like:
> > >
> > > CATALINA_OPTS="$CATALINA_OPTS
> > > -Dorg.apache.catalina.connector.RECYCLE_FACADES=true"
> > >
> > > You can confirm that the setting has been applied by looking in the
> > > log for the start-up. You should see something like:
> > >
> > > Oct 26, 2020 10:18:45 AM
> > > org.apache.catalina.startup.VersionLoggerListener log
> > > INFO: Command line argument:
> > > -Dorg.apache.catalina.connector.RECYCLE_FACADES=true
> > >
> > >
> > > That option reduces the re-use of request, response and related
> > > objects between requests and, if an application is retaining
> > > references it shouldn't, you usually see a bunch of
> > > NullPointerExceptions in the logs when the application tries to re-use 
> > > those
> > objects.
> > >
> > > Meanwhile, I'm going to work on a custom patch for 7.0.72 to add some
> > > additional logging around the network writes.
> >
> > Patch files and instructions for use:
> >
> > http://home.apache.org/~markt/dev/v7.0.72-custom-patch-v1/
> >
> > Mark
>
> Hi Mark,
>
> A couple of questions.
>
> 1. Now that you have provided this patch, should I still enable 
> RECYCLE_FACADES=true?

Regarding the patch: there is no source code posted for it, but I think
it adds debug logging, nothing more.


RECYCLE_FACADES makes your configuration safer, protecting Tomcat
from misbehaving web applications. I have that property set on all
Tomcat installations that I care about, so I think you should set it
anyway.

I usually add that property to the conf/catalina.properties file.
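
For illustration, the entry is just the system property in key=value form
(a sketch; the property name is the one quoted from Mark above):

    # conf/catalina.properties -- disable facade re-use between requests
    org.apache.catalina.connector.RECYCLE_FACADES=true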

See the wiki for a more detailed answer.
https://cwiki.apache.org/confluence/display/TOMCAT/Troubleshooting+and+Diagnostics#TroubleshootingandDiagnostics-TroubleshootingunexpectedResponsestateproblems

My thought is that your case could be caused by something like the "Java
ImageIO" issue mentioned there: something in the web application produces
dangling references to java.io.OutputStream objects, and when they are
closed during garbage collection they corrupt Tomcat internals.
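
To make that failure mode concrete, a hypothetical sketch of such a bug
(class and names invented purely for illustration):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // A servlet that retains a reference to the response stream past the
    // request. With object re-use enabled, a later GC-driven close() on
    // 'cached' can touch an object Tomcat has already recycled for a
    // different request, producing exactly this kind of corruption.
    public class LeakyServlet extends HttpServlet {
        private static OutputStream cached;   // dangling reference

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            cached = resp.getOutputStream();  // retained past this request
            cached.write("ok".getBytes(StandardCharsets.UTF_8));
        }
    }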

> 2. [...] Can you think of any potential issues where making this change for 
> one instance could have a negative effect on any of the other instances? 
> Probably not, but just being careful.

I hope that you can cope with the amount of logging that this generates.

Best regards,
Konstantin Kolinko




Re: [OT] SSLException after Java upgrade

2020-10-26 Thread Christopher Schultz

Steve,

On 10/26/20 13:02, Steve Sanders wrote:

We ran into similar issues when upgrading to latest JDK 8 (and 11). We
found that the fix was to add the sun.security.ec.SunEC as a security
provider in java.security like so:

security.provider.9=sun.security.ec.SunEC


I'll have to try that. I can easily use my SSLTest tool[1] to test 
various permutations.



After adding this we were able to continue using our current certificates
and communicate with services using the updated ciphers. Depending on the
version / flavor of JDK you're using you may also need to apply the
unlimited strength JCE policy patch found here:
https://www.oracle.com/java/technologies/javase-jce8-downloads.html


If you still need this, then you really need to upgrade your Java. Java 
8 no longer requires application of a separate, "unlimited" policy file 
since u162, released January 2018.


-chris

[1] https://github.com/ChristopherSchultz/ssltest
[2] 
https://golb.hplar.ch/2017/10/JCE-policy-changes-in-Java-SE-8u151-and-8u152.html





mod_jk "Can not determine the proper size for pid_t" on macOS 10.15.7

2020-10-26 Thread Paquin, Brian
I’m trying to build httpd and mod_jk for the first time on a macOS 10.15.7 box. 
Xcode 12.1 is installed and I was able to compile OpenSSL 1.1.1g.
I got an error “Can not determine the proper size for pid_t” when compiling 
httpd (v2.4.46) with the bundled APR (v1.7.0).
This issue https://bz.apache.org/bugzilla/show_bug.cgi?id=64753 provided a diff 
patch that adds “#include <sys/types.h>” in a number of locations.
Applying this patch allowed me to compile httpd!

Now I am trying to compile mod_jk (v1.2.48), and I get the same error.
Does someone have a patch file I can use to get around this issue?

$ ./configure CFLAGS='-arch x86_64' APXSLDFLAGS='-arch x86_64' 
--with-apxs=/usr/local/apache2/bin/apxs

$ make

Making all in common
/usr/local/apache-2.4.46/build/libtool --silent --mode=compile gcc -I. 
-I/usr/local/apache-2.4.46/include -arch x86_64 -DHAVE_CONFIG_H -arch x86_64  
-DHAVE_APR  -I/usr/local/apache-2.4.46/include 
-I/usr/local/apache-2.4.46/include -arch x86_64 -DHAVE_CONFIG_H -DDARWIN 
-DSIGPROCMASK_SETS_THREAD_MASK -DDARWIN_10 -c jk_ajp12_worker.c -o 
jk_ajp12_worker.lo
In file included from jk_ajp12_worker.c:25:
In file included from ./jk_ajp12_worker.h:26:
In file included from ./jk_logger.h:26:
In file included from ./jk_global.h:340:
./jk_types.h:56:2: error: Can not determine the proper size for pid_t
#error Can not determine the proper size for pid_t
 ^
./jk_types.h:62:2: error: Can not determine the proper size for pthread_t
#error Can not determine the proper size for pthread_t
 ^
2 errors generated.
make[1]: *** [jk_ajp12_worker.lo] Error 1
make: *** [all-recursive] Error 1
$

Brian
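
Until someone produces a proper patch, an untested sketch by analogy with the
BZ 64753 httpd/apr fix (the target file and placement are assumptions based on
the error output above):

    /* Add near the top of common/jk_types.h, before the size checks that
     * raise "#error Can not determine the proper size for pid_t".
     * On macOS these headers declare the two types being sized. */
    #include <sys/types.h>   /* pid_t */
    #include <pthread.h>     /* pthread_t */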



Re: [OT] SSLException after Java upgrade

2020-10-26 Thread Steve Sanders
Chris,

On Mon, Oct 26, 2020 at 2:34 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:

> If you still need this, then you really need to upgrade your Java. Java
> 8 no longer requires application of a separate, "unlimited" policy file
> since u162, released January 2018.
>

Good to know! We are on 8u265 by default. I probably should read the
release notes a bit closer. :)




Re: Weirdest Tomcat Behavior Ever?

2020-10-26 Thread Mark Thomas
On 24/10/2020 01:32, Eric Robinson wrote:



>>> -Original Message-
>>> From: Mark Thomas 



>>> The failed request:
>>> - Completes in ~6ms
>>
>> I think we've seen the failed requests take as much as 50ms.

Ack. That is still orders of magnitude smaller than the timeout and
consistent with generation time of some of the larger responses.

I wouldn't say it confirms any of my previous conclusions, but it doesn't
invalidate them either.

>>> Follow-up questions:
>>> - JVM
>>>   - Vendor?
>>>   - OS package or direct from Vendor?
>>>
>>
>> jdk-8u221-linux-x64.tar.gz downloaded from the Oracle web site.

OK. That is post Java 8u202 so it should be a paid for, commercially
supported version of Java 8.

The latest Java 8 release from Oracle is 8u271.

The latest Java 8 release from AdoptOpenJDK is 8u272.

I don't think we are quite at this point yet but what is your view on
updating to the latest Java 8 JDK (from either Oracle or AdoptOpenJDK)?

>>> - Tomcat
>>>   - OS package, 3rd-party package or direct from ASF?
>>>
>>
>> tomcat.noarch  7.0.76-6.el7 from CentOS base repository
>>
> 
> Drat, slight correction. I now recall that although we initially installed 
> 7.0.76 from the CentOS repo, the application vendor made us lower the version 
> to 7.0.72, and I DO NOT know where we got that. However, it has not changed 
> since October-ish of 2018.

I've reviewed the 7.0.72 to 7.0.76 changelog and I don't see any
relevant changes.

>>> - Config
>>>   - Any changes at all around the time the problems started? I'm
>>> thinking OS updates, VM restarted etc?
>>>
>>
>> server.xml has not changed since 4/20/20, which was well before the
>> problem manifested, and all the other files in the conf folder are even older
>> than that. We're seeing this symptom on both production servers. One of
>> them was rebooted a week ago, but the other has been up continuously for
>> 258 days.

OK. That rules a few things out which is good but it does make the
trigger for this issue even more mysterious.

Any changes in the Nginx configuration in the relevant timescale?

Any updates to the application in the relevant timescale?

Any features users started using that hadn't been used before in that
timescale?



>>> Recommendations:
>>> - Switch back to the BIO connector if you haven't already. It has fewer
>>>   moving parts than NIO so it is simpler to debug.
>>> - Add "%b" to the access log pattern for Tomcat's access log valve to
>>>   record the number of body (excluding headers) bytes Tomcat believes it
>>>   has written to the response.
>>>
>>>
>>> Next steps:
>>> - Wait for the issue to re-occur after the recommended changes above and
>>>   depending on what is recorded in the access log for %b for a failed
>>>   request, shift the focus accordingly.
>>> - Answers to the additional questions would be nice but the access log
>>>   %b value for a failed request is the key piece of information required
>>>   at this point.
>>>
>>
>> Good news! I enabled that parameter a few days ago and we have already
>> caught some instances of the problem occurring.

Excellent!

>> Here is the logging format...
>>
>> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
>>prefix="localhost_access." suffix=".log" pattern="%h %l %D %u %t
>> %{JSESSIONID}c %{cookie}i %r %s %b %S %q" />
>>
>> Due to some sensitive content in the HTTP requests below, I have globally
>> replaced certain words and addresses with random-ish strings, but I don't
>> think I've done anything to impact the issue.
>>
>> Following is an example from Wednesday.
>>
>> This is a request being sent from the nginx proxy to the first of 2 upstream
>> servers, 10.51.14.46
>>
>> 2020/10/21 15:51:22 [error] 39268#39268: *842342531 upstream prematurely
>> closed connection while reading response header from upstream, client:
>> 99.88.77.66, server: redacted.domain.com, request: "GET
>> /sandhut/jsp/catalog/xml/getWidgets.jsp?eDate=2020-10-
>> 21=64438=0=0=Yes
>> =0=75064=322095=8568=0.
>> 5650846=21102020155122.472656 HTTP/1.1", upstream:
>> "http://10.51.14.46:3016/sandhut/jsp/catalog/xml/getWidgets.jsp?eDate=20
>> 20-10-
>> 21=64438=0=0=Yes
>> =0=75064=322095=8568=0.
>> 5650846=21102020155122.472656", host:
>> "redacted.domain.com"
>>
>> Here is the matching localhost_access log entry from that server….
>>
>> 10.51.14.133 - 144 - [21/Oct/2020:15:51:22 -0400]
>> F405E25E49E3DCB81A36A87DED1FE573
>> JSESSIONID=F405E25E49E3DCB81A36A87DED1FE573;
>> srv_id=dea8d61a7d725e980a6093cb78d8ec73;
>> JSESSIONID=F405E25E49E3DCB81A36A87DED1FE573;
>> srv_id=dea8d61a7d725e980a6093cb78d8ec73 GET
>> /sandhut/jsp/catalog/xml/getWidgets.jsp?eDate=2020-10-
>> 21=64438=0=0=Yes
>> =0=75064=322095=8568=0.
>> 5650846=21102020155122.472656 HTTP/1.0 200 40423
>> F405E25E49E3DCB81A36A87DED1FE573 ?eDate=2020-10-
>> 21=64438=0=0=Yes
>> =0=75064=322095=8568=0.
>> 5650846=21102020155122.472656
>>
>> Tomcat appears to think it sent 40423 bytes. However, even though it shows
>> an HTTP 200 response, WireShark shows the 


Question regarding Invoker

2020-10-26 Thread jonmcalexander
I believe I have read that the Invoker Servlet was deprecated in Tomcat 6 and 
removed entirely in Tomcat 7 and above. Can someone confirm that this is 
correct? I couldn't find any announcement of this on tomcat.apache.org.

Thanks,





Re: [OT] SSLException after Java upgrade

2020-10-26 Thread Steve Sanders
Hi Chris,

We ran into similar issues when upgrading to latest JDK 8 (and 11). We
found that the fix was to add the sun.security.ec.SunEC as a security
provider in java.security like so:

security.provider.9=sun.security.ec.SunEC

After adding this we were able to continue using our current certificates
and communicate with services using the updated ciphers. Depending on the
version / flavor of JDK you're using you may also need to apply the
unlimited strength JCE policy patch found here:
https://www.oracle.com/java/technologies/javase-jce8-downloads.html

Steve
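
As context for where that line lands, a sketch of the Java 8 java.security
provider list (the "9" assumes eight providers are already registered; the
index must continue the existing sequence without gaps):

    # $JAVA_HOME/jre/lib/security/java.security
    # ...existing entries security.provider.1 through security.provider.8...
    security.provider.9=sun.security.ec.SunEC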


[OT] SSLException after Java upgrade

2020-10-26 Thread Christopher Schultz

All,

(Note that this has nothing whatsoever to do with Apache Tomcat. These 
connections are between services running on Tomcat and others, but 
Tomcat's TLS code or configuration is in no way involved.)


I recently upgraded my OpenJDK Java 8 installations on a few servers and 
started getting this error when connecting between two services 
involving a specific server:


javax.net.ssl.SSLException: No preferred signature algorithm for 
CertificateVerify


I believe I have tracked this back to the fact that this server's client 
key/cert was using the secp256k1 curve instead of the more 
widely-supported secp256r1 curve (this is the "NIST P-256" curve). I 
think Java dropped support for the non-NIST curves at some point, yet the 
documentation says that they are supported for compatibility[1].


I found a bug report in the JDK tracker [2] which may or may not be related.

There is a workaround mentioned in the bug report:

"
Configure server so that supported_signature_algorithms prefers 
signature algorithms supported by the SunPKCS11 provider 
(RSA_PKCS1_SHA256, RSA_PKCS1_SHA384, RSA_PKCS1_SHA_512, RSA_SHA224, 
RSA_PKCS1_SHA1).

"

I don't think this will apply to me, since this is all about RSA 
signatures, but I suppose it could be adapted to the EC signature 
algorithms (e.g. EC_PKCS1_SHA256 or whatever).


Does anyone know how to "configure [...] 
supported_signature_algorithms"? I've never heard of that setting before 
and some web searching isn't coming up with much for me.


Back to the deprecated curves. I can't find any reference to them being 
disabled by default, and the java.security file contains a disabled 
algorithms setting that doesn't mention EC crypto at all:


jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, \
EC keySize < 224, 3DES_EDE_CBC, anon, NULL

and also:

jdk.tls.legacyAlgorithms= \
K_NULL, C_NULL, M_NULL, \
DH_anon, ECDH_anon, \
RC4_128, RC4_40, DES_CBC, DES40_CBC, \
3DES_EDE_CBC

The documentation for legacyAlgorithms says that they will only be 
negotiated when there are no other (non-legacy) options available. In my 
case, it was a complete failure.


I minted a new certificate using P-256 and I was able to make a 
connection again. So the certificate key algorithm was indeed the problem.
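
For anyone needing to do the same, a sketch with OpenSSL (file names and
subject are illustrative; any CA flow works):

    # generate a P-256 (prime256v1 / secp256r1) key plus a CSR for re-issuance
    openssl ecparam -name prime256v1 -genkey -noout -out client-p256.key
    openssl req -new -key client-p256.key -out client-p256.csr -subj '/CN=redacted.example.com'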


I finally found the reference I was looking for regarding Java actually 
disabling those curves[3]. It happened in Java 8 u231 about a year ago[4].


One can re-enable the negotiation of these algorithms by setting the 
system property "jdk.tls.namedGroups" to an appropriate setting.
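
For example (an illustrative value list, not a recommendation — re-enabling
secp256k1 trades away the reason it was disabled):

    # picked up by any JVM launched in this environment
    JAVA_TOOL_OPTIONS='-Djdk.tls.namedGroups=secp256r1,secp384r1,secp521r1,secp256k1'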


This issue must have happened due to the upgrade of my Debian openjdk-8 
package, which finally included the (default) disabling of those algorithms.


I started this post to ask some questions from the community but I think 
it's turning out to be a little bit of a PSA because I ended up finding 
just about everything I needed to recover.


I'm still curious about the supported_signature_algorithms thing, though.

Thanks,
-chris

[1] 
https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#legacy-curves-retained-for-compatibility

[2] https://bugs.openjdk.java.net/browse/JDK-8223940
[3] https://java.com/en/configure_crypto.html#DisablenonNIST
[4] https://java.com/en/jre-jdk-cryptoroadmap.html




Re: Question regarding Invoker

2020-10-26 Thread Mark Thomas
On 26/10/2020 17:46, jonmcalexan...@wellsfargo.com.INVALID wrote:
> I believe I have read that the Invoker Servlet was deprecated in Tomcat 6 and 
> removed entirely in Tomcat 7 and above. Can someone confirm that this is 
> correct? I couldn't find any announcement of this on tomcat.apache.org.

Correct.

See the Tomcat 6.0.x changelog:

http://tomcat.apache.org/tomcat-6.0-doc/changelog.html

Mark
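
For anyone cleaning up old configs: in Tomcat 5.x the invoker appeared
(commented out by default) in conf/web.xml roughly as below — quoted from
memory, so treat it as a sketch rather than the exact historical text:

    <servlet>
        <servlet-name>invoker</servlet-name>
        <servlet-class>org.apache.catalina.servlets.InvokerServlet</servlet-class>
    </servlet>

    <servlet-mapping>
        <servlet-name>invoker</servlet-name>
        <url-pattern>/servlet/*</url-pattern>
    </servlet-mapping>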







RE: Question regarding Invoker

2020-10-26 Thread jonmcalexander
Thank you!



-Original Message-
From: Mark Thomas  
Sent: Monday, October 26, 2020 1:06 PM
To: users@tomcat.apache.org
Subject: Re: Question regarding Invoker

On 26/10/2020 17:46, jonmcalexan...@wellsfargo.com.INVALID wrote:
> I believe I have read that the Invoker Servlet was deprecated in Tomcat 6 and 
> removed entirely in Tomcat 7 and above. Can someone confirm that this is 
> correct? I couldn't find any announcement of this on tomcat.apache.org.

Correct.

See the Tomcat 6.0.x changelog:

http://tomcat.apache.org/tomcat-6.0-doc/changelog.html

Mark



