Re: Severe performance issues on images

2014-06-26 Thread Jim Lindqvist
Hi Christopher,

Thank you for your insights, and sorry for the late reply.

This specific issue seems to have been caused by limited bandwidth at the data
centre, and it has now been fixed.
We are having some other problems as well, but at the moment these seem to come
from inefficient modules, and we are investigating at full pace.


 What is the JkMount directive(s) that you are trying to undo with
 JkUnMount?

JkMount /* customer
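
(For reference, the combined httpd configuration - the JkMount above plus the
Alias/JkUnMount lines from my original post - is essentially:)

    # forward everything to the "customer" worker...
    JkMount /* customer
    # ...except static content, which httpd serves directly
    Alias /resources /var/lib/tomcat7/webapps/customer/resources
    JkUnMount /resources/* customer
    Alias /data /var/lib/tomcat7/webapps/customer/data
    JkUnMount /data/* customer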


 Whenever you turn off /Tomcat/ they get fast again? Do you just have
 to stop Tomcat or do you have to de-configure it in httpd.conf?

Only stopping Tomcat is required. That would be consistent with the bandwidth
issue.


 Wait, you have a server with 64GiB of RAM? Cool. And it serves images
 for a living? Weird.

It is both cool and weird. :P
The reason for 64GB is basically that when things break down, more RAM and
CPU delay the problem. We are looking for ways to reduce memory usage.


 Up. Grade.

We are looking into it, but 7.0.26 seems to be what ships with this Ubuntu
version.
Is it a huge difference? I really don't want to rock this boat any more than
absolutely necessary.


Again, thank you for your time and insights!


Best Regards


Jim




On 24 June 2014 14:45, Christopher Schultz ch...@christopherschultz.net
wrote:


 Jim,

 On 6/23/14, 4:21 PM, Jim Lindqvist wrote:
  I have a server with Apache and Tomcat connected through mod_jk, and the
  performance is awful. This is mostly confined to images as far as
  I know, but it is hard to tell.
 
  The images are served from Apache with the help of the following lines:

    # Serve static content from /resources and /data using Apache instead of Tomcat worker
    Alias /resources /var/lib/tomcat7/webapps/customer/resources
    JkUnMount /resources/* customer
    Alias /data /var/lib/tomcat7/webapps/customer/data
    JkUnMount /data/* customer

 What is the JkMount directive(s) that you are trying to undo with
 JkUnMount?

  It seems that whenever Tomcat is running, Apache grinds almost to a
  standstill, but whenever I turn off apache the images seem to spring to
  life again.

 Whenever you turn off /Tomcat/ they get fast again? Do you just have
 to stop Tomcat or do you have to de-configure it in httpd.conf?

  I could really use some input. I feel like I have tried all the
  settings I can find, but any change seems to make the situation
  worse, and it doesn't get better when I revert the settings.

 Can you give us some performance numbers? How have you tested?

  All the guides I can find focus on servers with 64MB - 512MB of RAM, but I
  can get up to 64GB and some serious processing power. I don't mind
  using a less-than-perfect configuration as long as it works.

 Wait, you have a server with 64GiB of RAM? Cool. And it serves images
 for a living? Weird.

  This is the output from version.sh:

    Using CATALINA_BASE:   /usr/share/tomcat7
    Using CATALINA_HOME:   /usr/share/tomcat7
    Using CATALINA_TMPDIR: /usr/share/tomcat7/temp
    Using JRE_HOME:        /usr/lib/jvm/java-7-oracle
    Using CLASSPATH:       /usr/share/tomcat7/bin/bootstrap.jar:/usr/share/tomcat7/bin/tomcat-juli.jar

  Server version: Apache Tomcat/7.0.26

 Up. Grade.

 What version of Apache httpd? What version of mod_jk? What is your
 mod_jk configuration (workers.properties)?
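
 (A minimal workers.properties for a single AJP worker typically looks like the
 sketch below; the worker name matches the JkMount above, host and port are
 illustrative defaults:)

     worker.list=customer
     worker.customer.type=ajp13
     worker.customer.host=localhost
     worker.customer.port=8009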

 -chris





[ANN] Apache Tomcat 8.0.9 (stable) available

2014-06-26 Thread Mark Thomas
The Apache Tomcat team announces the immediate availability of Apache
Tomcat 8.0.9, the first stable release of the 8.0.x series.

Apache Tomcat 8 is an open source software implementation of the Java
Servlet, JavaServer Pages, Java Unified Expression Language and Java
WebSocket technologies.

Apache Tomcat 8 is aligned with Java EE 7. In addition to supporting
updated versions of the Java EE specifications, Tomcat 8 includes a
number of improvements compared to Tomcat 7. The notable changes
include:

- Support for Java Servlet 3.1, JavaServer Pages 2.3, Java Unified
  Expression Language 3.0 and Java WebSocket 1.0.

- The default connector implementation is now the Java non-blocking
  implementation (NIO) for both HTTP and AJP.

- A new resources implementation that replaces Aliases, VirtualLoader,
  VirtualDirContext, JAR resources and external repositories with a
  single, consistent approach for configuring additional web
  application resources. The new resources implementation can also be
  used to implement overlays (using a master WAR as the basis for
  multiple web applications that each have their own
  customizations).
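
For illustration, the new configuration is a Resources element nested in the
Context; a minimal sketch (DirResourceSet is the standard directory-backed
resource set, paths and mount point are illustrative):

    <Context>
      <Resources>
        <!-- mount an extra directory into the web application at /static -->
        <PreResources className="org.apache.catalina.webresources.DirResourceSet"
                      base="/opt/shared/static"
                      webAppMount="/static" />
      </Resources>
    </Context>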


Apache Tomcat 8.0.9 includes numerous fixes for issues identified
in 8.0.8 as well as a number of other enhancements and changes. The
notable changes since 8.0.8 include:

- Start to move towards RFC6265 for cookie handling

- Better error handling when the error occurs after the response has
  been committed

- Various Jasper improvements to make it easier for other containers
  (e.g. Jetty) to consume


Please refer to the change log for the complete list of changes:
http://tomcat.apache.org/tomcat-8.0-doc/changelog.html

Note: This version has 4 zip binaries: a generic one and three
  bundled with Tomcat native binaries for Windows operating systems
  running on different CPU architectures.

Downloads:
http://tomcat.apache.org/download-80.cgi

Migration guides from Apache Tomcat 5.5.x, 6.0.x and 7.0.x:
http://tomcat.apache.org/migration.html

Enjoy!

- The Apache Tomcat team




Re: Connection count explosion due to thread http-nio-80-ClientPoller-x death

2014-06-26 Thread Lars Engholm Johansen
Thanks for all the replies guys.

Have you observed a performance increase by setting
 acceptorThreadCount to 4 instead of a lower number? I'm just curious.


No, but this was the consensus after lengthy discussions in my team. We
have 12 CPU cores - better safe than sorry. I know the official docs say
"although you would never really need more than 2" :-)
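
(For context, acceptorThreadCount is set on the Connector element; a sketch
with our port and otherwise illustrative values:)

    <Connector port="80"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               acceptorThreadCount="4"
               maxThreads="200" />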

The GC that Andre suggested was to get rid of some of CLOSE_WAIT
 connections in netstat output, in case if those are owned by some
 abandoned and non properly closed I/O classes that are still present
 in JVM memory.


Please check out the open-connections graph at http://imgur.com/s4fOUte
As far as I can tell, we only see slight connection-count growth over the
days until the poller thread dies. Those connections may or may not disappear
by forcing a GC, but the amount is not problematic until we hit the
http-nio-80-ClientPoller-x thread death.

The insidious part is that everything may look fine for a long time (apart
 from an occasional long list of CLOSE_WAIT connections).  A GC will happen
 from time to time (*), which will get rid of these connections.  And those
 CLOSE_WAIT connections do not consume a lot of resources, so you'll never
 notice.
 Until at some point, the number of these CLOSE_WAIT connections gets just
 at the point where the OS can't swallow any more of them, and then you have
 a big problem.
 (*) and this is the insidious squared part : the smaller the Heap, the
 more often a GC will happen, so the sooner these CLOSE_WAIT connections
 will disappear.  Conversely, by increasing the Heap size, you leave more
 time between GCs, and make the problem more likely to happen.


You are correct. The bigger the heap size, the less often a GC will happen - and
we have set aside 32GiB of RAM. But again, referring to my connection
count graph, a missing close() in the code does not seem to be the culprit.

A critical error (java.lang.ThreadDeath,
 java.lang.VirtualMachineError) will cause death of a thread.
 A subtype of the latter is java.lang.OutOfMemoryError.


I just realized that StackOverflowError is also a subclass of
VirtualMachineError, and remembered that, for historical company reasons, we
had configured the JVM stack size to 256KiB (down from the default 1GiB on
64-bit machines). This was to support a huge number of threads on limited
memory in the past.
I have now removed the -Xss JVM parameter and am excited to see whether this
solves our poller thread problems.
Thanks for the hint, Konstantin.
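
(Concretely, the change was just dropping the stack-size flag from our JVM
options, e.g. in bin/setenv.sh; the heap flag is shown only for context:)

    # before (historical): JAVA_OPTS="-Xmx32g -Xss256k"
    # after: let the JVM use its default thread stack size
    JAVA_OPTS="-Xmx32g"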

I promise to report back to you guys :-)



On Fri, Jun 20, 2014 at 2:49 AM, Filip Hanik fi...@hanik.com wrote:

 Our sites still functions normally with no cpu spikes during this build up
 until around 60,000 connections, but then the server refuses further
 connections and a manual Tomcat restart is required.

 yes, the connection limit is a 16-bit count minus some reserved
 ports. So your system should become unresponsive; you've run out of
 ports (the port number is a 16-bit value in a TCP connection).

 netstat -na should give you your connection state when this happens, and
 that is helpful debug information.
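
 (A quick way to summarize connection states when that happens - a sketch using
 standard tools; it counts connections per TCP state:)

     netstat -na | awk '/^tcp/ {print $6}' | sort | uniq -c | sort -rn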

 Filip




 On Thu, Jun 19, 2014 at 2:44 PM, André Warnier a...@ice-sa.com wrote:

  Konstantin Kolinko wrote:
 
  2014-06-19 17:10 GMT+04:00 Lars Engholm Johansen lar...@gmail.com:
 
  I will try to force a GC next time I am at the console about to
 restart a
  Tomcat where one of the http-nio-80-ClientPoller-x threads have died
 and
  connection count is exploding.
 
  But I do not see this as a solution - can you somehow deduct why this
  thread died from the outcome from a GC?
 
 
  Nobody said that a thread died because of GC.
 
  The GC that Andre suggested was to get rid of some of CLOSE_WAIT
  connections in netstat output, in case if those are owned by some
  abandoned and non properly closed I/O classes that are still present
  in JVM memory.
 
 
  Exactly, thanks Konstantin for clarifying.
 
  I was going per the following in the original post :
 
  Our sites still functions normally with no cpu spikes during this build
 up
  until around 60,000 connections, but then the server refuses further
  connections and a manual Tomcat restart is required.
 
  CLOSE_WAIT is a normal state for a TCP connection, but it should not
  normally last long.
  It indicates basically that the other side has closed the connection, and
  that this side should do the same. But it doesn't, and as long as it
  doesn't the connection remains in the CLOSE_WAIT state.  It's like
  half-closed, but not entirely, and as long as it isn't, the OS cannot
 get
  rid of it.
  For a more precise explanation, Google for TCP CLOSE_WAIT state.
 
  I have noticed in the past, with some Linux versions, that when the
 number
  of such CLOSE_WAIT connections goes above a certain level (several
  hundred), the TCP/IP stack can become totally unresponsive and not accept
  any new connections at all, on any port.
  In my case, this was due to the following kind of scenario :
  Some 

Re: Connection count explosion due to thread http-nio-80-ClientPoller-x death

2014-06-26 Thread André Warnier

Lars Engholm Johansen wrote:

Thanks for all the replies guys.

Have you observed a performance increase by setting

acceptorThreadCount to 4 instead of a lower number? I'm just curious.



No, but this was the consensus after elongated discussions in my team. We
have 12 cpu cores - better save than sorry. I know that the official docs
reads although you would never really need more than 2 :-)

The GC that Andre suggested was to get rid of some of CLOSE_WAIT

connections in netstat output, in case if those are owned by some
abandoned and non properly closed I/O classes that are still present
in JVM memory.



Please check out the open connections graph at http://imgur.com/s4fOUte
As far as I interpret, we only have a slight connection count growth during
the days until the poller thread die. These may or may not disappear by
forcing a GC, but the amount is not problematic until we hit the
http-nio-80-ClientPoller-x
thread death.


Just to make sure: what kind of connections does this graph actually show? In
which TCP state? Does it count only the established connections, or also the
ones in FIN_WAIT, CLOSE_WAIT, LISTEN, etc.?




The insidious part is that everything may look fine for a long time (apart

from an occasional long list of CLOSE_WAIT connections).  A GC will happen
from time to time (*), which will get rid of these connections.  And those
CLOSE_WAIT connections do not consume a lot of resources, so you'll never
notice.
Until at some point, the number of these CLOSE_WAIT connections gets just
at the point where the OS can't swallow any more of them, and then you have
a big problem.
(*) and this is the insidious squared part : the smaller the Heap, the
more often a GC will happen, so the sooner these CLOSE_WAIT connections
will disappear.  Conversely, by increasing the Heap size, you leave more
time between GCs, and make the problem more likely to happen.



You are correct. The bigger the Heap size the rarer a GC will happen - and
we have set aside 32GiB of ram. But again, referring to my connection
count graph, a missing close in the code does not seem to be the culprit.

A critical error (java.lang.ThreadDeath,

java.lang.VirtualMachineError) will cause death of a thread.
A subtype of the latter is java.lang.OutOfMemoryError.



I just realized that StackOverflowError is also a subclass of
VirtualMachineError,
and remembered that we due to company historical reasons had configured the
JVM stack size to 256KiB (down from the default 1GiB on 64 bit machines).
This was to support a huge number of threads on limited memory in the past.
I have now removed the -Xss jvm parameter and are exited if this solves our
poller thread problems.
Thanks for the hint, Konstantin.

I promise to report back to you guys :-)



On Fri, Jun 20, 2014 at 2:49 AM, Filip Hanik fi...@hanik.com wrote:


Our sites still functions normally with no cpu spikes during this build up
until around 60,000 connections, but then the server refuses further
connections and a manual Tomcat restart is required.

yes, the connection limit is a 16 bit short count minus some reserved
addresses. So your system should become unresponsive, you've run out of
ports (the 16 bit value in a TCP connection).

netstat -na should give you your connection state when this happens, and
that is helpful debug information.

Filip




On Thu, Jun 19, 2014 at 2:44 PM, André Warnier a...@ice-sa.com wrote:


Konstantin Kolinko wrote:


2014-06-19 17:10 GMT+04:00 Lars Engholm Johansen lar...@gmail.com:


I will try to force a GC next time I am at the console about to

restart a

Tomcat where one of the http-nio-80-ClientPoller-x threads have died

and

connection count is exploding.

But I do not see this as a solution - can you somehow deduct why this
thread died from the outcome from a GC?


Nobody said that a thread died because of GC.

The GC that Andre suggested was to get rid of some of CLOSE_WAIT
connections in netstat output, in case if those are owned by some
abandoned and non properly closed I/O classes that are still present
in JVM memory.


Exactly, thanks Konstantin for clarifying.

I was going per the following in the original post :

Our sites still functions normally with no cpu spikes during this build

up

until around 60,000 connections, but then the server refuses further
connections and a manual Tomcat restart is required.

CLOSE_WAIT is a normal state for a TCP connection, but it should not
normally last long.
It indicates basically that the other side has closed the connection, and
that this side should do the same. But it doesn't, and as long as it
doesn't the connection remains in the CLOSE_WAIT state.  It's like
half-closed, but not entirely, and as long as it isn't, the OS cannot

get

rid of it.
For a more precise explanation, Google for TCP CLOSE_WAIT state.

I have noticed in the past, with some Linux versions, that when the

number

of such CLOSE_WAIT connections goes above a certain level (several
hundred), the TCP/IP stack can become 

sha1 in digest access authentication

2014-06-26 Thread Federico Viscomi
Hi,
I am running Tomcat 7.0.54 and JDK 1.8.0_05 on Windows 7.
Does Tomcat support SHA-1 as the hash algorithm in digest access authentication?
If it doesn't, is there any version of Tomcat that supports it?

Kind regards,
Federico.




RE: CVE-2014-0224

2014-06-26 Thread Jeffrey Janner
 From: Jeffrey Janner [mailto:jeffrey.jan...@polydyne.com] 
 Sent: Wednesday, June 25, 2014 6:05 PM
 To: 'Tomcat Users List'
 Subject: CVE-2014-0224

 Does anyone know of a way to mitigate this vulnerability until the latest
 OpenSSL patch can be applied to the native libraries?
 Perhaps by limiting the cipher list to the strongest ciphers supported by the
 major browsers?
 Is there a listing somewhere of the cipher suites supported by those browsers?
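
 (For reference, cipher restrictions on the APR/native connector go in the
 SSLCipherSuite attribute in server.xml - a sketch only, with an illustrative
 OpenSSL cipher string; as the answer below notes, this is not a mitigation
 for this CVE:)

     <Connector port="443" protocol="HTTP/1.1" SSLEnabled="true"
                scheme="https" secure="true"
                SSLCertificateFile="conf/localhost.crt"
                SSLCertificateKeyFile="conf/localhost.key"
                SSLCipherSuite="HIGH:!aNULL:!MD5" />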

Answering my own post after doing a little googling (Google is your friend;
trust the Google). Red Hat provides the answer:

There is no known mitigation for this issue. The only way to fix it is to 
install updated OpenSSL packages and restart affected services.

The vulnerability can only be exploited if both server and client are 
vulnerable to this issue. In the event that one of the two is vulnerable, there 
is no risk of exploitation.







Re: Browsers suddenly start timing out when accessing port 80 of secure site

2014-06-26 Thread Terence M. Bandoian

On 6/24/2014 12:25 PM, Bruce Lombardi wrote:

Thanks for the response, Konstantin. I'll look into the HSTS header. The
behavior you describe may be what is happening.

Bruce

Sent from my iPad


On Jun 24, 2014, at 8:51 AM, Konstantin Preißer kpreis...@apache.org wrote:

Hi,


-Original Message-
From: Christopher Schultz [mailto:ch...@christopherschultz.net]
Sent: Tuesday, June 24, 2014 2:42 PM
To: Tomcat Users List
Subject: Re: Browsers suddenly start timing out when accessing port 80 of
secure site


Bruce,


On 6/23/14, 2:30 PM, Bruce Lombardi wrote:
Moving the SSL port from 8443 to 443 has solved the problem. It
appears that when the URL www.something.net is entered, Firefox
remembers that this is an SSL site and automatically adds the "s"
to get https. In fact, after the timeout the URL line in the
browser shows https://www.something.net. Obviously, this
defaults to the standard SSL port (443), which does not work if
8443 is used. Moving the port to 443 solved the problem.

If you read about setting up Tomcat, the default SSL port is 8443.
Maybe this is done for testing, but it never seems to be explained
that there might be problems with 8443.

I have never experienced the behavior you describe. Certain clients do
cache responses from servers, so it's possible that you had a bad setup
at some point that redirected :80 -> :443 and then Firefox wouldn't
forget that response and change to :8443.

It might also be possible that the website used HSTS, which forces compliant browsers (hopefully IE
too in the near future) to only view a site over HTTPS. I haven't tested how Firefox handles this, but I
can imagine that when the website on :8443 sets an HSTS header and the user enters
www.example.com, Firefox automatically switches this to
"https://www.example.com/", which is port 443.
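
(For reference, HSTS is just a response header that the site sends over HTTPS,
for example:)

    Strict-Transport-Security: max-age=31536000; includeSubDomains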


Regards,
Konstantin Preißer



There is a nice description on Mozilla:

https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security

Thanks for pointing this out.

-Terence Bandoian




Re: Connection count explosion due to thread http-nio-80-ClientPoller-x death

2014-06-26 Thread Christopher Schultz

Lars,

On 6/26/14, 9:56 AM, Lars Engholm Johansen wrote:
 Thanks for all the replies guys.
 
 Have you observed a performance increase by setting
 acceptorThreadCount to 4 instead of a lower number? I'm just
 curious.
 
 
 No, but this was the consensus after elongated discussions in my
 team. We have 12 cpu cores - better save than sorry. I know that
 the official docs reads although you would never really need more
 than 2 :-)

Okay. You might want to do some actual benchmarking. You may find that
more contention for an exclusive lock actually /decreases/ performance.

 The GC that Andre suggested was to get rid of some of CLOSE_WAIT 
 connections in netstat output, in case if those are owned by
 some abandoned and non properly closed I/O classes that are still
 present in JVM memory.
 
 
 Please check out the open connections graph at
 http://imgur.com/s4fOUte As far as I interpret, we only have a
 slight connection count growth during the days until the poller
 thread die. These may or may not disappear by forcing a GC, but the
 amount is not problematic until we hit the 
 http-nio-80-ClientPoller-x thread death.

Like I said, when the poller thread(s) die, you are totally screwed.

 The insidious part is that everything may look fine for a long time
 (apart
 from an occasional long list of CLOSE_WAIT connections).  A GC
 will happen from time to time (*), which will get rid of these
 connections.  And those CLOSE_WAIT connections do not consume a
 lot of resources, so you'll never notice. Until at some point,
 the number of these CLOSE_WAIT connections gets just at the point
 where the OS can't swallow any more of them, and then you have a
 big problem. (*) and this is the insidious squared part : the
 smaller the Heap, the more often a GC will happen, so the sooner
 these CLOSE_WAIT connections will disappear.  Conversely, by
 increasing the Heap size, you leave more time between GCs, and
 make the problem more likely to happen.
 
 
 You are correct. The bigger the Heap size the rarer a GC will
 happen - and we have set aside 32GiB of ram. But again, referring
 to my connection count graph, a missing close in the code does
 not seem to be the culprit.
 
 A critical error (java.lang.ThreadDeath,
 java.lang.VirtualMachineError) will cause death of a thread. A
 subtype of the latter is java.lang.OutOfMemoryError.
 
 
 I just realized that StackOverflowError is also a subclass of 
 VirtualMachineError, and remembered that we due to company
 historical reasons had configured the JVM stack size to 256KiB
 (down from the default 1GiB on 64 bit machines). This was to
 support a huge number of threads on limited memory in the past. I
 have now removed the -Xss jvm parameter and are exited if this
 solves our poller thread problems. Thanks for the hint,
 Konstantin.

Definitely let us know. A StackOverflowError should be relatively
rare, but if you have set your stack size to something very low, this
can happen.

Remember, since you are using the NIO connector, you don't need a huge
number of threads to support a huge number of connections. The stack
size relates to the number of threads you want to have active.

It looks like you haven't set the number of request-processor threads,
so you'll get the default value of 200.

The default stack size for Oracle's HotSpot JVM is 1MiB, not 1GiB.
200MiB of thread stacks in a 64-bit process shouldn't be too much for your
JVM, and will hopefully cut down on your stack problems.
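
(The arithmetic, as a rough sketch using the defaults mentioned above:)

    200 threads (default maxThreads) x 1 MiB default stack = ~200 MiB of thread stacks
    200 threads x 256 KiB (the old -Xss setting)           = ~50 MiB, but deep call
                                                             chains risk StackOverflowError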

Do you have a lot of recursive algorithms, or anything that produces
very deep call stacks?

-chris




Re: Connection count explosion due to thread http-nio-80-ClientPoller-x death

2014-06-26 Thread Christopher Schultz

André,

On 6/26/14, 11:09 AM, André Warnier wrote:
 Lars Engholm Johansen wrote:
 Thanks for all the replies guys.
 
 Have you observed a performance increase by setting
 acceptorThreadCount to 4 instead of a lower number? I'm just
 curious.
 
 
 No, but this was the consensus after elongated discussions in my
 team. We have 12 cpu cores - better save than sorry. I know that
 the official docs reads although you would never really need
 more than 2 :-)
 
 The GC that Andre suggested was to get rid of some of CLOSE_WAIT
 connections in netstat output, in case if those are owned by
 some abandoned and non properly closed I/O classes that are
 still present in JVM memory.
 
 
 Please check out the open connections graph at
 http://imgur.com/s4fOUte As far as I interpret, we only have a
 slight connection count growth during the days until the poller
 thread die. These may or may not disappear by forcing a GC, but
 the amount is not problematic until we hit the 
 http-nio-80-ClientPoller-x thread death.
 
 Just to make sure : what kind of connections does this graph
 actually show ? in which TCP state ? does it count only the
 established, or also the FIN_WAIT, CLOSE_WAIT, LISTEN etc..
 ?

I think the state of the connections is a red herring: Tomcat will
hold those connections forever because the poller thread has died.
Nothing else matters.

Even if the CLOSE_WAIT connections were somehow cleared, Tomcat would
never respond properly to another request, ever. A Tomcat restart is
required if the poller thread dies.

One could argue that the poller threads should maybe try harder not
to die, but sometimes you can't stop thread death.

-chris




Re: Severe performance issues on images

2014-06-26 Thread Christopher Schultz

Jim,

On 6/26/14, 2:47 AM, Jim Lindqvist wrote:
 Hi Christopher,
 
 Thank you for your insights and sorry for a late reply.
 
 This specific issue seemed do be because of limited bandwidth at
 the data centre and it had now been fixed. We are having some other
 problem as well, but these seem to come from inefficient modules at
 the moment and we are investigating at full pace.
 
 
 What is the JkMount directive(s) that you are trying to undo
 with JkUnMount?
 
 JkMount /* customer

Is there a compelling reason to use Apache httpd at all, if you are
going to forward everything?

 Whenever you turn off /Tomcat/ they get fast again? Do you just
 have to stop Tomcat or do you have to de-configure it in
 httpd.conf?
 
 Only stopping Tomcat is required. This would correspond with the
 bandwidth issue.
 
 
 Wait, you have a server with 64GiB of RAM? Cool. And it serves
 images for a living? Weird.
 
 It is both cool and weird. :P The reason for 64Gb is basically that
 when things break down, more ram and cpu delays the problem. We are
 looking for ways to reduce memory usage.
 
 
 Up. Grade.
 
 We are looking into it, but 7.0.26 seems to be associated with this
 Ubuntu version. Is it a huge difference? I really don't want to
 rock this boat any more that absolutely necessary.

You might want to think about abandoning the package-managed versions
of Tomcat. When even Debian has 7.0.28 available, you should consider
7.0.26 a dinosaur.

The greatest thing about Debian is that it is rock-solid. The worst
thing about Debian is that its packages are years out of date. The
best thing about Ubuntu is ... I dunno... upgrading packages every 18
hours? I think I liked Gentoo better than Ubuntu.

Anyhow, moving to a non-package-managed Tomcat is less scary than it
sounds. You can even run multiple versions simultaneously, which I
would highly recommend you understand and follow (read the Advanced
section in README.txt in any official distribution).
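
(A minimal sketch of that layout - one shared CATALINA_HOME, one CATALINA_BASE
per instance; paths and version are illustrative:)

    # shared binaries: an unpacked official distribution
    export CATALINA_HOME=/opt/apache-tomcat-7.0.54
    # per-instance directories: conf/, logs/, temp/, webapps/, work/
    export CATALINA_BASE=/srv/tomcat/customer
    $CATALINA_HOME/bin/startup.sh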

Since Tomcat does not provide patches for individual security
vulnerabilities (instead, new releases are ... uh, released), the
downstream consumers of Tomcat are reluctant to update their packages
with any regularity. The result is that most package-managed
versions of Tomcat are horribly out of date and may actually be
dangerous to run.

-chris




Deploying a relative docBase outside of appBase

2014-06-26 Thread Peter Rifel
Hello,

I am in the process of upgrading from Tomcat 7.0.54 to 8.0.9 and am running
into an issue with the location of my exploded WAR directories.  In Tomcat 7 I
had a ROOT.xml file in conf/Catalina/hostname/ which contained my Context
with a docBase="../../www.war" attribute.  This was able to reach my www.war
directory that lived next to my Tomcat directory (one directory above
CATALINA_HOME, two above webapps).
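
(Reconstructed from the description above, the ROOT.xml was essentially:)

    <!-- conf/Catalina/hostname/ROOT.xml -->
    <Context docBase="../../www.war" />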

This doesn't work in Tomcat 8, giving an IAE:

SEVERE: ContainerBase.addChild: start:
org.apache.catalina.LifecycleException: Failed to start component 
[StandardEngine[Catalina].StandardHost[hostname].StandardContext[]]
…
Caused by: java.lang.IllegalArgumentException: The main resource set specified 
[/path/to/tomcat/webapps/www.war] is not valid


I stepped through the source code and arrived at
o.a.c.webresources.StandardRoot.startInternal().  If the docBase is not
absolute, we append the docBase's getName() to the appBase, and getName()
returns just "the last name in the pathname's name sequence" according to the
javadocs.

Should this be getPath() instead?  getPath() would return the full relative
path, which, when combined with the appBase, canonicalizes to the correct
path to the application.  Is this a bug, or is it intentional and is there a
better way I should be configuring my context?
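
(A quick illustration of the difference, assuming a relative docBase like the
one above:)

    import java.io.File;

    public class DocBaseDemo {
        public static void main(String[] args) {
            File docBase = new File("../../www.war");
            // getName() keeps only the last element of the path
            System.out.println(docBase.getName()); // www.war
            // getPath() keeps the full relative path
            System.out.println(docBase.getPath()); // ../../www.war
            // so appBase + getName() resolves to webapps/www.war (wrong place),
            // while appBase + getPath() would resolve to webapps/../../www.war
        }
    }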

I've been testing this in Java 8 but it happens in 7 as well.  Here is my 
current version info if it matters:

Server version: Apache Tomcat/8.0.9
Server built:   Jun 19 2014 01:54:25
Server number:  8.0.9.0
OS Name:Mac OS X
OS Version: 10.9.3
Architecture:   x86_64
JVM Version:1.8.0_05-b13
JVM Vendor: Oracle Corporation


Thanks in advance,

Peter


RE: Deploying a relative docBase outside of appBase

2014-06-26 Thread Caldarale, Charles R
 From: Peter Rifel [mailto:pri...@mixpo.com] 
 Subject: Deploying a relative docBase outside of appBase

 In Tomcat 7 I had a ROOT.xml file in conf/Catalina/hostname/ which 
 contained 
 my Context with a docBase=../../www.war parameter.  This was able to reach 
 my 
 www.war directory that lived next to my tomcat directory (one directory above 
 CATALINA_HOME, two above webapps).

 This doesn't work in Tomcat 8, giving an IAE

Try this instead:

<Context docBase="${catalina.home}/../www.war" />

There might also be some confusion due to the .war extension on the directory 
name, but I thought that was fixed a while back.

 If the docBase is not absolute, we append the appBase with the docBase's 
 getName() 
 which returns just the last name in the pathname's name sequence according 
 the 
 javadocs.

 Should this be getPath() instead?  getPath() would return the full relative 
 path 
 that when combined with the appBase, the canonical path will be the correct 
 path 
 to the application.

That does look suspicious.
 
 - Chuck


THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY 
MATERIAL and is thus for use only by the intended recipient. If you received 
this in error, please contact the sender and delete the e-mail and its 
attachments from all computers.

