Thanks for your reply; we'll give your suggestions a try.
2015-04-29 23:15 GMT+02:00 Christopher Schultz ch...@christopherschultz.net:
Thomas,
On 4/25/15 4:25 AM, Thomas Boniface wrote:
When talking about the strategy for our next test on the release, we
checked the Tomcat connector configuration but we are unsure how
to apply your advice:
1. Check the nginx
Hi,
When talking about the strategy for our next test on the release, we checked
the Tomcat connector configuration but we are unsure how to apply your
advice:
1. Check the nginx configuration. Specifically, the keep-alive and
timeout associated with the proxy configuration.
2. Make sure
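For reference, the keep-alive and timeout settings Christopher is pointing at live in the nginx proxy configuration. A minimal sketch, with illustrative values rather than recommendations (the directive names are real nginx ones, but the numbers are placeholders and should be tuned against the Tomcat connector's connectionTimeout/keepAliveTimeout):

```nginx
# Pool of keep-alive connections from nginx to Tomcat
upstream tomcat {
    server 127.0.0.1:8080;
    keepalive 32;               # idle upstream connections kept open per worker
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat;
        # Upstream keep-alive only works with HTTP/1.1 and an
        # empty Connection header:
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        # Timeouts to keep consistent with the Tomcat connector:
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
    }
}
```

If these timeouts are shorter than Tomcat's, nginx can give up on a request while Tomcat still holds the socket, which is one classic source of CLOSE_WAIT pile-ups.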
I just want to keep you updated and tell you that all your replies are very
helpful. They give me clues on what to look for and sometimes confirm some of
our suspicions.
I have passed some of the elements collected in this thread on to our
platform team, but we have not been able to set up a new test so far
Frederik,
On 4/22/15 10:53 AM, Frederik Nosi wrote:
Hi, On 04/22/2015 04:35 PM, Christopher Schultz wrote: Neill,
On 4/22/15 9:12 AM, Neill Lima wrote:
If I am not wrong, if the application in question is
monitored in VisualVM through JMX
Hi Andre,
If I am not wrong, if the application in question is monitored in VisualVM
through JMX (https://visualvm.java.net/) you could trigger a Force GC from
its monitoring console.
To do that, these startup params might be necessary on the Java
app side:
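The parameter list itself is cut off in the archive; Neill is presumably referring to the standard JMX remote properties. A typical set (the port number is illustrative, and note that this particular combination disables authentication and SSL, so it is only safe on a trusted network):

```
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
```

A local VisualVM attaching to a JVM running as the same user needs none of these; the flags matter mainly for remote monitoring.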
Rainer Jung wrote:
On 22.04.2015 at 11:58, Thomas Boniface wrote:
What concerns me the most is the CLOSE_WAIT on the Tomcat side, because
when an fd peak appears the web application appears to be stuck. It feels
like all its connections are consumed and none can be established from
nginx anymore.
Hello Christopher S.,
I know it won't. I just wanted to provide insight into Andre W.'s approach.
Thanks,
Neill
On Wed, Apr 22, 2015 at 4:58 PM, André Warnier a...@ice-sa.com wrote:
Christopher Schultz wrote:
On 04/22/2015 05:15 PM, Christopher Schultz wrote:
Hi,
On 04/22/2015 04:35 PM, Christopher Schultz wrote:
Neill,
On 4/22/15 9:12 AM, Neill Lima wrote:
If I am not wrong, if the application in question is monitored in
VisualVM through JMX (https://visualvm.java.net/) you could trigger
a Force GC
Neill,
On 4/22/15 9:12 AM, Neill Lima wrote:
If I am not wrong, if the application in question is monitored in
VisualVM through JMX (https://visualvm.java.net/) you could trigger
a Force GC from its monitoring console.
You can do this, but it
On 22.04.2015 at 00:08, André Warnier wrote:
...
The OP has a complex setup, where we are not even sure that the various
connections in various states are even related directly to Tomcat or not.
Graphically, we have this:
client -- TCP -- nginx -- TCP -- Tomcat -- webapp -- TCP -- external
What concerns me the most is the CLOSE_WAIT on the Tomcat side, because when an
fd peak appears the web application appears to be stuck. It feels like all
its connections are consumed and none can be established from nginx
anymore. Shouldn't the CLOSE_WAIT connections be recycled to receive new
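When chasing a peak like this, a quick way to see how many sockets sit in each TCP state is to group netstat output by its state column. A sketch using a saved capture file so the pipeline is reproducible here; the sample lines are invented stand-ins for real `netstat -ant` output, and on a live box you would pipe `netstat -ant` straight into the awk stage instead:

```shell
# Sample data standing in for `netstat -ant` output during an fd peak
cat > netstat-sample.txt <<'EOF'
tcp 0 0 127.0.0.1:8080 127.0.0.1:47000 ESTABLISHED
tcp 0 0 127.0.0.1:8080 127.0.0.1:47002 CLOSE_WAIT
tcp 0 0 127.0.0.1:8080 127.0.0.1:47004 CLOSE_WAIT
tcp 0 0 127.0.0.1:80   10.0.0.5:33000  TIME_WAIT
EOF

# Count connections per state, busiest state first
awk '{print $6}' netstat-sample.txt | sort | uniq -c | sort -rn
```

On the sample data the top line is the two CLOSE_WAIT sockets; a large CLOSE_WAIT count on the Tomcat side means the peer closed but the application never called close() on its end.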
On 22.04.2015 at 11:38, André Warnier wrote:
Rainer Jung wrote:
See my response from 1.5 days ago which contains the individual
statistics for each of the above three TCP parts.
Yes, sorry Rainer, I did not read that as carefully as I should have.
No worries at all. Lots of stuff going
André,
On 4/21/15 10:56 AM, André Warnier wrote:
Thomas Boniface wrote:
The file descriptor peaks show up in our monitoring application.
We have some charts showing the number of file descriptors owned
by the tomcat process (ls /proc/$(pgrep -u
I guess I get what you mean. Do you know if the same kind of explanation
applies to these?
Apr 20, 2015 12:11:05 AM org.apache.coyote.AbstractProcessor setErrorState
INFO: An error occurred in processing while on a non-container thread. The
connection will be closed immediately
Thomas,
On 4/20/15 8:11 AM, Thomas Boniface wrote:
I have tried to find help regarding an issue we experience with
our platform leading to random file descriptor peaks. This happens
more often on heavy load but can also happen on low traffic
The file descriptor peaks show up in our monitoring application. We have
some charts showing the number of file descriptors owned by the tomcat
process (ls /proc/$(pgrep -u tomcat7)/fd/ | wc -l).
The catalina.out log shows errors, the most frequent being a
java.io.IOException: Broken pipe.
Apr
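The fd-count one-liner from the message above can be pointed at any process via /proc. A sketch that, so it runs anywhere with a /proc filesystem, inspects the current shell ($$) instead of the Tomcat JVM; on the real server you would substitute $(pgrep -u tomcat7) exactly as in the thread:

```shell
# Count open file descriptors of a process via /proc (Linux only).
# $$ is this shell; swap in $(pgrep -u tomcat7) to watch the Tomcat JVM.
PID=$$
FD_COUNT=$(ls /proc/"$PID"/fd | wc -l)
echo "pid $PID has $FD_COUNT open file descriptors"
```

Sampling this in a loop (e.g. every few seconds with a timestamp) is what produces the kind of charts Thomas describes.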
Increasing the number of open file descriptors is an accepted fine-tuning step (*if
your application is handling its threads properly*):
ulimit -n              # check the current limit
ulimit -n [new_value]  # raise it
ulimit -n              # verify the change took effect
If the performance is still not adequate even after allowing more fds, some sort
of scaling (horizontal or vertical) is necessary.
On Mon, Apr
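One caveat on the ulimit sequence above: `ulimit` only affects the current shell and its children, so it has to run in the shell that launches Tomcat (on Debian-style systems a persistent change usually goes in /etc/default/tomcat7 or /etc/security/limits.conf; paths here are an assumption about the setup, not something stated in the thread). The read-only part can be sketched as:

```shell
# Soft limit: what a newly started process actually gets
ulimit -n
# Hard limit: the ceiling an unprivileged process may raise the soft limit to
ulimit -Hn
# Raising the soft limit up to the hard limit needs no root, e.g.:
# ulimit -n 8192
```

The soft value is the one that matters for "Too many open files" errors in the JVM.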
On 20.04.2015 at 14:11, Thomas Boniface wrote:
Hi,
I have tried to find help regarding an issue we experience with our
platform leading to random file descriptor peaks. This happens more often
on heavy load but can also happen on low traffic periods.
Our application is using servlet 3.0 async
Hi,
I have tried to find help regarding an issue we experience with our
platform leading to random file descriptor peaks. This happens more often
on heavy load but can also happen on low traffic periods.
Our application is using servlet 3.0 async features and an async connector.
We noticed that
I did some captures during a peak this morning; I have some lsof and
netstat data.
It seems to me that most file descriptors used by tomcat are some http
connections:
thomas@localhost ~/ads3/tbo11h12 cat lsof | wc -l
17772
thomas@localhost ~/ads3/tbo11h12 cat lsof | grep TCP | wc -l
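The two counts Thomas takes (total fds, then the TCP subset) can be reproduced on a stand-in capture. The sample lines below are shaped like real `lsof -p <pid>` output but are invented for illustration:

```shell
# Sample capture standing in for `lsof -p <tomcat pid>` output
cat > lsof-sample.txt <<'EOF'
java 1234 tomcat7 45u IPv4 0t0 TCP localhost:8080->localhost:47000 (ESTABLISHED)
java 1234 tomcat7 46u IPv4 0t0 TCP localhost:8080->localhost:47002 (CLOSE_WAIT)
java 1234 tomcat7 47u IPv4 0t0 TCP localhost:8080->localhost:47004 (CLOSE_WAIT)
java 1234 tomcat7  1w  REG 8,1 4096 /var/log/tomcat7/catalina.out
EOF

wc -l < lsof-sample.txt       # total fds in the capture
grep -c TCP lsof-sample.txt   # how many of them are TCP sockets
```

If the TCP count dominates the total, as it does in Thomas's real numbers, the leak is in socket handling rather than in files or pipes.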
Hi,
Both nginx and tomcat are hosted on the same server. When listing the
connections I see both the connections from nginx to tomcat (the first ones
created) and the ones from tomcat back to nginx used to reply. I may have
presented things the wrong way though (I'm not too good regarding system-level
details).
I
Thanks for your time, Rainer.
I get what you mean regarding the application getting slow. This server was
also logging garbage collection activity, and it seems normal even when
the problem is occurring; there is no big variation in the time taken for
a garbage collection operation.
I don't
On 20.04.2015 at 17:40, Thomas Boniface wrote: