Chris,
What do you suggest now to debug this issue? Should we check with Nginx
support to see if they can verify it?
On Thu, Jun 25, 2020 at 8:17 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:
> Ayub,
>
> On 6/25/20 11:06, Ayub Khan wrote:
>
Ayub,
On 6/25/20 11:06, Ayub Khan wrote:
> Was just thinking: if the file descriptors belonged to nginx, why do
> they disappear as soon as I restart Tomcat? I tried restarting
> nginx and the open file descriptors don't disappear.
When you restart
Chris,
Was just thinking: if the file descriptors belonged to nginx, why do they
disappear as soon as I restart Tomcat? I tried restarting nginx and the
open file descriptors don't disappear.
When I execute lsof -p I do not see file descriptors in CLOSE_WAIT
state.
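A quick way to compare both sides at once (a sketch; it assumes the stock
Ubuntu pid files at /var/run/tomcat8.pid and /var/run/nginx.pid):

    for pid in "$(cat /var/run/tomcat8.pid)" "$(cat /var/run/nginx.pid)"; do
        # count the sockets stuck in CLOSE_WAIT for each process
        echo "pid $pid: $(sudo lsof -p "$pid" | grep -c CLOSE_WAIT) in CLOSE_WAIT"
    done

If only the Tomcat pid shows a large count, the CLOSE_WAIT sockets sit on
Tomcat's side of the proxied connections, which would explain why they
vanish when Tomcat restarts.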
On Wed, 24 Jun 2020, 20:32 Ayub
Chris,
Ok, I will investigate the nginx side as well. Thank you for the pointers.
On Wed, 24 Jun 2020, 19:45 Christopher Schultz, <
ch...@christopherschultz.net> wrote:
> Ayub,
>
> On 6/24/20 11:05, Ayub Khan wrote:
> > If some open file is owne
Ayub,
On 6/24/20 11:05, Ayub Khan wrote:
> If some open file is owned by nginx, why would it show up if I run
> the below command?
> sudo lsof -p $(cat /var/run/tomcat8.pid)
Because network connections have two ends.
-chris
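To see both ends on the same host (a sketch; it assumes Tomcat's HTTP
connector is on port 8080, as in the nginx config quoted later):

    # every TCP socket touching port 8080, from every process that holds one;
    # each nginx->Tomcat connection shows up twice: once under nginx (the
    # client end) and once under java/Tomcat (the server end)
    sudo lsof -nP -iTCP:8080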
> On Wed, Jun 24, 2020
Chris,
If some open file is owned by nginx, why would it show up if I run the below
command?
sudo lsof -p $(cat /var/run/tomcat8.pid)
On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:
> Ayub,
>
> On 6
Ayub,
On 6/23/20 19:17, Ayub Khan wrote:
> Yes, we have nginx as a reverse proxy; below is the nginx config. We
> notice this issue only when there is a high number of requests;
> during non-peak hours we do not see this issue.
> location /myapp/myservi
Chris,
Yes, we have nginx as a reverse proxy; below is the nginx config. We notice
this issue only when there is a high number of requests; during non-peak
hours we do not see this issue.
location /myapp/myservice {
    # local machine
    proxy_pass http://localhost:8080;
    proxy_http_versio
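One way to tell whether nginx is involved at all (a sketch; the URL path is
taken from the location block above) is to drive requests directly against
the Tomcat connector, bypassing the proxy, and watch whether CLOSE_WAIT
still accumulates:

    # hit Tomcat directly on its connector port
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/myapp/myservice
    # watch the CLOSE_WAIT count every 5 seconds
    watch -n 5 'sudo lsof -p $(cat /var/run/tomcat8.pid) | grep -c CLOSE_WAIT'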
Ayub,
On 6/23/20 16:23, Ayub Khan wrote:
> I executed sudo lsof -p $(cat /var/run/tomcat8.pid) and I saw
> the below output, some in CLOSE_WAIT and others in ESTABLISHED. If
> there are 200 open file descriptors, 160 are in CLOSE_WAIT state.
> Wh
Felix,
I executed sudo lsof -p $(cat /var/run/tomcat8.pid) and I saw the below
output, some in CLOSE_WAIT and others in ESTABLISHED. If there are 200 open
file descriptors, 160 are in CLOSE_WAIT state. When the count for CLOSE_WAIT
increases I just have to restart Tomcat.
java    65189 tomcat8
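Rather than eyeballing the listing, a per-state tally can be produced like
this (a sketch; note that lsof needs -a to AND the -p and -i selections):

    sudo lsof -a -nP -iTCP -p "$(cat /var/run/tomcat8.pid)" \
        | awk '/TCP/ {print $NF}' | sort | uniq -c | sort -rn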
Ayub,
On 6/22/20 05:41, Ayub Khan wrote:
> I am using HikariCP for connection pooling. If the database is
> leaking connections then I should see a "connection not available"
> exception.
That probably depends upon your connection pool configuration.
On 6/22/20 13:22, Ayub Khan wrote:
> Felix,
>
> I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/ and from the output
> I see the majority of them are sockets, as shown below; some of them
> point to the Tomcat jar file and others to the log file that is created.
>
> socket:
Felix,
I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/ and from the output
I see the majority of them are sockets, as shown below; some of them
point to the Tomcat jar file and others to the log file that is created.
socket:[2084570754]
socket:[2084579487]
socket:[2084578478]
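Those inode numbers can be matched back to live connections with ss, whose
extended output prints each socket's inode (a sketch; the inode below is the
first one from the listing above):

    # -e adds ino:<inode> to each line; grep for an inode seen in /proc/<pid>/fd
    sudo ss -tanpe | grep 'ino:2084570754'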
On 6/22/20 11:41, Ayub Khan wrote:
> Chris,
>
> I am using HikariCP for connection pooling. If the database is leaking
> connections then I should see a "connection not available" exception.
>
> How do I find out which file descriptors are leaking? These are not files
> open on disk, as there i
Chris,
I am using HikariCP for connection pooling. If the database is leaking
connections then I should see a "connection not available" exception.
How do I find out which file descriptors are leaking? These are not files
open on disk, as there is no explicit disk file I/O in this application.
I ju
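One low-tech way to pin down a descriptor leak is to log the fd count over
time and correlate the growth with request load (a sketch; it assumes the
Ubuntu tomcat8 pid file):

    pid=$(cat /var/run/tomcat8.pid)
    while sleep 60; do
        # timestamp + number of open descriptors for the Tomcat process
        echo "$(date -Is) $(sudo ls /proc/$pid/fd | wc -l)"
    done >> /tmp/tomcat-fd-count.log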
Calder,
On 6/20/20 13:24, calder wrote:
> On Fri, Jun 19, 2020, 15:46 Ayub Khan wrote:
>
>> tomcat 8.5 broken pipe increases open files on ubuntu AWS
>>
>
>> If there is slow response from db
>
> Might be a good idea to investigate the reason fo
Ayub,
On 6/20/20 11:51, Ayub Khan wrote:
> Sorry, we are using version 8.0.32 of Tomcat.
>
> below is the configuration:
>
> Server version: Apache Tomcat/8.0.32 (Ubuntu)
> Server built:   Jan 24 2020 16:24:30 UTC
> Server number:  8.0.32.0
> OS Name:
>
Calder,
We are not using DBCP for this project. Also, even if this error is thrown,
why does the file descriptor count keep increasing?
On Sat, Jun 20, 2020 at 8:24 PM calder wrote:
> On Fri, Jun 19, 2020, 15:46 Ayub Khan wrote:
>
> > tomcat 8.5 broken pipe increases open files on ubuntu AWS
> >
>
>
On Fri, Jun 19, 2020, 15:46 Ayub Khan wrote:
> tomcat 8.5 broken pipe increases open files on ubuntu AWS
>
> If there is slow response from db

Might be a good idea to investigate the reason for the "slow response"

> I see this stack trace and the open files goes high
> [ snip ]
> Caused by: java
Christopher,
Sorry, we are using version 8.0.32 of Tomcat.
Below is the configuration:
Server version: Apache Tomcat/8.0.32 (Ubuntu)
Server built:   Jan 24 2020 16:24:30 UTC
Server number:  8.0.32.0
OS Name:        Linux
OS Version:     4.4.0-1087-aws
Architecture:   amd64
JVM Version:    1.8.0_
Ayub,
On 6/19/20 16:46, Ayub Khan wrote:
> tomcat 8.5 broken pipe increases open files on ubuntu AWS
Which exact version of Tomcat 8.5? If you aren't running the latest
version (8.5.56), please upgrade and re-test.
> If there is slow response from
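The exact running version can be confirmed from the command line (a sketch;
the path assumes the Ubuntu tomcat8 package layout):

    # print the exact Tomcat version string
    java -cp /usr/share/tomcat8/lib/catalina.jar org.apache.catalina.util.ServerInfo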
tomcat 8.5 broken pipe increases open files on ubuntu AWS
If there is a slow response from the db, I see this stack trace and the open
file count goes high, and the only way to make the open files go down is to
remove the instance from the Amazon load balancer.
Is there a way to keep the open files low even when Broken pi
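As for the ceiling itself, the limit the running process actually has can be
checked directly (a sketch; it assumes the Ubuntu tomcat8 pid file):

    # the "Max open files" row is the per-process fd ceiling Tomcat runs with
    sudo grep 'Max open files' /proc/$(cat /var/run/tomcat8.pid)/limits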