Re: broken pipe error keeps increasing open files

2020-06-25 Thread Ayub Khan
Chris,

What do you suggest now to debug this issue? Should I check with nginx
support to see whether they can verify it?

On Thu, Jun 25, 2020 at 8:17 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Ayub,
>
> On 6/25/20 11:06, Ayub Khan wrote:
> > Was just thinking if the file descriptors belonged to nginx why do
> > they disappear as soon as I restart tomcat ? I tried restarting
> > nginx and the open file descriptors don't disappear.
>
> When you restart Tomcat, the OS cleans-up the TCP/IP stack. Tomcat is
> waiting on some cleanup information on those sockets. nginx has
> evidently given-up on them and so the OS has adopted them.
>
> > When I execute lsof -p  I do not see file descriptors
> > in close wait state
>
> Because nginx has cleaned them up already.
>
> I would encourage you to take a look at the TCP/IP state diagram to
> see how everything works. Beware: it's very complicated.
>
> I can tell you that Tomcat isn't connecting to itself (unless your
> application is doing that). Unless some other process is connecting to
> your Tomcat's port 8080, it seems like nginx is the only possibility.
>
> Note: you can run lsof without a process parameter and search all open
> files to see which process owns file handle X. My guess is you'll get
> "kernel" if you look.
>
> - -chris
>
> > On Wed, 24 Jun 2020, 20:32 Ayub Khan,  wrote:
> >
> >> Chris,
> >>
> >> Ok, I will investigate nginx side as well. Thank you for the
> >> pointers
> >>
> >> On Wed, 24 Jun 2020, 19:45 Christopher Schultz, <
> >> ch...@christopherschultz.net> wrote:
> >>
> >
> > Ayub,
> >
> > On 6/24/20 11:05, Ayub Khan wrote:
> > If some open file is owned by nginx why would it show up if
> > I run the below command> sudo lsof -p $(cat
> > /var/run/tomcat8.pid)
> >
> > Because network connections have two ends.
> >
> > -chris
> >
> > On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Ayub,
> >
> > On 6/23/20 19:17, Ayub Khan wrote:
>  Yes we have nginx as reverse proxy, below is the
>  nginx config. We notice this issue only when there is
>  high number of requests, during non peak hours we do
>  not see this issue.> location /myapp/myservice{
>  #local machine proxy_pass http://localhost:8080;
>  proxy_http_version 1.1;
> 
>  proxy_set_headerConnection $connection_upgrade;
>  proxy_set_headerUpgrade $http_upgrade;
>  proxy_set_headerHost $host; proxy_set_header
>  X-Real-IP $remote_addr; proxy_set_header
>  X-Forwarded-For $proxy_add_x_forwarded_for;
> 
> 
>  proxy_buffers 16 16k; proxy_buffer_size 32k; }
> >
> > You might want to read about tuning nginx to drop
> > connections after a certain period of time, number of
> > requests, etc. Looks like either a bug in nginx or a
> > misconfiguration which allows connections to stick-around
> > like this. You may have to ask the nginx people. I have no
> > experience with nginx myself, while others here may have
> > some experience.
> >
>  location / { #  if using AWS Load balancer, this bit
>  checks for the presence of the https proto flag.  if
>  regular http is found, then issue a redirect
> > to hit
>  the https endpoint instead if
>  ($http_x_forwarded_proto != 'https') { rewrite ^
>  https://$host$request_uri? permanent; }
> 
>  proxy_pass  http://127.0.0.1:8080;
>  proxy_http_version 1.1;
> 
>  proxy_set_headerConnection $connection_upgrade;
>  proxy_set_headerUpgrade $http_upgrade;
>  proxy_set_headerHost $host; proxy_set_header
>  X-Real-IP $remote_addr; proxy_set_header
>  X-Forwarded-For $proxy_add_x_forwarded_for;
> 
> 
>  proxy_buffers 16 16k; proxy_buffer_size 32k; }
> 
>  *below is the connector*
> 
>    protocol="org.apache.coyote.http11.Http11NioProtocol"
>   connectionTimeout="2000" maxThreads="5"
>  URIEncoding="UTF-8" redirectPort="8443" />
> >
> > 50k threads is a LOT of threads. Do you expect to handle
> > 50k requests simultaneously?
> >
>  these ports are random, I am not sure who owns the
>  process.
> 
>  localhost:http-alt->localhost:55866 (CLOSE_WAIT) ,
>  here port 55866 is a random port.
> > I'm sure you'll find that 55866 is owned by nginx. netstat
> > will tell you .
> >
> > I think you need to look at your nginx configuration. It
> > would also be a great time to upgrade to a supported
> > version of Tomcat. I would recommend 8.5.56 or 9.0.36.
> >
> > -chris
> >
>  On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz
> 

Re: broken pipe error keeps increasing open files

2020-06-25 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Ayub,

On 6/25/20 11:06, Ayub Khan wrote:
> Was just thinking if the file descriptors belonged to nginx why do
> they disappear as soon as I restart tomcat ? I tried restarting
> nginx and the open file descriptors don't disappear.

When you restart Tomcat, the OS cleans-up the TCP/IP stack. Tomcat is
waiting on some cleanup information on those sockets. nginx has
evidently given-up on them and so the OS has adopted them.

> When I execute lsof -p  I do not see file descriptors
> in close wait state

Because nginx has cleaned them up already.

I would encourage you to take a look at the TCP/IP state diagram to
see how everything works. Beware: it's very complicated.

I can tell you that Tomcat isn't connecting to itself (unless your
application is doing that). Unless some other process is connecting to
your Tomcat's port 8080, it seems like nginx is the only possibility.

Note: you can run lsof without a process parameter and search all open
files to see which process owns file handle X. My guess is you'll get
"kernel" if you look.
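
For example, something along these lines (a sketch only; the port is
assumed to be 8080, adjust to match your Connector):

    # every process holding a TCP socket on port 8080, numeric addresses only
    sudo lsof -nP -iTCP:8080

    # same idea with ss; -p prints the owning process for each end
    sudo ss -tnp '( sport = :8080 or dport = :8080 )'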

- -chris

> On Wed, 24 Jun 2020, 20:32 Ayub Khan,  wrote:
>
>> Chris,
>>
>> Ok, I will investigate nginx side as well. Thank you for the
>> pointers
>>
>> On Wed, 24 Jun 2020, 19:45 Christopher Schultz, <
>> ch...@christopherschultz.net> wrote:
>>
>
> Ayub,
>
> On 6/24/20 11:05, Ayub Khan wrote:
> If some open file is owned by nginx why would it show up if
> I run the below command> sudo lsof -p $(cat
> /var/run/tomcat8.pid)
>
> Because network connections have two ends.
>
> -chris
>
> On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 6/23/20 19:17, Ayub Khan wrote:
 Yes we have nginx as reverse proxy, below is the
 nginx config. We notice this issue only when there is
 high number of requests, during non peak hours we do
 not see this issue.> location /myapp/myservice{
 #local machine proxy_pass http://localhost:8080;
 proxy_http_version 1.1;

 proxy_set_headerConnection $connection_upgrade;
 proxy_set_headerUpgrade $http_upgrade;
 proxy_set_headerHost $host; proxy_set_header
 X-Real-IP $remote_addr; proxy_set_header
 X-Forwarded-For $proxy_add_x_forwarded_for;


 proxy_buffers 16 16k; proxy_buffer_size 32k; }
>
> You might want to read about tuning nginx to drop
> connections after a certain period of time, number of
> requests, etc. Looks like either a bug in nginx or a
> misconfiguration which allows connections to stick-around
> like this. You may have to ask the nginx people. I have no
> experience with nginx myself, while others here may have
> some experience.
>
 location / { #  if using AWS Load balancer, this bit
 checks for the presence of the https proto flag.  if
 regular http is found, then issue a redirect
> to hit
 the https endpoint instead if
 ($http_x_forwarded_proto != 'https') { rewrite ^
 https://$host$request_uri? permanent; }

 proxy_pass  http://127.0.0.1:8080;
 proxy_http_version 1.1;

 proxy_set_headerConnection $connection_upgrade;
 proxy_set_headerUpgrade $http_upgrade;
 proxy_set_headerHost $host; proxy_set_header
 X-Real-IP $remote_addr; proxy_set_header
 X-Forwarded-For $proxy_add_x_forwarded_for;


 proxy_buffers 16 16k; proxy_buffer_size 32k; }

 *below is the connector*

 >>> protocol="org.apache.coyote.http11.Http11NioProtocol"
  connectionTimeout="2000" maxThreads="5"
 URIEncoding="UTF-8" redirectPort="8443" />
>
> 50k threads is a LOT of threads. Do you expect to handle
> 50k requests simultaneously?
>
 these ports are random, I am not sure who owns the
 process.

 localhost:http-alt->localhost:55866 (CLOSE_WAIT) ,
 here port 55866 is a random port.
> I'm sure you'll find that 55866 is owned by nginx. netstat
> will tell you .
>
> I think you need to look at your nginx configuration. It
> would also be a great time to upgrade to a supported
> version of Tomcat. I would recommend 8.5.56 or 9.0.36.
>
> -chris
>
 On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz
 < ch...@christopherschultz.net> wrote:

 Ayub,

 On 6/23/20 16:23, Ayub Khan wrote:
>>> I executed  *sudo lsof -p $(cat
>>> /var/run/tomcat8.pid) *and I saw the below
>>> output, some in CLOSE_WAIT and others in
>>> ESTABLISHED. If there are 200 open file
>>> descriptors 160 are in CLOSE_WAIT state. When
>>> the count for CLOSE_WAIT increases I just have

Re: broken pipe error keeps increasing open files

2020-06-25 Thread Ayub Khan
Chris,

I was just thinking: if the file descriptors belonged to nginx, why do they
disappear as soon as I restart Tomcat? I tried restarting nginx and the
open file descriptors don't disappear.

When I execute lsof -p  I do not see file descriptors in CLOSE_WAIT
state.

On Wed, 24 Jun 2020, 20:32 Ayub Khan,  wrote:

> Chris,
>
> Ok, I will investigate nginx side as well. Thank you for the pointers
>
> On Wed, 24 Jun 2020, 19:45 Christopher Schultz, <
> ch...@christopherschultz.net> wrote:
>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>>
>> Ayub,
>>
>> On 6/24/20 11:05, Ayub Khan wrote:
>> > If some open file is owned by nginx why would it show up if I run
>> > the below command> sudo lsof -p $(cat /var/run/tomcat8.pid)
>>
>> Because network connections have two ends.
>>
>> - -chris
>>
>> > On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
>> > ch...@christopherschultz.net> wrote:
>> >
>> > Ayub,
>> >
>> > On 6/23/20 19:17, Ayub Khan wrote:
>>  Yes we have nginx as reverse proxy, below is the nginx
>>  config. We notice this issue only when there is high number
>>  of requests, during non peak hours we do not see this issue.>
>>  location /myapp/myservice{ #local machine proxy_pass
>>  http://localhost:8080; proxy_http_version  1.1;
>> 
>>  proxy_set_headerConnection  $connection_upgrade;
>>  proxy_set_headerUpgrade $http_upgrade;
>>  proxy_set_headerHost$host;
>>  proxy_set_header X-Real-IP   $remote_addr;
>>  proxy_set_header X-Forwarded-For
>>  $proxy_add_x_forwarded_for;
>> 
>> 
>>  proxy_buffers 16 16k; proxy_buffer_size 32k; }
>> >
>> > You might want to read about tuning nginx to drop connections after
>> > a certain period of time, number of requests, etc. Looks like
>> > either a bug in nginx or a misconfiguration which allows
>> > connections to stick-around like this. You may have to ask the
>> > nginx people. I have no experience with nginx myself, while others
>> > here may have some experience.
>> >
>>  location / { #  if using AWS Load balancer, this bit checks
>>  for the presence of the https proto flag.  if regular http is
>>  found, then issue a redirect
>> > to hit
>>  the https endpoint instead if ($http_x_forwarded_proto !=
>>  'https') { rewrite ^ https://$host$request_uri? permanent; }
>> 
>>  proxy_pass  http://127.0.0.1:8080;
>>  proxy_http_version 1.1;
>> 
>>  proxy_set_headerConnection  $connection_upgrade;
>>  proxy_set_headerUpgrade $http_upgrade;
>>  proxy_set_headerHost$host;
>>  proxy_set_header X-Real-IP   $remote_addr;
>>  proxy_set_header X-Forwarded-For
>>  $proxy_add_x_forwarded_for;
>> 
>> 
>>  proxy_buffers 16 16k; proxy_buffer_size 32k; }
>> 
>>  *below is the connector*
>> 
>>  >  protocol="org.apache.coyote.http11.Http11NioProtocol"
>>  connectionTimeout="2000" maxThreads="5"
>>  URIEncoding="UTF-8" redirectPort="8443" />
>> >
>> > 50k threads is a LOT of threads. Do you expect to handle 50k
>> > requests simultaneously?
>> >
>>  these ports are random, I am not sure who owns the process.
>> 
>>  localhost:http-alt->localhost:55866 (CLOSE_WAIT) , here port
>>  55866 is a random port.
>> > I'm sure you'll find that 55866 is owned by nginx. netstat will
>> > tell you .
>> >
>> > I think you need to look at your nginx configuration. It would also
>> > be a great time to upgrade to a supported version of Tomcat. I
>> > would recommend 8.5.56 or 9.0.36.
>> >
>> > -chris
>> >
>>  On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
>>  ch...@christopherschultz.net> wrote:
>> 
>>  Ayub,
>> 
>>  On 6/23/20 16:23, Ayub Khan wrote:
>> >>> I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)
>> >>> *and I saw the below output, some in CLOSE_WAIT and
>> >>> others in ESTABLISHED. If there are 200 open file
>> >>> descriptors 160 are in CLOSE_WAIT state. When the count
>> >>> for CLOSE_WAIT increases I just have to restart
>> >>> tomcat.
>> >>>
>> >>> java65189 tomcat8  715u IPv6
>> >>> 237878311 0t0 TCP localhost:http-alt->localhost:43760
>> >>> (CLOSE_WAIT) java 65189 tomcat8  716u IPv6
>> >>> 237848923   0t0 TCP
>> >>> localhost:http-alt->localhost:40568 (CLOSE_WAIT)
>> 
>>  These are connections from some process into Tomcat listening
>>  on port 8080 (that's what localhost:http-alt is). So what
>>  process owns the outgoing connection on port 40568 on the
>>  same host?
>> 
>>  Are you using a reverse proxy?
>> 
>> >>> most of the open files are in CLOSE_WAIT state I do not
>> >>> see anything related to database ip.
>> 
>>  Agreed. It looks like you have a reverse proxy who is
>>  losing-track of connections, or who is 

Re: broken pipe error keeps increasing open files

2020-06-24 Thread Ayub Khan
Chris,

Ok, I will investigate nginx side as well. Thank you for the pointers

On Wed, 24 Jun 2020, 19:45 Christopher Schultz, <
ch...@christopherschultz.net> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
>
> Ayub,
>
> On 6/24/20 11:05, Ayub Khan wrote:
> > If some open file is owned by nginx why would it show up if I run
> > the below command> sudo lsof -p $(cat /var/run/tomcat8.pid)
>
> Because network connections have two ends.
>
> - -chris
>
> > On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Ayub,
> >
> > On 6/23/20 19:17, Ayub Khan wrote:
>  Yes we have nginx as reverse proxy, below is the nginx
>  config. We notice this issue only when there is high number
>  of requests, during non peak hours we do not see this issue.>
>  location /myapp/myservice{ #local machine proxy_pass
>  http://localhost:8080; proxy_http_version  1.1;
> 
>  proxy_set_headerConnection  $connection_upgrade;
>  proxy_set_headerUpgrade $http_upgrade;
>  proxy_set_headerHost$host;
>  proxy_set_header X-Real-IP   $remote_addr;
>  proxy_set_header X-Forwarded-For
>  $proxy_add_x_forwarded_for;
> 
> 
>  proxy_buffers 16 16k; proxy_buffer_size 32k; }
> >
> > You might want to read about tuning nginx to drop connections after
> > a certain period of time, number of requests, etc. Looks like
> > either a bug in nginx or a misconfiguration which allows
> > connections to stick-around like this. You may have to ask the
> > nginx people. I have no experience with nginx myself, while others
> > here may have some experience.
> >
>  location / { #  if using AWS Load balancer, this bit checks
>  for the presence of the https proto flag.  if regular http is
>  found, then issue a redirect
> > to hit
>  the https endpoint instead if ($http_x_forwarded_proto !=
>  'https') { rewrite ^ https://$host$request_uri? permanent; }
> 
>  proxy_pass  http://127.0.0.1:8080;
>  proxy_http_version 1.1;
> 
>  proxy_set_headerConnection  $connection_upgrade;
>  proxy_set_headerUpgrade $http_upgrade;
>  proxy_set_headerHost$host;
>  proxy_set_header X-Real-IP   $remote_addr;
>  proxy_set_header X-Forwarded-For
>  $proxy_add_x_forwarded_for;
> 
> 
>  proxy_buffers 16 16k; proxy_buffer_size 32k; }
> 
>  *below is the connector*
> 
>    protocol="org.apache.coyote.http11.Http11NioProtocol"
>  connectionTimeout="2000" maxThreads="5"
>  URIEncoding="UTF-8" redirectPort="8443" />
> >
> > 50k threads is a LOT of threads. Do you expect to handle 50k
> > requests simultaneously?
> >
>  these ports are random, I am not sure who owns the process.
> 
>  localhost:http-alt->localhost:55866 (CLOSE_WAIT) , here port
>  55866 is a random port.
> > I'm sure you'll find that 55866 is owned by nginx. netstat will
> > tell you .
> >
> > I think you need to look at your nginx configuration. It would also
> > be a great time to upgrade to a supported version of Tomcat. I
> > would recommend 8.5.56 or 9.0.36.
> >
> > -chris
> >
>  On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
>  ch...@christopherschultz.net> wrote:
> 
>  Ayub,
> 
>  On 6/23/20 16:23, Ayub Khan wrote:
> >>> I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)
> >>> *and I saw the below output, some in CLOSE_WAIT and
> >>> others in ESTABLISHED. If there are 200 open file
> >>> descriptors 160 are in CLOSE_WAIT state. When the count
> >>> for CLOSE_WAIT increases I just have to restart
> >>> tomcat.
> >>>
> >>> java65189 tomcat8  715u IPv6
> >>> 237878311 0t0 TCP localhost:http-alt->localhost:43760
> >>> (CLOSE_WAIT) java 65189 tomcat8  716u IPv6
> >>> 237848923   0t0 TCP
> >>> localhost:http-alt->localhost:40568 (CLOSE_WAIT)
> 
>  These are connections from some process into Tomcat listening
>  on port 8080 (that's what localhost:http-alt is). So what
>  process owns the outgoing connection on port 40568 on the
>  same host?
> 
>  Are you using a reverse proxy?
> 
> >>> most of the open files are in CLOSE_WAIT state I do not
> >>> see anything related to database ip.
> 
>  Agreed. It looks like you have a reverse proxy who is
>  losing-track of connections, or who is (re)opening
>  connections when it may be unnecessary.
> 
>  Can you share your <Connector> configuration from
>  server.xml? Remember to remove any secrets.
> 
>  -chris
> 
> >>> On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
> >>> felix.schumac...@internetallee.de> wrote:
> >>>
> 
>  On 22.06.20 at 13:22, Ayub Khan wrote:
> > Felix,
> >
> > I executed ls -l 

Re: broken pipe error keeps increasing open files

2020-06-24 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


Ayub,

On 6/24/20 11:05, Ayub Khan wrote:
> If some open file is owned by nginx why would it show up if I run
> the below command> sudo lsof -p $(cat /var/run/tomcat8.pid)

Because network connections have two ends.
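
For example (a sketch, assuming nginx and Tomcat share the host): the same
localhost connection shows up twice, once per owning process:

    # one line owned by nginx (the client end), one owned by java (port 8080)
    sudo netstat -tnp | grep ':8080'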

- -chris

> On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 6/23/20 19:17, Ayub Khan wrote:
 Yes we have nginx as reverse proxy, below is the nginx
 config. We notice this issue only when there is high number
 of requests, during non peak hours we do not see this issue.>
 location /myapp/myservice{ #local machine proxy_pass
 http://localhost:8080; proxy_http_version  1.1;

 proxy_set_headerConnection  $connection_upgrade;
 proxy_set_headerUpgrade $http_upgrade;
 proxy_set_headerHost$host;
 proxy_set_header X-Real-IP   $remote_addr;
 proxy_set_header X-Forwarded-For
 $proxy_add_x_forwarded_for;


 proxy_buffers 16 16k; proxy_buffer_size 32k; }
>
> You might want to read about tuning nginx to drop connections after
> a certain period of time, number of requests, etc. Looks like
> either a bug in nginx or a misconfiguration which allows
> connections to stick-around like this. You may have to ask the
> nginx people. I have no experience with nginx myself, while others
> here may have some experience.
>
 location / { #  if using AWS Load balancer, this bit checks
 for the presence of the https proto flag.  if regular http is
 found, then issue a redirect
> to hit
 the https endpoint instead if ($http_x_forwarded_proto !=
 'https') { rewrite ^ https://$host$request_uri? permanent; }

 proxy_pass  http://127.0.0.1:8080;
 proxy_http_version 1.1;

 proxy_set_headerConnection  $connection_upgrade;
 proxy_set_headerUpgrade $http_upgrade;
 proxy_set_headerHost$host;
 proxy_set_header X-Real-IP   $remote_addr;
 proxy_set_header X-Forwarded-For
 $proxy_add_x_forwarded_for;


 proxy_buffers 16 16k; proxy_buffer_size 32k; }

 *below is the connector*

 >>> protocol="org.apache.coyote.http11.Http11NioProtocol"
 connectionTimeout="2000" maxThreads="5"
 URIEncoding="UTF-8" redirectPort="8443" />
>
> 50k threads is a LOT of threads. Do you expect to handle 50k
> requests simultaneously?
>
 these ports are random, I am not sure who owns the process.

 localhost:http-alt->localhost:55866 (CLOSE_WAIT) , here port
 55866 is a random port.
> I'm sure you'll find that 55866 is owned by nginx. netstat will
> tell you .
>
> I think you need to look at your nginx configuration. It would also
> be a great time to upgrade to a supported version of Tomcat. I
> would recommend 8.5.56 or 9.0.36.
>
> -chris
>
 On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
 ch...@christopherschultz.net> wrote:

 Ayub,

 On 6/23/20 16:23, Ayub Khan wrote:
>>> I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)
>>> *and I saw the below output, some in CLOSE_WAIT and
>>> others in ESTABLISHED. If there are 200 open file
>>> descriptors 160 are in CLOSE_WAIT state. When the count
>>> for CLOSE_WAIT increases I just have to restart
>>> tomcat.
>>>
>>> java65189 tomcat8  715u IPv6
>>> 237878311 0t0 TCP localhost:http-alt->localhost:43760
>>> (CLOSE_WAIT) java 65189 tomcat8  716u IPv6
>>> 237848923   0t0 TCP
>>> localhost:http-alt->localhost:40568 (CLOSE_WAIT)

 These are connections from some process into Tomcat listening
 on port 8080 (that's what localhost:http-alt is). So what
 process owns the outgoing connection on port 40568 on the
 same host?

 Are you using a reverse proxy?

>>> most of the open files are in CLOSE_WAIT state I do not
>>> see anything related to database ip.

 Agreed. It looks like you have a reverse proxy who is
 losing-track of connections, or who is (re)opening
 connections when it may be unnecessary.

 Can you share your <Connector> configuration from
 server.xml? Remember to remove any secrets.

 -chris

>>> On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
>>> felix.schumac...@internetallee.de> wrote:
>>>

 On 22.06.20 at 13:22, Ayub Khan wrote:
> Felix,
>
> I executed ls -l /proc/$(cat
> /var/run/tomcat8.pid)/fd/ and from the
 output
> I see majority of them are related to sockets as
> shown below, some of
 them
> point to the jar file of tomcat and others to the
> log file which is
 created.
>
> socket:[2084570754] socket:[2084579487]
> socket:[2084578478] socket:[2084570167]

 Can you try the 

Re: broken pipe error keeps increasing open files

2020-06-24 Thread Ayub Khan
Chris,

If some open file is owned by nginx, why would it show up when I run the
below command?

sudo lsof -p $(cat /var/run/tomcat8.pid)



On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Ayub,
>
> On 6/23/20 19:17, Ayub Khan wrote:
> > Yes we have nginx as reverse proxy, below is the nginx config. We
> > notice this issue only when there is high number of requests,
> > during non peak hours we do not see this issue.> location
> > /myapp/myservice{ #local machine proxy_pass
> > http://localhost:8080; proxy_http_version  1.1;
> >
> > proxy_set_headerConnection  $connection_upgrade;
> > proxy_set_headerUpgrade $http_upgrade;
> > proxy_set_headerHost$host; proxy_set_header
> > X-Real-IP   $remote_addr; proxy_set_header
> > X-Forwarded-For $proxy_add_x_forwarded_for;
> >
> >
> > proxy_buffers 16 16k; proxy_buffer_size 32k; }
>
> You might want to read about tuning nginx to drop connections after a
> certain period of time, number of requests, etc. Looks like either a
> bug in nginx or a misconfiguration which allows connections to
> stick-around like this. You may have to ask the nginx people. I have
> no experience with nginx myself, while others here may have some
> experience.
>
> > location / { #  if using AWS Load balancer, this bit checks for the
> > presence of the https proto flag.  if regular http is found, then
> > issue a redirect
> to hit
> > the https endpoint instead if ($http_x_forwarded_proto != 'https')
> > { rewrite ^ https://$host$request_uri? permanent; }
> >
> > proxy_pass  http://127.0.0.1:8080; proxy_http_version
> > 1.1;
> >
> > proxy_set_headerConnection  $connection_upgrade;
> > proxy_set_headerUpgrade $http_upgrade;
> > proxy_set_headerHost$host; proxy_set_header
> > X-Real-IP   $remote_addr; proxy_set_header
> > X-Forwarded-For $proxy_add_x_forwarded_for;
> >
> >
> > proxy_buffers 16 16k; proxy_buffer_size 32k; }
> >
> > *below is the connector*
> >
> > <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
> > connectionTimeout="2000" maxThreads="50000" URIEncoding="UTF-8"
> > redirectPort="8443" />
>
> 50k threads is a LOT of threads. Do you expect to handle 50k requests
> simultaneously?
>
> > these ports are random, I am not sure who owns the process.
> >
> > localhost:http-alt->localhost:55866 (CLOSE_WAIT) , here port 55866
> > is a random port.
> I'm sure you'll find that 55866 is owned by nginx. netstat will tell you
> .
>
> I think you need to look at your nginx configuration. It would also be
> a great time to upgrade to a supported version of Tomcat. I would
> recommend 8.5.56 or 9.0.36.
>
> - -chris
>
> > On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Ayub,
> >
> > On 6/23/20 16:23, Ayub Khan wrote:
>  I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)  *and I
>  saw the below output, some in CLOSE_WAIT and others in
>  ESTABLISHED. If there are 200 open file descriptors 160 are
>  in CLOSE_WAIT state. When the count for CLOSE_WAIT increases
>  I just have to restart tomcat.
> 
>  java65189 tomcat8  715u IPv6  237878311
>  0t0 TCP localhost:http-alt->localhost:43760 (CLOSE_WAIT) java
>  65189 tomcat8  716u IPv6  237848923   0t0
>  TCP localhost:http-alt->localhost:40568 (CLOSE_WAIT)
> >
> > These are connections from some process into Tomcat listening on
> > port 8080 (that's what localhost:http-alt is). So what process owns
> > the outgoing connection on port 40568 on the same host?
> >
> > Are you using a reverse proxy?
> >
>  most of the open files are in CLOSE_WAIT state I do not see
>  anything related to database ip.
> >
> > Agreed. It looks like you have a reverse proxy who is losing-track
> > of connections, or who is (re)opening connections when it may be
> > unnecessary.
> >
> > Can you share your <Connector> configuration from server.xml?
> > Remember to remove any secrets.
> >
> > -chris
> >
>  On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
>  felix.schumac...@internetallee.de> wrote:
> 
> >
> > On 22.06.20 at 13:22, Ayub Khan wrote:
> >> Felix,
> >>
> >> I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/
> >> and from the
> > output
> >> I see majority of them are related to sockets as shown
> >> below, some of
> > them
> >> point to the jar file of tomcat and others to the log
> >> file which is
> > created.
> >>
> >> socket:[2084570754] socket:[2084579487]
> >> socket:[2084578478] socket:[2084570167]
> >
> > Can you try the other command (lsof -p $(cat
> > ...tomcat.pid))? It should give a bit more details on the
> > used sockets than the proc directory.
> >
> > Felix
> >
> >>
> >> On 

Re: broken pipe error keeps increasing open files

2020-06-24 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Ayub,

On 6/23/20 19:17, Ayub Khan wrote:
> Yes we have nginx as reverse proxy, below is the nginx config. We
> notice this issue only when there is high number of requests,
> during non peak hours we do not see this issue.> location
> /myapp/myservice{ #local machine proxy_pass
> http://localhost:8080; proxy_http_version  1.1;
>
> proxy_set_headerConnection  $connection_upgrade;
> proxy_set_headerUpgrade $http_upgrade;
> proxy_set_headerHost$host; proxy_set_header
> X-Real-IP   $remote_addr; proxy_set_header
> X-Forwarded-For $proxy_add_x_forwarded_for;
>
>
> proxy_buffers 16 16k; proxy_buffer_size 32k; }

You might want to read about tuning nginx to drop connections after a
certain period of time, number of requests, etc. Looks like either a
bug in nginx or a misconfiguration which allows connections to
stick-around like this. You may have to ask the nginx people. I have
no experience with nginx myself, while others here may have some
experience.
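
For illustration only -- the directives below are the sort of thing to look
at, but the name "tomcat_backend", the values, and whether an upstream
keepalive pool even fits your setup are assumptions to verify against the
nginx documentation:

    upstream tomcat_backend {
        server 127.0.0.1:8080;
        keepalive 32;                    # cap idle upstream connections
    }

    location /myapp/myservice {
        proxy_pass http://tomcat_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # needed for upstream keepalive
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;          # give up on stuck upstream reads
    }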

> location / { #  if using AWS Load balancer, this bit checks for the
> presence of the https proto flag.  if regular http is found, then
> issue a redirect
to hit
> the https endpoint instead if ($http_x_forwarded_proto != 'https')
> { rewrite ^ https://$host$request_uri? permanent; }
>
> proxy_pass  http://127.0.0.1:8080; proxy_http_version
> 1.1;
>
> proxy_set_headerConnection  $connection_upgrade;
> proxy_set_headerUpgrade $http_upgrade;
> proxy_set_headerHost$host; proxy_set_header
> X-Real-IP   $remote_addr; proxy_set_header
> X-Forwarded-For $proxy_add_x_forwarded_for;
>
>
> proxy_buffers 16 16k; proxy_buffer_size 32k; }
>
> *below is the connector*
>
> <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
> connectionTimeout="2000" maxThreads="50000" URIEncoding="UTF-8"
> redirectPort="8443" />

50k threads is a LOT of threads. Do you expect to handle 50k requests
simultaneously?

> these ports are random, I am not sure who owns the process.
>
> localhost:http-alt->localhost:55866 (CLOSE_WAIT) , here port 55866
> is a random port.
I'm sure you'll find that 55866 is owned by nginx; netstat will tell you.
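
For example (a sketch; 55866 is the ephemeral port from your lsof output):

    sudo lsof -nP -i :55866
    # or: sudo netstat -tnp | grep ':55866'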

I think you need to look at your nginx configuration. It would also be
a great time to upgrade to a supported version of Tomcat. I would
recommend 8.5.56 or 9.0.36.

- -chris

> On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 6/23/20 16:23, Ayub Khan wrote:
 I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)  *and I
 saw the below output, some in CLOSE_WAIT and others in
 ESTABLISHED. If there are 200 open file descriptors 160 are
 in CLOSE_WAIT state. When the count for CLOSE_WAIT increases
 I just have to restart tomcat.

 java65189 tomcat8  715u IPv6  237878311
 0t0 TCP localhost:http-alt->localhost:43760 (CLOSE_WAIT) java
 65189 tomcat8  716u IPv6  237848923   0t0
 TCP localhost:http-alt->localhost:40568 (CLOSE_WAIT)
>
> These are connections from some process into Tomcat listening on
> port 8080 (that's what localhost:http-alt is). So what process owns
> the outgoing connection on port 40568 on the same host?
>
> Are you using a reverse proxy?
>
 most of the open files are in CLOSE_WAIT state I do not see
 anything related to database ip.
>
> Agreed. It looks like you have a reverse proxy who is losing-track
> of connections, or who is (re)opening connections when it may be
> unnecessary.
>
> Can you share your <Connector> configuration from server.xml?
> Remember to remove any secrets.
>
> -chris
>
 On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
 felix.schumac...@internetallee.de> wrote:

>
> On 22.06.20 at 13:22, Ayub Khan wrote:
>> Felix,
>>
>> I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/
>> and from the
> output
>> I see majority of them are related to sockets as shown
>> below, some of
> them
>> point to the jar file of tomcat and others to the log
>> file which is
> created.
>>
>> socket:[2084570754] socket:[2084579487]
>> socket:[2084578478] socket:[2084570167]
>
> Can you try the other command (lsof -p $(cat
> ...tomcat.pid))? It should give a bit more details on the
> used sockets than the proc directory.
>
> Felix
>
>>
>> On Mon, Jun 22, 2020 at 1:28 PM Felix Schumacher <
>> felix.schumac...@internetallee.de> wrote:
>>
>>> On 22.06.20 at 11:41, Ayub Khan wrote:
 Chris,

 I am using HikariCP for connection pooling. If the
 database is leaking connections then I should see
 connection not available exception.

 How do I find out which file descriptors are leaking
 ? these are not
>>> files
 open on disk as there is no 

Re: broken pipe error keeps increasing open files

2020-06-23 Thread Ayub Khan
Chris,

Yes, we have nginx as a reverse proxy; below is the nginx config. We notice
this issue only when there is a high number of requests; during non-peak
hours we do not see this issue.

location /myapp/myservice {
    # local machine
    proxy_pass          http://localhost:8080;
    proxy_http_version  1.1;

    proxy_set_header    Connection       $connection_upgrade;
    proxy_set_header    Upgrade          $http_upgrade;
    proxy_set_header    Host             $host;
    proxy_set_header    X-Real-IP        $remote_addr;
    proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;

    proxy_buffers     16 16k;
    proxy_buffer_size 32k;
}

location / {
    # if using AWS Load balancer, this bit checks for the presence of the
    # https proto flag. if regular http is found, then issue a redirect to
    # hit the https endpoint instead
    if ($http_x_forwarded_proto != 'https') {
        rewrite ^ https://$host$request_uri? permanent;
    }

    proxy_pass          http://127.0.0.1:8080;
    proxy_http_version  1.1;

    proxy_set_header    Connection       $connection_upgrade;
    proxy_set_header    Upgrade          $http_upgrade;
    proxy_set_header    Host             $host;
    proxy_set_header    X-Real-IP        $remote_addr;
    proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;

    proxy_buffers     16 16k;
    proxy_buffer_size 32k;
}

*below is the connector*

<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="2000" maxThreads="50000"
           URIEncoding="UTF-8" redirectPort="8443" />

These ports are random; I am not sure which process owns them.

localhost:http-alt->localhost:55866 (CLOSE_WAIT), here port 55866 is a
random port.



On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
ch...@christopherschultz.net> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Ayub,
>
> On 6/23/20 16:23, Ayub Khan wrote:
> > I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)  *and I saw
> > the below output, some in CLOSE_WAIT and others in ESTABLISHED. If
> > there are 200 open file descriptors 160 are in CLOSE_WAIT state.
> > When the count for CLOSE_WAIT increases I just have to restart
> > tomcat.
> >
> > java65189 tomcat8  715u IPv6  237878311   0t0
> > TCP localhost:http-alt->localhost:43760 (CLOSE_WAIT) java65189
> > tomcat8  716u IPv6  237848923   0t0   TCP
> > localhost:http-alt->localhost:40568 (CLOSE_WAIT)
>
> These are connections from some process into Tomcat listening on port
> 8080 (that's what localhost:http-alt is). So what process owns the
> outgoing connection on port 40568 on the same host?
>
> Are you using a reverse proxy?
>
> > most of the open files are in CLOSE_WAIT state I do not see
> > anything related to database ip.
>
> Agreed. It looks like you have a reverse proxy who is losing-track of
> connections, or who is (re)opening connections when it may be unnecessary.
>
> > Can you share your <Connector> configuration from server.xml? Remember
> to remove any secrets.
>
> - -chris
>
> > On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
> > felix.schumac...@internetallee.de> wrote:
> >
> >>
> >> On 22.06.20 at 13:22, Ayub Khan wrote:
> >>> Felix,
> >>>
> >>> I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/ and
> >>> from the
> >> output
> >>> I see majority of them are related to sockets as shown below,
> >>> some of
> >> them
> >>> point to the jar file of tomcat and others to the log file
> >>> which is
> >> created.
> >>>
> >>> socket:[2084570754] socket:[2084579487] socket:[2084578478]
> >>> socket:[2084570167]
> >>
> >> Can you try the other command (lsof -p $(cat ...tomcat.pid))? It
> >> should give a bit more details on the used sockets than the proc
> >> directory.
> >>
> >> Felix
> >>
> >>>
> >>> On Mon, Jun 22, 2020 at 1:28 PM Felix Schumacher <
> >>> felix.schumac...@internetallee.de> wrote:
> >>>
>  On 22.06.20 at 11:41, Ayub Khan wrote:
> > Chris,
> >
> > I am using HikariCP for connection pooling. If the database
> > is leaking connections then I should see connection not
> > available exception.
> >
> > How do I find out which file descriptors are leaking ?
> > these are not
>  files
> > open on disk as there is no explicit disk file I/O in this
> > application.
> >
> > I just use the below command to check for open file
> > descriptors:
> >
> > watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc
> > -l"
>  You could have a look at the name of the files in the pids
>  proc
> >> directory.
> 
>  $ ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/
> 
>  Or you could use the tool lsof to find the open file
>  descriptors.
> 
>  $ lsof -p $(cat /var/run/tomcat8.pid)
> 
>  For both calls you should first change to the uid of the
>  tomcat user or use sudo as in your example.
> 
>  Felix
> 
> > Thanks and Regards Ayub
> >
> > On Sun, Jun 21, 2020 at 8:18 PM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Ayub,

Re: broken pipe error keeps increasing open files

2020-06-23 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Ayub,

On 6/23/20 16:23, Ayub Khan wrote:
> I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)  *and I saw
> the below output, some in CLOSE_WAIT and others in ESTABLISHED. If
> there are 200 open file descriptors 160 are in CLOSE_WAIT state.
> When the count for CLOSE_WAIT increases I just have to restart
> tomcat.
>
> java65189 tomcat8  715u IPv6  237878311   0t0
> TCP localhost:http-alt->localhost:43760 (CLOSE_WAIT) java65189
> tomcat8  716u IPv6  237848923   0t0   TCP
> localhost:http-alt->localhost:40568 (CLOSE_WAIT)

These are connections from some process into Tomcat listening on port
8080 (that's what localhost:http-alt is). So what process owns the
outgoing connection on port 40568 on the same host?
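
For example, a sketch using ss (port 40568 taken from your lsof output):

    # the socket whose local port is 40568, plus the process that owns it
    sudo ss -tnp 'sport = :40568'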

Are you using a reverse proxy?

> most of the open files are in CLOSE_WAIT state I do not see
> anything related to database ip.

Agreed. It looks like you have a reverse proxy who is losing-track of
connections, or who is (re)opening connections when it may be unnecessary.

Can you share your <Connector> configuration from server.xml? Remember
to remove any secrets.

- -chris

> On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
> felix.schumac...@internetallee.de> wrote:
>
>>
>> On 22.06.20 at 13:22, Ayub Khan wrote:
>>> Felix,
>>>
>>> I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/ and
>>> from the
>> output
>>> I see majority of them are related to sockets as shown below,
>>> some of
>> them
>>> point to the jar file of tomcat and others to the log file
>>> which is
>> created.
>>>
>>> socket:[2084570754] socket:[2084579487] socket:[2084578478]
>>> socket:[2084570167]
>>
>> Can you try the other command (lsof -p $(cat ...tomcat.pid))? It
>> should give a bit more details on the used sockets than the proc
>> directory.
>>
>> Felix
>>
>>>
>>> On Mon, Jun 22, 2020 at 1:28 PM Felix Schumacher <
>>> felix.schumac...@internetallee.de> wrote:
>>>
 On 22.06.20 at 11:41, Ayub Khan wrote:
> Chris,
>
> I am using HikariCP for connection pooling. If the database
> is leaking connections then I should see connection not
> available exception.
>
> How do I find out which file descriptors are leaking ?
> these are not
 files
> open on disk as there is no explicit disk file I/O in this
> application.
>
> I just use the below command to check for open file
> descriptors:
>
> watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc
> -l"
 You could have a look at the name of the files in the pids
 proc
>> directory.

 $ ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/

 Or you could use the tool lsof to find the open file
 descriptors.

 $ lsof -p $(cat /var/run/tomcat8.pid)

 For both calls you should first change to the uid of the
 tomcat user or use sudo as in your example.

 Felix

> Thanks and Regards Ayub
>
> On Sun, Jun 21, 2020 at 8:18 PM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 6/20/20 11:51, Ayub Khan wrote:
 Sorry we are using  8.0.32 version of tomcat.

 below is the configuration:

 Server version: Apache Tomcat/8.0.32 (Ubuntu) Server
 built:   Jan 24 2020 16:24:30 UTC Server number:
 8.0.32.0 OS Name: Linux OS Version:
 4.4.0-1087-aws Architecture:   amd64 JVM Version:
 1.8.0_181-b13 JVM Vendor: Oracle Corporation

 I use the below command to check the file
 descriptors:

 watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ |
 wc -l"
> So you know there is some kind of increase in file-handle
> use, but you don't know what types of file handles are
> increasing, right?
>
> Can you try to find out which kinds of file handles are
> increasing?
>
> I have a sneaking suspicion that it's your database
> connections and not actually files open on the disk.
>
> Are you using a database connection pool? If not, you
> should really use one and limit the number of connections
> to something sane. If you are using one, are you monitoring
> it to see how many connections are actually being used? Are
> you sure you are using proper resource management[1]? Even
> a single code-path that leaks connections can leak them
> quickly under load.
>
 When there an issue related to broken files, this
 value keeps increasing, the only way to bring it down
 is to remove vm instance from AWS load balancer.>
 Which version of tomcat should I install ?
> Tomcat 8.0.x hasn't been supported since its last release
> on 29 June 2018. That was 8.0.53. Your release is from 8
> February 2016 and is dangerously out of date (unless you
> are using the Ubuntu-packaged version, in which case I hope
> they 

Re: broken pipe error keeps increasing open files

2020-06-23 Thread Ayub Khan
Felix,

I executed *sudo lsof -p $(cat /var/run/tomcat8.pid)* and I saw the below
output, some in CLOSE_WAIT and others in ESTABLISHED. If there are 200 open
file descriptors, 160 are in CLOSE_WAIT state. When the count for CLOSE_WAIT
increases I just have to restart tomcat.

java    65189 tomcat8  715u  IPv6  237878311  0t0  TCP localhost:http-alt->localhost:43760 (CLOSE_WAIT)
java    65189 tomcat8  716u  IPv6  237848923  0t0  TCP localhost:http-alt->localhost:40568 (CLOSE_WAIT)

Most of the open files are in CLOSE_WAIT state; I do not see anything
related to the database IP.
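
A quick way to put numbers on that (a sketch; -a makes lsof AND the -p and
-iTCP selections, and the awk tallies the state column):

    sudo lsof -nP -p $(cat /var/run/tomcat8.pid) -a -iTCP \
        | awk 'NR>1 {print $NF}' | sort | uniq -c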



On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
felix.schumac...@internetallee.de> wrote:

>
> On 22.06.20 at 13:22, Ayub Khan wrote:
> > Felix,
> >
> > I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/ and  from the
> output
> > I see majority of them are related to sockets as shown below, some of
> them
> > point to the jar file of tomcat and others to the log file which is
> created.
> >
> >  socket:[2084570754]
> >  socket:[2084579487]
> >  socket:[2084578478]
> > socket:[2084570167]
>
> Can you try the other command (lsof -p $(cat ...tomcat.pid))? It should
> give a bit more details on the used sockets than the proc directory.
>
> Felix
>
> >
> > On Mon, Jun 22, 2020 at 1:28 PM Felix Schumacher <
> > felix.schumac...@internetallee.de> wrote:
> >
> >> On 22.06.20 at 11:41, Ayub Khan wrote:
> >>> Chris,
> >>>
> >>> I am using HikariCP for connection pooling. If the database is leaking
> >>> connections then I should see connection not available exception.
> >>>
> >>> How do I find out which file descriptors are leaking ?  these are not
> >> files
> >>> open on disk as there is no explicit disk file I/O in this application.
> >>>
> >>> I just use the below command to check for open file descriptors:
> >>>
> >>> watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"
> >> You could have a look at the name of the files in the pids proc
> directory.
> >>
> >>  $ ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/
> >>
> >> Or you could use the tool lsof to find the open file descriptors.
> >>
> >>  $ lsof -p $(cat /var/run/tomcat8.pid)
> >>
> >> For both calls you should first change to the uid of the tomcat user or
> >> use sudo as in your example.
> >>
> >> Felix
> >>
> >>> Thanks and Regards
> >>> Ayub
> >>>
> >>> On Sun, Jun 21, 2020 at 8:18 PM Christopher Schultz <
> >>> ch...@christopherschultz.net> wrote:
> >>>
> >>> Ayub,
> >>>
> >>> On 6/20/20 11:51, Ayub Khan wrote:
> >> Sorry we are using  8.0.32 version of tomcat.
> >>
> >> below is the configuration:
> >>
> >> Server version: Apache Tomcat/8.0.32 (Ubuntu) Server built:   Jan
> >> 24 2020 16:24:30 UTC Server number:  8.0.32.0 OS Name:
> >> Linux OS Version: 4.4.0-1087-aws Architecture:   amd64 JVM
> >> Version:1.8.0_181-b13 JVM Vendor: Oracle Corporation
> >>
> >> I use the below command to check the file descriptors:
> >>
> >> watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"
> >>> So you know there is some kind of increase in file-handle use, but you
> >>> don't know what types of file handles are increasing, right?
> >>>
> >>> Can you try to find out which kinds of file handles are increasing?
> >>>
> >>> I have a sneaking suspicion that it's your database connections and
> >>> not actually files open on the disk.
> >>>
> >>> Are you using a database connection pool? If not, you should really
> >>> use one and limit the number of connections to something sane. If you
> >>> are using one, are you monitoring it to see how many connections are
> >>> actually being used? Are you sure you are using proper resource
> >>> management[1]? Even a single code-path that leaks connections can leak
> >>> them quickly under load.
> >>>
> >> When there an issue related to broken files, this value keeps
> >> increasing, the only way to bring it down is to remove vm instance
> >> from AWS load balancer.> Which version of tomcat should I install
> >> ?
> >>> Tomcat 8.0.x hasn't been supported since its last release on 29 June
> >>> 2018. That was 8.0.53. Your release is from 8 February 2016 and is
> >>> dangerously out of date (unless you are using the Ubuntu-packaged
> >>> version, in which case I hope they kept-up with security patches thee
> >>> past 4 years).
> >>>
> >>> -chris
> >>>
> >> On Sat, Jun 20, 2020 at 6:28 PM Christopher Schultz <
> >> ch...@christopherschultz.net> wrote:
> >>
> >> Ayub,
> >>
> >> On 6/19/20 16:46, Ayub Khan wrote:
> > tomcat 8.5 broken pipe increases open files on ubuntu AWS
> >> Which exact version of Tomcat 8.5? If you aren't running the
> >> latest version (8.5.56), please upgrade and re-test.
> >>
> > If there is slow response from db I see this stack trace and
> > the open files goes high and the only way to open files go
> > down is to remove the 

Re: broken pipe error keeps increasing open files

2020-06-22 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


Ayub,

On 6/22/20 05:41, Ayub Khan wrote:
> I am using HikariCP for connection pooling. If the database is
> leaking connections then I should see connection not available
> exception.

That probably depends upon your connection pool configuration. It's
definitely possible to configure a connection pool to leak connections
without complaint.
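
One rough way to watch for that from the outside (a sketch only; it lists
Tomcat's non-loopback TCP sockets, which is where leaked database
connections would show up under load):

    sudo lsof -nP -p $(cat /var/run/tomcat8.pid) -a -iTCP \
        | grep -Ev '127\.0\.0\.1|\[::1\]'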

> How do I find out which file descriptors are leaking ?  these are
> not files open on disk as there is no explicit disk file I/O in
> this application.
You might have to use a profiler.

> I just use the below command to check for open file descriptors:
>
> watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"

Felix's suggestions for how to check the types of file handles are
good. You should be able to get host:port information from the sockets
you are seeing and, I suspect, you'll find that they all point to your
database.
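
For example (a sketch; the grep assumes the pid file holds the java pid, as
in your earlier lsof output):

    # local and peer host:port for every TCP socket owned by the Tomcat JVM
    sudo ss -tnp | grep "pid=$(cat /var/run/tomcat8.pid),"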

- -chris

> On Sun, Jun 21, 2020 at 8:18 PM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 6/20/20 11:51, Ayub Khan wrote:
 Sorry we are using  8.0.32 version of tomcat.

 below is the configuration:

 Server version: Apache Tomcat/8.0.32 (Ubuntu) Server built:
 Jan 24 2020 16:24:30 UTC Server number:  8.0.32.0 OS Name:
 Linux OS Version: 4.4.0-1087-aws Architecture:   amd64
 JVM Version:1.8.0_181-b13 JVM Vendor: Oracle
 Corporation

 I use the below command to check the file descriptors:

 watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"
>
> So you know there is some kind of increase in file-handle use, but
> you don't know what types of file handles are increasing, right?
>
> Can you try to find out which kinds of file handles are
> increasing?
>
> I have a sneaking suspicion that it's your database connections
> and not actually files open on the disk.
>
> Are you using a database connection pool? If not, you should
> really use one and limit the number of connections to something
> sane. If you are using one, are you monitoring it to see how many
> connections are actually being used? Are you sure you are using
> proper resource management[1]? Even a single code-path that leaks
> connections can leak them quickly under load.
>
 When there an issue related to broken files, this value
 keeps increasing, the only way to bring it down is to remove
 vm instance from AWS load balancer.> Which version of tomcat
 should I install ?
>
> Tomcat 8.0.x hasn't been supported since its last release on 29
> June 2018. That was 8.0.53. Your release is from 8 February 2016
> and is dangerously out of date (unless you are using the
> Ubuntu-packaged version, in which case I hope they kept-up with
> security patches thee past 4 years).
>
> -chris
>
 On Sat, Jun 20, 2020 at 6:28 PM Christopher Schultz <
 ch...@christopherschultz.net> wrote:

 Ayub,

 On 6/19/20 16:46, Ayub Khan wrote:
>>> tomcat 8.5 broken pipe increases open files on ubuntu
>>> AWS

 Which exact version of Tomcat 8.5? If you aren't running the
 latest version (8.5.56), please upgrade and re-test.

>>> If there is slow response from db I see this stack
>>> trace and the open files goes high and the only way to
>>> open files go down is to remove the instance from
>>> Amazon load balancer.
>>>
>>> Is there a way to keep the open files low even when
>>> Broken pipe error is thrown ?

 What is your evidence that file handles are being left open?

 Which file handles are being left open?

 -chris

Re: broken pipe error keeps increasing open files

2020-06-22 Thread Felix Schumacher


On 22.06.20 at 13:22, Ayub Khan wrote:
> Felix,
>
> I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/ and  from the output
> I see majority of them are related to sockets as shown below, some of them
> point to the jar file of tomcat and others to the log file which is created.
>
>  socket:[2084570754]
>  socket:[2084579487]
>  socket:[2084578478]
> socket:[2084570167]

Can you try the other command (lsof -p $(cat ...tomcat.pid))? It should
give a bit more details on the used sockets than the proc directory.
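
For example (a sketch): the /proc listing only shows "socket:[inode]",
while lsof resolves the same descriptors to a type, address and state:

    ls -l /proc/$(cat /var/run/tomcat8.pid)/fd | grep -c socket
    sudo lsof -p $(cat /var/run/tomcat8.pid) | awk 'NR>1 {print $5}' | sort | uniq -c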

Felix

>
> On Mon, Jun 22, 2020 at 1:28 PM Felix Schumacher <
> felix.schumac...@internetallee.de> wrote:
>
>> On 22.06.20 at 11:41, Ayub Khan wrote:
>>> Chris,
>>>
>>> I am using HikariCP for connection pooling. If the database is leaking
>>> connections then I should see connection not available exception.
>>>
>>> How do I find out which file descriptors are leaking ?  these are not
>> files
>>> open on disk as there is no explicit disk file I/O in this application.
>>>
>>> I just use the below command to check for open file descriptors:
>>>
>>> watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"
>> You could have a look at the name of the files in the pids proc directory.
>>
>>  $ ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/
>>
>> Or you could use the tool lsof to find the open file descriptors.
>>
>>  $ lsof -p $(cat /var/run/tomcat8.pid)
>>
>> For both calls you should first change to the uid of the tomcat user or
>> use sudo as in your example.
>>
>> Felix
>>
>>> Thanks and Regards
>>> Ayub
>>>
>>> On Sun, Jun 21, 2020 at 8:18 PM Christopher Schultz <
>>> ch...@christopherschultz.net> wrote:
>>>
>>> Ayub,
>>>
>>> On 6/20/20 11:51, Ayub Khan wrote:
>> Sorry we are using  8.0.32 version of tomcat.
>>
>> below is the configuration:
>>
>> Server version: Apache Tomcat/8.0.32 (Ubuntu) Server built:   Jan
>> 24 2020 16:24:30 UTC Server number:  8.0.32.0 OS Name:
>> Linux OS Version: 4.4.0-1087-aws Architecture:   amd64 JVM
>> Version:1.8.0_181-b13 JVM Vendor: Oracle Corporation
>>
>> I use the below command to check the file descriptors:
>>
>> watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"
>>> So you know there is some kind of increase in file-handle use, but you
>>> don't know what types of file handles are increasing, right?
>>>
>>> Can you try to find out which kinds of file handles are increasing?
>>>
>>> I have a sneaking suspicion that it's your database connections and
>>> not actually files open on the disk.
>>>
>>> Are you using a database connection pool? If not, you should really
>>> use one and limit the number of connections to something sane. If you
>>> are using one, are you monitoring it to see how many connections are
>>> actually being used? Are you sure you are using proper resource
>>> management[1]? Even a single code-path that leaks connections can leak
>>> them quickly under load.
>>>
>> When there an issue related to broken files, this value keeps
>> increasing, the only way to bring it down is to remove vm instance
>> from AWS load balancer.> Which version of tomcat should I install
>> ?
>>> Tomcat 8.0.x hasn't been supported since its last release on 29 June
>>> 2018. That was 8.0.53. Your release is from 8 February 2016 and is
>>> dangerously out of date (unless you are using the Ubuntu-packaged
>>> version, in which case I hope they kept-up with security patches thee
>>> past 4 years).
>>>
>>> -chris
>>>
>> On Sat, Jun 20, 2020 at 6:28 PM Christopher Schultz <
>> ch...@christopherschultz.net> wrote:
>>
>> Ayub,
>>
>> On 6/19/20 16:46, Ayub Khan wrote:
> tomcat 8.5 broken pipe increases open files on ubuntu AWS
>> Which exact version of Tomcat 8.5? If you aren't running the
>> latest version (8.5.56), please upgrade and re-test.
>>
> If there is slow response from db I see this stack trace and
> the open files goes high and the only way to open files go
> down is to remove the instance from Amazon load balancer.
>
> Is there a way to keep the open files low even when Broken
> pipe error is thrown ?
>> What is your evidence that file handles are being left open?
>>
>> Which file handles are being left open?
>>
>> -chris
>>>

Re: broken pipe error keeps increasing open files

2020-06-22 Thread Ayub Khan
Felix,

I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/ and from the output
I see that the majority of them are sockets, as shown below; some of them
point to the Tomcat jar files and others to the log file that is created.

 socket:[2084570754]
 socket:[2084579487]
 socket:[2084578478]
socket:[2084570167]

On Mon, Jun 22, 2020 at 1:28 PM Felix Schumacher <
felix.schumac...@internetallee.de> wrote:

>
> On 22.06.20 at 13:22, Ayub Khan wrote:
> > Chris,
> >
> > I am using HikariCP for connection pooling. If the database is leaking
> > connections then I should see connection not available exception.
> >
> > How do I find out which file descriptors are leaking ?  these are not
> files
> > open on disk as there is no explicit disk file I/O in this application.
> >
> > I just use the below command to check for open file descriptors:
> >
>
> > watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"
>
> You could have a look at the name of the files in the pids proc directory.
>
>  $ ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/
>
> Or you could use the tool lsof to find the open file descriptors.
>
>  $ lsof -p $(cat /var/run/tomcat8.pid)
>
> For both calls you should first change to the uid of the tomcat user or
> use sudo as in your example.
>
> Felix
>
> >
> > Thanks and Regards
> > Ayub
> >
> > On Sun, Jun 21, 2020 at 8:18 PM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Ayub,
> >
> > On 6/20/20 11:51, Ayub Khan wrote:
> > >>> Sorry, we are using version 8.0.32 of Tomcat.
> > >>>
> > >>> Below is the configuration:
> > >>>
> > >>> Server version: Apache Tomcat/8.0.32 (Ubuntu)
> > >>> Server built:   Jan 24 2020 16:24:30 UTC
> > >>> Server number:  8.0.32.0
> > >>> OS Name:        Linux
> > >>> OS Version:     4.4.0-1087-aws
> > >>> Architecture:   amd64
> > >>> JVM Version:    1.8.0_181-b13
> > >>> JVM Vendor:     Oracle Corporation
> > >>>
> > >>> I use the below command to check the file descriptors:
> > >>>
> > >>> watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"
> >
> > So you know there is some kind of increase in file-handle use, but you
> > don't know what types of file handles are increasing, right?
> >
> > Can you try to find out which kinds of file handles are increasing?
> >
> > I have a sneaking suspicion that it's your database connections and
> > not actually files open on the disk.
> >
> > Are you using a database connection pool? If not, you should really
> > use one and limit the number of connections to something sane. If you
> > are using one, are you monitoring it to see how many connections are
> > actually being used? Are you sure you are using proper resource
> > management[1]? Even a single code-path that leaks connections can leak
> > them quickly under load.
> >
> > >>> When there is an issue related to broken pipe errors, this value keeps
> > >>> increasing; the only way to bring it down is to remove the VM instance
> > >>> from the AWS load balancer.
> > >>> Which version of Tomcat should I install?
> >
> > Tomcat 8.0.x hasn't been supported since its last release on 29 June
> > 2018. That was 8.0.53. Your release is from 8 February 2016 and is
> > dangerously out of date (unless you are using the Ubuntu-packaged
> > version, in which case I hope they kept-up with security patches these
> > past 4 years).
> >
> > -chris
> >
> > >>> On Sat, Jun 20, 2020 at 6:28 PM Christopher Schultz <
> > >>> ch...@christopherschultz.net> wrote:
> > >>>
> > >>> Ayub,
> > >>>
> > >>> On 6/19/20 16:46, Ayub Khan wrote:
> > >> tomcat 8.5 broken pipe increases open files on ubuntu AWS
> > >>>
> > >>> Which exact version of Tomcat 8.5? If you aren't running the
> > >>> latest version (8.5.56), please upgrade and re-test.
> > >>>
> > >> If there is a slow response from the db, I see this stack trace and
> > >> the open-files count goes high, and the only way to get it to go
> > >> down is to remove the instance from the Amazon load balancer.
> > >>
> > >> Is there a way to keep the open files low even when Broken
> > >> pipe error is thrown ?
> > >>>
> > >>> What is your evidence that file handles are being left open?
> > >>>
> > >>> Which file handles are being left open?
> > >>>
> > >>> -chris

Re: broken pipe error keeps increasing open files

2020-06-22 Thread Felix Schumacher


On 22.06.20 at 11:41, Ayub Khan wrote:
> Chris,
>
> I am using HikariCP for connection pooling. If the database is leaking
> connections, then I should see a "connection is not available" exception.
>
> How do I find out which file descriptors are leaking? These are not files
> open on disk, as there is no explicit disk file I/O in this application.
>
> I just use the below command to check for open file descriptors:
>

> watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"

You could have a look at the names of the files in the pid's proc directory.

 $ ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/

Or you could use the tool lsof to find the open file descriptors.

 $ lsof -p $(cat /var/run/tomcat8.pid)

For both calls you should first change to the uid of the tomcat user or
use sudo as in your example.

Felix

>
> Thanks and Regards
> Ayub
>
> On Sun, Jun 21, 2020 at 8:18 PM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 6/20/20 11:51, Ayub Khan wrote:
> >>> Sorry, we are using version 8.0.32 of Tomcat.
> >>>
> >>> Below is the configuration:
> >>>
> >>> Server version: Apache Tomcat/8.0.32 (Ubuntu)
> >>> Server built:   Jan 24 2020 16:24:30 UTC
> >>> Server number:  8.0.32.0
> >>> OS Name:        Linux
> >>> OS Version:     4.4.0-1087-aws
> >>> Architecture:   amd64
> >>> JVM Version:    1.8.0_181-b13
> >>> JVM Vendor:     Oracle Corporation
> >>>
> >>> I use the below command to check the file descriptors:
> >>>
> >>> watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"
>
> So you know there is some kind of increase in file-handle use, but you
> don't know what types of file handles are increasing, right?
>
> Can you try to find out which kinds of file handles are increasing?
>
> I have a sneaking suspicion that it's your database connections and
> not actually files open on the disk.
>
> Are you using a database connection pool? If not, you should really
> use one and limit the number of connections to something sane. If you
> are using one, are you monitoring it to see how many connections are
> actually being used? Are you sure you are using proper resource
> management[1]? Even a single code-path that leaks connections can leak
> them quickly under load.
>
> >>> When there is an issue related to broken pipe errors, this value keeps
> >>> increasing; the only way to bring it down is to remove the VM instance
> >>> from the AWS load balancer.
> >>> Which version of Tomcat should I install?
>
> Tomcat 8.0.x hasn't been supported since its last release on 29 June
> 2018. That was 8.0.53. Your release is from 8 February 2016 and is
> dangerously out of date (unless you are using the Ubuntu-packaged
> version, in which case I hope they kept-up with security patches these
> past 4 years).
>
> -chris
>
> >>> On Sat, Jun 20, 2020 at 6:28 PM Christopher Schultz <
> >>> ch...@christopherschultz.net> wrote:
> >>>
> >>> Ayub,
> >>>
> >>> On 6/19/20 16:46, Ayub Khan wrote:
> >> tomcat 8.5 broken pipe increases open files on ubuntu AWS
> >>>
> >>> Which exact version of Tomcat 8.5? If you aren't running the
> >>> latest version (8.5.56), please upgrade and re-test.
> >>>
> >> If there is a slow response from the db, I see this stack trace and
> >> the open-files count goes high, and the only way to get it to go
> >> down is to remove the instance from the Amazon load balancer.
> >>
> >> Is there a way to keep the open files low even when Broken
> >> pipe error is thrown ?
> >>>
> >>> What is your evidence that file handles are being left open?
> >>>
> >>> Which file handles are being left open?
> >>>
> >>> -chris

Re: broken pipe error keeps increasing open files

2020-06-22 Thread Ayub Khan
Chris,

I am using HikariCP for connection pooling. If the database is leaking
connections, then I should see a "connection is not available" exception.
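
A leaked connection stays checked out of the pool, so it may never trigger
a "connection is not available" timeout until the pool is exhausted. Below
is a minimal sketch of letting HikariCP report suspected leaks itself --
the JDBC URL, credentials and pool size are placeholders, not the real
configuration:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PoolFactory {

        public static HikariDataSource createPool() {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:mysql://db-host:3306/appdb"); // placeholder URL
            cfg.setUsername("app");                            // placeholder credentials
            cfg.setPassword("secret");
            cfg.setMaximumPoolSize(50);   // cap the pool so a leak hits the limit quickly

            // Log a warning with a stack trace for any connection held longer
            // than 30 seconds; leaked connections show up here well before the
            // pool runs dry and throws "connection is not available".
            cfg.setLeakDetectionThreshold(30_000);

            return new HikariDataSource(cfg);
        }
    }

At runtime the same pool can also be watched through
dataSource.getHikariPoolMXBean(), whose active/idle/waiting counts can be
lined up against the descriptor count from the watch command.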

How do I find out which file descriptors are leaking? These are not files
open on disk, as there is no explicit disk file I/O in this application.

I just use the below command to check for open file descriptors:

watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"

Thanks and Regards
Ayub

On Sun, Jun 21, 2020 at 8:18 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:

>
> Ayub,
>
> On 6/20/20 11:51, Ayub Khan wrote:
> > Sorry, we are using version 8.0.32 of Tomcat.
> >
> > Below is the configuration:
> >
> > Server version: Apache Tomcat/8.0.32 (Ubuntu)
> > Server built:   Jan 24 2020 16:24:30 UTC
> > Server number:  8.0.32.0
> > OS Name:        Linux
> > OS Version:     4.4.0-1087-aws
> > Architecture:   amd64
> > JVM Version:    1.8.0_181-b13
> > JVM Vendor:     Oracle Corporation
> >
> > I use the below command to check the file descriptors:
> >
> > watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"
>
> So you know there is some kind of increase in file-handle use, but you
> don't know what types of file handles are increasing, right?
>
> Can you try to find out which kinds of file handles are increasing?
>
> I have a sneaking suspicion that it's your database connections and
> not actually files open on the disk.
>
> Are you using a database connection pool? If not, you should really
> use one and limit the number of connections to something sane. If you
> are using one, are you monitoring it to see how many connections are
> actually being used? Are you sure you are using proper resource
> management[1]? Even a single code-path that leaks connections can leak
> them quickly under load.
>
> > When there is an issue related to broken pipe errors, this value keeps
> > increasing; the only way to bring it down is to remove the VM instance
> > from the AWS load balancer.
> > Which version of Tomcat should I install?
>
> Tomcat 8.0.x hasn't been supported since its last release on 29 June
> 2018. That was 8.0.53. Your release is from 8 February 2016 and is
> dangerously out of date (unless you are using the Ubuntu-packaged
> version, in which case I hope they kept-up with security patches these
> past 4 years).
>
> -chris
>
> > On Sat, Jun 20, 2020 at 6:28 PM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Ayub,
> >
> > On 6/19/20 16:46, Ayub Khan wrote:
>  tomcat 8.5 broken pipe increases open files on ubuntu AWS
> >
> > Which exact version of Tomcat 8.5? If you aren't running the
> > latest version (8.5.56), please upgrade and re-test.
> >
>  If there is a slow response from the db, I see this stack trace and
>  the open-files count goes high, and the only way to get it to go
>  down is to remove the instance from the Amazon load balancer.
> 
>  Is there a way to keep the open files low even when Broken
>  pipe error is thrown ?
> >
> > What is your evidence that file handles are being left open?
> >
> > Which file handles are being left open?
> >
> > -chris



Re: broken pipe error keeps increasing open files

2020-06-21 Thread Christopher Schultz

Calder,

On 6/20/20 13:24, calder wrote:
> On Fri, Jun 19, 2020, 15:46 Ayub Khan  wrote:
>
>> tomcat 8.5 broken pipe increases open files on ubuntu AWS
>>
>
>
>> If there is slow response from db
>
>
> Might be a good idea to investigate the reason for the "slow
> response"
>
>> I see this stack trace and the open files goes high
>
>
> [ snip ]
>
>
>> Caused by: java.io.IOException: Broken pipe
>> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>
>
>
> If I remember correctly, we had this issue about three+ years ago.
> Is your app using the "dbcp" (1.4) library?  We ended up moving to
> "dbcp2", in addition to optimizing the DB queries.

DBCP 2 is preferred to DBCP 1, but not because of bugs. It's preferred
because of an improved architecture, performance, etc.

The version of DBCP that comes bundled with Tomcat is usually all
anyone really needs.
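
If the pool bundled with Tomcat were used, the webapp would only need a
JNDI lookup -- a minimal sketch, where "jdbc/AppDB" is a made-up name that
would have to match a <Resource> entry in the webapp's context.xml:

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class BundledPoolLookup {

        // "jdbc/AppDB" is a placeholder; it must match a <Resource> element
        // declared in META-INF/context.xml (or conf/context.xml).
        public static DataSource lookup() throws NamingException {
            InitialContext ctx = new InitialContext();
            return (DataSource) ctx.lookup("java:comp/env/jdbc/AppDB");
        }
    }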

-chris

Re: broken pipe error keeps increasing open files

2020-06-21 Thread Christopher Schultz

Ayub,

On 6/20/20 11:51, Ayub Khan wrote:
> Sorry, we are using version 8.0.32 of Tomcat.
>
> Below is the configuration:
>
> Server version: Apache Tomcat/8.0.32 (Ubuntu)
> Server built:   Jan 24 2020 16:24:30 UTC
> Server number:  8.0.32.0
> OS Name:        Linux
> OS Version:     4.4.0-1087-aws
> Architecture:   amd64
> JVM Version:    1.8.0_181-b13
> JVM Vendor:     Oracle Corporation
>
> I use the below command to check the file descriptors:
>
> watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"

So you know there is some kind of increase in file-handle use, but you
don't know what types of file handles are increasing, right?

Can you try to find out which kinds of file handles are increasing?

I have a sneaking suspicion that it's your database connections and
not actually files open on the disk.

Are you using a database connection pool? If not, you should really
use one and limit the number of connections to something sane. If you
are using one, are you monitoring it to see how many connections are
actually being used? Are you sure you are using proper resource
management[1]? Even a single code-path that leaks connections can leak
them quickly under load.
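
For reference, "proper resource management" here comes down to closing
every Connection, Statement and ResultSet on every code path, which
try-with-resources gives for free. A minimal sketch against a pooled
DataSource -- the table, column and method names are invented for
illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class OrderDao {

        private final DataSource pool; // e.g. a HikariDataSource created at startup

        public OrderDao(DataSource pool) {
            this.pool = pool;
        }

        // try-with-resources returns the connection to the pool even when the
        // query fails or the client has already gone away (the "Broken pipe"
        // case), so nothing is left checked out to leak.
        public int countOrders(long customerId) throws SQLException {
            String sql = "SELECT COUNT(*) FROM orders WHERE customer_id = ?";
            try (Connection con = pool.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getInt(1) : 0;
                }
            }
        }
    }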

> When there is an issue related to broken pipe errors, this value keeps
> increasing; the only way to bring it down is to remove the VM instance
> from the AWS load balancer.
> Which version of Tomcat should I install?

Tomcat 8.0.x hasn't been supported since its last release on 29 June
2018. That was 8.0.53. Your release is from 8 February 2016 and is
dangerously out of date (unless you are using the Ubuntu-packaged
version, in which case I hope they kept-up with security patches these
past 4 years).

-chris

> On Sat, Jun 20, 2020 at 6:28 PM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 6/19/20 16:46, Ayub Khan wrote:
 tomcat 8.5 broken pipe increases open files on ubuntu AWS
>
> Which exact version of Tomcat 8.5? If you aren't running the
> latest version (8.5.56), please upgrade and re-test.
>
 If there is a slow response from the db, I see this stack trace and
 the open-files count goes high, and the only way to get it to go
 down is to remove the instance from the Amazon load balancer.

 Is there a way to keep the open files low even when Broken
 pipe error is thrown ?
>
> What is your evidence that file handles are being left open?
>
> Which file handles are being left open?
>
> -chris

Re: broken pipe error keeps increasing open files

2020-06-20 Thread Ayub Khan
Calder,

We are not using DBCP for this project. Also, even if this error is thrown,
why does the file descriptor count keep increasing?
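
One mechanism that would explain it -- a hypothetical sketch, not the
actual application code: if the JDBC objects are only closed on the happy
path, the IOException raised while writing to a client that nginx has
already given up on skips the cleanup, and the pooled connection, together
with its socket to the database, stays open.

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.servlet.ServletOutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    // Hypothetical anti-pattern, for illustration only.
    public class LeakyReportServlet extends HttpServlet {

        private DataSource pool; // assume this is looked up in init()

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            try {
                Connection con = pool.getConnection();
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery("SELECT name FROM items"); // placeholder query
                ServletOutputStream out = resp.getOutputStream();
                while (rs.next()) {
                    out.print(rs.getString(1) + "\n");
                }
                out.flush();   // can throw "Broken pipe" once the client has disconnected
                rs.close();    // ...so these closes are never reached and the
                st.close();    // connection stays checked out of the pool, holding
                con.close();   // its file descriptor and its socket to the database
            } catch (SQLException e) {
                throw new IOException(e);
            }
        }
    }

Putting the same JDBC objects in try-with-resources (or closing them in a
finally block) keeps the descriptor count flat even when the write fails.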


On Sat, Jun 20, 2020 at 8:24 PM calder  wrote:

> On Fri, Jun 19, 2020, 15:46 Ayub Khan  wrote:
>
> > tomcat 8.5 broken pipe increases open files on ubuntu AWS
> >
>
>
> > If there is slow response from db
>
>
> Might be a good idea to investigate the reason for the "slow response"
>
> > I see this stack trace and the open files goes high
>
>
> [ snip ]
>
>
> > Caused by: java.io.IOException: Broken pipe
> > at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>
>
>
> If I remember correctly, we had this issue about three+ years ago. Is
> your app using the "dbcp" (1.4) library?  We ended up moving to "dbcp2", in
> addition to optimizing the DB queries.
>




Re: broken pipe error keeps increasing open files

2020-06-20 Thread calder
On Fri, Jun 19, 2020, 15:46 Ayub Khan  wrote:

> tomcat 8.5 broken pipe increases open files on ubuntu AWS
>


> If there is slow response from db


Might be a good idea to investigate the reason for the "slow response"

> I see this stack trace and the open files goes high


[ snip ]


> Caused by: java.io.IOException: Broken pipe
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)



If I remember correctly, we had this issue about three+ years ago. Is
your app using the "dbcp" (1.4) library?  We ended up moving to "dbcp2", in
addition to optimizing the DB queries.


Re: broken pipe error keeps increasing open files

2020-06-20 Thread Ayub Khan
Christopher,

Sorry, we are using version 8.0.32 of Tomcat.

below is the configuration:

Server version: Apache Tomcat/8.0.32 (Ubuntu)
Server built:   Jan 24 2020 16:24:30 UTC
Server number:  8.0.32.0
OS Name:        Linux
OS Version: 4.4.0-1087-aws
Architecture:   amd64
JVM Version:    1.8.0_181-b13
JVM Vendor: Oracle Corporation

I use the below command to check the file descriptors:

watch "sudo ls /proc/`cat /var/run/tomcat8.pid`/fd/ | wc -l"

When there is an issue related to broken pipe errors, this value keeps
increasing; the only way to bring it down is to remove the VM instance from
the AWS load balancer.

Which version of Tomcat should I install?






On Sat, Jun 20, 2020 at 6:28 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:

>
> Ayub,
>
> On 6/19/20 16:46, Ayub Khan wrote:
> > tomcat 8.5 broken pipe increases open files on ubuntu AWS
>
> Which exact version of Tomcat 8.5? If you aren't running the latest
> version (8.5.56), please upgrade and re-test.
>
> > If there is a slow response from the db, I see this stack trace and the
> > open-files count goes high, and the only way to get it to go down is to
> > remove the instance from the Amazon load balancer.
> >
> > Is there a way to keep the open files low even when Broken pipe
> > error is thrown ?
>
> What is your evidence that file handles are being left open?
>
> Which file handles are being left open?
>
> - -chris



Re: broken pipe error keeps increasing open files

2020-06-20 Thread Christopher Schultz

Ayub,

On 6/19/20 16:46, Ayub Khan wrote:
> tomcat 8.5 broken pipe increases open files on ubuntu AWS

Which exact version of Tomcat 8.5? If you aren't running the latest
version (8.5.56), please upgrade and re-test.

> If there is a slow response from the db, I see this stack trace and the
> open-files count goes high, and the only way to get it to go down is to
> remove the instance from the Amazon load balancer.
>
> Is there a way to keep the open files low even when Broken pipe
> error is thrown ?

What is your evidence that file handles are being left open?

Which file handles are being left open?

-chris