Re: [External] - Re: Too Many Open Files error trying to start Artemis server from a SystemD service

2022-07-07 Thread Richard Bergmann
Thank you!  I guess my Google chops are waning as I found this after your post: 
https://access.redhat.com/solutions/1257953

Regards,

Rich Bergmann

From: SL 
Sent: Thursday, July 7, 2022 10:26 AM
To: users@activemq.apache.org 
Subject: [External] - Re: Too Many Open Files error trying to start Artemis 
server from a SystemD service



add

LimitNOFILE=81920

to the [Service] section of your systemd unit file
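For context, here is what the unit file from the original post looks like with that one line added (a sketch; paths and the 81920 value are taken from this thread, adjust for your install):

-
[Unit]
Description=ActiveMQ Artemis Service
After=network.target

[Service]
ExecStart=/opt/artemis/bin/artemis-service start
ExecStop=/opt/artemis/bin/artemis-service stop
Type=forking
LimitNOFILE=81920

[Install]
WantedBy=multi-user.target
-

Run sudo systemctl daemon-reload after editing the unit. The underlying reason: /etc/security/limits.conf is applied by pam_limits at interactive login, and systemd services never go through PAM, so they only see limits set in the unit file (or systemd's own defaults).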

Regards,

On 07/07/2022 16:09, Richard Bergmann wrote:
> I have been struggling with the Too Many Open Files issue running Artemis as 
> a service within a service (if that is not confusing enough).  My 
> /etc/security/limits.conf file contains:
>
> -
> .
> .
> .
> # End of file
> * soft nofile 81920
> * hard nofile 81920
> -
>
> Originally it was set to 8192 and I was able to start the service from the 
> command line: /opt/artemis/bin/artemis-service start
>
> When I did so and ran:  lsof -p <pid> | wc -l
>
> it reported some 5K files open by the process, hence the reason it wouldn't 
> run with the default limit of 1024.
>
> I then tried to start the SystemD service: sudo systemctl start artemis
>
> using this in /etc/systemd/system/artemis.service:
>
> -
> [Unit]
> Description=ActiveMQ Artemis Service
> After=network.target
>
> [Service]
> ExecStart=/opt/artemis/bin/artemis-service start
> ExecStop=/opt/artemis/bin/artemis-service stop
> Type=forking
> User=
> Group=
>
> [Install]
> WantedBy=multi-user.target
> -
>
> and it failed with the Too Many Open Files error.  So I increased it to the 
> 81920 shown above, rebooted, and I STILL get the Too Many Open Files error.
>
> Is there something special about the way services are started such that it 
> doesn't use /etc/security/limits.conf to determine the number of open files 
> allowed for a process?
>
> Regards,
>
> Rich Bergmann



Re: [Ext] Re: [External] - Re: [Ext] Too Many Open Files error trying to start Artemis server from a SystemD service

2022-07-07 Thread Herbert . Helmstreit
On RHEL, LimitNOFILE is definitely a good idea.
I do not have a user in the systemd service, so it runs under system 
defaults; maybe that helps me out.





Von:"Richard Bergmann" 
An: "users@activemq.apache.org" 
Datum:  07.07.2022 16:52
Betreff:[Ext] Re: [External] - Antwort: [Ext] Too Many Open Files 
error trying to start Artemis server from a SystemD service



Yes, ulimit -n reports 81920.
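Note that ulimit -n in an interactive shell shows the limit PAM gave your login session, not what the systemd-started service got. A quick way to check the service itself (assuming the unit is named artemis):

systemctl show artemis --property=LimitNOFILE
cat /proc/$(systemctl show artemis --property=MainPID --value)/limits

The second command prints the limits of the actual broker process.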

From: herbert.helmstr...@systema.com 
Sent: Thursday, July 7, 2022 10:21 AM
To: users@activemq.apache.org 
Subject: [External] - Re: [Ext] Too Many Open Files error trying to 
start Artemis server from a SystemD service


Hello Richard

8k should be enough sockets. You must at least re-login or reboot to make 
this active.
Did you verify with ulimit -n ? What did you get?

BTW, After=network.target is sub-optimal; After=network-online.target is better.
Check this out: https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/
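A minimal sketch of the suggested [Unit] change (per the page above, Wants= is needed alongside After= if you want the target actually pulled in):

[Unit]
Description=ActiveMQ Artemis Service
Wants=network-online.target
After=network-online.target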

Best Regards

Herbert


Herbert Helmstreit
Dipl.-Phys.
Software Engineer

Phone: +49 941 / 7 83 92 36
Fax: +49 351 / 88 24 772

herbert.helmstr...@systema.com | www.systema.com

SYSTEMA Systementwicklung Dipl.-Inf. Manfred Austen GmbH
Schikanederstraße 2b - Posthof | 93053 Regensburg
HRB 11256 Amtsgericht Dresden | USt.-ID DE 159 607 786
Geschäftsführer: Manfred Austen, CEO und Dr. Ulf Martin, COO




Von:"Richard Bergmann" 
An:"users@activemq.apache.org" 
Datum:07.07.2022 16:10
Betreff:[Ext] Too Many Open Files error trying to start Artemis 
server from a SystemD service
____



I have been struggling with the Too Many Open Files issue running Artemis 
as a service within a service (if that is not confusing enough).  My 
/etc/security/limits.conf file contains:

-
.
.
.
# End of file
* soft nofile 81920
* hard nofile 81920
-

Originally it was set to 8192 and I was able to start the service from the 
command line: /opt/artemis/bin/artemis-service start

When I did so and ran:  lsof -p <pid> | wc -l

it reported some 5K files open by the process, hence the reason it 
wouldn't run with the default limit of 1024.

I then tried to start the SystemD service ...

Re: [External] - Re: [Ext] Too Many Open Files error trying to start Artemis server from a SystemD service

2022-07-07 Thread Richard Bergmann
Yes, ulimit -n reports 81920.

From: herbert.helmstr...@systema.com 
Sent: Thursday, July 7, 2022 10:21 AM
To: users@activemq.apache.org 
Subject: [External] - Re: [Ext] Too Many Open Files error trying to start 
Artemis server from a SystemD service


Hello Richard

8k should be enough sockets. You must at least re-login or reboot to make this 
active.
Did you verify with ulimit -n ? What did you get?

BTW, After=network.target is sub-optimal; After=network-online.target is better.
Check this out: https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/

Best Regards

Herbert


Herbert Helmstreit
Dipl.-Phys.
Software Engineer

Phone: +49 941 / 7 83 92 36
Fax: +49 351 / 88 24 772

herbert.helmstr...@systema.com | www.systema.com

SYSTEMA Systementwicklung Dipl.-Inf. Manfred Austen GmbH
Schikanederstraße 2b - Posthof | 93053 Regensburg
HRB 11256 Amtsgericht Dresden | USt.-ID DE 159 607 786
Geschäftsführer: Manfred Austen, CEO und Dr. Ulf Martin, COO




Von:"Richard Bergmann" 
An:"users@activemq.apache.org" 
Datum:07.07.2022 16:10
Betreff:[Ext] Too Many Open Files error trying to start Artemis server 
from a SystemD service
____________



I have been struggling with the Too Many Open Files issue running Artemis as a 
service within a service (if that is not confusing enough).  My 
/etc/security/limits.conf file contains:

-
.
.
.
# End of file
* soft nofile 81920
* hard nofile 81920
-

Originally it was set to 8192 and I was able to start the service from the 
command line: /opt/artemis/bin/artemis-service start

When I did so and ran:  lsof -p <pid> | wc -l

it reported some 5K files open by the process, hence the reason it wouldn't run 
with the default limit of 1024.

I then tried to start the SystemD service: sudo systemctl start artemis

using this in /etc/systemd/system/artemis.service:

-
[Unit]
Description=ActiveMQ Artemis Service
After=network.target

[Service]
ExecStart=/opt/artemis/bin/artemis-service start
ExecStop=/opt/artemis/bin/artemis-service stop
Type=forking
User=
Group=

[Install]
WantedBy=multi-user.target
-

and it failed with the Too Many Open Files error.  So I increased it to the 
81920 shown ...

Re: Too Many Open Files error trying to start Artemis server from a SystemD service

2022-07-07 Thread SL
add

LimitNOFILE=81920

to the [Service] section of your systemd unit file

Regards,

On 07/07/2022 16:09, Richard Bergmann wrote:
> I have been struggling with the Too Many Open Files issue running Artemis as 
> a service within a service (if that is not confusing enough).  My 
> /etc/security/limits.conf file contains:
>
> -
> .
> .
> .
> # End of file
> * soft nofile 81920
> * hard nofile 81920
> -
>
> Originally it was set to 8192 and I was able to start the service from the 
> command line: /opt/artemis/bin/artemis-service start
>
> When I did so and ran:  lsof -p <pid> | wc -l
>
> it reported some 5K files open by the process, hence the reason it wouldn't 
> run with the default limit of 1024.
>
> I then tried to start the SystemD service: sudo systemctl start artemis
>
> using this in /etc/systemd/system/artemis.service:
>
> -
> [Unit]
> Description=ActiveMQ Artemis Service
> After=network.target
>
> [Service]
> ExecStart=/opt/artemis/bin/artemis-service start
> ExecStop=/opt/artemis/bin/artemis-service stop
> Type=forking
> User=
> Group=
>
> [Install]
> WantedBy=multi-user.target
> -
>
> and it failed with the Too Many Open Files error.  So I increased it to the 
> 81920 shown above, rebooted, and I STILL get the Too Many Open Files error.
>
> Is there something special about the way services are started such that it 
> doesn't use /etc/security/limits.conf to determine the number of open files 
> allowed for a process?
>
> Regards,
>
> Rich Bergmann

Re: [Ext] Too Many Open Files error trying to start Artemis server from a SystemD service

2022-07-07 Thread Herbert . Helmstreit
Hello Richard

8k should be enough sockets. You must at least re-login or reboot to make 
this active.
Did you verify with ulimit -n ? What did you get?
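One way to check the limit of the running broker process itself, rather than your shell (replace <pid> with the broker's process id):

grep 'open files' /proc/<pid>/limits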

BTW, After=network.target is sub-optimal; After=network-online.target is better.
Check this out: https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/

Best Regards

Herbert




Von:"Richard Bergmann" 
An: "users@activemq.apache.org" 
Datum:  07.07.2022 16:10
Betreff:    [Ext] Too Many Open Files error trying to start Artemis 
server from a SystemD service



I have been struggling with the Too Many Open Files issue running Artemis 
as a service within a service (if that is not confusing enough).  My 
/etc/security/limits.conf file contains:

-
.
.
.
# End of file
* soft nofile 81920
* hard nofile 81920
-

Originally it was set to 8192 and I was able to start the service from the 
command line: /opt/artemis/bin/artemis-service start

When I did so and ran:  lsof -p <pid> | wc -l

it reported some 5K files open by the process, hence the reason it 
wouldn't run with the default limit of 1024.

I then tried to start the SystemD service: sudo systemctl start artemis

using this in /etc/systemd/system/artemis.service:

-
[Unit]
Description=ActiveMQ Artemis Service
After=network.target

[Service]
ExecStart=/opt/artemis/bin/artemis-service start
ExecStop=/opt/artemis/bin/artemis-service stop
Type=forking
User=
Group=

[Install]
WantedBy=multi-user.target
-

and it failed with the Too Many Open Files error.  So I increased it to 
the 81920 shown above, rebooted, and I STILL get the Too Many Open Files 
error.

Is there something special about the way services are started such that it 
doesn't use /etc/security/limits.conf to determine the number of open 
files allowed for a process?

Regards,

Rich Bergmann







Too Many Open Files error trying to start Artemis server from a SystemD service

2022-07-07 Thread Richard Bergmann
I have been struggling with the Too Many Open Files issue running Artemis as a 
service within a service (if that is not confusing enough).  My 
/etc/security/limits.conf file contains:

-
.
.
.
# End of file
* soft nofile 81920
* hard nofile 81920
-

Originally it was set to 8192 and I was able to start the service from the 
command line: /opt/artemis/bin/artemis-service start

When I did so and ran:  lsof -p <pid> | wc -l

it reported some 5K files open by the process, hence the reason it wouldn't run 
with the default limit of 1024.

I then tried to start the SystemD service: sudo systemctl start artemis

using this in /etc/systemd/system/artemis.service:

-
[Unit]
Description=ActiveMQ Artemis Service
After=network.target

[Service]
ExecStart=/opt/artemis/bin/artemis-service start
ExecStop=/opt/artemis/bin/artemis-service stop
Type=forking
User=
Group=

[Install]
WantedBy=multi-user.target
-

and it failed with the Too Many Open Files error.  So I increased it to the 
81920 shown above, rebooted, and I STILL get the Too Many Open Files error.

Is there something special about the way services are started such that it 
doesn't use /etc/security/limits.conf to determine the number of open files 
allowed for a process?

Regards,

Rich Bergmann



Re: Too many open files

2014-01-16 Thread Michael Priess
Hi,

Correct me if I'm wrong, but if I listen to a topic the connection must
always be open? To my thinking a PooledConnectionFactory only makes sense if
I'm sending a lot of messages to the broker.
BTW, does anyone know the exact number of file handles that are opened per
connection on the server?


2014/1/15 Jose María Zaragoza demablo...@gmail.com

 2014/1/13 Michael Priess digitalover...@googlemail.com:
 
  Is there a way to find out which client opens so many connections? Is there
  a reason why ActiveMQ doesn't close these connections?
  Any idea how to debug such an issue?


 Why don't you use PooledConnectionFactory?

 Regards



Re: Too many open files

2014-01-16 Thread Arjen van der Meijden
I think he was referring to your clients, which could be adjusted to pool 
a small number of connections rather than opening many.


You obviously can't control this from the server side. Still, it's very 
uncommon to really need 1000 connections at the same time; it generally 
suggests your clients open way too many connections. Or they're opened 
and closed very fast, so the server can't keep up in its default 
configuration and has many connections waiting to be closed.


For each connection to your server there is a single file descriptor. You 
can, amongst others, use 'lsof' to see how many your java process holds 
(see man lsof for usage examples).

You can also see connections in CLOSE_WAIT using lsof.
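For example (typical invocations; replace <pid> with the broker's process id):

lsof -p <pid> | wc -l                     # total descriptors held by the broker
lsof -p <pid> | grep CLOSE_WAIT | wc -l   # sockets not yet closed by the broker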

If you really have that many clients, there isn't much you can do.
You're limited to adjusting the server in such a way that it can hold more 
file descriptors (i.e. use ulimit) or changing your server architecture to 
have more servers (with that many clients, some redundancy is a good 
thing anyway).


Best regards,

Arjen

On 16-1-2014 9:54 Michael Priess wrote:

Hi,

Correct me if I'm wrong, but if I listen to a topic the connection must
always be open? To my thinking a PooledConnectionFactory only makes sense if
I'm sending a lot of messages to the broker.
BTW, does anyone know the exact number of file handles that are opened per
connection on the server?


2014/1/15 Jose María Zaragoza demablo...@gmail.com


2014/1/13 Michael Priess digitalover...@googlemail.com:


Is there a way to find out which client opens so many connections? Is there
a reason why ActiveMQ doesn't close these connections?
Any idea how to debug such an issue?



Why don't you use PooledConnectionFactory?

Regards





Re: Too many open files

2014-01-16 Thread Jose María Zaragoza
2014/1/16 Arjen van der Meijden acmmail...@tweakers.net:
 I think he was referring to your clients which could be adjusted to pool a
 small amount of connections, rather than opening many.
 You obviously can't control this from the server-side.

Well, I don't know much about Apache Camel, but per
http://camel.apache.org/activemq.html

you can use PooledConnectionFactory


Regards


Re: Too many open files

2014-01-16 Thread Torsten Mielke
In general the use of a pooling-aware JMS ConnectionFactory such as ActiveMQ's 
PooledConnectionFactory is *always* recommended, particularly in Camel, which 
uses Spring JMS underneath. 
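For readers who haven't wired one up before, a minimal sketch in plain Java (the broker URL and pool size are illustrative, not from this thread; Camel/Spring users would hand the pooled factory to their JMS component instead of the raw one):

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PooledFactoryExample {
    public static void main(String[] args) throws JMSException {
        // The raw factory opens a new TCP socket per connection.
        ActiveMQConnectionFactory amq =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // The pool caps and reuses those sockets.
        PooledConnectionFactory pooled = new PooledConnectionFactory();
        pooled.setConnectionFactory(amq);
        pooled.setMaxConnections(8); // hard cap on sockets this client opens

        Connection c = pooled.createConnection(); // borrowed from the pool
        c.start();
        // ... create sessions, produce/consume ...
        c.close();     // returns the connection to the pool; socket stays open
        pooled.stop(); // on application shutdown, closes the pooled sockets
    }
}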


Regards,

Torsten Mielke
tmielke.blogspot.com


On 16 Jan 2014, at 10:54 am, Jose María Zaragoza demablo...@gmail.com wrote:

 2014/1/16 Arjen van der Meijden acmmail...@tweakers.net:
 I think he was referring to your clients, which could be adjusted to pool a
 small number of connections rather than opening many.
 You obviously can't control this from the server side.
 
 Well, I don't know much about Apache Camel, but per
 http://camel.apache.org/activemq.html
 
 you can use PooledConnectionFactory
 
 
 Regards





Re: Too many open files

2014-01-15 Thread Jose María Zaragoza
2014/1/13 Michael Priess digitalover...@googlemail.com:

 Is there a way to find out which client opens so many connections? Is there
 a reason why ActiveMQ doesn't close these connections?
 Any idea how to debug such an issue?


Why don't you use PooledConnectionFactory?

Regards


Too many open files

2014-01-13 Thread Michael Priess
Hi,

I actually use Apache Camel 2.10.1 in combination with ActiveMQ 5.7.0.

To build a connection to ActiveMQ I use the ActiveMQ ConnectionFactory.
After a while I get this exception:

2013-12-21 04:16:51,442 | ERROR | Could not accept connection :
java.net.SocketException: Too many open files |
org.apache.activemq.broker.TransportConnector | ActiveMQ Transport Server:
tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600

Is there a way to find out which client opens so many connections? Is there
a reason why ActiveMQ doesn't close these connections?
Any idea how to debug such an issue?

Regards,

Michael


Re: Too many open files

2014-01-13 Thread Torsten Mielke
Hi,

You can use JMX tools like JConsole to connect to the broker and check the 
connections and their IP addresses.
I suggest you first find out to which client process these many connections 
belong and then look at the client code/configuration to find out why 
connections are not closed. 
I am not aware of any bugs in this area, so it's likely a user / configuration 
problem.


Regards,

Torsten Mielke
tmielke.blogspot.com


On 13 Jan 2014, at 10:09 am, Michael Priess digitalover...@googlemail.com 
wrote:

 Hi,
 
 I actually use Apache Camel 2.10.1 in combination with ActiveMQ 5.7.0.
 
 To build a connection to ActiveMQ I use the ActiveMQ ConnectionFactory.
 After a while I get this exception:
 
 2013-12-21 04:16:51,442 | ERROR | Could not accept connection :
 java.net.SocketException: Too many open files |
 org.apache.activemq.broker.TransportConnector | ActiveMQ Transport Server:
 tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600
 
 Is there a way to find out which client opens so many connections? Is there
 a reason why ActiveMQ doesn't close these connections?
 Any idea how to debug such an issue?
 
 Regards,
 
 Michael






Re: Too many open files

2014-01-13 Thread Arjen van der Meijden
By default, ActiveMQ closes connections asynchronously. That is normally 
not a problem, unless you open and close many connections (in our case, 
hundreds per minute).
In that case, the asynchronous closing may not be able to keep up with 
your opening/closing-rate and you start accumulating file descriptors.


There is a transport-level configuration switch that you can set in your 
activemq.xml to switch that to synchronous.

We have it for stomp:

<transportConnector name="stomp"
    uri="stomp://0.0.0.0:61613?transport.closeAsync=false" />


But as said, this should not happen when you have just a low open/close 
rate.


By the way, if you allow 1000 connections at maximum and expect to get 
close to that number... you'll need to raise the limit on most 
Linux distros. The default tends to be 1024 (but that's not enough for 
1000 connections, since you need to account for file descriptors for 
libraries, storage files, etc).


We have added this to our /etc/init.d/activemq-script
ulimit -n 102400

Best regards,

Arjen

On 13-1-2014 10:09, Michael Priess wrote:

Hi,

hi I actually use Apache Camel 2.10.1 in combination with ActiveMQ 5.7.0.

To build a connection to ActiveMQ I use the ActiveMQ ConnectionFactory.
After a while I get this exception:

2013-12-21 04:16:51,442 | ERROR | Could not accept connection :
java.net.SocketException: Too many open files |
org.apache.activemq.broker.TransportConnector | ActiveMQ Transport Server:
tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600

Is there a way to find out which client opens so many connections? Is there
a reason why ActiveMQ doesn't close these connections?
Any idea how to debug such an issue?

Regards,

Michael



Re: Too many open files

2014-01-13 Thread artnaseef
The 1000 maximum connections is an insane number for a single connection
pool.  I'm hard-pressed to imagine a scenario in which such a large number
of connections can improve performance.

Why the large number?





Re: Broker leaks FDs - Too many open files

2013-08-21 Thread Pietro Romanazzi

In /etc/security/limits.conf, set nofile.

regards,


-Original Message- 
From: Jerry Cwiklik

Sent: Tuesday, August 20, 2013 6:04 PM
To: users@activemq.apache.org
Subject: Re: Broker leaks FDs - Too many open files

Thanks, Paul. We are running on Linux (SLES). All clients use openwire. The
broker is configured
with producerFlowControl=false, optimizedDispatch=true for both queues
and topics.

The openwire connector is configured with transport.soWriteTimeout=45000. We
don't use persistence for messaging. The broker's JVM is given 8 GB.

JC






Re: Broker leaks FDs - Too many open files

2013-08-20 Thread Jerry Cwiklik
Thanks, Paul. We are running on Linux (SLES). All clients use openwire. The
broker is configured 
with producerFlowControl=false, optimizedDispatch=true for both queues
and topics.

The openwire connector is configured with transport.soWriteTimeout=45000. We
don't use persistence for messaging. The broker's JVM is given 8 GB. 

JC





Re: Broker leaks FDs - Too many open files

2013-08-20 Thread DV
We've had similar issue in the past, which went away with the following
changes:

* adding closeAsync=false to transportConnectors
* using nio instead of tcp in transportConnectors
* setting ulimits to unlimited for activemq user
* fine-tuning kahaDB settings (enableIndexWriteAsync=true
enableJournalDiskSyncs=false journalMaxFileLength=256mb)
* fine-tuning topic and queue settings (destinationPolicy)
* enabling producer flow control
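A sketch of what the transport and kahaDB items above might look like in activemq.xml (values copied from the list; treat this as a starting point to test, not a recommendation):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker">
    <persistenceAdapter>
        <kahaDB directory="${activemq.data}/kahadb"
                enableIndexWriteAsync="true"
                enableJournalDiskSyncs="false"
                journalMaxFileLength="256mb"/>
    </persistenceAdapter>
    <transportConnectors>
        <!-- nio instead of tcp, with synchronous socket close -->
        <transportConnector name="nio"
            uri="nio://0.0.0.0:61616?transport.closeAsync=false"/>
    </transportConnectors>
</broker>

Note that enableJournalDiskSyncs=false trades durability for throughput.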

However, all this fine-tuning can only do so much, so ultimately we had to:

* reduce broker usage by splitting datasets onto multiple brokers
* optimize consumers to reduce the length of time a message spends on the
broker

The fewer messages the broker has to hold on to, the less likely you'll run into
some sort of limit.


On Tue, Aug 20, 2013 at 9:04 AM, Jerry Cwiklik cwik...@us.ibm.com wrote:

 Thanks, Paul. We are running on Linux (SLES). All clients use openwire. The
 broker is configured
 with producerFlowControl=false, optimizedDispatch=true for both queues
 and topics.

 The openwire connector is configured with transport.soWriteTimeout=45000. We
 don't use persistence for messaging. The broker's JVM is given 8 GB.

 JC







-- 
Best regards, Dmitriy V.


Broker leaks FDs - Too many open files

2013-08-19 Thread Jerry Cwiklik
Our production broker (v.5.6.0) keeps dying while in heavy use. The broker
log is filled with:

2013-07-28 00:04:08,264 [teTimeout=45000] ERROR TransportConnector
- Could not accept connection : java.net.SocketException: Too many open
files

2013-07-28 00:04:08,264 [teTimeout=45000] ERROR TransportConnector
- Could not accept connection : java.net.SocketException: Too many open
files

2013-07-28 00:04:08,264 [teTimeout=45000] ERROR TransportConnector
- Could not accept connection : java.net.SocketException: Too many open
files
 
This is logged at such a rapid rate that the logs roll and hide the initial
error/warning. We capture the open fd count of the broker's process and notice
that when the broker starts to croak the open fd count just explodes. Here is
part of the open fd log. The first column shows the broker's open fds and each
line is logged every 60 secs.

   1284   12569  160194   -- normal count
   1294   12669  161438
   1305   12779  162812
   1318   12909  164426
   1328   13009  165658   - FD explosion
   1393   13659  173816
   1528   15009  190748
   1611   15839  201152
   1701   16739  212419
   1951   19239  243520
   2310   22830  290374
   2667   26399  332362
   3013   29859  375262
   3369   33422  422638
   3729   37019  464017
   4111   40841  515342
   4484   44570  561933
   4870   48432  609992
   5249   52219  652157
   5634   56071  705356
   6019   59919  747457
   6484   64571  811476
   6892   68652  862375
   7307   72802  914122
   7727   77002  966555
   8129   81022 1016717
   8336   83090 1042601
   8336   83090 1042584
   8336   83090 1042583
 
It normally shows ~1300 fds and this is more or less constant over time, but
eventually it rapidly increases to 8336 and the broker becomes unusable. The
ulimit is set to 4094. netstat shows a ton of sockets in CLOSE_WAIT,
suggesting that the broker is not closing its side of a socket.

I found a related open issue:
https://issues.apache.org/jira/browse/AMQ-4531?page=com.atlassian.jirafisheyeplugin:fisheye-issuepanel

This Jira states that the problem surfaces in 5.8.0 and when
maximumConnections is set. We don't use this setting and we run with an older
version of AMQ. Any ideas how to deal with this? Would closeAsync=false have
any effect? 

JC






Re: Broker leaks FDs - Too many open files

2013-08-19 Thread Christian Posta
Yah, give that a try (as seen here
https://issues.apache.org/jira/browse/AMQ-1739).
Could also have a look at
http://activemq.apache.org/maven/apidocs/org/apache/activemq/transport/WriteTimeoutFilter.html
per this jira
https://issues.apache.org/jira/browse/AMQ-1993


On Mon, Aug 19, 2013 at 12:51 PM, Jerry Cwiklik cwik...@us.ibm.com wrote:

 Our production broker (v.5.6.0) keeps dying while in heavy use. The broker
 log is filled with:

 2013-07-28 00:04:08,264 [teTimeout=45000] ERROR TransportConnector
 - Could not accept connection : java.net.SocketException: Too many open
 files

 2013-07-28 00:04:08,264 [teTimeout=45000] ERROR TransportConnector
 - Could not accept connection : java.net.SocketException: Too many open
 files

 2013-07-28 00:04:08,264 [teTimeout=45000] ERROR TransportConnector
 - Could not accept connection : java.net.SocketException: Too many open
 files

 This is logged at such a rapid rate that the logs roll and hide the initial
 error/warning. We capture the open fd count of the broker's process and notice
 that when the broker starts to croak the open fd count just explodes. Here is
 part of the open fd log. The first column shows the broker's open fds and each
 line is logged every 60 secs.

1284   12569  160194   -- normal count
1294   12669  161438
1305   12779  162812
1318   12909  164426
1328   13009  165658   - FD explosion
1393   13659  173816
1528   15009  190748
1611   15839  201152
1701   16739  212419
1951   19239  243520
2310   22830  290374
2667   26399  332362
3013   29859  375262
3369   33422  422638
3729   37019  464017
4111   40841  515342
4484   44570  561933
4870   48432  609992
5249   52219  652157
5634   56071  705356
6019   59919  747457
6484   64571  811476
6892   68652  862375
7307   72802  914122
7727   77002  966555
8129   81022 1016717
8336   83090 1042601
8336   83090 1042584
8336   83090 1042583

 It normally shows ~1300 fds and this is more or less constant over time, but
 eventually it rapidly increases to 8336 and the broker becomes unusable.
 The
 ulimit is set to 4094. netstat shows a ton of sockets in CLOSE_WAIT,
 suggesting that the broker is not closing its side of a socket.

 I found a related open issue:

 https://issues.apache.org/jira/browse/AMQ-4531?page=com.atlassian.jirafisheyeplugin:fisheye-issuepanel

 This Jira states that the problem surfaces in 5.8.0 and when
 maximumConnections is set. We don't use this setting and we run with an
 older
 version of AMQ. Any ideas how to deal with this? Would closeAsync=false
 have
 any effect?

 JC








-- 
*Christian Posta*
http://www.christianposta.com/blog
twitter: @christianposta


Re: Broker leaks FDs - Too many open files

2013-08-19 Thread Jerry Cwiklik
Christian, thanks. More questions:

What would be your theory why we see the explosion of open FDs? This is
triggered by some event. Any clues as to what that might be? 

Also, isn't it a bug that the broker just goes berserk logging the same thing
over and over? Our broker's logs are filled with the error below. I suspect
that perhaps Camel or Spring, which are used on the client side, are in some
recovery mode trying to establish a connection to the broker which keeps
failing. 

2013-07-28 00:04:08,264 [teTimeout=45000] ERROR TransportConnector
- Could not accept connection : java.net.SocketException: Too many open
files
 ..

What are the consequences of using closeAsync=false? 

JC





Re: Broker leaks FDs - Too many open files

2013-08-19 Thread Paul Gale
On Mon, Aug 19, 2013 at 4:57 PM, Jerry Cwiklik cwik...@us.ibm.com wrote:

 What are the consequences of using closeAsync=false?



Setting async to false means that the socket close call is blocking and is
not handled in a separate thread.

This is preferable and common in web applications where STOMP clients, say,
will open a connection in the context of a web request, send a message,
then close the connection. Depending on the amount of web requests being
handled this translates into a lot of connect/disconnect traffic for the
broker.

It can take up to 30 seconds (on Linux systems, unless otherwise
configured) after a socket has been 'closed' for its descriptor to be made
available for a new socket. If the close were made asynchronous then, as
you've seen, socket open requests are being made at a faster rate than
closed sockets can be recycled. Making the close operation synchronous
forces the client to block until it completes, thus controlling the number
of open requests and keeping the descriptor count manageable.

What OS are you running your broker on? Please give more detail about your
clients, e.g., are they STOMP based etc? If they're STOMP based you might
want to consider configuring the STOMP transport connector to run over
NIO for greater efficiency.

Thanks,
Paul
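For reference, the NIO variant Paul mentions is a one-line change to the connector URI in activemq.xml (a sketch; the port and the closeAsync option are illustrative):

<transportConnector name="stomp+nio"
    uri="stomp+nio://0.0.0.0:61613?transport.closeAsync=false"/>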


Re: java.io.IOException: Too many open files (v5.4.2)

2012-05-24 Thread Gary Tully
yes, closeAsync=false can help reduce the number of open sockets if
the broker is busy at the cost of delaying the close response to the
client, but you need to really ask your computer that question; in
other words, do an empirical test.

On 23 May 2012 21:57, johneboyer johnboye...@gmail.com wrote:
 Thank you Gary very much for your suggestions!

 Incidentally, we do have a lot of short lived STOMP producers. Should we
 also set transport.closeAsync=false as suggested by Arjen earlier?

 Regards,

 John Boyer

 --
 View this message in context: 
 http://activemq.2283324.n4.nabble.com/java-io-IOException-Too-many-open-files-v5-4-2-tp4643701p4652133.html
 Sent from the ActiveMQ - User mailing list archive at Nabble.com.



-- 
http://fusesource.com
http://blog.garytully.com


Re: java.io.IOException: Too many open files (v5.4.2)

2012-05-24 Thread Arjen van der Meijden

Are you sure the client even notices this?

From our experience, I'm fairly certain that only the server side of 
the connections was still open when we ran into this IOException.


I.e. isn't the correctly and completely closed (in Stomp-communication 
terms from the client perspective) connection put into a queue to later 
completely remove all connection information, close the server side of 
the socket, etc?


Actually, if there were a last communication towards the client, I'd 
expect the non-asynchronous method of closing to be quicker per single 
client rather than adding a delay, especially under load. Queueing for 
asynchronous processing normally suggests some form of unknown and 
highly variable latency.
I can see two advantages of asynchronous closing. Firstly, it'll reduce 
time a thread spends on a single connection by pushing part of that work 
into a backend processing thread. Which in the end should result in a 
reduced amount of threads being active at any time.
And secondly it can group several close/cleanup-operations in one go and 
thus reduce the overhead per cleanup.


But reduced delay for the close-confirmation towards a client wouldn't 
be anything I'd expect from a asynchronous background operation.


So while synchronous closing may increase the load for the broker by 
pushing this work into the communication threads and increasing the 
relative overhead for each cleanup/close, it shouldn't really be a 
disadvantage to the client side. Or am I missing a step?


Best regards,

Arjen

On 24-5-2012 12:01 Gary Tully wrote:

yes, closeAsync=false can help reduce the number of open sockets if
the broker is busy at the cost of delaying the close response to the
client, but you need to really ask your computer that question; in
other words, do an empirical test.

On 23 May 2012 21:57, johneboyerjohnboye...@gmail.com  wrote:

Thank you Gary very much for your suggestions!

Incidentally, we do have a lot of short lived STOMP producers. Should we
also set transport.closeAsync=false as suggested by Arjen earlier?

Regards,

John Boyer

--
View this message in context: 
http://activemq.2283324.n4.nabble.com/java-io-IOException-Too-many-open-files-v5-4-2-tp4643701p4652133.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.






Re: java.io.IOException: Too many open files (v5.4.2)

2012-05-24 Thread Gary Tully
Sorry, my bad. You are correct. It is just the server side 'socket'
close that is async, not the tear-down of broker state relating to a
connection, so there is no client impact.

Async close really just means that there is a thread pool handling
close rather than the transport thread, so the transport thread can
get on with being reused if socket.close blocks for a bit.

On 24 May 2012 14:12, Arjen van der Meijden acmmail...@tweakers.net wrote:
 Are you sure the client even notices this?

  From our experience, I'm fairly certain that only the server side of the
  connections was still open when we ran into this IOException.

 I.e. isn't the correctly and completely closed (in Stomp-communication terms
 from the client perspective) connection put into a queue to later completely
 remove all connection information, close the server side of the socket, etc?

  Actually, if there were a last communication towards the client, I'd
  expect the non-asynchronous method of closing to be quicker per single
  client rather than adding a delay, especially under load. Queueing for
  asynchronous processing normally suggests some form of unknown and highly
  variable latency.
 I can see two advantages of asynchronous closing. Firstly, it'll reduce time
 a thread spends on a single connection by pushing part of that work into a
 backend processing thread. Which in the end should result in a reduced
 amount of threads being active at any time.
 And secondly it can group several close/cleanup-operations in one go and
 thus reduce the overhead per cleanup.

 But reduced delay for the close-confirmation towards a client wouldn't be
 anything I'd expect from a asynchronous background operation.

  So while synchronous closing may increase the load for the broker by pushing
  this work into the communication threads and increasing the relative
  overhead for each cleanup/close, it shouldn't really be a disadvantage to
  the client side. Or am I missing a step?

 Best regards,

 Arjen


 On 24-5-2012 12:01 Gary Tully wrote:

 yes, closeAsync=false can help reduce the number of open sockets if
 the broker is busy at the cost of delaying the close response to the
 client, but you need to really ask your computer that question; in
 other words, do an empirical test.

 On 23 May 2012 21:57, johneboyerjohnboye...@gmail.com  wrote:

 Thank you Gary very much for your suggestions!

  Incidentally, we do have a lot of short lived STOMP producers. Should we
  also set transport.closeAsync=false as suggested by Arjen earlier?

 Regards,

 John Boyer









-- 
http://fusesource.com
http://blog.garytully.com


Re: java.io.IOException: Too many open files (v5.4.2)

2012-05-23 Thread johneboyer
Thank you Gary very much for your suggestions!

Incidentally, we do have a lot of short lived STOMP producers. Should we
also set transport.closeAsync=false as suggested by Arjen earlier?

Regards,

John Boyer



Re: java.io.IOException: Too many open files (v5.4.2)

2012-05-21 Thread mickhayes
I would just point out that if you have thousands of threads, you will hit
another limit due to the stack space per thread (native memory).

If a broker is running in a 32-bit JVM (max 2^32 ~ 4GB addressable), then
increasing the max heap space (as mentioned in the original post) reduces the
portion of 4GB that you have left for the stack-per-thread. 

So - all other things being equal - increasing the max heap size means that
you can handle *fewer threads* in that JVM.
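A rough back-of-the-envelope illustration (assuming a 512 KB stack per thread, which varies by JVM, OS and -Xss): with roughly 3 GB of usable address space in a 32-bit process, a 1 GB heap leaves about 2 GB for everything else, i.e. at most ~4000 thread stacks (2 GB / 512 KB); raise the heap to 2 GB and the ceiling drops to ~2000 threads, before even accounting for code, JNI and native allocations.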





-
Michael Hayes B.Sc. (NUI), M.Sc. (DCU), SCSA SCNA 



Re: java.io.IOException: Too many open files (v5.4.2)

2012-05-21 Thread Gary Tully
If you have badly behaved stomp clients that don't close their
connections, you will need to introduce tcp-level keepalive
to have the connections time out in a reasonable manner.
Use:

<transportConnectors>
    <transportConnector name="stomp"
        uri="stomp://0.0.0.0:61613?transport.keepAlive=true"/>
</transportConnectors>

and configure tcp keepalive for your OS. Some references at
http://www.gnugk.org/keepalive.html
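The OS side is a handful of sysctls on Linux; the defaults are quite long (two hours before the first probe), so they are usually shortened:

sysctl net.ipv4.tcp_keepalive_time    # default 7200 seconds
sysctl net.ipv4.tcp_keepalive_intvl   # default 75 seconds
sysctl net.ipv4.tcp_keepalive_probes  # default 9 probes
# e.g. start probing idle connections after 5 minutes instead:
sudo sysctl -w net.ipv4.tcp_keepalive_time=300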

On 18 May 2012 18:20, johneboyer johnboye...@gmail.com wrote:
 Yes. After further research yesterday, we did increase the number of open
 files parameter to 10240 (see
 http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/). We
 have hundreds of open STOMP
 connections and expect this number to continue to grow in the coming months.

 So, I guess the question we have now is how to optimize this setting in the
 future. I realize this question is out-of-scope for this forum, but it
 would be nice if the ActiveMQ team could provide some best practices in this
 area when a broker has hundreds or even thousands of open connections.

 Thanks,
 John Boyer




-- 
http://fusesource.com
http://blog.garytully.com


Re: java.io.IOException: Too many open files (v5.4.2)

2012-05-19 Thread Arjen van der Meijden
If you have short lived connections: try adding 
'?transport.closeAsync=false' to your connection construct, i.e. 
something like this:

<transportConnector name="stomp"
    uri="stomp://0.0.0.0:61613?transport.closeAsync=false" />


We have many very short lived stomp producers, and due to the 
asynchronous disconnect code it would take a while before the 
connections were actually cleaned up on the server side. And that would 
leave the 'files' (tcp sockets) open longer than necessary, thus causing 
us to hit the ulimit.


You probably still need the 10240 limit, but with that parameter you 
should be safe with short lived connections like in our scenario.


If you just have long lived connections, that's obviously not going to 
help. In that case you just need to keep increasing the setting (or 
adding ActiveMQ instances in a network).

And then your question remains.

Best regards,

Arjen

On 18-5-2012 19:20 johneboyer wrote:

Yes. After further research yesterday, we did increase the number of open
files parameter to 10240 (see
http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/). We
have hundreds of open STOMP
connections and expect this number to continue to grow in the coming months.

So, I guess the question we have now is how to optimize this setting in the
future. I realize this question is out-of-scope for this forum, but it
would be nice if the ActiveMQ team could provide some best practices in this
area when a broker has hundreds or even thousands of open connections.

Thanks,
John Boyer



Re: java.io.IOException: Too many open files (v5.4.2)

2012-05-18 Thread metatech

> java.io.IOException: Too many open files error

Hi,

Have you tried adding "ulimit -n 2048" in the ~/.profile of the user
running ActiveMQ?
Can you show the output of "ulimit -a"?

metatech



Re: java.io.IOException: Too many open files (v5.4.2)

2012-05-18 Thread johneboyer
Yes. After further research yesterday, we did increase the number of open
files parameter to 10240 (see
http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/). We
have hundreds of open STOMP
connections and expect this number to continue to grow in the coming months. 

So, I guess the question we have now is how to optimize this setting in the
future. I realize this question is out-of-scope for this forum, but it
would be nice if the ActiveMQ team could provide some best practices in this
area when a broker has hundreds or even thousands of open connections.

Thanks,
John Boyer


Too many open files

2011-01-05 Thread Don Santillan

Hello,

I have an embedded activemq in my webapp and it is deployed in a jetty 
server. I believe that our server OS's file descriptor limit is set high 
enough.


After a few days, the application crashed with the following error:

2011-01-05 00:34:40.000:WARN::EXCEPTION
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
    at org.eclipse.jetty.server.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:74)
    at org.eclipse.jetty.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:650)
    at org.eclipse.jetty.io.nio.SelectorManager.doSelect(SelectorManager.java:195)
    at org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:134)
    at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:850)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:436)
    at java.lang.Thread.run(Thread.java:6


Is there a possibility that I have configured my activemq or persistence 
wrongly which caused this? Here's my activemq.xml:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:amq="http://activemq.apache.org/schema/core"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
        http://activemq.apache.org/schema/core
        http://activemq.apache.org/schema/core/activemq-core.xsd">

    <broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="devbroker" persistent="true"
        destroyApplicationContextOnStop="true"
        dataDirectory="/tmp/dev">

        <transportConnectors>
            <transportConnector name="openwire" uri="vm://devBroker" />
        </transportConnectors>

        <plugins>
            <statisticsBrokerPlugin />
        </plugins>

    </broker>

</beans>


I googled this and saw some posts saying this may be caused by an 
improper memory limit setting. I also saw some threads that say to use 
KahaDB for better persistence. How can I address both of these? 
Can I configure both using activemq.xml alone? What are the 
appropriate values for an application that processes a large amount of 
messages?


Can anyone point me to an activemq.xml configuration that is good 
for a production environment?



Thanks!
-don


Re: Too many open files

2011-01-05 Thread Juan Nin
We used to have this issue with ActiveMQ 5.2.x, so we had to raise the
file descriptor limit.
Since we moved to ActiveMQ 5.3.x and KahaDB, we never had that issue again.

Regards,

Juan


On Wed, Jan 5, 2011 at 12:01 PM, Don Santillan donzym...@gmail.com wrote:
 Hello,

 I have an embedded activemq in my webapp and it is deployed in a jetty
 server. I believe that our server OS's file descriptor limit is set high
 enough.

 After a few days, the application crashed with the following error:

 2011-01-05 00:34:40.000:WARN::EXCEPTION
 java.io.IOException: Too many open files
     at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
     at org.eclipse.jetty.server.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:74)
     at org.eclipse.jetty.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:650)
     at org.eclipse.jetty.io.nio.SelectorManager.doSelect(SelectorManager.java:195)
     at org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:134)
     at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:850)
     at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:436)
     at java.lang.Thread.run(Thread.java:6


 Is there a possibility that I have configured my activemq or persistence
 wrongly which caused this? Here's my activemq.xml:

 <beans xmlns="http://www.springframework.org/schema/beans"
     xmlns:amq="http://activemq.apache.org/schema/core"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
         http://activemq.apache.org/schema/core
         http://activemq.apache.org/schema/core/activemq-core.xsd">

     <broker xmlns="http://activemq.apache.org/schema/core"
         brokerName="devbroker" persistent="true"
         destroyApplicationContextOnStop="true"
         dataDirectory="/tmp/dev">

         <transportConnectors>
             <transportConnector name="openwire" uri="vm://devBroker" />
         </transportConnectors>

         <plugins>
             <statisticsBrokerPlugin />
         </plugins>

     </broker>

 </beans>


 I googled this and saw some posts saying this may be caused by an improper
 memory limit setting. I also saw some threads that say to use KahaDB for
 better persistence. How can I address both of these? Can I configure
 both using activemq.xml alone? What are the appropriate values for an
 application that processes a large amount of messages?

 Can anyone point me to an activemq.xml configuration that is good for
 a production environment?


 Thanks!
 -don



Re: Too many open files

2011-01-05 Thread Don Santillan

Thanks for the reply Juan!

Can you give me a sample activemq.xml that enables KahaDB with 
production level settings?


-don


Juan Nin wrote:

We used to have this issue with ActiveMQ 5.2.x, so we had to raise the
file descriptor limit.
Since we moved to ActiveMQ 5.3.x and KahaDB, we never had that issue again.

Regards,

Juan


On Wed, Jan 5, 2011 at 12:01 PM, Don Santillan donzym...@gmail.com wrote:
  

Hello,

I have an embedded activemq in my webapp and it is deployed in a jetty
server. I believe that our server OS's file descriptor limit is set high
enough.

After a few days, the application crashed with the following error:

2011-01-05 00:34:40.000:WARN::EXCEPTION
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
    at org.eclipse.jetty.server.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:74)
    at org.eclipse.jetty.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:650)
    at org.eclipse.jetty.io.nio.SelectorManager.doSelect(SelectorManager.java:195)
    at org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:134)
    at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:850)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:436)
    at java.lang.Thread.run(Thread.java:6


Is there a possibility that I have configured my activemq or persistence
wrongly which caused this? Here's my activemq.xml:
beans xmlns=http://www.springframework.org/schema/beans;
  xmlns:amq=http://activemq.apache.org/schema/core;
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
  xsi:schemaLocation=http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
 http://activemq.apache.org/schema/core
http://activemq.apache.org/schema/core/activemq-core.xsd;

  broker xmlns=http://activemq.apache.org/schema/core;
  brokerName=devbroker persistent=true
  destroyApplicationContextOnStop=true
  dataDirectory=/tmp/dev
transportConnectors
  transportConnector name=openwire
  uri=vm://devBroker /
  /transportConnectors

  plugins
  statisticsBrokerPlugin /
  /plugins

  /broker

/beans


I googled this and saw some posts telling this maybe caused by an improper
setting on memory limit. I also saw some threads that says use kaha db for
better persistence. How am I able to address both of this? Can I configure
both using the activemq.xml alone? What are the appropriate values for an
application that processes large amount of messages.

Can anyone point me to a good activemq.xml configuration that is good for
production environment?


Thanks!
-don




  


Re: Too many open files

2011-01-05 Thread Juan Nin
I'm basically using the default config, with the things I don't use
disabled, and several values tweaked based on the sample config files
that come with ActiveMQ.
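
Something along these lines is enough to switch the store (a minimal
sketch, not a tuned production config; the directory is just an example):

<broker xmlns="http://activemq.apache.org/schema/core"
    brokerName="broker" dataDirectory="${activemq.base}/data">

  <persistenceAdapter>
    <!-- KahaDB journal location; one store per broker -->
    <kahaDB directory="${activemq.base}/data/kahadb"/>
  </persistenceAdapter>

</broker>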


On Wed, Jan 5, 2011 at 3:24 PM, Don Santillan donzym...@gmail.com wrote:
 Thanks for the reply Juan!

 Can you give me a sample activemq.xml that enables KahaDB with production
 level settings?

 -don

 [snip]


Re: too many open files error with 5.3 and Stomp

2009-10-30 Thread alex.hollerith

config:
<policyEntry queue=">" producerFlowControl="true" memoryLimit="5mb"/>

setup:
1 perl stomp producer producing into a queue,
  connecting and disconnecting on every post,
  rather low frequency of posts (0.5/minute)
0 consumers

behaviour:
works OK until around 68 messages are in the queue (surely depends on the
size of the messages)

after that you get this in the log:
2009-10-29 20:32:05,189 | INFO  | Usage Manager memory limit reached on
queue://test.soccerfeeds.queue. Producers will be throttled to the rate at
which messages are removed ...

And while the activemq service is in that "throttling producers" state you
will see CLOSE_WAIT sockets building up:
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:41519    CLOSE_WAIT
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:36141    CLOSE_WAIT
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:45840    CLOSE_WAIT
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:43793    CLOSE_WAIT
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:40212    CLOSE_WAIT
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:44060    CLOSE_WAIT
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:43776    CLOSE_WAIT
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:44032    CLOSE_WAIT
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:43781    CLOSE_WAIT
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:40200    CLOSE_WAIT
tcp        0      0 :::10.60.1.51:61613    :::10.60.1.206:44045    CLOSE_WAIT

You can watch the numbers grow with:
watch --interval=5 'netstat -an |grep tcp |grep 61613 | grep CLOSE_WAIT |wc
-l'

Every post increases the number of CLOSE_WAIT sockets by 1. And the sockets
will not go away, the number seems to be steadily growing, we watched it for
around 17 hours.

Now just consume one single message (we did this via the admin webinterface)
and the number of sockets in CLOSE_WAIT drops to 0 instantly.

[r...@bladedist01 activemq]# netstat -an |grep tcp |grep 61613
tcp        0      0 :::61613               :::*                    LISTEN

Our theory is that activemq somehow manages to build up sockets in
CLOSE_WAIT state while it is in "throttling producers" mode on a given
queue until, eventually, the system runs out of resources (file descriptors
in this case).
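
If you want to stop hitting the throttling in the first place, a
workaround sketch (values are examples, not recommendations) is to raise
the per-destination limit, or the broker-wide memory via systemUsage:

<policyEntry queue=">" producerFlowControl="true" memoryLimit="64mb"/>

<systemUsage>
  <systemUsage>
    <memoryUsage>
      <!-- total memory the broker may use for messages -->
      <memoryUsage limit="256 mb"/>
    </memoryUsage>
  </systemUsage>
</systemUsage>

That only postpones things, of course; the CLOSE_WAIT build-up while
throttled still looks like a broker-side socket handling bug.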
-- 
View this message in context: 
http://old.nabble.com/%22too-many-open-files%22-error-with-5.3-and-Stomp-tp2531p26129409.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: too many open files error with 5.3 and Stomp

2009-10-22 Thread DavidLevy

Hi

- it's web, so I can't really tell ...
I would say 50-100 STOMP PHP producers/consumers
+ 2 REST Python producers/consumers
In 5.3 I was using stomp at first, and then stomp+nio, but I think both had
the issue.

- most of them yes, because it's new PHP scripts

PS: the server is a 64-bit server. Might that be a problem?



Dejan Bosanac wrote:
 
 Hi David,
 
 ok, can you at least provide more data on:
 
 - how many producers/consumers are you using
 - are you opening a new connection for every message send/receive
 
 Also, are you using regular stomp or stomp+nio transports?
 
 I'm working on creating a java test that can simulate stomp load
 environment
 with many producers/consumers opening connections.
 
 Cheers
 --
 Dejan Bosanac - http://twitter.com/dejanb
 
 Open Source Integration - http://fusesource.com/
 ActiveMQ in Action - http://www.manning.com/snyder/
 Blog - http://www.nighttale.net
 
 
 On Wed, Oct 21, 2009 at 4:03 PM, DavidLevy dvid.l...@gmail.com wrote:
 

 [snip]

-- 
View this message in context: 
http://www.nabble.com/%22too-many-open-files%22-error-with-5.3-and-Stomp-tp2531p26007460.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: too many open files error with 5.3 and Stomp

2009-10-22 Thread Dejan Bosanac
  [snip]


<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements.  See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License.  You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:amq="http://activemq.apache.org/schema/core"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

  <!-- Allows us to use system properties as variables in this configuration file -->
  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
      <value>file:${activemq.base}/conf/credentials.properties</value>
    </property>
  </bean>

  <!--
    The <broker> element is used to configure the ActiveMQ broker.
  -->
  <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.base}/data">

    <!--
      The managementContext is used to configure how ActiveMQ is exposed in
      JMX. By default, ActiveMQ uses the MBean server that is started by
      the JVM. For more information, see:

      http://activemq.apache.org/jmx.html
    -->
    <managementContext>
      <managementContext createConnector="false"/>
    </managementContext>

    <!--
      Configure message persistence for the broker. The default persistence
      mechanism is the KahaDB store (identified by the kahaDB tag).
      For more information, see:

      http://activemq.apache.org/persistence.html
    -->
    <persistenceAdapter>
      <kahaDB directory="${activemq.base}/data/kahadb"/>
    </persistenceAdapter>

    <!--
      For better performances use VM cursor and small memory limit.
      For more information, see:

      http://activemq.apache.org/message-cursors.html

      Also, if your producer is hanging, it's probably due to producer flow control.
      For more information, see:
      http://activemq.apache.org/producer-flow-control.html
    -->

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <policyEntry topic=">" producerFlowControl="true">
            <pendingSubscriberPolicy>
              <vmCursor

Re: too many open files error with 5.3 and Stomp

2009-10-21 Thread Dejan Bosanac
I plan to look at these connection closing issues with stomp in coming days.
Any chance you can create a reproducible test case (in any language)?

Cheers
--
Dejan Bosanac - http://twitter.com/dejanb

Open Source Integration - http://fusesource.com/
ActiveMQ in Action - http://www.manning.com/snyder/
Blog - http://www.nighttale.net


On Wed, Oct 14, 2009 at 12:46 PM, DavidLevy dvid.l...@gmail.com wrote:


 [snip]




Re: too many open files error with 5.3 and Stomp

2009-10-21 Thread DavidLevy

Thanks Dejan,

This happens in my production environment, which is made up of a dozen servers.
I don't believe it would be easy to reproduce by myself in a smaller
environment.

I noticed that it works better with v5.2. With 5.3, it's not stable for more
than a few hours. 5.2 can last a few days before it crashes.

I was also thinking about using REST calls instead of STOMP for some PHP
scripts ... It should help if the STOMP leak really is my issue ...





Dejan Bosanac wrote:
 
 I plan to look at these connection closing issues with stomp in coming
 days.
 Any chance you can create a reproducible test case (in any language)?
 
 Cheers
 --
 Dejan Bosanac - http://twitter.com/dejanb
 
 Open Source Integration - http://fusesource.com/
 ActiveMQ in Action - http://www.manning.com/snyder/
 Blog - http://www.nighttale.net
 
 
 On Wed, Oct 14, 2009 at 12:46 PM, DavidLevy dvid.l...@gmail.com wrote:
 

 [snip]

-- 
View this message in context: 
http://www.nabble.com/%22too-many-open-files%22-error-with-5.3-and-Stomp-tp2531p25993154.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: too many open files error with 5.3 and Stomp

2009-10-21 Thread Dejan Bosanac
Hi David,

ok, can you at least provide more data on:

- how many producers/consumers are you using
- are you opening a new connection for every message send/receive

Also, are you using regular stomp or stomp+nio transports?

I'm working on creating a Java test that can simulate a stomp load
environment with many producers/consumers opening connections.

Cheers
--
Dejan Bosanac - http://twitter.com/dejanb

Open Source Integration - http://fusesource.com/
ActiveMQ in Action - http://www.manning.com/snyder/
Blog - http://www.nighttale.net


On Wed, Oct 21, 2009 at 4:03 PM, DavidLevy dvid.l...@gmail.com wrote:


 [snip]




Re: too many open files error with 5.3 and Stomp

2009-10-21 Thread alex.hollerith

Hi,


Dejan Bosanac wrote:
 
 ok, can you at least provide more data on:
 
 - how many producers/consumers are you using
 - are you opening a new connection for every message send/receive
 

I saw the same thing happening in our environment, which consists of just one
producer at the moment.
We are using stomp via Perl and I think we are opening and closing the
connection every time we post, but I am not sure.

Logfile attached.

A. http://www.nabble.com/file/p25995732/amq_log.txt amq_log.txt 
-- 
View this message in context: 
http://www.nabble.com/%22too-many-open-files%22-error-with-5.3-and-Stomp-tp2531p25995732.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



too many open files error with 5.3 and Stomp

2009-10-14 Thread DavidLevy

Hi

I had the issue with 5.2 and now that I installed 5.3 last night, it's still
the same, or even worse.
It basically looks like this issue:
http://issues.apache.org/activemq/browse/AMQ-1873

After a short time (1 hour or so) the broker is filled with errors like 'too
many open files' and I have to restart activemq.


I don't believe it's a performance issue because
- the host server is strong
- there are less than 10 queues
- there are less than 1 messages

I assume it also has to do with Stomp connections not being released.
I am using both PHP and Python clients. I believe the issue is related to
PHP clients that may be more error prone if the Apache threads die
improperly ...

I'll attach my current config file.

Any help is welcome!

Thanks :)

http://www.nabble.com/file/p2531/activemq.xml activemq.xml 
-- 
View this message in context: 
http://www.nabble.com/%22too-many-open-files%22-error-with-5.3-and-Stomp-tp2531p2531.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: too many open files

2009-06-04 Thread bwtaylor

We have a similar problem. I'm skeptical that simply raising the ulimit is
the right solution. What I've seen is a file descriptor leak where STOMP
connections leak when closed by an intermediate firewall. The producer's
packets will be dropped and it will close its connection, but activemq just
keeps the socket open. Raising the ulimit on open files will simply make it
take longer to die. 

A better solution is to make keepalive work with STOMP. Then the firewall
probably wouldn't shut the connection, or if it did anyway, the keepalive
packet would bounce back and cause the socket to be cleaned up.
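
A sketch of what that could look like on the broker side, assuming the
tcp/stomp transport's keepAlive option is available in your version
(check the TCP transport reference before relying on it):

<transportConnector name="stomp"
    uri="stomp://0.0.0.0:61613?transport.keepAlive=true"/>

That asks the OS to probe idle sockets (SO_KEEPALIVE), which should keep
stateful firewalls from silently dropping them and lets the broker
notice dead peers.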

The failure mode is horrible here... activemq doesn't crash, as it simply
thinks it's waiting for messages on sockets that will never come. So
master/slave doesn't matter. Clients cannot connect to the master, as all of
its sockets are, and remain, in use.

Possibly related bugs:
http://issues.apache.org/activemq/browse/AMQ-1873


DataMover wrote:
 
 Good news is that we have a solution that works.
 
 We are using Ubuntu Jaunty as the server and the above mentioned tweaks
 work immediately.
 
 We have abandoned Centos for now.
 
 
 

-- 
View this message in context: 
http://www.nabble.com/too-many-open-files-tp23473539p23872970.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: too many open files

2009-05-25 Thread DataMover

Good news is that we have a solution that works.

We are using Ubuntu Jaunty as the server and the above mentioned tweaks work
immediately.

We have abandoned Centos for now.


-- 
View this message in context: 
http://www.nabble.com/too-many-open-files-tp23473539p23702699.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: too many open files

2009-05-18 Thread Arjen van der Meijden
I saw you mention limits.conf earlier, when I tried that, it didn't work 
for me. But it may work better in Centos/Redhat. The easiest way to 
check if it did work is to execute 'ulimit -n' as the user that is 
starting activemq. The default 1024 will not be enough to hold your 1000 
threads/sockets.


We use a recent Debian Linux. For me, the only way to increase the 
fd-limit beyond 1024 was to become root, execute 'ulimit -n 102400' and 
then start activemq with something like su activemq -c 
'/some/path/bin/activemq'.


Best regards,

Arjen

On 18-5-2009 7:17 Rob Davies wrote:

Are you sure you upped the max number of fds allowable per process ?


On 18 May 2009, at 01:23, DataMover wrote:




[snip]






Re: too many open files

2009-05-18 Thread DataMover
Arjen van der Meijden wrote:
 [snip]

-- 
View this message in context: 
http://www.nabble.com/too-many-open-files-tp23473539p23592103.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: too many open files

2009-05-17 Thread DataMover


After a week of fighting with it, no matter what was done, we could not get
it to work.
We created a simple test, creating 1000 producers sending one message to
1000 queues.
Each producer in its own thread.
This would fail to get connections intermittently.
Eventually we decided it was an operating system issue.

I tested it on Windows with no modifications and there is NO problem.

Just wondering what linux version you were using?
We use Centos 5.3.

What is the best linux distro to use with activemq?
Also, are there ports we should not use for transport?




Arjen van der Meijden wrote:
 
 [snip]
 

-- 
View this message in context: 
http://www.nabble.com/too-many-open-files-tp23473539p23589365.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: too many open files

2009-05-17 Thread Rob Davies

Are you sure you upped the max number of fds allowable per process ?


On 18 May 2009, at 01:23, DataMover wrote:




[snip]





Re: too many open files

2009-05-11 Thread DataMover

Version is 5.2

I will check on the RAM allocated to queues.
We only use queues.
Some need to be persistent, a few do not.

One thing that may be relevant is that the server is behind a firewall that
closes connections after a timeout of no activity. We have no control over
the firewall.

So ... the clients will close and reopen a connection to communicate their
information every 20 seconds. Wonder if that is an issue.
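
One thing we may try for the firewall issue (a sketch, assuming our
clients talk OpenWire over TCP; the host and value are examples) is to
hold the connection open and let the OpenWire inactivity monitor
generate keep-alive traffic more often than the firewall's idle timeout,
via a client URL like:

tcp://broker-host:61616?wireFormat.maxInactivityDuration=15000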

We were using MySQL as the persistence store but took it out, thinking that
may be the cause.
So it has been using the default store and the problem did not go away.


Thank you for your responses


rajdavies wrote:
 
 
 On 10 May 2009, at 21:03, DataMover wrote:
 

 I have seen several posts on this but I have not been able to solve our
 situation.

 We have 30 clients (producers) working with one activemq server.
 All worked amazingly well.

 Then we tried a test with around 250 clients.

 That would get many transport errors.
 Increasing the file limits on the os caused the system to come to a  
 crawl
 with no benefit.

 I am assuming the problem can be solved with multiple brokers being  
 run.
 One question is do they have to be on different machines, or can we  
 have
 multiple activemqs running on the same server, each listening on a  
 different
 ip?


 -- 
 View this message in context:
 http://www.nabble.com/too-many-open-files-tp23473539p23473539.html
 Sent from the ActiveMQ - User mailing list archive at Nabble.com.

 
 You can use multiple brokers on the same machine - but could you  
 provide a little more detail about your setup ?:
   How much memory did you allocate for your broker?
 What version of ActiveMQ are you running ?
 Are you using Topics or Queues ?
 Do you need the Queue messages to be persistent ?
 
 
 
 cheers,
 
 Rob
 
 Rob Davies
 http://fusesource.com
 http://rajdavies.blogspot.com/
 
 
 
 
 
 
 
 
 
 

-- 
View this message in context: 
http://www.nabble.com/too-many-open-files-tp23473539p23477862.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: too many open files

2009-05-11 Thread DataMover

Also,

I don't see where a RAM usage parameter is given on start up.

#!/bin/bash
# JMX options for the broker JVM (quoted and continued across lines)
export SUNJMX="-Dcom.sun.management.jmxremote.port=1099 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote=true"
export JDK_HOME=/usr/java/jdk1.6.0_06
export JAVA_HOME=/usr/java/jdk1.6.0_06
/opt/activemq/apache-activemq-5.2.0/bin/activemq-admin start
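
(Heap for the broker JVM is normally set via JVM options rather than in
activemq.xml. With the 5.x scripts, exporting something like
ACTIVEMQ_OPTS="-Xmx512m" before the start command should do it; the
exact variable name is an assumption to verify against your install.)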






DataMover wrote:
 
 [snip]
 

-- 
View this message in context: 
http://www.nabble.com/too-many-open-files-tp23473539p23478140.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: too many open files

2009-05-11 Thread Arjen van der Meijden

Well, that is really mostly the default config with some small differences.

- I completely commented out the destinationPolicy-tag. This also 
disables the per-queue/topic size limits.
- I upped the memoryUsage to 200 mb, the storeUsage to 1 gb and the 
tempUsage to 1 gb.
- I changed the connector uri's (for stomp and openwire) to contain 
?transport.closeAsync=false.


These settings aren't really well thought through and only aimed at our 
very high connect/send/disconnect rate; they're just changes that should 
disable or enlarge some of the limitations I was running into.
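
In config terms the changes look roughly like this (a sketch of the
tweaks above; the limits are the values I mentioned, the ports are the
usual defaults):

<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="200 mb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="1 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="1 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>

<transportConnectors>
  <transportConnector name="openwire"
      uri="tcp://0.0.0.0:61616?transport.closeAsync=false"/>
  <transportConnector name="stomp"
      uri="stomp://0.0.0.0:61613?transport.closeAsync=false"/>
</transportConnectors>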


And as you could see from the issue-report, I used a different JAVA_OPTS 
to allow for some larger heap and such.


Best regards,

Arjen

On 11-5-2009 9:29, DataMover wrote:


I looked at that issue url you gave and wow, had a lot of great info.

Any chance one could get a copy of the configuration xml file you created
that solved the issue for you.
Just to get some ideas.

I had upped the memory limits via the etc security limit file and that at
least seemed to increase the load and slow the system down. Have not tried
it again after that.

As far as upping the queue sizes, is there a limit?
Are there best practices anywhere?



Arjen van der Meijden wrote:

[snip]










Re: too many open files

2009-05-10 Thread Rob Davies

thanks Arjen - need to add a FAQ entry for this!

On 11 May 2009, at 06:51, Arjen van der Meijden wrote:


There may be one or more of these three issues that I ran into:

- You actually have a too-low setting for the open files. Try
increasing it (see man ulimit etc.; be careful that normally only
root can increase it beyond 1024, but other programs, including su,
do inherit it).

- You're opening and closing connections too fast; this is what we
had:
http://issues.apache.org/activemq/browse/AMQ-1739
Adding the ?transport.closeAsync=false parameter to the url helped
us here.

- Your queues may be getting larger than the limits. Especially
the 5mb per queue limit in the default configuration is easy to hit.
Once I raised the global limits and removed the per-queue/topic
limits it has worked stably for several months in a row (since Feb
19 our single broker has queued and dequeued over 300M tiny messages).

30 and 250 producers isn't that many, so unless they're maxing
out your broker system on some other resource than file descriptors, my
guess is the single machine should be able to handle them.


Best regards,

Arjen

On 10-5-2009 22:03 DataMover wrote:
[snip]




Re: ActiveMQ 5.2 too many open files exception

2009-04-01 Thread deathemperor

It was fixed by our system engineer. Our AMQ is running up to 1m messages now.
The problem was the firewall.

http://activemq.apache.org/multicast-transport.html 
http://en.wikipedia.org/wiki/Multicast


Bill Schuller wrote:
 
 Unix variants use a file descriptor for each network connection. In my
 messaging experience, 9 times out of 10 file descriptor issues are
 associated with the number of active network connections the process has
 open. With a file descriptor limit of 10240, the process either has a lot
 of
 files or a lot of network connections open. Check to ensure your clients
 are
 doing connection pooling, etc.
 
 
 
 [snip]
 
 
 
 
 

-- 
View this message in context: 
http://www.nabble.com/ActiveMQ-5.2-too-many-open-files-exception-tp22797720p22834998.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: ActiveMQ 5.2 too many open files exception

2009-03-31 Thread andymorris

Hi

I experienced a problem like this when we tried an HA setup with
master/slave, but it only happened when running on RedHat Linux with Java 5
and the older 4.1 version of ActiveMQ.

There was no problem with the above config when running on Windows, but when
we upgraded to Java 6 AND ActiveMQ 5.1 the problem on RedHat went away.

Perhaps there are some clues to your problem? What is your setup?



deathemperor wrote:
 
 [snip]

-- 
View this message in context: 
http://www.nabble.com/ActiveMQ-5.2-too-many-open-files-exception-tp22797720p22803162.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.



Re: ActiveMQ 5.2 too many open files exception

2009-03-31 Thread Bill Schuller
Unix variants use a file descriptor for each network connection. In my
messaging experience, 9 times out of 10 file descriptor issues are
associated with the number of active network connections the process has
open. With a file descriptor limit of 10240, the process either has a lot of
files or a lot of network connections open. Check to ensure your clients are
doing connection pooling, etc.
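
For what it's worth, a sketch of pooling on the client side with Spring
(class names from activemq-pool; broker URL and pool size are examples):

<bean id="jmsConnectionFactory"
    class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="tcp://localhost:61616"/>
</bean>

<!-- reuses a small fixed set of connections instead of one socket per send -->
<bean id="pooledConnectionFactory"
    class="org.apache.activemq.pool.PooledConnectionFactory"
    destroy-method="stop">
  <property name="connectionFactory" ref="jmsConnectionFactory"/>
  <property name="maxConnections" value="8"/>
</bean>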



[snip]





ActiveMQ 5.2 too many open files exception

2009-03-30 Thread deathemperor

I'm running AMQ 5.2.0 and keep getting the error:

2009-03-31 08:41:28,674 [...@0.0.0.0:8161] WARN  log   
- EXCEPTION 
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
at
org.mortbay.jetty.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:75)
at
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:475)
at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:166)
at
org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
at
org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:537)
at
org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:450)

This causes either of these problems:

- The ActiveMQ process hangs on Apache: no messages can be sent or received,
and the AMQ admin is not responding at the time either.
- AMQ and the AMQ admin keep running, but no messages can be sent although
receivers/consumers can still connect and check for messages.


To reproduce: start AMQ as a background service and run messages up to
18-20k, at which point the above occurs.

Some of my conf:
- maxFileLength: 1gb
- ulimit -n: 10240
- cleanupInterval: 1000 (ms)


Another concern is why cleanupInterval doesn't work. Nothing gets cleaned
up, although I expect the queue's data files to be.

Can anyone please help?
-- 
View this message in context: 
http://www.nabble.com/ActiveMQ-5.2-too-many-open-files-exception-tp22797720p22797720.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
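
For what it's worth, the two store settings deathemperor lists live on the
persistence adapter element in activemq.xml. A minimal sketch, assuming the
AMQ message store that 5.2 uses by default (the directory and values are
illustrative only, not a recommendation):

-------------------------------------------------------------------------
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <persistenceAdapter>
    <!-- Journal data files are only deleted once every message in them
         has been consumed/acknowledged; cleanupInterval just controls
         how often that check runs. With maxFileLength at 1gb, a single
         unconsumed message can pin an entire 1gb file, so smaller files
         are usually reclaimed sooner. -->
    <amqPersistenceAdapter directory="${activemq.base}/data"
                           maxFileLength="32mb"
                           cleanupInterval="30000"/>
  </persistenceAdapter>
</broker>
-------------------------------------------------------------------------

That is also the likely answer to why cleanupInterval appears to do
nothing: the checker runs, but it cannot remove a data file while any
message stored in it is still unconsumed.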



Re: Too many open files exception on broker

2007-02-23 Thread James Strachan

So the main error you're getting is that your OS is running out of
file descriptors - which can happen anyway (each connected socket is a
file descriptor, as is any open file, etc.).

If sockets get disconnected, clients reconnect, but the broker can lag in
detecting that the old socket has gone away (due to OS limitations in
detecting dead sockets), so it's always good to have plenty of headroom.

Note that one or two issues have been found with ActiveMQ not always
cleaning up well after clients that just die rather than disconnect, so
you could try trunk if you are seeing some kind of leak (we'll
hopefully have 4.1.1 soon with these patches included)


On 2/23/07, GaryG [EMAIL PROTECTED] wrote:
 [original post trimmed; it is quoted in full below]





--

James
---
http://radio.weblogs.com/0112098/
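
One way to reduce the broker-side leak James describes is to make sure
clients close their connection even when they are killed. A sketch of the
idea (the broker URL is a placeholder):

-------------------------------------------------------------------------
import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;

public class CleanShutdownClient {
    public static void main(String[] args) throws Exception {
        final Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616")
                        .createConnection();
        connection.start();

        // If the JVM exits without close(), the broker keeps the dead
        // socket (one of its file descriptors) until the OS notices.
        // A shutdown hook covers Ctrl-C / SIGTERM, though not kill -9.
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                try {
                    connection.close();
                } catch (Exception ignored) {
                    // best effort during shutdown
                }
            }
        }));

        // ... application work using sessions from this connection ...
    }
}
-------------------------------------------------------------------------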


Too many open files exception on broker

2007-02-23 Thread GaryG

I'm seeing tons of exceptions on the broker:


2007-02-23 07:15:48,528 [xxsxxx:61616] ERROR TransportConnector
- Could not accept connection : java.net.SocketException: Too many open
files
java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
at java.net.ServerSocket.implAccept(ServerSocket.java:450)
at java.net.ServerSocket.accept(ServerSocket.java:421)
at
org.apache.activemq.transport.tcp.TcpTransportServer.run(TcpTransportServer.java:153)
at java.lang.Thread.run(Thread.java:595)


I can't connect to it via JMX anymore, although some messages still seem to
be getting 
through.

I'm also seeing lots of errors like this on the broker:

=
2007-02-23 07:15:43,523 [34.186.85:33862] DEBUG Service   
- Error occured while processing sync command:
javax.jms.InvalidClientIDException: Broker: amqDev2 - Client: wf4 already
connected
javax.jms.InvalidClientIDException: Broker: amqDev2 - Client: wf4 already
connected
at
org.apache.activemq.broker.region.RegionBroker.addConnection(RegionBroker.java:205)
at
org.apache.activemq.broker.BrokerFilter.addConnection(BrokerFilter.java:82)
at
org.apache.activemq.advisory.AdvisoryBroker.addConnection(AdvisoryBroker.java:70)
at
org.apache.activemq.broker.BrokerFilter.addConnection(BrokerFilter.java:82)
at
org.apache.activemq.broker.MutableBrokerFilter.addConnection(MutableBrokerFilter.java:92)
at
org.apache.activemq.broker.TransportConnection.processAddConnection(TransportConnection.java:687)
at
org.apache.activemq.broker.jmx.ManagedTransportConnection.processAddConnection(ManagedTransportConnection.java:86)
at
org.apache.activemq.command.ConnectionInfo.visit(ConnectionInfo.java:121)
at
org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:284)
at
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:177)
at
org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:65)
at
org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:133)
at
org.apache.activemq.transport.InactivityMonitor.onCommand(InactivityMonitor.java:122)
at
org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:84)
at
org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:137)
at java.lang.Thread.run(Thread.java:595)
==

However, looking at the logs on the client mentioned above (wf4), I see
an error like this:
-
2007-02-23 14:30:01,332 [AcitveMQ Connection Worker:
tcp://x/xxx:61616] ERROR jms_comm -
ActiveMQFactoryUtil: exception reconnecting: javax.jms.JMSException: Wire
format negociation timeout: peer did not send his wire format.
---

but then right before and right after such errors, messages are being
received and sent with no problem.  

These errors started happening after about 2 days of running the broker,
with the broker's usageMemory set to 256MB and the broker JVM given 512MB.
The last time I checked, memory use for the broker, according to JMX, was
growing steadily.  We're using non-persistent messages, but it seems memory
is not being freed up.

A co-worker here recommended upping the open file limit before I bring up
the Broker JVM with these commands:
--
ulimit -u unlimited
ulimit -n 9
ulimit -s unlimited
---

I'll try that shortly, but what I don't understand is why AMQ would even
be in such a state in the first place.

Is there a bug with this too many open files issue?  Is there another fix
that is recommended?

-- 
View this message in context: 
http://www.nabble.com/Too-many-open-files-exception-on-broker-tf3279888s2354.html#a9122263
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
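
Separately, the InvalidClientIDException in the broker log above means a
second connection presented the same JMS client ID (wf4) while the first
was still registered - typically a reconnecting client whose dead socket
the broker has not yet reaped, or two processes sharing one ID. A sketch of
making the ID unique per process (the suffix scheme is illustrative only;
note that durable subscribers need a stable ID):

-------------------------------------------------------------------------
import java.lang.management.ManagementFactory;

import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;

public class UniqueClientIdSketch {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616")
                        .createConnection();
        // setClientID must be called before the connection is used.
        // RuntimeMXBean.getName() conventionally returns "pid@hostname",
        // which is unique per process on a host.
        String processName = ManagementFactory.getRuntimeMXBean().getName();
        connection.setClientID("wf4-" + processName);
        connection.start();
        // ... application work ...
        connection.close();
    }
}
-------------------------------------------------------------------------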



Re: Too many open files exception on broker

2007-02-23 Thread Christopher G. Stach II
GaryG wrote:
 A co-worker here recommended upping the open file limit before I bring up
 the Broker JVM with these commands:
 --
 ulimit -u unlimited
 ulimit -n 9
 ulimit -s unlimited
 ---
 
 I'll try that shortly, but what I don't understand is why AMQ would even
 be in such a state in the first place.
 
 Is there a bug with this too many open files issue?  Is there another fix
 that is recommended?
 

Those are normal *nix resource limits.  Just like the nohup thing, this
really doesn't have anything to do with AMQ specifically.

-- 
Christopher G. Stach II