Re: Tomcat threads dependency on net.core.somaxconn value on Linux

2015-04-20 Thread satish jupalli
Thanks Christopher, it makes more sense now.



On Fri, Apr 17, 2015 at 8:51 PM, Christopher Schultz 
ch...@christopherschultz.net wrote:


 Satish,

 On 4/17/15 7:20 AM, satish jupalli wrote:
  I would like to support a burst in traffic; we are planning to
  increase the number of threads by adding an executor pool with
  minSpareThreads set to 100 and acceptCount set to 5.  However, I'm
  trying to understand the correlation between the Tomcat connector's
  acceptCount and thread count and the Linux setting
  net.core.somaxconn (the incoming connection backlog), which defaults
  to 128.

 The thread count isn't really a part of this discussion: only the
 accept count and /proc/sys/net/core/somaxconn.

  Does it make sense to increase the thread count alone without
  increasing the net.core.somaxconn value?

 Probably not; Linux limits the accept queue using that value and will
 not allow client code to exceed that limit.

 https://computing.llnl.gov/linux/slurm/high_throughput.html

  Below is the config that we are planning to use on Tomcat 7.0.42 on
  Linux 5.x.

 (You should upgrade to 7.0.62 as soon as it is feasible for you.)

  <Executor name="tomcatThreadPool" namePrefix="tomcat-http-"
      minSpareThreads="100" maxThreads="300"/>

  <Connector port="8080" protocol="HTTP/1.1"
      connectionTimeout="2" redirectPort="8443"
      enableLookups="false" keepAliveTimeout="30"
      maxKeepAliveRequests="1" acceptCount="5" socketBuffer="10240"
      executor="tomcatThreadPool" compression="on"
      compressableMimeType="text/html, text/xml" xpoweredBy="false"
      server="false" />
 
  Any inputs regarding the Tomcat threads and Linux
  net.core.somaxconn will be greatly appreciated to support many
  concurrent connections.

 If you use acceptCount=5, Tomcat will ask for that socket queue
 size and simply won't get it. You should definitely raise the value of
 somaxconn on Linux.
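
 For reference, checking and raising that limit on Linux is typically done
 with sysctl (the value below is purely illustrative, not a recommendation
 from this thread):

     # show the current backlog limit
     sysctl net.core.somaxconn
     # raise it for the running kernel; persist it via /etc/sysctl.conf
     sysctl -w net.core.somaxconn=1024

 Tomcat's acceptCount is only the backlog it requests; the kernel silently
 caps the effective value at net.core.somaxconn.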

 -chris





Apache Tomcat jk connector 400 bad request

2015-04-20 Thread Razi
Hi there,

I would like to explain my scenario, perhaps this has been answered on this 
forum.

A bunch of random Ajax requests from the browser (IE9) end up with a 400 error
code on the Apache webserver, and the browser hangs for 5 minutes. HttpWatch
shows the error code as ERROR_INTERNET_CONNECTION_RESET, and then immediately
afterwards IE fires the same request again, which shows up with a time taken of
5 minutes and an error code of ERROR_HTTP_INVALID_SERVER_RESPONSE. The browser
recovers after 5 minutes.

Further investigation of the webserver and appserver logs reveals the
following:
  a. The request comes from the browser, hits the webserver, and is forwarded
     to the appserver instantly.
  b. The mod_jk log for the request shows a duration of 5 minutes spent in
     ajp_read_fully_server::jk_ajp_common.c(1399): enter. After 5 minutes the
     next line is ajp_read_fully_server::jk_ajp_common.c(1432): exit. The next
     line shows ajp_send_request::jk_ajp_common.c(1766) worker 11 browser stop
     sending data, no need to recover. Later it shows unrecoverable 400,
     request failed.
  c. The forensic.log shows the content length as a nonzero value.
  d. The application server log hangs in the
     org.apache.coyote.ajp.AjpProcessor.read method for 5 minutes and then
     continues execution. The thread dump also confirms this.
The questions I have are:
  a. Is this a problem with IE only, because the IE keep-alive timeout and the
     Apache webserver keep-alive timeout (currently set to 5 seconds) are not
     in sync?
  b. Is this a problem with the appserver not being able to process requests
     that are bad/incomplete?
  c. Should I increase the Apache webserver timeout value to 60s or more (see
     the illustrative snippet below), and will this have any performance
     impact?
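
For reference, the Apache keep-alive settings in question live in httpd.conf;
an increased timeout would look roughly like this (the 60-second value is only
an illustration of the change being discussed, not a recommendation):

    # httpd.conf (illustrative values)
    KeepAlive On
    KeepAliveTimeout 60
    MaxKeepAliveRequests 100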
Kindly advise on the scenario. Many thanks for reading through.

Current setup:
Apache 2.2.24
Mod_jk 1.2.37
Redhat Linux VM
JBoss EAP 6.1.0
JSF 2.1, Richfaces 3.3.4




Warm Regards
Razi A. Ansari


Re: [Tomcat8] What happened to WebappLoader.addRepository()?

2015-04-20 Thread Thusitha Thilina Dayaratne
Hi,

 Try addURL().
 Sorry for the inconvenience.
  As I understand it, the addURL() method is defined in WebappClassLoaderBase.
  So should I obtain the WebappClassLoaderBase using the getClassLoader()
  method and use reflection to call the addURL() method?

No need for reflection. Cast to URLClassloader.

Note the cast *should* always work but if someone is using a strange
custom class loader it will fail. Note that Tomcat 7 required the class
loader to be an instance of WebappClassLoader.
Thanks for the quick explanation, Mark.
But the addURL() method is still defined as protected, so it is not possible
to call that method just by casting, right?
This is my implementation with Tomcat 7

public class CarbonWebappLoader extends WebappLoader {

    @Override
    protected void startInternal() throws LifecycleException {
        WebappClassloadingContext webappClassloadingContext;
        try {
            webappClassloadingContext =
                ClassloadingContextBuilder.buildClassloadingContext(getWebappFilePath());
        } catch (Exception e) {
            throw new LifecycleException(e.getMessage(), e);
        }

        // Adding provided classpath entries, if any
        for (String repository : webappClassloadingContext.getProvidedRepositories()) {
            addRepository(repository);
        }

        super.startInternal();

        // Adding the WebappClassloadingContext to the WebappClassloader
        ((CarbonWebappClassLoader) getClassLoader()).setWebappCC(webappClassloadingContext);
    }
}

Tomcat 8 no longer has addRepository() in WebappLoader; the suggestion was to
use the addURL() method instead.
But that can only be accessed through the web application class loader, so I
would have to get the class loader via getClassLoader() and use reflection to
call addURL().
That would be a costly operation, since it would be called for each and
every application.

Is there a better approach than that? Or should I move this logic
somewhere else?
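
For illustration only, a minimal reflective helper (assuming the web application
class loader extends java.net.URLClassLoader, as Tomcat 8.0's
WebappClassLoaderBase does; the method name and signature below are the JDK's):

    // Illustrative sketch, not from this thread: add one repository URL to a
    // class loader by reflectively invoking the protected URLClassLoader.addURL(URL).
    private static void addRepositoryUrl(ClassLoader cl, String repository) throws Exception {
        java.lang.reflect.Method addUrl =
                java.net.URLClassLoader.class.getDeclaredMethod("addURL", java.net.URL.class);
        addUrl.setAccessible(true); // addURL is protected
        addUrl.invoke(cl, new java.io.File(repository).toURI().toURL());
    }

The Method lookup could be done once and cached in a static field, so the
per-application reflection cost stays small.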

Thanks
Best Regards

2015-04-07 16:19 GMT+05:30 Thusitha Thilina Dayaratne 
thusithathil...@gmail.com:

 Hi,

  Hi Mark,
 
  snipe/

 :)
 I think you meant snip rather than snipe.

  Try addURL().
  Sorry for the inconvenience.
  As I understand addURL() method is defined in WebAppClassLoaderBase. So
  should I obtain the WebAppClassLoaderBase using getClassLoader() method
 and
  use reflections to call the addURL() method?

 No need for reflection. Cast to URLClassloader.

 Note the cast *should* always work but if someone is using a strange
 custom class loader it will fail. Note that Tomcat 7 required the class
 loader to be an instance of WebappClassLoader.
 Thanks for quick explanation Mark.
 But still addURL() method is defined as protected. So it is not possible
 to call that method by casting right?



 
 
  Thanks
  Best Regards
 
  On Tue, Apr 7, 2015 at 1:19 PM, Mark Thomas ma...@apache.org wrote:
 
  On 07/04/2015 07:51, Thusitha Thilina Dayaratne wrote:
  Hi All,
 
  I'm having the same issue. We have extend the WebAppLoader as follows
 in
  the Tomcat 7
 
  public class CarbonWebappLoader extends WebappLoader {
  @Override
  protected void startInternal() throws LifecycleException {
  WebappClassloadingContext webappClassloadingContext;
  try {
  webappClassloadingContext =
  ClassloadingContextBuilder.buildClassloadingContext(
 getWebappFilePath());
  } catch (Exception e) {
  throw new LifecycleException(e.getMessage(), e);
  }
 
  //Adding provided classpath entries, if any
  for (String repository :
  webappClassloadingContext.getProvidedRepositories()) {
  addRepository(repository);
  }
 
  super.startInternal();
 
  //Adding the WebappClassloadingContext to the WebappClassloader
  ((CarbonWebappClassLoader)
  getClassLoader()).setWebappCC(webappClassloadingContext);
  }
  }
 
  Since addRepository method have been removed, what could I do?
 
  Don't top-post.
 
  Read the thread history. The answer has already been provided by Chris.
 
  Mark
 
 
 
 
  Thanks
  Best Regards
  /Thusitha
 
  On Wed, Mar 18, 2015 at 12:25 AM, Mark Thomas ma...@apache.org
 wrote:
 
  On 17/03/2015 18:30, Ed Rouse wrote:
 
 
  -Original Message-
  From: Pilkington, Simon [mailto:simo...@amazon.com]
  Sent: Tuesday, March 17, 2015 12:45 PM
  To: users@tomcat.apache.org
  Subject: [Tomcat8] What happened to WebappLoader.addRepository()?
 
  Hey tomcat users,
 
  The javadoc for WebappLoader still tells me to use addRepository(),
  but
  that method no longer exists. My team has implemented an extension
 of
  WebappLoader that looked like this:
 
 

Re: WebappLoader vs WebappClassLoader

2015-04-20 Thread Thusitha Thilina Dayaratne
Hi,

 Could someone tell me what is the purpose of having WebappLoader and
 WebappClassLoader in Tomcat?
WebappLoader is the Tomcat object that a user can configure that
represents the class loader. It remains the same across web application
stop/start.

WebappClassLoader is the actual class loader. Every time the web
application is started, a new instance is created and used.

 As I understand WebappClassLoader is per web application and WebAppLoader
for
 tomcat server instance. Am I wrong?
Yes.
Thanks for the quick explanation.
So that means WebappClassLoader and WebappLoader are both per web application.
In Tomcat 7 we were able to add a repository to the class loader using
WebappClassLoader.addRepository().
So in Tomcat 8, if we want to add repositories to the class loader, should we
do that through the WebappLoader rather than the WebappClassLoader?
Please correct me if I'm wrong.

Thanks
Best Regards

2015-04-21 1:51 GMT+05:30 Mark Thomas ma...@apache.org:

 On 20/04/2015 14:22, Thusitha Thilina Dayaratne wrote:
  Hi,
 
  Could someone tell me what is the purpose of having WebappLoader and
  WebappClassLoader in Tomcat?

 WebappLoader is the Tomcat object that a user can configure that
 represents the class loader. It remains the same across web application
 stop/start.

 WebappClassLoader is the actual class loader. Every time the web
 application is started, a new instance is created and used.

  As I understand WebappClassLoader is per web application and
 WebAppLoader for
  tomcat server instance. Am I wrong?

 Yes.

 Mark






--


Re: WebappLoader vs WebappClassLoader

2015-04-20 Thread Mark Thomas
On 20/04/2015 14:22, Thusitha Thilina Dayaratne wrote:
 Hi,
 
 Could someone tell me what is the purpose of having WebappLoader and
 WebappClassLoader in Tomcat?

WebappLoader is the Tomcat object that a user can configure that
represents the class loader. It remains the same across web application
stop/start.

WebappClassLoader is the actual class loader. Every time the web
application is started, a new instance is created and used.
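
Roughly, in Catalina API terms (an illustrative sketch; getLoader() and
getClassLoader() are the public accessors on Context and Loader):

    // Illustrative: how the two objects relate through the Catalina API.
    static ClassLoader webappClassLoaderOf(org.apache.catalina.Context context) {
        // The configurable Tomcat component (e.g. WebappLoader); same instance across stop/start.
        org.apache.catalina.Loader loader = context.getLoader();
        // The actual class loader (e.g. WebappClassLoader); a new instance on every start.
        return loader.getClassLoader();
    }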

 As I understand WebappClassLoader is per web application and WebAppLoader for
 tomcat server instance. Am I wrong?

Yes.

Mark





Re: French chars not displayed correctly - Tomcat 7

2015-04-20 Thread Mark Thomas
On 20/04/2015 15:49, Neill Lima wrote:
 Try adding this to your html:
 
 <head>
 <meta charset="UTF-8">
 </head>
 
 On Mon, Apr 20, 2015 at 4:45 PM, radiatejava radiatej...@gmail.com wrote:
 
 Hello Tomcat users, I have code like this in my JSP:

 <td>${pageKeys.ui_user_name_label};</td>

 Where td represents a cell of an html table.

 The value of the variable pageKeys.ui_user_name_label, populated on the
 server side, is in French and it is exactly as shown below:

 Nom d'utilisateur

 However, when displayed in the browser, it shows up as:

 Nom d&rsquo;utilisateur


 How can I get this corrected? When I look at the generated source code, I
 see something like this:

 out.write((java.lang.String)
     org.apache.jasper.runtime.PageContextImpl.proprietaryEvaluate("${pageKeys.ui_user_name_label}",
     java.lang.String.class,
     (javax.servlet.jsp.PageContext)_jspx_page_context, null, false));

 What I want to know from the developers here is whether the above code
 really does some escaping/encoding of the content?

Tomcat will use whatever encoding you have configured for the JSP page.
If you don't define an encoding then the default will be used. Note that
using '<meta charset="UTF-8">' is just meaningless template text as far
as Tomcat is concerned and will have no impact on how Tomcat generates
the page.
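
For example, the page encoding is normally declared in the JSP itself (or once
for all JSPs via a <jsp-property-group> in web.xml); a minimal page directive,
assuming UTF-8 is the desired encoding, looks like:

    <%@ page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %>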

 Is there a way to avoid this encoding?

You cannot avoid encoding. You have to define the mapping you wish to
use to map from bytes to characters. If you do not, the default will be
used.

You should take a look at the Character Encoding part of the FAQ on the
wiki:
http://wiki.apache.org/tomcat/FAQ/CharacterEncoding

Mark





WebappLoader vs WebappClassLoader

2015-04-20 Thread Thusitha Thilina Dayaratne
Hi,

Could someone tell me what is the purpose of having WebappLoader and
WebappClassLoader in Tomcat?
As I understand it, WebappClassLoader is per web application and WebappLoader
is per Tomcat server instance. Am I wrong?

Thanks
Best Regards

--


Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Neill Lima
Increasing the number of open file descriptors is an accepted fine-tuning step
(*if your application is handling its threads properly*)

ulimit -n              # show the current limit for this shell
ulimit -n [new_value]  # raise it for this shell and processes started from it
ulimit -n              # verify the new limit

If the performance is still not adequate even after allowing more fds, some
sort of scaling (horizontal/vertical) is necessary.
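
To make the higher limit persistent for the Tomcat user, it is typically set in
/etc/security/limits.conf (illustrative entries; the user name depends on how
Tomcat is started):

    tomcat  soft  nofile  65536
    tomcat  hard  nofile  65536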

On Mon, Apr 20, 2015 at 3:41 PM, Rainer Jung rainer.j...@kippdata.de
wrote:

 On 20.04.2015 at 14:11, Thomas Boniface wrote:

 Hi,

 I have tried to find help regarding an issue we experience with our
 platform leading to random file descriptor peaks. This happens more often
 on heavy load but can also happen on low traffic periods.

 Our application is using servlet 3.0 async features and an async
 connector.
 We noticed that a lot of issues regarding asynchronous feature were fixed
 between our production version and the last stable build. We decided to
 give it a try to see if it improves things or at least give clues on what
 can cause the issue; Unfortunately it did neither.

 The file descriptor peaks and application blocking happens frequently with
 this version when it only happens rarely on previous version (tomcat7
 7.0.28-4).

 Tomcat is behind an nginx server. The tomcat connector used is configured
 as follows:

 We use an Nio connector:
  <Connector port="8080"
             protocol="org.apache.coyote.http11.Http11NioProtocol"
             selectorTimeout="1000"
             maxThreads="200"
             maxHttpHeaderSize="16384"
             address="127.0.0.1"
             redirectPort="8443"/>

 In catalina I can see some Broken pipe message that were not happening
 with
 previous version.

 I compared thread dumps from server with both the new and old version of
 tomcat and both look similar from my stand point.

 My explanation may not be very clear, but I hope this gives an idea how
 what we are experiencing. Any pointer would be welcomed.


 If the peaks happen long enough and your platforms has the tools available
 you can use lsof to look for what those FDs are - or on Linux looking at
 ls -l /proc/PID/fd/* (PID is the process PID file) - or on Solaris use
 the pfiles command.

 If the result is what is expected, namely that by far the most FDs are
 coming from network connections for port 8080, then you can check via
 netstat in which connection state those are.

 If most are in ESTABLISHED state, then you/we need to further break down
 the strategy.

 Regards,

 Rainer






Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Rainer Jung

On 20.04.2015 at 14:11, Thomas Boniface wrote:

Hi,

I have tried to find help regarding an issue we experience with our
platform leading to random file descriptor peaks. This happens more often
on heavy load but can also happen on low traffic periods.

Our application is using servlet 3.0 async features and an async connector.
We noticed that a lot of issues regarding asynchronous feature were fixed
between our production version and the last stable build. We decided to
give it a try to see if it improves things or at least give clues on what
can cause the issue; Unfortunately it did neither.

The file descriptor peaks and application blocking happens frequently with
this version when it only happens rarely on previous version (tomcat7
7.0.28-4).

Tomcat is behind an nginx server. The tomcat connector used is configured
as follows:

We use an Nio connector:
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           selectorTimeout="1000"
           maxThreads="200"
           maxHttpHeaderSize="16384"
           address="127.0.0.1"
           redirectPort="8443"/>

In catalina I can see some Broken pipe message that were not happening with
previous version.

I compared thread dumps from server with both the new and old version of
tomcat and both look similar from my stand point.

My explanation may not be very clear, but I hope this gives an idea how
what we are experiencing. Any pointer would be welcomed.


 If the peaks happen long enough and your platform has the tools available,
 you can use lsof to look at what those FDs are - or on Linux look at
 ls -l /proc/PID/fd/* (PID being the Tomcat process id) - or on Solaris use
 the pfiles command.


If the result is what is expected, namely that by far the most FDs are 
coming from network connections for port 8080, then you can check via 
netstat in which connection state those are.
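
 For example, a quick way to count connection states for the Tomcat port
 (assuming Linux net-tools; the port and field positions are illustrative):

     netstat -ant | awk '$4 ~ /:8080$/ {print $6}' | sort | uniq -c | sort -rn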


If most are in ESTABLISHED state, then you/we need to further break down 
the strategy.


Regards,

Rainer





File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Thomas Boniface
Hi,

I have tried to find help regarding an issue we experience with our
platform leading to random file descriptor peaks. This happens more often
on heavy load but can also happen on low traffic periods.

Our application is using Servlet 3.0 async features and an async connector.
We noticed that a lot of issues regarding the asynchronous features were fixed
between our production version and the latest stable build. We decided to
give it a try to see if it improves things, or at least gives clues on what
can cause the issue; unfortunately it did neither.

The file descriptor peaks and application blocking happen frequently with
this version, while they only happened rarely on the previous version (tomcat7
7.0.28-4).

Tomcat is behind an nginx server. The tomcat connector used is configured
as follows:

We use an Nio connector:
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           selectorTimeout="1000"
           maxThreads="200"
           maxHttpHeaderSize="16384"
           address="127.0.0.1"
           redirectPort="8443"/>

In catalina.out I can see some Broken pipe messages that were not happening
with the previous version.

I compared thread dumps from servers with both the new and the old version of
Tomcat and both look similar from my standpoint.

My explanation may not be very clear, but I hope this gives an idea of what we
are experiencing. Any pointer would be welcomed.

Thomas


Re: Cluster - Session replication error: Unable to apply diff

2015-04-20 Thread Keiichi Fujino
This NPE is caused by applying the diff data to a ReplicatedMapEntry that has
no MapOwner set.
Usually a ReplicatedMapEntry always has to have a MapOwner.
I think this is probably a bug.
Please open a Bugzilla entry.
I will scrutinize the code.

2015-04-20 15:04 GMT+09:00 Keiichi Fujino kfuj...@apache.org:

 Hi

 Are there other error or exception in your log?

 Please show us your cluster configuration in your server.xml.
 e.g.
 - What is mapSendOptions?
 - Which Interceptor do you use?



 2015-04-15 3:55 GMT+09:00 Théo Chamley theo...@mley.fr:

 Hello,

 I have a working Tomcat 8.0.15 cluster with 3 members with the
 BackupManager as session manager.
 The session replication is mostly working except in a few cases. In those
 cases, I get the following error:

 09-Apr-2015 12:16:58.369 SEVERE [Tribes-Task-Receiver-6]
 org.apache.catalina.tribes.tipis.AbstractReplicatedMap.messageReceived
 Unable to apply diff to key:3B286B4C7CA060163A00988969D21923
  java.lang.NullPointerException
 at
 org.apache.catalina.ha.session.DeltaSession.applyDiff(DeltaSession.java:164)
 at
 org.apache.catalina.tribes.tipis.AbstractReplicatedMap.messageReceived(AbstractReplicatedMap.java:664)
 at
 org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:293)
 at
 org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:81)
 at
 org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:112)
 at
 org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:81)
 at
 org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:81)
 at
 org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor.messageReceived(ThroughputInterceptor.java:89)
 at
 org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:81)
 at
 org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:260)
 at
 org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:240)
 at
 org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:206)
 at
 org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:97)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)


 I was able to replicate the problem with a scenario in the application,
 but I was not able to understand the underlying problem.
 This happens when the user is making a very specific request and this
 request arrives on a Tomcat where his session is not stored, forcing the
 Tomcat to fetch the session elsewhere.

 The 3 tomcats are on the same network with a very low network latency.

 Does anybody has some advice on how to debug this problem?

 For now, I got around it with sticky sessions on mod_jk, but I find this
 very unsatisfactory.

 Thank you in advance for your help,

 //Théo

 --
 Keiichi.Fujino




-- 
Keiichi.Fujino


Re: org.apache.catalina.authenticator.SingleSignOn.sessionEvent() equivalent in Tomcat 8

2015-04-20 Thread Mark Thomas
On 20/04/2015 06:41, Thusitha Thilina Dayaratne wrote:
 Hi
 
 In tomcat 7 there is a method to set session event as follows in the
 org.apache.catalina.authenticator.SingleSignOn
 void sessionEvent(SessionEvent event)
 
 But this method has been removed in tomcat 8.
 Is there an equivalent in Tomcat 8? Or is there a specific reason to
 remove it?
 I think I found the answer:
 I can get the SessionListener and configure things through that
 
 getSessionListener(ssID).sessionEvent(event)
 
 But how can I get the ssoID?

That is not a setter, it is a handler that responds to session events.
Note that in Tomcat 8 the Listener and the Valve are in separate classes.

What is it that you are trying to do?

Mark


 
 
 2015-04-20 11:05 GMT+05:30 Thusitha Thilina Dayaratne 
 thusithathil...@gmail.com:
 
 Hi

 In tomcat 7 there is a method to set session event as follows in the
 org.apache.catalina.authenticator.SingleSignOn
 void sessionEvent(SessionEvent event)

 But this method has been removed in tomcat 8.
 Is there a equivalent in Tomcat 8? Or is there a specific reason to
 remove that?
 I think I found the answer
 I can get the sessionListner and configure through that

 getSessionListener(ssID).sessionEvent(event)


 Thanks

 Best Regards


 2015-04-20 11:01 GMT+05:30 Thusitha Thilina Dayaratne 
 thusithathil...@gmail.com:

 Hi,

 In tomcat 7 there is a method to set session event as follows in the
 org.apache.catalina.authenticator.SingleSignOn
 void sessionEvent(SessionEvent event)

 But this method has been removed in tomcat 8.
 Is there a equivalent in Tomcat 8? Or is there a specific reason to
 remove that?

 Thanks
 Best Regards




Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread André Warnier

Thomas Boniface wrote:

I did some captures during a peak this morning, I have some lsof and
netstat data.

It seems to me that most file descriptors used by tomcat are some http
connections:

 thomas@localhost  ~/ads3/tbo11h12  cat lsof| wc -l
17772
 thomas@localhost  ~/ads3/tbo11h12  cat lsof | grep TCP | wc -l
13966

(Note that the application also send request to external servers via http)


Regarding netstat I did a small script to try to aggregate connections with
a human readable name, if my script is right the connections between nginx
and tomcat are as follows:

tomcat = nginx SYN_RECV 127
tomcat = nginx ESTABLISHED 1650
tomcat = nginx CLOSE_WAIT 8381
tomcat = nginx TIME_WAIT 65

nginx = tomcat SYN_SENT 20119
nginx = tomcat ESTABLISHED 4692
nginx = tomcat TIME_WAIT 122
nginx = tomcat FIN_WAIT2 488
nginx = tomcat FIN_WAIT1 13


I don't understand the distinction here.  Tomcat should never initiate connections *to* 
nginx, or ?


For personal historical reasons, the high number of connections in CLOSE_WAIT state above 
triggered my interest.  Search Google for : tcp close_wait state meaning
Basically, it can mean that the client wants to go away, and closes its end of the 
connection to the server, but the application on the server never properly closes the 
connection to the client. And as long as it doesn't, the corresponding connection will 
remain stuck in the CLOSE_WAIT state (and continue to use resources on the server, such as 
an fd and associated resources).

All that doesn't mean that this is your main issue here, but it's something to 
look into.




Concerning the other response and the system max number of file, I am not
sure this is where our issue lies. The peak itself seems to be a sympton of
an issue, tomcat fd are around 1000 almost all the time except when a peak
occurs. In such cases it can go up to 1 or more sometimes.

Thomas



2015-04-20 15:41 GMT+02:00 Rainer Jung rainer.j...@kippdata.de:


On 20.04.2015 at 14:11, Thomas Boniface wrote:


Hi,

I have tried to find help regarding an issue we experience with our
platform leading to random file descriptor peaks. This happens more often
on heavy load but can also happen on low traffic periods.

Our application is using servlet 3.0 async features and an async
connector.
We noticed that a lot of issues regarding asynchronous feature were fixed
between our production version and the last stable build. We decided to
give it a try to see if it improves things or at least give clues on what
can cause the issue; Unfortunately it did neither.

The file descriptor peaks and application blocking happens frequently with
this version when it only happens rarely on previous version (tomcat7
7.0.28-4).

Tomcat is behind an nginx server. The tomcat connector used is configured
as follows:

We use an Nio connector:
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           selectorTimeout="1000"
           maxThreads="200"
           maxHttpHeaderSize="16384"
           address="127.0.0.1"
           redirectPort="8443"/>

In catalina I can see some Broken pipe message that were not happening
with
previous version.

I compared thread dumps from server with both the new and old version of
tomcat and both look similar from my stand point.

My explanation may not be very clear, but I hope this gives an idea how
what we are experiencing. Any pointer would be welcomed.


If the peaks happen long enough and your platforms has the tools available
you can use lsof to look for what those FDs are - or on Linux looking at
ls -l /proc/PID/fd/* (PID is the process PID file) - or on Solaris use
the pfiles command.

If the result is what is expected, namely that by far the most FDs are
coming from network connections for port 8080, then you can check via
netstat in which connection state those are.

If most are in ESTABLISHED state, then you/we need to further break down
the strategy.

Regards,

Rainer












Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Thomas Boniface
I did some captures during a peak this morning, I have some lsof and
netstat data.

It seems to me that most file descriptors used by tomcat are some http
connections:

 thomas@localhost  ~/ads3/tbo11h12  cat lsof| wc -l
17772
 thomas@localhost  ~/ads3/tbo11h12  cat lsof | grep TCP | wc -l
13966

(Note that the application also sends requests to external servers via http.)


Regarding netstat, I did a small script to try to aggregate connections with
a human readable name; if my script is right, the connections between nginx
and tomcat are as follows:

tomcat => nginx SYN_RECV 127
tomcat => nginx ESTABLISHED 1650
tomcat => nginx CLOSE_WAIT 8381
tomcat => nginx TIME_WAIT 65

nginx => tomcat SYN_SENT 20119
nginx => tomcat ESTABLISHED 4692
nginx => tomcat TIME_WAIT 122
nginx => tomcat FIN_WAIT2 488
nginx => tomcat FIN_WAIT1 13

Concerning the other response and the system max number of files, I am not
sure this is where our issue lies. The peak itself seems to be a symptom of
an issue; tomcat fds are around 1000 almost all the time, except when a peak
occurs. In such cases it can go up to 1 or more sometimes.

Thomas



2015-04-20 15:41 GMT+02:00 Rainer Jung rainer.j...@kippdata.de:

 On 20.04.2015 at 14:11, Thomas Boniface wrote:

 Hi,

 I have tried to find help regarding an issue we experience with our
 platform leading to random file descriptor peaks. This happens more often
 on heavy load but can also happen on low traffic periods.

 Our application is using servlet 3.0 async features and an async
 connector.
 We noticed that a lot of issues regarding asynchronous feature were fixed
 between our production version and the last stable build. We decided to
 give it a try to see if it improves things or at least give clues on what
 can cause the issue; Unfortunately it did neither.

 The file descriptor peaks and application blocking happens frequently with
 this version when it only happens rarely on previous version (tomcat7
 7.0.28-4).

 Tomcat is behind an nginx server. The tomcat connector used is configured
 as follows:

 We use an Nio connector:
 <Connector port="8080"
            protocol="org.apache.coyote.http11.Http11NioProtocol"
            selectorTimeout="1000"
            maxThreads="200"
            maxHttpHeaderSize="16384"
            address="127.0.0.1"
            redirectPort="8443"/>

 In catalina I can see some Broken pipe message that were not happening
 with
 previous version.

 I compared thread dumps from server with both the new and old version of
 tomcat and both look similar from my stand point.

 My explanation may not be very clear, but I hope this gives an idea how
 what we are experiencing. Any pointer would be welcomed.


 If the peaks happen long enough and your platforms has the tools available
 you can use lsof to look for what those FDs are - or on Linux looking at
 ls -l /proc/PID/fd/* (PID is the process PID file) - or on Solaris use
 the pfiles command.

 If the result is what is expected, namely that by far the most FDs are
 coming from network connections for port 8080, then you can check via
 netstat in which connection state those are.

 If most are in ESTABLISHED state, then you/we need to further break down
 the strategy.

 Regards,

 Rainer






Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Rainer Jung

On 20.04.2015 at 15:41, Rainer Jung wrote:

On 20.04.2015 at 14:11, Thomas Boniface wrote:

Hi,

I have tried to find help regarding an issue we experience with our
platform leading to random file descriptor peaks. This happens more often
on heavy load but can also happen on low traffic periods.

Our application is using servlet 3.0 async features and an async
connector.
We noticed that a lot of issues regarding asynchronous feature were fixed
between our production version and the last stable build. We decided to
give it a try to see if it improves things or at least give clues on what
can cause the issue; Unfortunately it did neither.

The file descriptor peaks and application blocking happens frequently
with
this version when it only happens rarely on previous version (tomcat7
7.0.28-4).

Tomcat is behind an nginx server. The tomcat connector used is configured
as follows:

We use an Nio connector:
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           selectorTimeout="1000"
           maxThreads="200"
           maxHttpHeaderSize="16384"
           address="127.0.0.1"
           redirectPort="8443"/>

In catalina I can see some Broken pipe message that were not happening
with
previous version.

I compared thread dumps from server with both the new and old
version of
tomcat and both look similar from my stand point.

My explanation may not be very clear, but I hope this gives an idea how
what we are experiencing. Any pointer would be welcomed.


If the peaks happen long enough and your platforms has the tools
available you can use lsof to look for what those FDs are - or on Linux
looking at ls -l /proc/PID/fd/* (PID is the process PID file) - or on
Solaris use the pfiles command.

If the result is what is expected, namely that by far the most FDs are
coming from network connections for port 8080, then you can check via
netstat in which connection state those are.

If most are in ESTABLISHED state, then you/we need to further break down
the strategy.


One more thing: the connection peak might happen, if for some reason 
your application or the JVM (GC) gets slow. The reason doesn't have to 
still be there at the time when you take the thread dump.


You might want to add %D to your Tomcat access log pattern and try to estimate
whether the connection peaks are due to a (temporary) application slow down.
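
For example, an AccessLogValve pattern including %D (request duration in
milliseconds; the rest of the pattern is the common default, shown here only
as an illustration) might look like:

    <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
           prefix="localhost_access_log" suffix=".txt"
           pattern="%h %l %u %t &quot;%r&quot; %s %b %D" />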


The same holds for activating a GC log and checking for long or many
cumulative GC pauses.
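
On a Java 7/8 HotSpot JVM that could be enabled with startup flags along these
lines (the log path is illustrative):

    -verbose:gc -Xloggc:/var/log/tomcat7/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps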


Regards,

Rainer




Re: French chars not displayed correctly - Tomcat 7

2015-04-20 Thread Neill Lima
Try adding this to your html:

<head>
<meta charset="UTF-8">
</head>

On Mon, Apr 20, 2015 at 4:45 PM, radiatejava radiatej...@gmail.com wrote:

 Hello Tomcat users, I have code like this in my JSP:

 <td>${pageKeys.ui_user_name_label};</td>

 Where td represents a cell of an html table.

 The value of the variable pageKeys.ui_user_name_label, populated on the
 server side, is in French and it is exactly as shown below:

 Nom d'utilisateur

 However, when displayed in the browser, it shows up as:

 Nom d&rsquo;utilisateur


 How can I get this corrected? When I look at the generated source code, I
 see something like this:

 out.write((java.lang.String)
     org.apache.jasper.runtime.PageContextImpl.proprietaryEvaluate("${pageKeys.ui_user_name_label}",
     java.lang.String.class,
     (javax.servlet.jsp.PageContext)_jspx_page_context, null, false));

 What I want to know from the developers here is whether the above code
 really does some escaping/encoding of the content? Is there a way
 to avoid this encoding?

 Appreciate your reply.

 Thanks.





French chars not displayed correctly - Tomcat 7

2015-04-20 Thread radiatejava
Hello Tomcat users, I have code like this in my JSP:

<td>${pageKeys.ui_user_name_label};</td>

Where td represents a cell of an html table.

The value of the variable pageKeys.ui_user_name_label, populated on the
server side, is in French and it is exactly as shown below:

Nom d'utilisateur

However, when displayed in the browser, it shows up as:

Nom d&rsquo;utilisateur


How can I get this corrected? When I look at the generated source code, I
see something like this:

out.write((java.lang.String)
    org.apache.jasper.runtime.PageContextImpl.proprietaryEvaluate("${pageKeys.ui_user_name_label}",
    java.lang.String.class,
    (javax.servlet.jsp.PageContext)_jspx_page_context, null, false));

What I want to know from the developers here is whether the above code
really does some escaping/encoding of the content? Is there a way
to avoid this encoding?

Appreciate your reply.

Thanks.




Re: French chars not displayed correctly - Tomcat 7

2015-04-20 Thread radiatejava
Of course I have that in my html page.
My aim was to ask whether tomcat (its libraries) itself is doing
encoding of special chars.

Thanks.

On Mon, Apr 20, 2015 at 8:19 PM, Neill Lima neill.l...@visual-meta.com wrote:
 Try adding this to your html:

 <head>
 <meta charset="UTF-8">
 </head>





Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Thomas Boniface
Hi,

Both nginx and tomcat are hosted on the same server. When listing the
connections I see both the connections from nginx to tomcat (the first ones
created) and the ones from tomcat to nginx used to reply. I may have
presented things the wrong way though (I'm not too good at the system
level).

I do agree the high number of CLOSE_WAIT seems strange; I really feel like
nginx closed the connection before tomcat did (which I think leads to the
broken pipe exceptions observed in catalina.out). In case someone wants
to have a look, I uploaded a netstat log here:
http://www.filedropper.com/netsat

Thomas

2015-04-20 17:13 GMT+02:00 André Warnier a...@ice-sa.com:

 Thomas Boniface wrote:

 I did some captures during a peak this morning, I have some lsof and
 netstat data.

 It seems to me that most file descriptors used by tomcat are some http
 connections:

  thomas@localhost  ~/ads3/tbo11h12  cat lsof| wc -l
 17772
  thomas@localhost  ~/ads3/tbo11h12  cat lsof | grep TCP | wc -l
 13966

 (Note that the application also send request to external servers via http)


 Regarding netstat I did a small script to try to aggregate connections
 with
 a human readable name, if my script is right the connections between nginx
 and tomcat are as follows:

 tomcat = nginx SYN_RECV 127
 tomcat = nginx ESTABLISHED 1650
 tomcat = nginx CLOSE_WAIT 8381
 tomcat = nginx TIME_WAIT 65

 nginx = tomcat SYN_SENT 20119
 nginx = tomcat ESTABLISHED 4692
 nginx = tomcat TIME_WAIT 122
 nginx = tomcat FIN_WAIT2 488
 nginx = tomcat FIN_WAIT1 13


 I don't understand the distinction here.  Tomcat should never initiate
 connections *to* nginx, or ?

 For personal historical reasons, the high number of connections in
 CLOSE_WAIT state above triggered my interest.  Search Google for : tcp
 close_wait state meaning
 Basically, it can mean that the client wants to go away, and closes its
 end of the connection to the server, but the application on the server
 never properly closes the connection to the client. And as long as it
 doesn't, the corresponding connection will remain stuck in the CLOSE_WAIT
 state (and continue to use resources on the server, such as an fd and
 associated resources).
 All that doesn't mean that this is your main issue here, but it's
 something to look into.




 Concerning the other response and the system max number of file, I am not
 sure this is where our issue lies. The peak itself seems to be a sympton
 of
 an issue, tomcat fd are around 1000 almost all the time except when a peak
 occurs. In such cases it can go up to 1 or more sometimes.

 Thomas



 2015-04-20 15:41 GMT+02:00 Rainer Jung rainer.j...@kippdata.de:

  On 20.04.2015 at 14:11, Thomas Boniface wrote:

  Hi,

 I have tried to find help regarding an issue we experience with our
 platform leading to random file descriptor peaks. This happens more
 often
 on heavy load but can also happen on low traffic periods.

 Our application is using servlet 3.0 async features and an async
 connector.
 We noticed that a lot of issues regarding asynchronous feature were
 fixed
 between our production version and the last stable build. We decided to
 give it a try to see if it improves things or at least give clues on
 what
 can cause the issue; Unfortunately it did neither.

 The file descriptor peaks and application blocking happens frequently
 with
 this version when it only happens rarely on previous version (tomcat7
 7.0.28-4).

 Tomcat is behind an nginx server. The tomcat connector used is
 configured
 as follows:

 We use an Nio connector:
 <Connector port="8080"
            protocol="org.apache.coyote.http11.Http11NioProtocol"
            selectorTimeout="1000"
            maxThreads="200"
            maxHttpHeaderSize="16384"
            address="127.0.0.1"
            redirectPort="8443"/>

 In catalina I can see some Broken pipe message that were not happening
 with
 previous version.

 I compared thread dumps from server with both the new and old version
 of
 tomcat and both look similar from my stand point.

 My explanation may not be very clear, but I hope this gives an idea how
 what we are experiencing. Any pointer would be welcomed.

  If the peaks happen long enough and your platforms has the tools
 available
 you can use lsof to look for what those FDs are - or on Linux looking at
 ls -l /proc/PID/fd/* (PID is the process PID file) - or on Solaris use
 the pfiles command.

 If the result is what is expected, namely that by far the most FDs are
 coming from network connections for port 8080, then you can check via
 netstat in which connection state those are.

 If most are in ESTABLISHED state, then you/we need to further break down
 the strategy.

 Regards,

 Rainer








Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Thomas Boniface
Thanks for your time Rainer,

I get what you mean regarding the application getting slow. This server was
also logging the garbage collection activity and it seems normal even when
the problem is occurring; there is no big variation in the time taken to do
a garbage collection operation.

I don't have a clear view of the server response time around the test I
made, so I can't tell if the application gets slow before the file
descriptor peak, but as mentioned before this also happens during low traffic
periods (and in such a period there should be no reason to get slow). Also,
it feels unexpected that this version of Tomcat would make the application
get slow more often than a server with the other version of Tomcat.

Thomas


2015-04-20 16:32 GMT+02:00 Rainer Jung rainer.j...@kippdata.de:

 On 20.04.2015 at 15:41, Rainer Jung wrote:

 On 20.04.2015 at 14:11, Thomas Boniface wrote:

 Hi,

 I have tried to find help regarding an issue we experience with our
 platform leading to random file descriptor peaks. This happens more often
 on heavy load but can also happen on low traffic periods.

 Our application is using servlet 3.0 async features and an async
 connector.
 We noticed that a lot of issues regarding asynchronous feature were fixed
 between our production version and the last stable build. We decided to
 give it a try to see if it improves things or at least give clues on what
 can cause the issue; Unfortunately it did neither.

 The file descriptor peaks and application blocking happens frequently
 with
 this version when it only happens rarely on previous version (tomcat7
 7.0.28-4).

 Tomcat is behind an nginx server. The tomcat connector used is configured
 as follows:

 We use an Nio connector:
 <Connector port="8080"
            protocol="org.apache.coyote.http11.Http11NioProtocol"
            selectorTimeout="1000"
            maxThreads="200"
            maxHttpHeaderSize="16384"
            address="127.0.0.1"
            redirectPort="8443"/>

 In catalina I can see some Broken pipe message that were not happening
 with
 previous version.

 I compared thread dumps from server with both the new and old
 version of
 tomcat and both look similar from my stand point.

 My explanation may not be very clear, but I hope this gives an idea how
 what we are experiencing. Any pointer would be welcomed.


 If the peaks happen long enough and your platforms has the tools
 available you can use lsof to look for what those FDs are - or on Linux
 looking at ls -l /proc/PID/fd/* (PID is the process PID file) - or on
 Solaris use the pfiles command.

 If the result is what is expected, namely that by far the most FDs are
 coming from network connections for port 8080, then you can check via
 netstat in which connection state those are.

 If most are in ESTABLISHED state, then you/we need to further break down
 the strategy.


 One more thing: the connection peak might happen, if for some reason your
 application or the JVM (GC) gets slow. The reason doesn't have to still be
 there at the time when you take the thread dump.

 You might want to add %D to your Tomcat access log and ty to estimate,
 whether the connection peaks are due to (temporary) application slow down.

 The same holds for activating a GC log and check for long or many
 cumulative GC pauses.


 Regards,

 Rainer





Re: Cluster - Session replication error: Unable to apply diff

2015-04-20 Thread Keiichi Fujino
Hi

Are there any other errors or exceptions in your log?

Please show us your cluster configuration in your server.xml.
e.g.
- What is mapSendOptions?
- Which Interceptor do you use?
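
For reference, the relevant part of such a configuration typically looks
something like this (attribute values are illustrative defaults, not taken
from this thread; membership/receiver/sender elements are omitted):

    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
             channelSendOptions="8">
      <Manager className="org.apache.catalina.ha.session.BackupManager"
               mapSendOptions="6"/>
      <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
      </Channel>
    </Cluster>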



2015-04-15 3:55 GMT+09:00 Théo Chamley theo...@mley.fr:

 Hello,

 I have a working Tomcat 8.0.15 cluster with 3 members with the
 BackupManager as session manager.
 The session replication is mostly working except in a few cases. In those
 cases, I get the following error:

 09-Apr-2015 12:16:58.369 SEVERE [Tribes-Task-Receiver-6]
 org.apache.catalina.tribes.tipis.AbstractReplicatedMap.messageReceived
 Unable to apply diff to key:3B286B4C7CA060163A00988969D21923
  java.lang.NullPointerException
 at
 org.apache.catalina.ha.session.DeltaSession.applyDiff(DeltaSession.java:164)
 at
 org.apache.catalina.tribes.tipis.AbstractReplicatedMap.messageReceived(AbstractReplicatedMap.java:664)
 at
 org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:293)
 at
 org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:81)
 at
 org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:112)
 at
 org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:81)
 at
 org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:81)
 at
 org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor.messageReceived(ThroughputInterceptor.java:89)
 at
 org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:81)
 at
 org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:260)
 at
 org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:240)
 at
 org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:206)
 at
 org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:97)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)


 I was able to replicate the problem with a scenario in the application,
 but I was not able to understand the underlying problem.
 This happens when the user is making a very specific request and this
 request arrives on a Tomcat where his session is not stored, forcing the
 Tomcat to fetch the session elsewhere.

 The 3 tomcats are on the same network with a very low network latency.

 Does anybody has some advice on how to debug this problem?

 For now, I got around it with sticky sessions on mod_jk, but I find this
 very unsatisfactory.

 Thank you in advance for your help,

 //Théo

 --
 Keiichi.Fujino



Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Rainer Jung

On 20.04.2015 at 17:40, Thomas Boniface wrote:

Hi,

Both nginx and tomcat are hosted on the same server when listing the
connections I see both the connections from nginx to tomcat (the first one
create) and the one from tomcat to nginx used to reply. I may have
presented things the bad way though (I'm not too good regarding system
level).

I do agree the high number of close wait seems strange, I really feel like
nginx closed the connection before tomcat did (what I think leads to the
broken pipe expections observed in the catalina.out). In case someone want
to have a look I uploaded a netstat log here:
http://www.filedropper.com/netsat


The connection statistics between clients and nginx http port is:

  Count   IP:Port ConnectionState
  45467  178.32.101.62:80 TIME_WAIT
  44745178.33.42.6:80 TIME_WAIT
  26093178.33.42.6:80 ESTABLISHED
  25667  178.32.101.62:80 ESTABLISHED
   6898  178.32.101.62:80 FIN_WAIT2
   6723178.33.42.6:80 FIN_WAIT2
800  178.32.101.62:80 FIN_WAIT1
792178.33.42.6:80 FIN_WAIT1
712  178.32.101.62:80 LAST_ACK
656178.33.42.6:80 LAST_ACK
234178.33.42.6:80 SYN_RECV
232  178.32.101.62:80 SYN_RECV
 18178.33.42.6:80 CLOSING
  8  178.32.101.62:80 CLOSING
  1178.33.42.6:80 CLOSE_WAIT
  10.0.0.0:80 LISTEN

So lots of connections in TIME_WAIT state, which is kind of expected for
a web server handling lots of short-lived client connections, but it slows
down the IP stack. Also quite a lot of established connections (about
50,000!), which means that you probably want to check whether you can
reduce your keep-alive timeout for nginx.


The same statistics for the https port:

  Count   IP:Port ConnectionState
   2283 178.32.101.62:443 TIME_WAIT
   2125   178.33.42.6:443 TIME_WAIT
   1585 178.32.101.62:443 ESTABLISHED
   1493   178.33.42.6:443 ESTABLISHED
484 178.32.101.62:443 FIN_WAIT2
420   178.33.42.6:443 FIN_WAIT2
 47 178.32.101.62:443 FIN_WAIT1
 46   178.33.42.6:443 FIN_WAIT1
 25 178.32.101.62:443 LAST_ACK
 17   178.33.42.6:443 SYN_RECV
 16 178.32.101.62:443 SYN_RECV
 16   178.33.42.6:443 LAST_ACK
 10   178.33.42.6:443 CLOSING
  4 178.32.101.62:443 CLOSING
  1   0.0.0.0:443 LISTEN

About the same relative picture but only about 5% of the http connection 
counts.


The incoming connection statistics for Tomcat (port 8080) is:

  Count   IP:Port ConnectionState
   8381127.0.0.1:8080 CLOSE_WAIT
   1650127.0.0.1:8080 ESTABLISHED
127127.0.0.1:8080 SYN_RECV
 65127.0.0.1:8080 TIME_WAIT
  1   172.16.1.3:8080 LISTEN
  1127.0.0.1:8080 LISTEN

The many CLOSE_WAIT mean that the remote side (nginx) has already
closed the connection, but Tomcat has not yet. Probably the idle-connection
timeout / keep-alive timeout for connections between nginx and Tomcat is
lower on the nginx side than on the Tomcat side.
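
As an illustration of the two settings involved (the directive/attribute names
are real, but the values below are placeholders, not a recommendation from
this thread):

    # nginx.conf (illustrative value, in seconds)
    keepalive_timeout 20;

    <!-- server.xml connector (illustrative value, in milliseconds) -->
    <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
               keepAliveTimeout="20000" maxThreads="200" redirectPort="8443"/>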


Interestingly the same connections but viewed from the opposite side of 
the connection (nginx) have totally different statistics:


  Count   IP:Port ConnectionState
  20119127.0.0.1:8080 SYN_SENT
   4692127.0.0.1:8080 ESTABLISHED
488127.0.0.1:8080 FIN_WAIT2
122127.0.0.1:8080 TIME_WAIT
 13127.0.0.1:8080 FIN_WAIT1

I wonder why we have 4692 established connections here, but only 1650 in
the table above. In a static situation the numbers should be the same.
It indicates that there is so much churn that the numbers vary a lot
even while netstat runs.


We see a lot of SYN_SENT, so nginx wants to open many more connections 
to Tomcat but doesn't get them as quickly as it wants.


Finally there's a bunch of connections to remote web services:

  CountIP:Port ConnectionState
286  95.85.3.86:80 CLOSE_WAIT
255   46.228.164.12:80 ESTABLISHED
209   188.125.82.65:80 CLOSE_WAIT
172  176.74.173.230:80 ESTABLISHED
170   54.171.53.252:80 CLOSE_WAIT
136   188.125.82.65:80 LAST_ACK
129  95.85.3.86:80 LAST_ACK
128  23.212.108.209:80 CLOSE_WAIT
106  46.137.157.249:80 CLOSE_WAIT
10181.19.244.69:80 ESTABLISHED
 86   146.148.30.94:80 CLOSE_WAIT
 8346.137.83.90:80 CLOSE_WAIT
 80   188.125.82.65:80 ESTABLISHED
 78  37.252.163.221:80 CLOSE_WAIT
 7746.137.83.90:80 ESTABLISHED
 73  46.137.157.121:80 CLOSE_WAIT
 6454.246.89.98:80 CLOSE_WAIT
 63  173.194.40.153:80 ESTABLISHED
 6193.176.80.69:80 ESTABLISHED
 55  23.212.108.198:80 CLOSE_WAIT
 5354.72.204.78:80 CLOSE_WAIT
 51  37.252.162.230:80 CLOSE_WAIT
 51  173.194.40.154:80 ESTABLISHED
 50  54.247.113.157:80 CLOSE_WAIT
 50   37.252.170.98:80 CLOSE_WAIT
 49  23.212.108.191:80 CLOSE_WAIT
 47   54.154.23.133:80 CLOSE_WAIT
 43  176.34.179.135:80 CLOSE_WAIT
 39   146.148.21.73:80 CLOSE_WAIT
 36   46.137.87.196:80 CLOSE_WAIT
 34  173.194.40.154:80 CLOSE_WAIT
 30   46.137.87.163:80