JMX currentThreadsBusy less than connections/requests when use APR connector

2017-03-07 Thread linbo liao
Hi,

I setup local environment to test Tomcat monitor.

The Environment:

Tomcat: 8.5.5
VM: Ubuntu 14.04.1 LTS
HTTP PORT: 8080
IP: 10.211.55.4

Tomcat uses the APR connector. I tested it with the ab command and found that
JMX currentThreadsBusy stays below 10 the whole time.

ab -n 10 -c 100 10.211.55.4:8080/
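
For reference, currentThreadsBusy is an attribute of the Catalina:type=ThreadPool
MBean. Below is a minimal sketch of reading it programmatically; the JMX service
URL/port and the idea that remote JMX is enabled on the Tomcat JVM are assumptions
about this particular setup, not something Tomcat configures by default.

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BusyThreadsProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical remote JMX endpoint; requires the Tomcat JVM to be started
        // with -Dcom.sun.management.jmxremote.port=9010 (plus auth/SSL settings).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://10.211.55.4:9010/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Query every ThreadPool MBean rather than hard-coding the connector name.
            Set<ObjectName> pools = mbs.queryNames(
                    new ObjectName("Catalina:type=ThreadPool,name=*"), null);
            for (ObjectName pool : pools) {
                System.out.println(pool.getKeyProperty("name")
                        + " currentThreadsBusy=" + mbs.getAttribute(pool, "currentThreadsBusy")
                        + " currentThreadCount=" + mbs.getAttribute(pool, "currentThreadCount"));
            }
        }
    }
}

For what it's worth, with the APR and NIO connectors a thread is only counted as
busy while it is actually processing a request; idle keep-alive connections do not
hold a thread, so currentThreadsBusy is typically much lower than the number of
open connections.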

I tried to find out why, but without success. With the BIO connector each
thread handles one connection, so currentThreadsBusy reflects how busy Tomcat
is.

But with the APR connector, what does currentThreadsBusy actually mean?

Thanks in advance.

Thanks,
Linbo


Re: Tomcat WebSocket does not always send asynchronous messages

2017-03-07 Thread Mark Thomas
On 07/03/17 14:55, Mark Thomas wrote:
> On 07/03/17 11:03, Mark Thomas wrote:
>> On 07/03/17 08:28, Pesonen, Harri wrote:
>>> Hello, we have a problem that Tomcat WebSocket does not always send 
>>> asynchronous messages. This problem is random, and it has been reproduced 
>>> in Tomcat 8.5.6 and 8.5.11. Synchronized operations work fine, and also the 
>>> asynchronous operations work except in one special case. When there is a 
>>> big message that we want to send to client, we split it into 16 kB packets 
>>> for technical reasons, and then we send them very quickly after each other 
>>> using
>>>
>>> /**
>>> * Initiates the asynchronous transmission of a binary message. This method 
>>> returns before the message
>>> * is transmitted. Developers provide a callback to be notified when the 
>>> message has been
>>> * transmitted. Errors in transmission are given to the developer in the 
>>> SendResult object.
>>> *
>>> * @param data   the data being sent, must not be {@code null}.
>>> * @param handler the handler that will be notified of progress, must not be 
>>> {@code null}.
>>> * @throws IllegalArgumentException if either the data or the handler are 
>>> {@code null}.
>>> */
>>> void sendBinary(ByteBuffer data, SendHandler handler);
>>>
>>> Because there can be only one ongoing write to socket, we use Semaphore 
>>> that is released on the SendHandler callback:
>>>
>>> public void onResult(javax.websocket.SendResult result) {
>>> semaphore.release();
>>>
>>> So the code to send is actually:
>>>
>>> semaphore.acquireUninterruptibly();
>>> async.sendBinary(buf, asyncHandler);
>>>
>>> This works fine in most cases. But when we send one 16 kB packet and then 
>>> immediately one smaller packet (4 kB), then randomly the smaller packet is 
>>> not actually sent, but only after we call
>>>
>>> async.sendPing(new byte[0])
>>>
>>> in another thread. sendPing() is called every 20 seconds to keep the 
>>> WebSocket connection alive. This means that the last packet gets extra 
>>> delay on client, which varies between 0 - 20 seconds.
>>>
>>> We have an easy workaround to the problem. If we call flushBatch() after 
>>> each sendBinary(), then it works great, but this means that the sending is 
>>> not actually asynchronous, because flushBatch() is synchronous.
>>> Also we should not be forced to call flushBatch(), because we are not 
>>> enabling batching. Instead we make sure that it is disabled:
>>>
>>> if (async.getBatchingAllowed()) {
>>> async.setBatchingAllowed(false);
>>>
>>> So the working code is:
>>>
>>> semaphore.acquireUninterruptibly();
>>> async.sendBinary(buf, asyncHandler);
>>> async.flushBatch();
>>>
>>> Normally the code works fine without flushBatch(), if there is delay 
>>> between the messages, but when we send the messages right after each other, 
>>> then the last small message is not always sent immediately.
>>> I looked at the Apache WebSocket code, but it was not clear to me what is 
>>> happening there.
>>> Any ideas what is going on here? Any ideas how I could troubleshoot this 
>>> more?
>>
>> Thanks for providing such a clear description of the problem you are seeing.
>>
>> It sounds like there is a race condition somewhere in the WebSocket
>> code. With the detail you have provided, I think there is a reasonable
>> chance of finding it via code inspection.
> 
> Some follow-up questions to help narrow the search.
> 
> This is server side, correct?
> 
> Are you using the compression extension? If yes, do you see the problem
> without it?
> 
> When you say "we split it into 16 kB packets" do you mean you split it
> into multiple WebSocket messages?
> 
> If you insert a short delay before sending the final 4kB does that
> reduce the frequency of the problem?

I've added a (disabled by default) test case to explore the issue described,
based on my understanding. It passes for me (ignoring what look like
GC-introduced delays) with NIO.

http://svn.apache.org/viewvc?rev=1785893&view=rev

What would be really helpful would be if you could use this as a basis
for providing a test case that demonstrates the problem you are seeing.

Thanks,

Mark





Re: How to enable Native Memory Tracking(NMT) in Tomcat?

2017-03-07 Thread Suvendu Sekhar Mondal
It worked, Mark! When I launched Tomcat from the command line, it used the
Bootstrap loader, and JMC could then print the NMT statistics.

I believe tracking native memory is not a common need. Still, it would be
great if the same thing could be done via changes to Apache Commons Daemon.

Thanks!
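
For anyone wanting the same VM.native_memory output programmatically rather
than through jcmd/JMC, here is a minimal sketch (an illustration, not something
Tomcat ships). It calls the HotSpot-specific DiagnosticCommand MBean from inside
the Tomcat JVM - for example from a small diagnostic servlet - and it only works
if the JVM was started with -XX:NativeMemoryTracking=summary or detail.

import java.lang.management.ManagementFactory;
import javax.management.JMException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public final class NativeMemoryReport {

    // Returns the same text as "jcmd <pid> VM.native_memory summary" for the
    // JVM this code runs in. Requires NMT to have been enabled at startup and
    // the HotSpot DiagnosticCommand MBean (Oracle/OpenJDK).
    public static String summary() throws JMException {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return (String) server.invoke(
                new ObjectName("com.sun.management:type=DiagnosticCommand"),
                "vmNativeMemory",
                new Object[] { new String[] { "summary" } },
                new String[] { String[].class.getName() });
    }
}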

On Fri, Mar 3, 2017 at 2:06 PM, Mark Thomas  wrote:
> On 03/03/17 06:20, Suvendu Sekhar Mondal wrote:
>> Mark,
>> I am running Tomcat as a Windows service.
>
> Then I suspect that supporting this would require changes in Apache
> Commons Daemon which is what Tomcat uses to start the Windows service.
>
> It would probably be quicker for you to run Tomcat from the command line
> to get the debugging information you require - assuming you have a
> one-off need. If you want ongoing monitoring then the Daemon changes
> would be the better option.
>
> Mark
>
>>
>> Thanks!
>> Suvendu
>>
>> On Thu, Mar 2, 2017 at 8:08 PM, Mark Thomas  wrote:
>>> On 02/03/17 10:54, Suvendu Sekhar Mondal wrote:
 Hello Everyone,

 I am new here. :)

 Environment:
 Java Version: Java HotSpot(TM) 64-Bit Server VM version 25.91-b15
 (Java version 1.8.0_91-b15)
 Tomcat Version: Tomcat 8.0.20
 OS Version: Microsoft Windows 8.1 Enterprise

 I am trying to enable Native Memory Tracking (NMT) to get internal
 memory usage details about the Tomcat process running on my system. I
 have added the following flags to Tomcat's Java options:
 -XX:+UnlockDiagnosticVMOptions
 -XX:NativeMemoryTracking=summary
 -XX:NativeMemoryTracking=detail

 After that I restarted Tomcat. When I try to use the "VM.native_memory"
 command from either jcmd or JMC, I get a "Native memory tracking
 is not enabled" message. When I shut down Tomcat, the following message
 is printed in the STDERR log:
 "Java HotSpot(TM) 64-Bit Server VM warning: Native Memory Tracking did
 not setup properly, using wrong launcher?".

 I found this article, which discusses the NMT problem with custom JVM
 launchers:
 https://blogs.oracle.com/poonam/entry/using_nmt_with_custom_jvm

 My question is: does that hold true for Tomcat as well? Is there any
 other way to enable NMT in Tomcat?
>>>
>>> No idea. How did you start Tomcat?
>>>
>>> Mark



Re: getRealPath is a bad idea?

2017-03-07 Thread Christopher Schultz

Cris,

On 3/7/17 10:27 AM, Berneburg, Cris J. - US wrote:
> [SNIP]
> 
>>> chris S>>> getRealPath is a bad idea. <<<
>>> 
>>> For my education's sake, would you please explain that?
>>> [SNIP]
>> 
>> There is no guarantee it will return a non-null value. The
>> typical reason is if the app is running from a packed WAR. Using
>> it reduces the portability of your application.
>> 
>> Mark
> 
> Thanks for explaining that.  Never occurred to me that running from
> a WAR would return null.
> 
> I used getRealPath thinking it would *increase* portability, since 
> yet-another-config-option would not need to be manually set (or 
> verified) after every deployment or in a different environment.
> "Why did the such-and-so fail?  Oh... I forgot to set the folder
> location - again."  By using getRealPath the setting never, ever
> needs to be configured - it's automatic.
> 
> But now I may need to rethink that.
> 
> BTW, why doesn't getRealPath return the full path to the folder
> that the WAR file is in instead of null?

You mean for a call like getRealPath("/")?

Well, that would require a path to be returned to the "root" of the
application. Let's say that ROOT.war is in
/home/tomcat/webapps/ROOT.war and also index.html is in the "root" of
the WAR file.

If you used getRealPath("/index.html") it would, as described, return
null -- because there's no file path that could get you to that file.

If you used getRealPath("/") and then added "/index.html" to the end
of it, you'd expect to be able to read index.html from the resulting
path (/home/tomcat/webapps/index.html). Not only does that not work
(because the file isn't there), it's not even the right path. The
"right" path (if there even is one) would be something like
"/home/tomcat/webapps/ROOT.war/index.html".

So when the WAR is not unpacked, there really isn't any meaningful
return value from getRealPath, even for special-cases like "" or "/".
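
For illustration, here is a minimal (hypothetical) sketch of the usual portable
pattern: treat a null return as "no file path available" and fall back to
ServletContext#getResourceAsStream, which works whether or not the WAR is
unpacked.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.ServletContext;

public class ResourceAccess {

    // Opens /index.html whether the webapp is deployed unpacked or as a packed WAR.
    static InputStream openIndex(ServletContext ctx) throws IOException {
        String realPath = ctx.getRealPath("/index.html");
        if (realPath != null) {
            // Unpacked deployment: the resource exists as a real file on disk.
            return new FileInputStream(realPath);
        }
        // Packed WAR (or other non-filesystem deployment): there is no file path,
        // but the resource can still be streamed out of the WAR.
        return ctx.getResourceAsStream("/index.html");
    }
}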

-chris




RE: getRealPath is a bad idea?

2017-03-07 Thread Berneburg, Cris J. - US
 [SNIP]

>> chris S>>> getRealPath is a bad idea. <<<
>> 
>> For my education's sake, would you please explain that?  [SNIP]
>
> There is no guarantee it will return a non-null value. The typical reason
> is if the app is running from a packed WAR. Using it reduces the portability
> of your application.
>
> Mark

Thanks for explaining that.  Never occurred to me that running from a WAR would 
return null.

I used getRealPath thinking it would *increase* portability, since 
yet-another-config-option would not need to be manually set (or verified) after 
every deployment or in a different environment.  "Why did the such-and-so fail? 
 Oh... I forgot to set the folder location - again."  By using getRealPath the 
setting never, ever needs to be configured - it's automatic.

But now I may need to rethink that.

BTW, why doesn't getRealPath return the full path to the folder that the WAR 
file is in instead of null?

--
Cris Berneburg
CACI Lead Software Engineer





Re: Tomcat WebSocket does not always send asynchronous messages

2017-03-07 Thread Mark Thomas
On 07/03/17 11:03, Mark Thomas wrote:
> On 07/03/17 08:28, Pesonen, Harri wrote:
>> Hello, we have a problem that Tomcat WebSocket does not always send 
>> asynchronous messages. This problem is random, and it has been reproduced in 
>> Tomcat 8.5.6 and 8.5.11. Synchronized operations work fine, and also the 
>> asynchronous operations work except in one special case. When there is a big 
>> message that we want to send to client, we split it into 16 kB packets for 
>> technical reasons, and then we send them very quickly after each other using
>>
>> /**
>> * Initiates the asynchronous transmission of a binary message. This method 
>> returns before the message
>> * is transmitted. Developers provide a callback to be notified when the 
>> message has been
>> * transmitted. Errors in transmission are given to the developer in the 
>> SendResult object.
>> *
>> * @param data   the data being sent, must not be {@code null}.
>> * @param handler the handler that will be notified of progress, must not be 
>> {@code null}.
>> * @throws IllegalArgumentException if either the data or the handler are 
>> {@code null}.
>> */
>> void sendBinary(ByteBuffer data, SendHandler handler);
>>
>> Because there can be only one ongoing write to socket, we use Semaphore that 
>> is released on the SendHandler callback:
>>
>> public void onResult(javax.websocket.SendResult result) {
>> semaphore.release();
>>
>> So the code to send is actually:
>>
>> semaphore.acquireUninterruptibly();
>> async.sendBinary(buf, asyncHandler);
>>
>> This works fine in most cases. But when we send one 16 kB packet and then 
>> immediately one smaller packet (4 kB), then randomly the smaller packet is 
>> not actually sent, but only after we call
>>
>> async.sendPing(new byte[0])
>>
>> in another thread. sendPing() is called every 20 seconds to keep the 
>> WebSocket connection alive. This means that the last packet gets extra delay 
>> on client, which varies between 0 - 20 seconds.
>>
>> We have an easy workaround to the problem. If we call flushBatch() after 
>> each sendBinary(), then it works great, but this means that the sending is 
>> not actually asynchronous, because flushBatch() is synchronous.
>> Also we should not be forced to call flushBatch(), because we are not 
>> enabling batching. Instead we make sure that it is disabled:
>>
>> if (async.getBatchingAllowed()) {
>> async.setBatchingAllowed(false);
>>
>> So the working code is:
>>
>> semaphore.acquireUninterruptibly();
>> async.sendBinary(buf, asyncHandler);
>> async.flushBatch();
>>
>> Normally the code works fine without flushBatch(), if there is delay between 
>> the messages, but when we send the messages right after each other, then the 
>> last small message is not always sent immediately.
>> I looked at the Apache WebSocket code, but it was not clear to me what is 
>> happening there.
>> Any ideas what is going on here? Any ideas how I could troubleshoot this 
>> more?
> 
> Thanks for providing such a clear description of the problem you are seeing.
> 
> It sounds like there is a race condition somewhere in the WebSocket
> code. With the detail you have provided, I think there is a reasonable
> chance of finding it via code inspection.

Some follow-up questions to help narrow the search.

This is server side, correct?

Are you using the compression extension? If yes, do you see the problem
without it?

When you say "we split it into 16 kB packets" do you mean you split it
into multiple WebSocket messages?

If you insert a short delay before sending the final 4kB does that
reduce the frequency of the problem?

Thanks,

Mark





Re: Tomcat WebSocket does not always send asynchronous messages

2017-03-07 Thread Mark Thomas
On 07/03/17 08:28, Pesonen, Harri wrote:
> Hello, we have a problem that Tomcat WebSocket does not always send 
> asynchronous messages. This problem is random, and it has been reproduced in 
> Tomcat 8.5.6 and 8.5.11. Synchronized operations work fine, and also the 
> asynchronous operations work except in one special case. When there is a big 
> message that we want to send to client, we split it into 16 kB packets for 
> technical reasons, and then we send them very quickly after each other using
> 
> /**
> * Initiates the asynchronous transmission of a binary message. This method 
> returns before the message
> * is transmitted. Developers provide a callback to be notified when the 
> message has been
> * transmitted. Errors in transmission are given to the developer in the 
> SendResult object.
> *
> * @param data   the data being sent, must not be {@code null}.
> * @param handler the handler that will be notified of progress, must not be 
> {@code null}.
> * @throws IllegalArgumentException if either the data or the handler are 
> {@code null}.
> */
> void sendBinary(ByteBuffer data, SendHandler handler);
> 
> Because there can be only one ongoing write to socket, we use Semaphore that 
> is released on the SendHandler callback:
> 
> public void onResult(javax.websocket.SendResult result) {
> semaphore.release();
> 
> So the code to send is actually:
> 
> semaphore.acquireUninterruptibly();
> async.sendBinary(buf, asyncHandler);
> 
> This works fine in most cases. But when we send one 16 kB packet and then 
> immediately one smaller packet (4 kB), then randomly the smaller packet is 
> not actually sent, but only after we call
> 
> async.sendPing(new byte[0])
> 
> in another thread. sendPing() is called every 20 seconds to keep the 
> WebSocket connection alive. This means that the last packet gets extra delay 
> on client, which varies between 0 - 20 seconds.
> 
> We have an easy workaround to the problem. If we call flushBatch() after each 
> sendBinary(), then it works great, but this means that the sending is not 
> actually asynchronous, because flushBatch() is synchronous.
> Also we should not be forced to call flushBatch(), because we are not 
> enabling batching. Instead we make sure that it is disabled:
> 
> if (async.getBatchingAllowed()) {
> async.setBatchingAllowed(false);
> 
> So the working code is:
> 
> semaphore.acquireUninterruptibly();
> async.sendBinary(buf, asyncHandler);
> async.flushBatch();
> 
> Normally the code works fine without flushBatch(), if there is delay between 
> the messages, but when we send the messages right after each other, then the 
> last small message is not always sent immediately.
> I looked at the Apache WebSocket code, but it was not clear to me what is 
> happening there.
> Any ideas what is going on here? Any ideas how I could troubleshoot this more?

Thanks for providing such a clear description of the problem you are seeing.

It sounds like there is a race condition somewhere in the WebSocket
code. With the detail you have provided, I think there is a reasonable
chance of finding it via code inspection.

This is next (and currently last) on the list of things I want to look
at before starting the release process for 9.0.x and 8.5.x.

Mark





Re: ELContext no longer available to tagfiles

2017-03-07 Thread Mark Thomas
On 03/03/17 19:58, Mike Strauch wrote:
>> Exactly. An ELContext is being created so that listener should fire.
> 
> Cool, thanks!

I think I have fixed this. The fix includes a simplistic test case.
Verification of the fix would be appreciated.

Mark




Tomcat WebSocket does not always send asynchronous messages

2017-03-07 Thread Pesonen, Harri
Hello, we have a problem where Tomcat WebSocket does not always send
asynchronous messages. The problem is random, and it has been reproduced in
Tomcat 8.5.6 and 8.5.11. Synchronous operations work fine, and the
asynchronous operations also work except in one special case. When there is a
big message that we want to send to the client, we split it into 16 kB packets
for technical reasons and then send them in quick succession using

/**
 * Initiates the asynchronous transmission of a binary message. This method
 * returns before the message is transmitted. Developers provide a callback to
 * be notified when the message has been transmitted. Errors in transmission
 * are given to the developer in the SendResult object.
 *
 * @param data    the data being sent, must not be {@code null}.
 * @param handler the handler that will be notified of progress, must not be
 *                {@code null}.
 * @throws IllegalArgumentException if either the data or the handler are
 *                {@code null}.
 */
void sendBinary(ByteBuffer data, SendHandler handler);

Because there can be only one ongoing write to the socket, we use a Semaphore
that is released in the SendHandler callback:

public void onResult(javax.websocket.SendResult result) {
    semaphore.release();
}

So the code to send is actually:

semaphore.acquireUninterruptibly();
async.sendBinary(buf, asyncHandler);

This works fine in most cases. But when we send one 16 kB packet and then
immediately a smaller packet (4 kB), the smaller packet randomly is not
actually sent until we call

async.sendPing(new byte[0])

in another thread. sendPing() is called every 20 seconds to keep the WebSocket
connection alive. This means that the last packet reaches the client with an
extra delay, which varies between 0 and 20 seconds.

We have an easy workaround for the problem. If we call flushBatch() after each
sendBinary(), then it works great, but this means that the sending is not
actually asynchronous, because flushBatch() is synchronous.
Also, we should not be forced to call flushBatch(), because we are not enabling
batching. Instead, we make sure that it is disabled:

if (async.getBatchingAllowed()) {
    async.setBatchingAllowed(false);
}

So the working code is:

semaphore.acquireUninterruptibly();
async.sendBinary(buf, asyncHandler);
async.flushBatch();

Normally the code works fine without flushBatch() if there is a delay between
the messages, but when we send the messages right after each other, the last
small message is not always sent immediately.
I looked at the Apache WebSocket code, but it was not clear to me what is
happening there.
Any ideas what is going on here? Any ideas how I could troubleshoot this
further?
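
For completeness, here is the shape of the sending code condensed into one
illustrative sketch (the class name, chunking and endpoint handling are
simplified/hypothetical; this is not the actual application code):

import java.nio.ByteBuffer;
import java.util.concurrent.Semaphore;
import javax.websocket.RemoteEndpoint;
import javax.websocket.SendHandler;
import javax.websocket.SendResult;

public class ChunkedSender {

    private final Semaphore semaphore = new Semaphore(1);

    private final SendHandler asyncHandler = new SendHandler() {
        @Override
        public void onResult(SendResult result) {
            // Release once the previous write has completed (or failed),
            // allowing the next sendBinary() call to proceed.
            semaphore.release();
        }
    };

    // Sends each 16 kB (or smaller, final) chunk as its own WebSocket message.
    public void send(RemoteEndpoint.Async async, ByteBuffer[] chunks) {
        for (ByteBuffer chunk : chunks) {
            // Only one outstanding asynchronous write per session.
            semaphore.acquireUninterruptibly();
            async.sendBinary(chunk, asyncHandler);
        }
    }
}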
Thanks,

-Harri



Re: Tomcat - IPv4 loopback

2017-03-07 Thread tomcat

On 07.03.2017 03:42, satishkumar.krishnas...@cognizant.com wrote:

Hi - We are using Tomcat 8, and in our production environment we have been
facing a "no buffer" error on TCP/IP ports. While triaging the issue, we found
(through Resource Monitor) that Tomcat8.exe has lots of connections with an
IPv4 loopback local address and remote address. Do you have any hint as to why
this loopback traffic happens? As soon as we start the server we see 60+
loopback entries.



Hi.
On your (Windows) server, with Tomcat running, enter this command in a command 
window :
netstat -aonb -p tcp

and then copy and paste the result here (eliminating irrelevant lines).
This will tell us more clearly what you are referring to.

You can also do the same on another tomcat server, and compare.

Note that if you have a front-end webserver in front of Tomcat (on the same host), such 
things tend to establish a pool of connections between the front-end and the back-end, which 
may be what you are seeing.
Similarly, if Tomcat applications communicate with some back-end database system (on the 
same host), the database driver may also create a pool of connections.


The command above will tell you what connects to what, and that may already provide the 
answer to your question.



