Internals of Tomcat Thread Pool

2019-04-25 Thread Supun Abeysinghe
Hi all,


I am working on a project that tunes the Tomcat thread pool dynamically,
based on runtime characteristics, in order to improve system performance.
To get a better understanding, I went through the Tomcat source code and
found that it uses a ThreadPoolExecutor, mapping the minSpareThreads
parameter to corePoolSize and the maxThreads parameter to the maximum
pool size. However, I'm having trouble understanding how to specify the
queue length. I do not mean acceptCount; I mean the size of the
BlockingQueue (work queue) used inside the ThreadPoolExecutor. As I
understand it, the BlockingQueue (work queue) and the request queue
(whose size is set with the acceptCount parameter) are two different
queues (am I wrong here? are they the same?).
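
To make sure I've understood that mapping, here is a minimal sketch in
plain java.util.concurrent terms (illustrative values only; Tomcat
internally wires up its own ThreadPoolExecutor and queue subclasses, so
this is an approximation, not Tomcat's actual code):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolMappingSketch {
    public static void main(String[] args) {
        int minSpareThreads = 10;     // maps to corePoolSize
        int maxThreads = 200;         // maps to maximumPoolSize
        int workQueueCapacity = 100;  // the work queue length I am asking about

        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                minSpareThreads, maxThreads,
                60L, TimeUnit.SECONDS,                          // idle keep-alive
                new LinkedBlockingQueue<>(workQueueCapacity));  // the BlockingQueue

        executor.execute(() -> System.out.println("simulated request"));
        executor.shutdown();
    }
}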

My question is: how do I specify the length of the BlockingQueue (work
queue) using Tomcat parameters? Is there any JMX MBean that reports this
queue size?
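
In case it helps, this is the kind of probe I have been using to look
for such an MBean. It has to run inside the same JVM as Tomcat (e.g. via
a deployed servlet or JSP) and assumes an <Executor> element is
configured in server.xml; it just lists whatever attributes the Executor
MBeans expose, so no attribute names are assumed:

import java.lang.management.ManagementFactory;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ExecutorMBeanProbe {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Executor MBeans are registered under the Catalina domain when an
        // <Executor> element is configured in server.xml.
        for (ObjectName name : server.queryNames(
                new ObjectName("Catalina:type=Executor,*"), null)) {
            System.out.println(name);
            for (MBeanAttributeInfo attr
                    : server.getMBeanInfo(name).getAttributes()) {
                try {
                    System.out.println("  " + attr.getName() + " = "
                            + server.getAttribute(name, attr.getName()));
                } catch (Exception e) {
                    System.out.println("  " + attr.getName() + " (unreadable)");
                }
            }
        }
    }
}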


Any pointers, help, and suggestions are welcome.

Best regards,
Supun


-- 
*Supun Abeysinghe*
Undergrad, Department of Computer Science and Engineering,
University of Moratuwa, Faculty of Engineering.
+94717018897



Re: OutOfMemory on large file download with AJP and cachingAllowed=false

2019-04-25 Thread Mark Thomas
On 25/04/2019 21:16, Christopher Schultz wrote:
> Mark,
> 
> On 4/25/19 15:55, Mark Thomas wrote:
>> On 23/04/2019 16:29, Olivier Jaquemet wrote:
>>> On 23/04/2019 16:12, Christopher Schultz wrote:
>>>> On 4/23/19 05:58, Olivier Jaquemet wrote:
>>>>> * Add the following directive to context.xml : <Resources
>>>>> cachingAllowed="false" />
>>>> Okay. Why context.xml, by the way?
>>> I don't even know (yet...) why this setting was added in the
>>> first place in the environment where it was present... ! so why
>>> this file... I don't know either :)
> 
>> DefaultServlet is assuming caching is in place. If you disable it,
>> you hit this issue.
> 
>>>>> * Create a large file in the samples webapp, for example :
>>>>> cd webapps/examples
>>>>> dd if=/dev/zero of=large.txt bs=1k count=200
>
>>>> Reading the code for FileResource.getContent, it's clear that
>>>> the entire file is being loaded into memory, which obviously
>>>> isn't going to work, here. I'm wondering why that's happening
>>>> since streaming is the correct behavior when caching=false.
>>>> Also strange is that DefaultServlet will attempt to call
>>>> FileResource.getContent() -- which returns a byte[] -- and, if
>>>> that returns null, it will call FileResource.getInputStream
>>>> which ... calls this.getContent. So this looks like a
>>>> special-case for FileResource just trying to implement that
>>>> interface in the simplest way possible.
> 
>> It is assuming it is working with a CachedResource instance rather
>> than directly with a FileResource instance.
> 
>>>> FileResource seems to implement in-memory caching whether it's
>>>> enabled or not.
>>>>
>>>> I can't understand why this doesn't fail for the other kind of
>>>> connector. Everything else is the same? You have two separate
>>>> connectors in one instance, or are you changing the connector
>>>> between tests?
>>>
>>> Everything is exactly the same as I have only one instance with
>>> two separate connectors (AJP+HTTP).
> 
>> I suspect HTTP avoids it because sendfile is enabled.
> 
>> The DefaultServlet logic needs a little refactoring.
> 
> And maybe FileResource, too?
> 
> I wasn't able to follow the logic of whether caching or not caching
> was enabled. I only did cursory checking, but it seemed like none of
> the resources implementations included any caching-aware code at all.
> Was I looking in the wrong place?

Don't think so. When caching is enabled everything gets wrapped in
CachedResource.

> If the resources are caching-aware, then I think the DefaultServlet
> can just always use Resource.getInputStream.
> 
> Hmm. That might cause a lot of unnecessary IO if the bytes are
> actually available.

That is a very tempting solution. The result is a LOT cleaner than the
patch I just wrote. CachedResource is smart enough to cache the bytes
and wrap them in a ByteArrayInputStream if Resource.getInputStream is
called. My only concern is that I think this introduces an additional
copy of the data. I need to check that.
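
For anyone following along, here is a rough sketch of the contract under
discussion and of the always-stream approach (names are illustrative,
not the actual Tomcat source):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the resource contract discussed in this thread.
interface ResourceSketch {
    byte[] getContent();  // null if the content is not held in memory
    InputStream getInputStream() throws IOException;
}

// A cached resource can hand back its bytes wrapped in a stream.
class CachedResourceSketch implements ResourceSketch {
    private final byte[] cached;
    CachedResourceSketch(byte[] cached) { this.cached = cached; }
    public byte[] getContent() { return cached; }
    public InputStream getInputStream() {
        return new ByteArrayInputStream(cached);  // no extra disk I/O
    }
}

// DefaultServlet-style serving: always streaming keeps memory bounded,
// and a cached resource still serves from memory via its stream wrapper.
class ServeSketch {
    static void serve(ResourceSketch resource, OutputStream out)
            throws IOException {
        try (InputStream in = resource.getInputStream()) {
            in.transferTo(out);  // no full-file byte[] ever required
        }
    }

    public static void main(String[] args) throws IOException {
        serve(new CachedResourceSketch("hello".getBytes()), System.out);
    }
}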

> Maybe when caching is disabled, we need to wrap resources in an
> UncachedResource object which always returns null from getContent()
> and forces the use of an InputStream?

My instinct is that would be too much, but I'll keep it in mind in case I
end up in a logic hole that it digs me out of.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: OutOfMemory on large file download with AJP and cachingAllowed=false

2019-04-25 Thread Christopher Schultz

Mark,

On 4/25/19 15:55, Mark Thomas wrote:
> On 23/04/2019 16:29, Olivier Jaquemet wrote:
>> On 23/04/2019 16:12, Christopher Schultz wrote:
>>> On 4/23/19 05:58, Olivier Jaquemet wrote:
> 
> 
> 
>>>> * Add the following directive to context.xml : <Resources
>>>> cachingAllowed="false" />
>>> Okay. Why context.xml, by the way?
>> I don't even know (yet...) why this setting was added in the
>> first place in the environment where it was present... ! so why
>> this file... I don't know either :)
> 
> DefaultServlet is assuming caching is in place. If you disable it,
> you hit this issue.
> 
>>>> * Create a large file in the samples webapp, for example :
>>>> cd webapps/examples
>>>> dd if=/dev/zero of=large.txt bs=1k count=200
> 
> 
> 
>>> Reading the code for FileResource.getContent, it's clear that
>>> the entire file is being loaded into memory, which obviously
>>> isn't going to work, here. I'm wondering why that's happening
>>> since streaming is the correct behavior when caching=false.
>>> Also strange is that DefaultServlet will attempt to call
>>> FileResource.getContent() -- which returns a byte[] -- and, if
>>> that returns null, it will call FileResource.getInputStream
>>> which ... calls this.getContent. So this looks like a
>>> special-case for FileResource just trying to implement that
>>> interface in the simplest way possible.
> 
> It is assuming it is working with a CachedResource instance rather
> than directly with a FileResource instance.
> 
>>> FileResource seems to implement in-memory caching whether it's
>>> enabled or not.
>>> 
>>> I can't understand why this doesn't fail for the other kind of 
>>> connector. Everything else is the same? You have two separate 
>>> connectors in one instance, or are you changing the connector
>>> between tests?
>> 
>> Everything is exactly the same as I have only one instance with
>> two separate connectors (AJP+HTTP).
> 
> I suspect HTTP avoids it because sendfile is enabled.
> 
> The DefaultServlet logic needs a little refactoring.

And maybe FileResource, too?

I wasn't able to follow the logic of whether caching or not caching
was enabled. I only did cursory checking, but it seemed like none of
the resources implementations included any caching-aware code at all.
Was I looking in the wrong place?

If the resources are caching-aware, then I think the DefaultServlet
can just always use Resource.getInputStream.

Hmm. That might cause a lot of unnecessary IO if the bytes are
actually available.

Maybe when caching is disabled, we need to wrap resources in an
UncachedResource object which always returns null from getContent()
and forces the use of an InputStream?
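
Something like this hypothetical wrapper (to be clear, no
UncachedResource class exists in Tomcat today; this is only a sketch of
the idea):

import java.io.IOException;
import java.io.InputStream;

// Minimal stand-in for the resource contract discussed in this thread.
interface ResourceSketch {
    byte[] getContent();
    InputStream getInputStream() throws IOException;
}

// Hypothetical UncachedResource: returning null from getContent()
// forces every caller onto the streaming path.
class UncachedResource implements ResourceSketch {
    private final ResourceSketch delegate;

    UncachedResource(ResourceSketch delegate) { this.delegate = delegate; }

    @Override
    public byte[] getContent() {
        return null;  // never expose an in-memory copy
    }

    @Override
    public InputStream getInputStream() throws IOException {
        return delegate.getInputStream();  // stream straight from the delegate
    }
}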

-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: OutOfMemory on large file download with AJP and cachingAllowed=false

2019-04-25 Thread Mark Thomas
On 23/04/2019 16:29, Olivier Jaquemet wrote:
> On 23/04/2019 16:12, Christopher Schultz wrote:
>> On 4/23/19 05:58, Olivier Jaquemet wrote:



>>> * Add the following directive to context.xml : <Resources
>>> cachingAllowed="false" />
>> Okay. Why context.xml, by the way?
> I don't even know (yet...) why this setting was added in the first place
> in the environment where it was present... !
> so why this file... I don't know either :)

DefaultServlet is assuming caching is in place. If you disable it, you
hit this issue.

>>> * Create a large file in the samples webapp, for example :
>>> cd webapps/examples
>>> dd if=/dev/zero of=large.txt bs=1k count=200



>> Reading the code for FileResource.getContent, it's clear that the
>> entire file is being loaded into memory, which obviously isn't going
>> to work, here. I'm wondering why that's happening since streaming is
>> the correct behavior when caching=false. Also strange is that
>> DefaultServlet will attempt to call FileResource.getContent() -- which
>> returns a byte[] -- and, if that returns null, it will call
>> FileResource.getInputStream which ... calls this.getContent. So this
>> looks like a special-case for FileResource just trying to implement
>> that interface in the simplest way possible.

It is assuming it is working with a CachedResource instance rather than
directly with a FileResource instance.

>> FileResource seems to implement in-memory caching whether it's enabled
>> or not.
>>
>> I can't understand why this doesn't fail for the other kind of
>> connector. Everything else is the same? You have two separate
>> connectors in one instance, or are you changing the connector between
>> tests?
> 
> Everything is exactly the same as I have only one instance with two
> separate connectors (AJP+HTTP).

I suspect HTTP avoids it because sendfile is enabled.

The DefaultServlet logic needs a little refactoring.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Cannot receive email from tomcat.apache.org

2019-04-25 Thread Mark Thomas
On 24/04/2019 14:25, Mark Thomas wrote:
> On 24/04/2019 07:27, Mark Thomas wrote:
>> On 24/04/2019 02:10, Richard Huntrods wrote:
>>> I have confirmed with my email provider that tomcat.apache.org does
>>> indeed have nucleus.com on a blacklist. I can provide proof if needed,
>>> but I do need to get nucleus.com REMOVED from this blacklist.
>>
>> Please provide your proof - to users-ow...@tomcat.apache.org if you
>> don't want to post it to a public list.
> 
> Your mail host is rejecting connections from the ASF mail server:
> 
> From our logs:
> 
> :
> 208.65.246.133 does not like recipient.
> Remote host said: 554 5.7.1 : Client
> host rejected: cidr blacklist CPE abuse
> Giving up on 208.65.246.133.
> 
> There is nothing we can do to fix this. The issue is entirely with your
> mail provider.

And the provided proof confirms this.

The mail administrator for nucleus.com has opted to use the
backscatterer.org blacklist. That particular blacklist doesn't have a
great reputation since it blacklists for 30 days by default and charges
~$90 to get removed earlier than the default expiry.

While I do understand the logic behind both the way the blacklist is
compiled and the charging for early removal, it does strike me as an
arrangement that would be very easy to use unethically.

The ASF will not be paying to be removed from this blacklist.

If you Google for "backscatterer extortion" you'll find a number of
complaints about this blacklist going back over an extended period of time.

Mark


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Is there a limit to mod_jk?

2019-04-25 Thread Rainer Jung

On 25.04.2019 at 06:22, John Larsen wrote:

> Hello,
>
> Is there a limit to the number of worker instances mod_jk can handle?


There will be limits due to general file descriptor limits, since each
TCP connection counts as a file descriptor. But I am not aware of a
limit on the number of workers per se.
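
If you want to check how close each JVM is to the descriptor limit from
the inside, a small sketch like the following can report the counts
(HotSpot/Unix only, since it casts to the com.sun.management extension
interface):

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCheck {
    public static void main(String[] args) {
        // HotSpot-specific MXBean; each AJP connection costs one descriptor.
        UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        System.out.println("open fds: " + os.getOpenFileDescriptorCount()
                + " / max: " + os.getMaxFileDescriptorCount());
    }
}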



> I currently have 38 tomcat instances running on a machine. I have had
> up to 75 in the past, but on this machine I keep getting bind exception
> errors, even though lsof -i :port comes up empty when that particular
> tomcat is offline.


These numbers do not look that high. A nasty limit on some Linux systems
is the number of processes (maxproc or ulimit -u). It is counted per
user, and despite its name it actually counts tasks: each thread in each
process of a user is such a task. So it effectively counts the number of
threads across all processes for a given user id. The first process that
hits the limit will only observe that it cannot start a new thread.
Typically this results in very unexpected behavior.
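
To see how many threads each of your JVMs contributes toward that
per-user task count, a quick sketch using the standard ThreadMXBean:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Live threads in this JVM; every one of them counts against the
        // per-user task limit (ulimit -u), together with the threads of
        // all other processes owned by the same user.
        System.out.println("live threads: " + threads.getThreadCount()
                + ", peak: " + threads.getPeakThreadCount());
    }
}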



> I'm at a loss as to what is causing it. Usually a bind exception means
> another java process is running on the same port - but that's not the
> case here. The fact that it happens even if I shut down Apache, where
> mod_jk isn't being used, tells me this is really unrelated to mod_jk.


What exactly does the exception look like (full stack)?


> mod_jk.log:
> [Thu Apr 25 04:14:07.458 2019] [30178:139932601325312] [error]
> ajp_service::jk_ajp_common.c (2796): (w314) connecting to tomcat failed
> (rc=-3, errors=2, client_errors=0).
> [Thu Apr 25 04:14:07.458 2019] [30178:139932601325312] [info]
> jk_handler::mod_jk.c (2991): Service error=-3 for worker=w314
>
> I tried updating mod_jk to 1.2.46


There should be additional log lines directly above the snippet you 
showed here. Could you provide a more complete snippet?


Regards,

Rainer


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org