Thanks Filip.

- I think there's a bug with maxKeepAliveRequests. Each call to
Http11NioProcessor.process() resets the keepAliveLeft counter to the
configured maxKeepAliveRequests value. When the parsing of an HTTP request
doesn't have enough input to complete, process() returns SocketState.LONG
in order to re-register on the selector for more input. The next call to
process() therefore loses the previous keepAliveLeft count.
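
To illustrate, here is a minimal, self-contained sketch of the pattern I
believe is at fault. This is not the actual Tomcat source; the names are
stand-ins that mirror Http11NioProcessor:

    // Sketch only: keepAliveLeft modeled as a local variable of process(),
    // as I believe it is in 6.0.18; not a copy of the Tomcat code.
    class ProcessorSketch {
        static final int MAX_KEEP_ALIVE_REQUESTS = 100;
        enum SocketState { OPEN, LONG, CLOSED }

        SocketState process(boolean requestComplete) {
            // Re-initialized on every invocation, so the number of requests
            // already served on this connection is forgotten whenever
            // process() returns LONG and is later re-invoked by the poller.
            int keepAliveLeft = MAX_KEEP_ALIVE_REQUESTS;
            if (!requestComplete) {
                return SocketState.LONG; // wait on the selector for more input
            }
            keepAliveLeft--;
            return keepAliveLeft > 0 ? SocketState.OPEN : SocketState.CLOSED;
        }
    }

If keepAliveLeft were a field on the processor instead of being
re-initialized inside process(), the count would survive the LONG
re-registration.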

 I am still very much interested in knowing the implications of my
modification to the code. What I did is:

- After request processing completed, and no input was available in the
input buffer (to ensure there is no pipelined request on the connection),
instead of keeping the processor in the connections map I recycled it and
re-registered the channel for read on the selector. The next request on the
same connection will use a different processor.
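
Roughly, in Java (a sketch under my assumptions only: the Processor and
Poller types and the registerForRead() helper below are hypothetical
stand-ins for the real Tomcat internals, not actual APIs):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class RecycleSketch {
        interface Processor { void recycle(); }
        interface Poller { void registerForRead(Object channel); }

        // stand-in for the endpoint's channel -> processor map
        final Map<Object, Processor> connections =
                new ConcurrentHashMap<Object, Processor>();

        void afterRequest(Object channel, Processor processor,
                          Poller poller, boolean inputBufferEmpty) {
            if (inputBufferEmpty) { // no pipelined request already buffered
                connections.remove(channel);     // detach from the connection
                processor.recycle();             // return it to the cache
                poller.registerForRead(channel); // wait for the next request
            }
            // otherwise keep the processor bound and parse the pipelined data
        }
    }

The intent is that a processor is held only for the duration of an active
request, so an idle keep-alive connection costs a selector registration
rather than a whole (JMX-registered) processor.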

On Thu, May 21, 2009 at 7:06 PM, Filip Hanik - Dev Lists
<devli...@hanik.com> wrote:

> yes, with maxKeepAliveRequests you would also throttle the
> connectionTimeout (or keepAliveTimeout, but that may be in 6.0.20) to
> reduce the number of objects.
> I think you have a very valid use case, so I will add some features
> around managing the size of these objects to ensure they don't overgrow.
>
> Thank you very much for the report
>
> Filip
>
>
> sagi tomcat wrote:
>
>> Yes, I am referring to the Http11NioProcessor objects. Playing with the
>> cache size did not help, since I had to deal with ~7000 registered
>> Http11NioProcessor objects (by registered I mean the object becoming a
>> JMX managed object). That amount by itself consumed a lot of the old gen
>> space (around ~600MB of the 1.6GB) and left no room for other long-lived
>> objects. As a consequence, the server had to constantly perform GC.
>> Also, since the cache size was much lower than 7000, the actual creation,
>> registration and deregistration of Http11NioProcessor objects caused a
>> major concurrency bottleneck. Changing the Http11NioProcessor to be
>> dedicated to a request and not to a connection dropped the number of
>> concurrently registered processors to ~500 (instead of the 7000) in the
>> same production environment and eliminated all memory and concurrency
>> problems.
>>
>> I am not sure how maxKeepAliveRequests can help. The problem is the delay
>> between new HTTP requests on the same TCP connection. Browsers tend to
>> keep connections open without sending new requests, in the hope of
>> reusing the connection. In the meantime the processor is doing nothing
>> (its reference is kept in the connections map, socket state is LONG),
>> consuming memory and never returned to the cache.
>>
>> On Thu, May 21, 2009 at 5:12 PM, Filip Hanik - Dev Lists
>> <devli...@hanik.com> wrote:
>>
>>> hi Sagi, are you referring to the Http11NioProcessor objects?
>>> If so, you should be able to configure the cache size when connections
>>> are released. So you could also use maxKeepAliveRequests to limit it.
>>>
>>> do you have the memory dump available?
>>> Filip
>>>
>>> sagi tomcat wrote:
>>>
>>>> Hello,
>>>>
>>>> I am using Tomcat 6.0.18 in a production server, serving thousands of
>>>> users and hundreds of transactions per second. I am using the NIO
>>>> connector. I've noticed a serious memory utilization problem, which was
>>>> traced to the fact that a single processor is dedicated to a connection
>>>> and is not recycled in-between requests. I've noticed ~7000 registered
>>>> processors and, as a result, the server was busy doing GC and nothing
>>>> else. I've made a modification to the code to allow a processor to be
>>>> recycled between requests (I believe this was the behavior in older
>>>> Tomcat versions), which indeed solved the memory problem. I'd like to
>>>> know more about the implications of such a modification, and to
>>>> understand why the processor is dedicated to a connection and not to a
>>>> request.
>>>>
>>>> Thanks