Mark,

On 3/24/14, 1:08 PM, Mark Thomas wrote:
> On 24/03/2014 16:56, Christopher Schultz wrote:
>> Mark,
> 
>> On 3/24/14, 5:37 AM, Mark Thomas wrote:
>>> On 24/03/2014 00:50, Christopher Schultz wrote:
>>>> Mark,
>>>
>>>> On 3/23/14, 6:12 PM, Mark Thomas wrote:
>>>>> On 23/03/2014 22:07, Christopher Schultz wrote:
>>>>>> Mark,
>>>>>
>>>>>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>>>>>> Mark,
>>>>>>>
>>>>>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>>>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>>>>>> All,
>>>>>>>>
>>>>>>>>> I'm looking at the comparison table at the bottom of
>>>>>>>>> the HTTP connectors page, and I have a few questions
>>>>>>>>> about it.
>>>>>>>>
>>>>>>>>> First, what does "Polling size" mean?
>>>>>>>>
>>>>>>>> Maximum number of connections in the poller. I'd
>>>>>>>> simply remove it from the table. It doesn't add
>>>>>>>> anything.
>>>>>>>
>>>>>>> Okay, thanks.
>>>>>>>
>>>>>>>>> Second, under the NIO connector, both "Read HTTP
>>>>>>>>> Body" and "Write HTTP Response" say that they are 
>>>>>>>>> "sim-Blocking"... does that mean that the API itself
>>>>>>>>> is stream-based (i.e. blocking) but that the actual 
>>>>>>>>> under-the-covers behavior is to use non-blocking
>>>>>>>>> I/O?
>>>>>>>>
>>>>>>>> It means simulated blocking. The low level writes use a
>>>>>>>>  non-blocking API but blocking is simulated by not
>>>>>>>> returning to the caller until the write completes.
>>>>>>>
>>>>>>> That's what I was thinking. Thanks for confirming.
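
(For my own notes, I picture the write path looking roughly like the
sketch below. This is illustrative only: the class and the
registerForWrite() helper are names I made up, not the actual Tomcat
internals.)

    import java.io.IOException;
    import java.net.SocketTimeoutException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    public class SimBlockingWriter {
        // Hypothetical helper: asks the poller to watch the socket
        // and count the latch down once it becomes writable.
        private CountDownLatch registerForWrite(SocketChannel ch) {
            CountDownLatch latch = new CountDownLatch(1);
            // ... hand (ch, latch) to the poller thread here ...
            return latch;
        }

        public int write(SocketChannel ch, ByteBuffer buf,
                         long timeoutMs)
                throws IOException, InterruptedException {
            int written = 0;
            while (buf.hasRemaining()) {
                int n = ch.write(buf);  // non-blocking write attempt
                if (n > 0) {
                    written += n;       // made progress; keep going
                } else {
                    // Would block: park this thread on a latch until
                    // the poller says the socket is writable again.
                    CountDownLatch latch = registerForWrite(ch);
                    if (!latch.await(timeoutMs,
                                     TimeUnit.MILLISECONDS)) {
                        throw new SocketTimeoutException(
                                "write timed out");
                    }
                }
            }
            return written; // caller sees a fully blocking write
        }
    }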
>>>>>
>>>>>> Another quick question: during the sim-blocking for reading
>>>>>> the request-body, does the request go back into the poller
>>>>>> queue, or does it just sit waiting single-threaded-style? I
>>>>>> would assume the latter; otherwise we'd either violate the
>>>>>> spec (one thread serves the whole request) or spend a lot
>>>>>> of resources making sure we got the same thread back, etc.
>>>>>
>>>>> Both.
>>>>>
>>>>> The socket gets added to the BlockPoller and the thread waits
>>>>> on a latch until the BlockPoller signals that data can be read.
>>>
>>>> Okay, but it's still one-thread-one-request... /The/ thread
>>>> will stay with that request until it's complete, right? The
>>>> BlockPoller will just wake up the same waiting thread... no
>>>> funny-business? ;)
>>>
>>> Correct.
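
(So, schematically, the read side is the same trick; again the
registerForRead() helper is invented, per the write sketch above:)

    // Schematic blocking read over a non-blocking channel. The same
    // thread parks on the latch and resumes -- no thread handoff.
    public int read(SocketChannel ch, ByteBuffer buf, long timeoutMs)
            throws IOException, InterruptedException {
        int n;
        while ((n = ch.read(buf)) == 0) {  // no data available yet
            // Hypothetical helper: the BlockPoller watches for
            // readability and counts the latch down when data
            // arrives.
            CountDownLatch latch = registerForRead(ch);
            if (!latch.await(timeoutMs, TimeUnit.MILLISECONDS)) {
                throw new SocketTimeoutException("read timed out");
            }
        }
        return n;  // -1 on EOF, otherwise the number of bytes read
    }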
>>>
>>>> Okay, one more related question: for the BIO connector, does
>>>> the request/connection go back into any kind of queue after
>>>> the initial (keep-alive) request has completed, or does the
>>>> thread that has already processed the first request on the
>>>> connection keep going until there are no more keep-alive
>>>> requests? I can't see a mechanism in the BIO connector to
>>>> ensure any kind of fairness with respect to request priority:
>>>> once the client is in, it can make as many requests as it wants
>>>> (up to maxKeepAliveRequests) without getting back in line.
>>>
>>> Correct. Although keep in mind that for BIO it doesn't make sense
>>> to have connections > threads, so it really comes down to how the
>>> threads are scheduled for processing.
> 
>> Understood, but say there are 1000 connections waiting in the
>> accept queue and only 250 threads available: if my connection gets
>> accept()ed, then I get to make as many requests as I want without
>> having to get back in line. Yes, I have to compete for CPU time
>> with the other 249 threads, but I don't have to wait in the
>> 1000-connection-long line.
> 
> I knew something was bugging me about this.
> 
> You need to look at the end of the while loop in
> AbstractHttp11Processor.process() and the call to breakKeepAliveLoop()
> 
> What happens is that if there is no evidence of a pipelined request at
> that point, the socket goes back into the socket/processor map and the
> thread is used to process another socket. So you can end up with more
> concurrent connections than threads, but only if you explicitly set
> maxConnections > maxThreads, which I would maintain is a bad idea for
> BIO anyway, as you can end up with some threads waiting huge amounts
> of time to be processed.

s/some threads/some connections/?
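
If I'm reading process() correctly, the shape of that loop is roughly
the following (heavily paraphrased, not the real code):

    // Paraphrased sketch of the keep-alive loop in
    // AbstractHttp11Processor.process():
    while (!error && keepAlive) {
        service(request, response);       // handle one request
        if (breakKeepAliveLoop(socketWrapper)) {
            // No evidence of a pipelined request: return the socket
            // to the socket/processor map; this thread moves on to
            // another socket. (Hard-coding this to false for BIO
            // would keep the thread<->connection affinity instead.)
            break;
        }
        // Pipelined data waiting: parse the next request right here
        // on the same thread.
    }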

So the BIO connector actually attempts to enforce some "fairness"
amongst pipelined requests? But pipelined requests are, by their
nature, likely to arrive promptly, back-to-back, so that check will
rarely trigger and the fairness will rarely kick in? And in the event
of a pipeline stall, the connection may be unfairly ignored for a
while whilst the other connections are serviced to completion?

> Given that this feature offers little/no benefit at the price of
> having to run through a whole pile of code only to end up back where
> you started, I'm tempted to hard-code the return value of
> breakKeepAliveLoop() to false for BIO HTTP.

So your suggestion is that BIO fairness should be removed, so that
the situation I described above is actually the case: pipelined
requests are no longer fairly scheduled amongst all the connections
vying for attention?

When faced with the decision between unfair (priority) pipeline
processing and negatively unfair (starvation) pipeline processing, I
think I prefer the former. Most (non-malicious) clients don't make
too many pipelined requests anyway, and maxKeepAliveRequests can be
used to thwart that kind of DoS.
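
(For reference, that's the per-connection cap in server.xml; 100 is
the default and -1 means unlimited:)

    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               maxKeepAliveRequests="100" />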

> Rémy Maucherat said:
> Yes please [that's how it used to be]. The rule for that connector is one
> thread <-> one connection, that's its only way of doing something useful
> for some users.

What about when an Executor is used, where the number of threads can
fluctuate (up to a maximum) and the threads are (or can be) shared
with other connectors?
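
(I mean a setup along these lines, where one pool is declared in
server.xml and referenced from several connectors:)

    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
              maxThreads="250" minSpareThreads="4" />

    <Connector executor="tomcatThreadPool" port="8080"
               protocol="HTTP/1.1" connectionTimeout="20000" />
    <Connector executor="tomcatThreadPool" port="8081"
               protocol="HTTP/1.1" connectionTimeout="20000" />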

-chris
