Re: WHY they are different when checking concurrent limit?

2015-11-11 Thread Willy Tarreau
Hi,

On Tue, Nov 10, 2015 at 07:50:56AM +, Zhou,Qingzhi wrote:
> Hi,
> Thanks very much.
> But I think we can use listener_full instead of limit_listener if we want
> to wake up the listener when there's a connection closed. Like in the
> beginning of listener_accept:
> 
>  if (unlikely(l->nbconn >= l->maxconn)) {
>          listener_full(l);
>          return;
>  }
> 
> 
> WHY not using listener_full ?

Because the listener is not full. If it were full, it would have been
handled by the test you pointed above. Here we're in the situation where
the frontend's maxconn is reached before the listener is full. So you
have 2 listeners in a frontend each getting half the number of connections.

We know that we won't be able to accept any new connection on this listener
until some connections are released on the frontend. So by calling
limit_listener() we temporarily pause the listener and add it to the
frontend's queue to be enabled again when the frontend releases
connections. There's no reason to add a delay here because we know
exactly when connections are released on this frontend. So trying this
again will not change anything.

Hoping this helps,
Willy
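The frontend-queue mechanism Willy describes can be sketched as a small model. This is illustrative only, not HAProxy's real code: the structures and the `try_accept`/`release_conn` names are invented for the example, and only the `limit_listener` idea mirrors the real function.

```c
#include <stddef.h>

/* Simplified model (not HAProxy's actual structures) of the
 * frontend-limit branch: a listener that hits the frontend's maxconn
 * is queued on that frontend, and the connection-release path resumes
 * every queued listener immediately, so no retry timer is needed. */

struct listener {
    int nbconn;
    int paused;                 /* set while waiting in a queue */
    struct listener *next;      /* intrusive singly-linked queue */
};

struct frontend {
    int feconn, maxconn;
    struct listener *queue;     /* listeners paused on this frontend */
};

/* analogous to limit_listener(): pause the listener and enqueue it */
static void limit_listener(struct listener *l, struct listener **queue)
{
    if (l->paused)
        return;
    l->paused = 1;
    l->next = *queue;
    *queue = l;
}

/* accept path: refuse when the frontend is full, with no retry timer */
static int try_accept(struct frontend *p, struct listener *l)
{
    if (p->feconn >= p->maxconn) {
        limit_listener(l, &p->queue);
        return 0;               /* release_conn() below will wake us */
    }
    p->feconn++;
    l->nbconn++;
    return 1;
}

/* release path: the frontend knows exactly when a slot frees up,
 * so it resumes its queued listeners right here */
static void release_conn(struct frontend *p, struct listener *l)
{
    p->feconn--;
    l->nbconn--;
    while (p->queue) {
        struct listener *q = p->queue;
        p->queue = q->next;
        q->paused = 0;
    }
}
```

The key point is `release_conn`: the frontend itself is the release path, so the wake-up is deterministic and retrying on a timer would change nothing.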




WHY they are different when checking concurrent limit?

2015-11-09 Thread Zhou,Qingzhi

Hi guys:
I’m reading the source code of version 1.6.2, in function listener_accept:

/* Note: if we fail to allocate a connection because of configured
* limits, we'll schedule a new attempt worst 1 second later in the
* worst case. If we fail due to system limits or temporary resource
* shortage, we try again 100ms later in the worst case.
*/
while (max_accept--) {
struct sockaddr_storage addr;
socklen_t laddr = sizeof(addr);

if (unlikely(actconn >= global.maxconn) && !(l->options & LI_O_UNLIMITED)) {
        limit_listener(l, &global_listener_queue);
        task_schedule(global_listener_queue_task, tick_add(now_ms, 1000)); /* try again in 1 second */
        return;
}

if (unlikely(p && p->feconn >= p->maxconn)) {
        limit_listener(l, &p->listener_queue);   <--- here is my question.
        return;
}

My question is: why is task_schedule not called here as well? Is there a purpose?
To my knowledge, if the upper limit is reached, we should re-schedule the task
with an expiry time, and the listener will wake up when the task runs.

With great thanks,
Zhou


Re: WHY they are different when checking concurrent limit?

2015-11-09 Thread Willy Tarreau
Hi,

On Mon, Nov 09, 2015 at 12:46:57PM +, Zhou,Qingzhi wrote:
> if (unlikely(actconn >= global.maxconn) && !(l->options & LI_O_UNLIMITED)) {
>         limit_listener(l, &global_listener_queue);
>         task_schedule(global_listener_queue_task, tick_add(now_ms, 1000)); /* try again in 1 second */
>         return;
> }
> 
> if (unlikely(p && p->feconn >= p->maxconn)) {
>         limit_listener(l, &p->listener_queue);   <--- here is my question.
>         return;
> }
> 
> My question is why the task_schedule is not called again here? Any purpose?
> In my knowledge, if the upper limit is reached, we should re-schedule the
> task with expire time, and the listener will wake up when the task is ran.

No because if we're limited by the frontend itself, after we disable the
listener, we will automatically be woken up once a connection is released
there. It's when the global maxconn is reached that we want to reschedule
because there are some situations where we cannot reliably detect if
certain connections impacting global.maxconn have been released (eg:
outgoing peers connections and Lua cosockets count here).

Regards,
Willy
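The asymmetry Willy points out can be shown with a tiny model. This is illustrative only; `hit_global_limit`, `release_tracked`, `release_untracked`, and `timer_fires` are invented names, not HAProxy functions.

```c
/* Model of why the global-maxconn branch also arms a timer: some
 * connections counted in actconn (e.g. outgoing peers connections or
 * Lua cosockets) can be released without waking the global listener
 * queue, so a periodic retry bounds how long a listener stays stuck. */

static int actconn;          /* global connection count */
static int queue_woken;      /* did anything wake the global queue? */
static int timer_armed;      /* the 1-second safety net */

static void hit_global_limit(void)
{
    timer_armed = 1;         /* task_schedule(..., now + 1000 ms) */
}

static void release_tracked(void)   /* e.g. a frontend connection */
{
    actconn--;
    queue_woken = 1;         /* this path notifies the queue */
}

static void release_untracked(void) /* e.g. a Lua cosocket closing */
{
    actconn--;               /* the counter drops, but nobody is told */
}

static int timer_fires(int g_maxconn)
{
    /* on expiry, recheck: the listener can resume even though no
     * explicit wake-up ever happened */
    return timer_armed && actconn < g_maxconn;
}
```

Without the timer, an `release_untracked`-style release would leave the queued listener waiting forever; with it, the wait is at worst one second.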




Re: WHY they are different when checking concurrent limit?

2015-11-09 Thread Zhou,Qingzhi
Hi:
Thanks very much.
But I think we can use listener_full instead of limit_listener if we want
to wake up the listener when there's a connection closed. Like in the
beginning of listener_accept:

 if (unlikely(l->nbconn >= l->maxconn)) {
         listener_full(l);
         return;
 }


WHY not using listener_full ?

Thanks,
zhou
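For reference, the three limits discussed in this thread can be condensed into one sketch. This is a simplification, not HAProxy's actual code; the enum and the `check_limits` function are invented for illustration, but the order and the wake-up strategy of each branch follow the thread.

```c
/* The three checks in listener_accept(), and why each branch uses a
 * different wake-up strategy (simplified; names are illustrative). */
enum accept_verdict { ACCEPT_OK, LISTENER_FULL, GLOBAL_LIMIT, FRONTEND_LIMIT };

static enum accept_verdict check_limits(int l_nbconn, int l_maxconn,
                                        int actconn, int g_maxconn,
                                        int feconn, int fe_maxconn)
{
    if (l_nbconn >= l_maxconn)
        return LISTENER_FULL;    /* listener_full(): resumed when one of
                                  * this listener's own conns closes */
    if (actconn >= g_maxconn)
        return GLOBAL_LIMIT;     /* limit_listener() on the global queue
                                  * plus a 1s timer: some releases are
                                  * invisible to the queue */
    if (feconn >= fe_maxconn)
        return FRONTEND_LIMIT;   /* limit_listener() on the frontend's
                                  * queue, no timer: every release on the
                                  * frontend wakes the queue */
    return ACCEPT_OK;
}
```

This is why listener_full is wrong for the frontend case: the listener itself still has room, so it must wait on the frontend's queue rather than mark itself full.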



On 15/11/10 3:30 PM, "Willy Tarreau"  wrote:

>Hi,
>
>On Mon, Nov 09, 2015 at 12:46:57PM +, Zhou,Qingzhi wrote:
>> if (unlikely(actconn >= global.maxconn) && !(l->options & LI_O_UNLIMITED)) {
>>         limit_listener(l, &global_listener_queue);
>>         task_schedule(global_listener_queue_task, tick_add(now_ms, 1000)); /* try again in 1 second */
>>         return;
>> }
>> 
>> if (unlikely(p && p->feconn >= p->maxconn)) {
>>         limit_listener(l, &p->listener_queue);   <--- here is my question.
>>         return;
>> }
>> 
>> My question is why the task_schedule is not called again here? Any purpose?
>> In my knowledge, if the upper limit is reached, we should re-schedule the
>> task with expire time, and the listener will wake up when the task is ran.
>
>No because if we're limited by the frontend itself, after we disable the
>listener, we will automatically be woken up once a connection is released
>there. It's when the global maxconn is reached that we want to reschedule
>because there are some situations where we cannot reliably detect if
>certain connections impacting global.maxconn have been released (eg:
>outgoing peers connections and Lua cosockets count here).
>
>Regards,
>Willy
>