> On 20.06.2022, at 09:21, Ruediger Pluem <rpl...@apache.org> wrote:
> 
> 
> 
> On 6/17/22 11:24 AM, ic...@apache.org wrote:
>> Author: icing
>> Date: Fri Jun 17 09:24:57 2022
>> New Revision: 1902005
>> 
>> URL: http://svn.apache.org/viewvc?rev=1902005&view=rev
>> Log:
>>  *) mod_http2: new implementation of h2 worker pool.
>>     - O(1) cost at registration of connection processing producers
>>     - no limit on registered producers
>>     - join of ongoing work on unregister
>>     - callbacks to unlink dependencies into other h2 code
>>     - memory cleanup on workers deactivation (on idle timeouts)
>>     - idle_limit as apr_time_t instead of seconds
>> 
>> 
>> Modified:
>>    httpd/httpd/trunk/modules/http2/h2_c1.c
>>    httpd/httpd/trunk/modules/http2/h2_config.c
>>    httpd/httpd/trunk/modules/http2/h2_config.h
>>    httpd/httpd/trunk/modules/http2/h2_mplx.c
>>    httpd/httpd/trunk/modules/http2/h2_mplx.h
>>    httpd/httpd/trunk/modules/http2/h2_workers.c
>>    httpd/httpd/trunk/modules/http2/h2_workers.h
>>    httpd/httpd/trunk/modules/http2/mod_http2.c
> 
>> Modified: httpd/httpd/trunk/modules/http2/h2_workers.c
>> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_workers.c?rev=1902005&r1=1902004&r2=1902005&view=diff
>> ==============================================================================
>> --- httpd/httpd/trunk/modules/http2/h2_workers.c (original)
>> +++ httpd/httpd/trunk/modules/http2/h2_workers.c Fri Jun 17 09:24:57 2022
>> 
>> @@ -385,13 +415,13 @@ static apr_status_t workers_pool_cleanup
>> }
>> 
>> h2_workers *h2_workers_create(server_rec *s, apr_pool_t *pchild,
>> -                              int min_workers, int max_workers,
>> -                              int idle_secs)
>> +                              int max_slots, int min_active, apr_time_t idle_limit)
>> {
>>     apr_status_t rv;
>>     h2_workers *workers;
>>     apr_pool_t *pool;
>> -    int i, n;
>> +    apr_allocator_t *allocator;
>> +    int i, locked = 0;
>> 
>>     ap_assert(s);
>>     ap_assert(pchild);
>> @@ -401,7 +431,16 @@ h2_workers *h2_workers_create(server_rec
>>      * guarded by our lock. Without this pool, all subpool creations would
>>      * happen on the pool handed to us, which we do not guard.
>>      */
>> -    apr_pool_create(&pool, pchild);
>> +    rv = apr_allocator_create(&allocator);
>> +    if (rv != APR_SUCCESS) {
>> +        goto cleanup;
>> +    }
>> +    rv = apr_pool_create_ex(&pool, pchild, NULL, allocator);
>> +    if (rv != APR_SUCCESS) {
>> +        apr_allocator_destroy(allocator);
>> +        goto cleanup;
>> +    }
>> +    apr_allocator_owner_set(allocator, pool);
>>     apr_pool_tag(pool, "h2_workers");
>>     workers = apr_pcalloc(pool, sizeof(h2_workers));
>>     if (!workers) {
>> @@ -410,19 +449,23 @@ h2_workers *h2_workers_create(server_rec
>> 
>>     workers->s = s;
>>     workers->pool = pool;
>> -    workers->min_workers = min_workers;
>> -    workers->max_workers = max_workers;
>> -    workers->max_idle_secs = (idle_secs > 0)? idle_secs : 10;
>> +    workers->min_active = min_active;
>> +    workers->max_slots = max_slots;
>> +    workers->idle_limit = (idle_limit > 0)? idle_limit : apr_time_from_sec(10);
>> +    workers->dynamic = (workers->min_active < workers->max_slots);
>> +
>> +    ap_log_error(APLOG_MARK, APLOG_INFO, 0, workers->s,
>> +                 "h2_workers: created with min=%d max=%d idle_ms=%d",
>> +                 workers->min_active, workers->max_slots,
>> +                 (int)apr_time_as_msec(idle_limit));
>> +
>> +    APR_RING_INIT(&workers->idle, h2_slot, link);
>> +    APR_RING_INIT(&workers->busy, h2_slot, link);
>> +    APR_RING_INIT(&workers->free, h2_slot, link);
>> +    APR_RING_INIT(&workers->zombie, h2_slot, link);
>> 
>> -    ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, workers->s,
>> -                 "h2_workers: created with min=%d max=%d idle_timeout=%d sec",
>> -                 workers->min_workers, workers->max_workers,
>> -                 (int)workers->max_idle_secs);
>> -    /* FIXME: the fifo set we use here has limited capacity. Once the
>> -     * set is full, connections with new requests do a wait.
>> -     */
>> -    rv = h2_fifo_set_create(&workers->mplxs, pool, 16 * 1024);
>> -    if (rv != APR_SUCCESS) goto cleanup;
>> +    APR_RING_INIT(&workers->prod_active, ap_conn_producer_t, link);
>> +    APR_RING_INIT(&workers->prod_idle, ap_conn_producer_t, link);
>> 
>>     rv = apr_threadattr_create(&workers->thread_attr, workers->pool);
>>     if (rv != APR_SUCCESS) goto cleanup;
>> @@ -441,32 +484,35 @@ h2_workers *h2_workers_create(server_rec
>>     if (rv != APR_SUCCESS) goto cleanup;
>>     rv = apr_thread_cond_create(&workers->all_done, workers->pool);
>>     if (rv != APR_SUCCESS) goto cleanup;
>> +    rv = apr_thread_cond_create(&workers->prod_done, workers->pool);
>> +    if (rv != APR_SUCCESS) goto cleanup;
>> 
>> -    n = workers->nslots = workers->max_workers;
>> -    workers->slots = apr_pcalloc(workers->pool, n * sizeof(h2_slot));
>> -    if (workers->slots == NULL) {
>> -        n = workers->nslots = 0;
>> -        rv = APR_ENOMEM;
>> -        goto cleanup;
>> -    }
>> -    for (i = 0; i < n; ++i) {
>> +    apr_thread_mutex_lock(workers->lock);
>> +    locked = 1;
>> +
>> +    /* create the slots and put them on the free list */
>> +    workers->slots = apr_pcalloc(workers->pool, workers->max_slots * sizeof(h2_slot));
>> +
>> +    for (i = 0; i < workers->max_slots; ++i) {
>>         workers->slots[i].id = i;
>> -    }
>> -    /* we activate all for now, TODO: support min_workers again.
>> -     * do this in reverse for vanity reasons so slot 0 will most
>> -     * likely be at head of idle queue. */
>> -    n = workers->min_workers;
>> -    for (i = n-1; i >= 0; --i) {
>> -        rv = activate_slot(workers, &workers->slots[i]);
>> +        workers->slots[i].state = H2_SLOT_FREE;
>> +        workers->slots[i].workers = workers;
>> +        APR_RING_ELEM_INIT(&workers->slots[i], link);
>> +        APR_RING_INSERT_TAIL(&workers->free, &workers->slots[i], h2_slot, link);
>> +        rv = apr_thread_cond_create(&workers->slots[i].more_work, workers->pool);
>>         if (rv != APR_SUCCESS) goto cleanup;
>>     }
>> -    /* the rest of the slots go on the free list */
>> -    for(i = n; i < workers->nslots; ++i) {
>> -        push_slot(&workers->free, &workers->slots[i]);
>> +
>> +    /* activate the min amount of workers */
>> +    for (i = 0; i < workers->min_active; ++i) {
>> +        rv = activate_slot(workers);
>> +        if (rv != APR_SUCCESS) goto cleanup;
>>     }
>> -    workers->dynamic = (workers->worker_count < workers->max_workers);
>> 
>> cleanup:
>> +    if (locked) {
>> +        apr_thread_mutex_unlock(workers->lock);
>> +    }
>>     if (rv == APR_SUCCESS) {
>>         /* Stop/join the workers threads when the MPM child exits (pchild is
>>          * destroyed), and as a pre_cleanup of pchild thus before the threads
>> @@ -476,24 +522,84 @@ cleanup:
>>         apr_pool_pre_cleanup_register(pchild, workers, workers_pool_cleanup);
>>         return workers;
>>     }
>> +    ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, workers->s,
>> +                 "h2_workers: errors initializing");
>>     return NULL;
>> }
> 
> Open style question with an ask for opinions: Would it be better to use
> 
> pool instead of workers->pool
> min_active instead of workers->min_active
> max_slots instead of workers->max_slots
> 
> ?
> 
> Would that make the code more
> 
> - readable
> - faster
> 
> or is the current approach better?

Maybe it would be faster on older compilers. Modern optimizers will most likely 
generate the same code in either case, I believe.

Readability is important, but unfortunately also by definition subjective, of 
course. My way of thinking, which I *try* to follow these days, is (a rough 
sketch in code follows below):

1. pass args n, m to xxx_init(n, m)
2. check n, m for consistency, adapt default values and assign to xxx->n, 
xxx->m
3. it may be that n != xxx->n or m != xxx->m now
4. log that init(n, m) led to creation of xxx with its n, m
5. init the rest of xxx according to xxx->n and xxx->m
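
A minimal sketch of that pattern (xxx, n, m and the default values are
placeholders, not taken from the patch):

    #include <httpd.h>
    #include <http_log.h>

    typedef struct xxx {
        int n;
        int m;
    } xxx;

    xxx *xxx_init(server_rec *s, apr_pool_t *pool, int n, int m)
    {
        xxx *x = apr_pcalloc(pool, sizeof(*x));
        /* 2. check for consistency, adapt defaults, assign to x->n, x->m */
        x->n = (n > 0)? n : 1;
        x->m = (m >= x->n)? m : x->n;
        /* 3./4. n, m may differ from x->n, x->m now; log what
         * init(n, m) actually led to */
        ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, s,
                     "xxx_init(%d, %d) created xxx with n=%d m=%d",
                     n, m, x->n, x->m);
        /* 5. init the rest of xxx using x->n and x->m only */
        return x;
    }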

One could modify the n, m params directly for defaults or other adjustments 
and then assign them to xxx. But then one can no longer log both what was 
passed in and what was actually used.

An example of this would be 'idle_limit' which, when not positive, is assigned 
a default of 10 seconds. After that, the code needs to use 
workers->idle_limit and not the idle_limit init parameter.
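
Concretely, from the patch above:

    workers->idle_limit = (idle_limit > 0)? idle_limit : apr_time_from_sec(10);

Everything after this line has to read workers->idle_limit, while the log 
message may still want to mention the original idle_limit as well.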

Because I make so many mistakes, I feel more comfortable using the assigned 
values that are also used in the rest of the code.

Cheers,
Stefan

> Regards
> 
> Rüdiger
> 
