Ruediger Pluem wrote:

> The original code above uses some magic to avoid stomping on an already
> existing Warning header:
> 
>             /* make sure we don't stomp on a previous warning */
>             if ((warn_head == NULL) ||
>                 ((warn_head != NULL) && (ap_strstr_c(warn_head, "110") == NULL))) {
>                 apr_table_merge(h->resp_hdrs, "Warning",
>                                 "110 Response is stale");
> 
> Shouldn't we do the same thing here as well?

The net effect is that we'll have two warnings, which is in theory
harmless, but consistency is good.

r808656.

>> +                /* try to obtain a cache lock at this point. if we succeed,
>> +                 * we are the first to try and cache this url. if we fail,
>> +                 * it means someone else is already trying to cache this
>> +                 * url, and we should just let the request through to the
>> +                 * backend without any attempt to cache. this stops
>> +                 * duplicated simultaneous attempts to cache an entity.
>>                   */
>> -                if (r->main) {
>> -                    ap_log_error(APLOG_MARK, APLOG_DEBUG, APR_SUCCESS,
>> -                                 r->server,
>> -                                 "Adding CACHE_SAVE_SUBREQ filter for %s",
>> -                                 r->uri);
>> -                    ap_add_output_filter_handle(cache_save_subreq_filter_handle,
>> -                                                NULL, r, r->connection);
>> +                rv = ap_cache_try_lock(conf, r, NULL);
> 
> Hm. If the resource was present in the cache and stale we already tried
> to acquire the lock in ap_cache_check_freshness. It seems wrong to try
> the lock twice.

There are two places where a lock is set. The first is when the entity
doesn't exist at all, and we only want to attempt to cache the entity
once. The second is when the entity goes stale, and we want to keep back
the thundering herd. This is why there are two instances of try_lock.

I found an issue with the second call, where if you already have a lock
the second attempt should succeed, fixed:

r808649.

>> @@ -368,7 +390,24 @@
>>              ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, r->server,
>>                           "cache: Cache provider's store_body failed!");
>>              ap_remove_output_filter(f);
>> +
>> +            /* give someone else the chance to cache the file */
>> +            ap_cache_remove_lock(conf, r, cache->handle ?
> 
> IMHO if cache->handle would be NULL we wouldn't be here, because we
> would back out before.

That makes one failure case behave slightly differently from all the
other failure cases; it's far safer to just behave consistently across
the board.

>> @@ -955,6 +1023,13 @@
>>      /* array of identifiers that should not be used for key calculation */
>>      ps->ignore_session_id = apr_array_make(p, 10, sizeof(char *));
>>      ps->ignore_session_id_set = CACHE_IGNORE_SESSION_ID_UNSET;
>> +    ps->lock = 0; /* thundering herd lock defaults to off */
>> +    ps->lock_set = 0;
> 
> Why don't you initialize lockmaxage_set and lockpath_set here as well.

Because it's unnecessary: the structure is allocated with apr_pcalloc,
so every field already starts out zeroed. None of the _set fields should
be there at all, but one patch at a time. :)

> Overall the patch is hard to read since you are mixing typo fixes, style fixes
> and white space fixes with your functional changes. It would make it easier
> to read if you would separate these out in a separate patch.

That is Eclipse unfortunately, which takes it upon itself to strip
trailing whitespace on some lines without asking permission. Just went
through the initial patch, and didn't find any style fixes?

Regards,
Graham
--
