(Seemingly, it may be beneficial to simply replace the sequentially numbered 
temp_file scheme with a hash-named scheme: if cached, the file is simply 
retained for some period of time and/or until some other condition is met; 
it may optionally be symbolically aliased under its URI path and thereby 
logically accessed as a local static file; or it is deleted once no longer 
needed and not cached. That would kill multiple birds with one stone, so to 
speak?)
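
For context, the directives discussed in the thread below can be combined 
roughly as follows. This is only a minimal sketch of the relevant settings, 
not a verified fix for the temp_file duplication being reported; the upstream 
name `backend`, the server address, and all paths are placeholders:

```nginx
http {
    # Hash-named cache with request coalescing: with proxy_cache_lock on,
    # only one request per cache key populates a new cache element at a time.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:10m inactive=60m;
    proxy_temp_path  /var/cache/nginx_tmp;

    # Workaround suggested in the thread for the proxy_store case: limit
    # concurrent connections per URI so redundant upstream streams cannot
    # pile up (excess requests are rejected rather than queued).
    limit_conn_zone $request_uri zone=peruri:10m;

    upstream backend {
        server 192.0.2.1:8080;   # placeholder backend address
    }

    server {
        listen 80;

        location /cached/ {
            proxy_pass http://backend;
            proxy_cache STATIC;
            proxy_cache_lock on;           # coalesce concurrent cache misses...
            proxy_cache_lock_timeout 5s;   # ...but wait at most this long
        }

        location /stored/ {
            limit_conn peruri 1;              # at most one fetch per URI
            proxy_pass http://backend;
            proxy_store /var/www/static$uri;  # symbolically named static copy
        }
    }
}
```

As the thread notes, proxy_cache_lock operates at the cache layer only; it 
does not serialize the creation of proxy_temp files for proxy_store, which is 
why the limit_conn workaround is sketched for that location instead.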

On Jun 30, 2014, at 8:44 PM, Paul Schlie <[email protected]> wrote:

> Is there any possible solution for this problem?
> 
> Although proxy_cache_lock may inhibit the creation of multiple proxy_cache 
> files, it seemingly has no effect on the creation of multiple proxy_temp 
> files, which is the true root of the problem that the description of 
> proxy_cache_lock claims to solve: all proxy_cache files are first 
> proxy_temp files, so unless proxy_cache_lock can properly prevent the 
> creation of multiple redundant proxy_temp file streams, it seemingly cannot 
> have the effect it claims to.
> 
> (Further, since temp_files are commonly used to source all reverse-proxied 
> reads, regardless of whether they use the hashed naming scheme of 
> proxy_cache files or the symbolic naming scheme of reverse-proxied static 
> files, it would be nice if the fix were applicable to both.)
> 
> 
> On Jun 24, 2014, at 10:58 PM, Paul Schlie <[email protected]> wrote:
> 
>> Hi, Upon further testing, it appears the problem exists even with 
>> proxy_cache'd files with "proxy_cache_lock on".
>> 
>> (Please consider this a serious bug, one which I'm surprised hasn't been 
>> detected before; verified on the recently released 1.7.2.)
>> 
>> On Jun 24, 2014, at 8:58 PM, Paul Schlie <[email protected]> wrote:
>> 
>>> Again thank you. However ... (below)
>>> 
>>> On Jun 24, 2014, at 8:30 PM, Maxim Dounin <[email protected]> wrote:
>>> 
>>>> Hello!
>>>> 
>>>> On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote:
>>>> 
>>>>> Thank you; however it appears to have no effect on reverse proxy_store'd 
>>>>> static files?
>>>> 
>>>> Yes, it's part of the cache machinery.  The proxy_store 
>>>> functionality is dumb and just provides a way to store responses 
>>>> received, nothing more.
>>> 
>>> - There should be no difference between how reverse proxy'd files are 
>>> accessed and first stored into corresponding temp_files (and below).
>>> 
>>>> 
>>>>> (Which seems odd, if it actually works for cached files; as both 
>>>>> are first read into temp_files, being the root of the problem.)
>>>> 
>>>> See above (and below).
>>>> 
>>>>> Any idea on how to prevent multiple redundant streams and 
>>>>> corresponding temp_files being created when reading/updating a 
>>>>> reverse proxy'd static file from the backend?
>>>> 
>>>> You may try to do so using limit_conn, and may be error_page and 
>>>> limit_req to introduce some delay.  But unlikely it will be a 
>>>> good / maintainable / easy to write solution.
>>> 
>>> - Please consider making it the default that no more streams are opened 
>>> than necessary: open an additional stream only if a previously opened one 
>>> appears to have died (timed out), since otherwise the redundant streams 
>>> will most likely only consume more bandwidth and thereby delay completion 
>>> of the request.  Further, since there should be no difference in how 
>>> reverse-proxy read streams and their corresponding temp_files are 
>>> created, regardless of whether they are subsequently stored as 
>>> symbolically named static files or hash-named cache files, this behavior 
>>> should be common to both.
>>> 
>>>>> (Out of curiosity, why would anyone ever want many multiple 
>>>>> redundant streams/temp_files ever opened by default?)
>>>> 
>>>> You never know if responses are going to be the same.  The part 
>>>> which knows (or, rather, tries to) is called "cache", and has 
>>>> lots of directives to control it.
>>> 
>>> - If they're not "the same", then the TCP protocol stack has failed, 
>>> which has nothing to do with nginx. 
>>> (Unless a backend server is frequently dropping connections, it's 
>>> counterproductive to open multiple redundant streams; doing so by 
>>> default will likely only result in higher bandwidth use and thereby 
>>> slower response completion.)
>>> 
>>>> -- 
>>>> Maxim Dounin
>>>> http://nginx.org/
>>>> 
>>>> _______________________________________________
>>>> nginx mailing list
>>>> [email protected]
>>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>> 
>> 
> 
