Brian Pane wrote:

> In the "shadowing" case, we'd also need a way for all the requests
> reading from the same incomplete cache entry to block if they reach
> the end of the incomplete cached brigade without seeing EOS.  I guess
> we could do this by adding a condition variable to every incomplete
> entry in the cache, so that the threads shadowing the request could
> block if they'd sent all the data available so far.  And then the
> thread that was actually retrieving the resource would signal the
> condition variable each time it added some more data to the cache
> entry.
> 
> But that's way too complicated. :-)
> 
> What do you think about the following as a low-tech solution:
> 
> * Keep the current model of only putting complete responses in
>  the cache (at least for now).

This is exactly how the old cache worked - which would mean we just 
rewrote the cache with exactly the same design flaws as the old cache 
had, which is a big waste of time.

This causes an annoying race condition: when a cached object expires, 
every request arriving between expiry and the entry being completely 
re-cached goes through to the backend. That produces load spikes, which 
on expensive backends can be really painful (it has been reported as 
such in the past). A key design feature of the new cache was to make 
"shadowing" possible, i.e. a partially cached response can be served by 
other cache threads/processes, which solves this problem.

Regards,
Graham
-- 
-----------------------------------------
[EMAIL PROTECTED] 
        "There's a moon
                                        over Bourbon Street
                                                tonight..."
