Justin Erenkrantz wrote:
-1. This breaks the abstraction between the cache providers and the filter streams. The cache providers should not be in the business of delivering content down to the next filter - that is the job of mod_cache. Following this route is completely antithetical to the separation between storing the cached response and delivery of the content.
The current expectation, that the storing of the cached response can be completely separated from the delivery of the content, is broken.
We have a real-world case where the cache is expected to process a file of many MB or many GB in its entirety before sending that same response to the network. This is too slow and takes up too much RAM, resulting in a broken response to the client.
On Wednesday night I wrote a patch that solved the large-file problem while maintaining the current separation between write-to-cache and write-to-network that you are asserting. This mod_cache code broke the brigade up into bite-sized chunks inside mod_cache before passing each chunk to write-to-cache, then write-to-network, and so on, as sketched below.
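The shape of that approach was roughly the following. This is a sketch only: the chunk size, the helper name and the error handling are illustrative, not the patch as committed.

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"
#include "mod_cache.h"

#define CACHE_CHUNK_SIZE (64 * 1024)   /* illustrative chunk size */

static apr_status_t cache_save_in_chunks(ap_filter_t *f,
                                         cache_request_rec *cache,
                                         apr_bucket_brigade *bb)
{
    apr_status_t rv;

    while (!APR_BRIGADE_EMPTY(bb)) {
        apr_bucket *after;
        apr_bucket_brigade *rest;

        /* Find the bucket boundary CACHE_CHUNK_SIZE bytes in; file
         * buckets get split rather than read into RAM. */
        rv = apr_brigade_partition(bb, CACHE_CHUNK_SIZE, &after);
        if (rv != APR_SUCCESS && rv != APR_INCOMPLETE) {
            return rv;
        }

        /* 'bb' keeps the first chunk, 'rest' holds the remainder. */
        rest = apr_brigade_split(bb, after);

        /* write-to-cache ... */
        rv = cache->provider->store_body(cache->handle, f->r, bb);
        if (rv != APR_SUCCESS) {
            apr_brigade_destroy(rest);
            return rv;
        }

        /* ... then write-to-network ... */
        rv = ap_pass_brigade(f->next, bb);
        if (rv != APR_SUCCESS) {
            apr_brigade_destroy(rest);
            return rv;
        }

        /* ... and so on, with whatever is left. */
        apr_brigade_cleanup(bb);
        APR_BRIGADE_CONCAT(bb, rest);
        apr_brigade_destroy(rest);
    }
    return APR_SUCCESS;
}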
Joe vetoed the patch, saying that it duplicated the natural behaviour of apr_bucket_read().
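As I understand Joe's point, apr_bucket_read() already does this chunking for us: on a morphing bucket such as a file bucket, a read returns one manageable block and replaces the bucket in place, with the remainder reappearing as the next bucket in the brigade. Something like the following (a sketch, with the actual write-to-cache step left as a comment) consumes a huge file without ever holding it all in memory:

#include "apr_buckets.h"

static apr_status_t store_brigade(apr_bucket_brigade *bb)
{
    apr_bucket *e;

    for (e = APR_BRIGADE_FIRST(bb);
         e != APR_BRIGADE_SENTINEL(bb);
         e = APR_BUCKET_NEXT(e)) {
        const char *data;
        apr_size_t len;
        apr_status_t rv;

        /* On a file bucket this reads one block and morphs the bucket
         * in place; the rest of the file reappears as the next bucket. */
        rv = apr_bucket_read(e, &data, &len, APR_BLOCK_READ);
        if (rv != APR_SUCCESS) {
            return rv;
        }

        /* ... write 'len' bytes at 'data' to the cache here ... */
    }
    return APR_SUCCESS;
}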
The Wednesday-night patch was reverted, and Thursday night was spent instead changing the cache_body() signature so that the provider could make its own, better judgement on how to handle cached files.
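To be concrete about what that signature change amounts to, the provider is handed the next filter, so it can decide for itself how to store and deliver a large cached file. The names and parameters below are illustrative only, not the committed interface:

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"
#include "mod_cache.h"

/* Illustrative only: one possible shape for such a provider callback. */
typedef apr_status_t (*cache_body_fn)(cache_handle_t *h, request_rec *r,
                                      apr_bucket_brigade *in,
                                      ap_filter_t *next);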
Now you veto this next patch, saying it breaks the abstraction. So, we have a disagreement over the right way to solve the problem of the cache being expected to swallow mouthfuls too big for it to handle.
I agree with you that a design needs to be found on list first, as I have wasted enough time going round in circles coming up with solution after solution nobody is happy with.
Do we put this to a vote?

Regards,
Graham
