Hi,

I've searched the mailing lists and looked over the developer section of the 
wiki, and haven't found an answer for this yet.

Is it possible to preemptively store the response for a given URI without 
making an HTTP request to the backend host(s)?

I have a Varnish tier with 100 hosts insulating a media-serving tier, and I 
know ahead of time which objects users are going to request, since objects 
are not requested until at least N seconds after they are written to the 
backend storage tier. I would like the initial user requests to always be 
cache hits.

When new objects are written to the backend storage tier, I can make HTTP 
requests for them to the Varnish tier and force the objects to be loaded 
before users ask for them. But all that does is time-shift the I/O load on 
the backend: users won't see a delay when they ask for an object since it's 
already in the cache, but the amount of I/O work done on the backend is still 
the same. It would be optimal if there were a way for me to insert data into 
the cache without making any request to the backend media-serving tier.
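For reference, the warm-up approach described above can be sketched roughly as follows (the hostnames, port, and object path are invented for illustration; this is the approach that still costs one backend fetch per cache, which is the problem):

```python
import urllib.request

# Hypothetical Varnish hosts in the caching tier.
VARNISH_HOSTS = ["varnish01.example.com", "varnish02.example.com"]

def prime_urls(path, hosts=VARNISH_HOSTS, port=80):
    """Build one warm-up URL per Varnish host for a freshly written object."""
    return ["http://%s:%d%s" % (h, port, path) for h in hosts]

def warm(path):
    """Issue a GET to each host so the object is cached before users ask.

    Note: each GET still triggers a backend fetch on a cold cache, so the
    I/O work on the storage tier is merely shifted earlier, not avoided.
    """
    for url in prime_urls(path):
        try:
            urllib.request.urlopen(url, timeout=5).read()
        except OSError:
            pass  # best effort; the object will be fetched on demand instead
```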

For example, perhaps something like this on the admin CLI:

backend.store $uri $ttl $len $data

Or perhaps a way to send a POST to varnishd containing the URI, response 
headers, and the content body.
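To make the proposal concrete, such an injection request might look something like this. To be clear, varnishd does not expose any such endpoint today; the endpoint path, port, and `X-Cache-*` header names are all invented for illustration:

```python
import http.client

def build_injection_headers(uri, ttl, body, content_type="video/mp4"):
    """Assemble headers for a (hypothetical) cache-injection POST.

    The X-Cache-* headers are invented: they would tell the cache which
    key to store the object under and how long to keep it.
    """
    return {
        "X-Cache-Store-URI": uri,
        "X-Cache-TTL": str(ttl),
        "Content-Type": content_type,
        "Content-Length": str(len(body)),
    }

def inject(host, uri, ttl, body):
    """POST an object to a (hypothetical) varnishd injection endpoint."""
    conn = http.client.HTTPConnection(host, 6082, timeout=5)
    conn.request("POST", "/cache-store", body,
                 build_injection_headers(uri, ttl, body))
    return conn.getresponse().status
```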

The previous caching solution I used was developed in-house and has a small 
embedded HTTP server that stores its cached objects in a separate memcached 
process. The key is the URI and the value is the cached data, plus some 
minimal headers so we can reconstitute a valid response. That made it 
straightforward to do a memcache set and skip backend requests entirely until 
the object had fallen out of the cache, by which time its request rate is 
much lower (after 7-10 days, request rates for these objects drop 20x).
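A minimal sketch of that scheme, with a plain dict standing in for the separate memcached process so the example is self-contained:

```python
class ResponseCache:
    """Toy version of the in-house cache: URI -> (headers, body).

    A real deployment would use a memcached client here; a dict keeps
    the sketch self-contained.
    """
    def __init__(self):
        self._store = {}

    def set(self, uri, body, content_type="application/octet-stream"):
        # Store the body plus the minimal headers needed to
        # reconstitute a valid HTTP response later.
        headers = {
            "Content-Type": content_type,
            "Content-Length": str(len(body)),
        }
        self._store[uri] = (headers, body)

    def get(self, uri):
        # Returns (headers, body) on a hit, None on a miss; a miss is
        # the only case where the embedded server goes to the backend.
        return self._store.get(uri)
```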

I realize this is a bit of an edge case, but I would appreciate any advice or 
thoughts you have on this. If this is not currently supported in Varnish, how 
disruptive a change to the caching model would it be?

Thanks,

Doug
_______________________________________________
varnish-misc mailing list
[email protected]
http://projects.linpro.no/mailman/listinfo/varnish-misc