I'd like to set up a reverse proxy with a cache that will allow me to...

1. expose two different locations...
location /foo {}
location /bar {}

2. resolve to the same pass-through...
location /foo {
   proxy_pass "http://myhost.io/go";
}
location /bar {
   proxy_pass "http://myhost.io/go";
}

3. use the same cache...
location /foo {
   proxy_pass "http://myhost.io/go";
   proxy_cache shared_cache;
}
location /bar {
   proxy_pass "http://myhost.io/go";
   proxy_cache shared_cache;
}

4. use _different_ cache valid settings...
location /foo {
   proxy_pass "http://myhost.io/go";
   proxy_cache shared_cache;
   proxy_cache_valid any 5m;
}
location /bar {
   proxy_pass "http://myhost.io/go";
   proxy_cache shared_cache;
   proxy_cache_valid any 10m;
}
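
For completeness, here is roughly the full server block I am testing with. The
cache path, zone sizes, and server_name are placeholders; the point is the
single keys_zone shared by both locations:

# Placeholder path and sizes; the important bit is the one shared keys_zone.
proxy_cache_path /var/cache/nginx/shared levels=1:2 keys_zone=shared_cache:10m
                 max_size=1g inactive=60m;

server {
   listen 80;
   server_name proxy.example.com;   # placeholder

   location /foo {
      proxy_pass "http://myhost.io/go";
      proxy_cache shared_cache;
      proxy_cache_valid any 5m;     # want 5-minute validity for /foo
   }

   location /bar {
      proxy_pass "http://myhost.io/go";
      proxy_cache shared_cache;
      proxy_cache_valid any 10m;    # want 10-minute validity for /bar
   }
}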

What I have found is that I can request /foo, then /bar, and the /bar request
is an immediate cache HIT, which is good: the keys are the same and both
locations use the shared cache. However, once I have requested /bar, any
request to /foo results in a cache HIT for 10 minutes instead of the 5 minutes
I want. If I never hit /bar, then /foo gets cache HITs for the correct 5
minutes.
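
In case it helps reproduce this, the cache status can be surfaced with
something along these lines (the header name is arbitrary, purely for
observation):

   add_header X-Cache-Status $upstream_cache_status always;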

Any thoughts on how I can use NGINX to configure my way into a solution for my 
unusual (?) use-case?
