Re: [varnish] Re: Default behaviour with regards to Cache-Control

2009-02-13 Thread Ricardo Newbery

On Feb 13, 2009, at 4:54 AM, Dag-Erling Smørgrav wrote:

 Ole Laursen o...@iola.dk writes:
 I looked up private here

  http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

 and it says

Indicates that all or part of the response message is intended
for a single user and MUST NOT be cached by a shared cache. This
allows an origin server to state that the specified parts of the
response are intended for only one user and are not a valid
response for requests by other users.

 Varnish is not a shared cache, it's a surrogate (not covered by  
 RFC2616)

 DES


Speaking of which...

It would be super handy if Varnish supported Surrogate-Control.  I
sympathize with the Varnish developers who feel that everything can be
done in vcl, but some of us really do need the backend to control the
surrogate cache behavior at a more granular level.
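
In the meantime, something along these lines in vcl can fake a little
of it.  This is only a sketch; the X-Surrogate-Control header name and
the fixed max-age value are stand-ins for whatever the backend and the
cache agree on, not an existing convention:

    sub vcl_fetch {
        # Hypothetical private agreement between backend and surrogate:
        # the backend states a cache-local TTL in a header that only
        # Varnish looks at, and we strip it before delivery so it never
        # reaches downstream caches.
        if (obj.http.X-Surrogate-Control ~ "max-age=3600") {
            set obj.ttl = 3600s;
        }
        remove obj.http.X-Surrogate-Control;
    }

A real Surrogate-Control implementation would parse arbitrary max-age
values and the other tokens, which is exactly the granularity that is
awkward to express in vcl by hand.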

Ric





Re: [varnish] Re: Cacheability - changed in Varnish 2?

2009-01-29 Thread Ricardo Newbery

On Jan 29, 2009, at 12:34 AM, Poul-Henning Kamp wrote:

 In message 4980f7d8.8090...@giraffen.dk, Anton Stonor writes:

 New try. First, a request with no expire or cache-control header.

   10 RxProtocol   b HTTP/1.1
   10 RxStatus b 200
   10 RxResponse   b OK
   10 RxHeader b Server: Zope/(Zope 2.10.6-final, python 2.4.5,
 linux2) ZServer/1.1 Plone/3.1.5.1
   10 RxHeader b Date: Thu, 29 Jan 2009 00:10:40 GMT
   10 RxHeader b Content-Length: 4
   10 RxHeader b Content-Type: text/plain; charset=utf-8
9 ObjProtocol  c HTTP/1.1
9 ObjStatusc 200
9 ObjResponse  c OK
9 ObjHeaderc Server: Zope/(Zope 2.10.6-final, python 2.4.5,
 linux2) ZServer/1.1 Plone/3.1.5.1
9 ObjHeaderc Date: Thu, 29 Jan 2009 00:10:40 GMT
9 ObjHeaderc Content-Type: text/plain; charset=utf-8
   10 BackendReuse b backend_0
9 TTL  c 1495399095 RFC 0 1233187840 0 0 0 0


 As far as I can tell, a zero TTL (number after RFC) can only
 happen here if your default_ttl parameter is set to zero, OR
 if there is clock-skew between the varnish machine and the
 backend machine.

 Make sure both machines run NTP.

 You can test that they agree by running
   ntpdate -d $backend
 on the varnish machine (or vice versa).


But would this matter since he is resetting the obj.ttl to 1 day in  
vcl_fetch?
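
For context, the kind of override being discussed looks roughly like
this -- a sketch; the one-day figure is the value mentioned above, not
something taken from Anton's actual config:

    sub vcl_fetch {
        # Override whatever TTL the RFC2616 logic computed and keep
        # the object for one day (86400s) regardless of backend headers.
        set obj.ttl = 86400s;
    }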

Ric




Re: [varnish] renaming varnish concepts...

2009-01-28 Thread Ricardo Newbery

On Jan 28, 2009, at 1:11 AM, Poul-Henning Kamp wrote:

 1.  Purge vs. Ban
 -

 The CLI and VCL commands are named purge, but they don't actually
 purge anything; they add a ban to the list of bans.

 I would actually like to rename purge to ban and add a real purge
 function that gets rid of the current object (ie: one found in the
 cache) and possibly its Vary: siblings.

 Purge does sound like it will be gone, whereas ban better explains
 what happens when we use the delayed regexp checks.

 Obviously, if I co-opt purge to mean something different, backwards
 compat is not possible, and all your purge scripts and VCLs with
 purge facilities will break.



Can you explain what the effective difference is between a real purge
and a ban?  Would both still kill all the Vary siblings?  Other than
possibly releasing the memory sooner, I'm not sure why I should
care   :-)

Ric




Re: [varnish] Re: Cacheability - changed in Varnish 2?

2009-01-28 Thread Ricardo Newbery

On Jan 28, 2009, at 2:23 AM, Anton Stonor wrote:

 sub vcl_recv {
   set req.grace = 120s;
   set req.backend = backend_0;

 }



Is this truly all you have in vcl_recv?  This will mean that any  
cookied requests will get passed.  Is this intentional?
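
If the intent is to cache anonymous traffic, one common workaround is
to strip cookies the backend does not actually vary on before the
built-in default rule ("pass on any Cookie") gets a chance to fire.  A
sketch -- the cookie names here are Plone examples, adjust to taste:

    sub vcl_recv {
        set req.grace = 120s;
        set req.backend = backend_0;

        # Drop cookies unless they are ones the backend really uses
        # (__ac and _ZopeId are illustrative), so anonymous requests
        # stay cacheable instead of being passed.
        if (req.http.Cookie && req.http.Cookie !~ "(__ac=|_ZopeId=)") {
            remove req.http.Cookie;
        }
    }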

Ric




[varnish] Re: Cacheability - changed in Varnish 2?

2009-01-28 Thread Ricardo Newbery

On Jan 28, 2009, at 4:29 PM, Anton Stonor wrote:

 Ricardo Newbery skrev:

 sub vcl_recv {
set req.grace = 120s;
set req.backend = backend_0;

 }

 Is this truly all you have in vcl_recv?  This will mean that any  
 cookied
 requests will get passed.  Is this intentional?

 No, this is not a production setup. My problem is not that I cache too
 much, but the opposite.

 And yep, I know about the cookie issue:
 http://markmail.org/message/pfpx7lanicpumsdg

 Thanks for noticing.

 /Anton


Sorry, I'm confused.  Maybe I'm misunderstanding what you're saying  
here, but the cookie issue will mean that you will cache too little,  
not too much.

Ric




Re: [varnish] Varnish, Plone and Apache2

2009-01-21 Thread Ricardo Newbery

On Jan 21, 2009, at 1:50 PM, Charlie Farinella wrote:

 I have one site running Plone with lighttpd and Varnish that I set  
 up as
 documented here:
 http://bitubique.com/content/accelerate-plone-varnish


IMHO, the vcl generated by the plone.recipe.varnish recipe is superior  
to the one on that page.



 I have now been asked to set up others substituting Apache2 for  
 lighttpd
 by the developers, but haven't been able to find such detailed
 instructions for Apache2.  I believe I just need to find the Apache
 equivalent for this line from lighttpd.conf:

 proxy.server = ( "/VirtualHostBase/" => (
     ( "host" => "127.0.0.1", "port" => 6081 ) )
 )

 To my understanding something has to listen on port 80, send the  
 request
 to Varnish, which then either serves from the cache or sends the  
 request
 on to the Zope (Plone) port.

 If anyone knows offhand or has some experience with this I'd like to  
 hear
 from you.  Is Apache a bad choice for this?


Apache is not necessarily a bad choice.

You will need to use ProxyPass or RewriteRule directives.  The Apache  
setup isn't really that much different than the standard Zope/Apache  
config.  Plone.org has plenty of docs on this:
http://plone.org/documentation/tutorial/plone-apache
http://plone.org/documentation/how-to/plone-with-apache

You might also want to look into the CacheFu product:
http://plone.org/products/cachefu

Ric




Re: [varnish] Re: Varnish Serves only uncompressed objects if they are requested first

2008-12-04 Thread Ricardo Newbery


Your app server should set the Vary header on *all* responses if *any*
response can vary.
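
One way to limit the damage in the meantime is to normalize
Accept-Encoding in Varnish so there are only a couple of variants to
keep track of.  A rough sketch:

    sub vcl_recv {
        # Collapse the many Accept-Encoding spellings browsers send
        # into two canonical values so cached variants stay manageable.
        if (req.http.Accept-Encoding) {
            if (req.http.Accept-Encoding ~ "gzip") {
                set req.http.Accept-Encoding = "gzip";
            } elsif (req.http.Accept-Encoding ~ "deflate") {
                set req.http.Accept-Encoding = "deflate";
            } else {
                remove req.http.Accept-Encoding;
            }
        }
    }

This helps whether the variants are kept apart by a proper Vary header
or by the vcl_hash trick quoted below.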

Ric



On Dec 4, 2008, at 11:59 AM, Jeff Anderson wrote:

 Our app servers are sending the Vary on the Accept-Encoding when
 compression is requested.  If compression is not requested they do not
 perform the Vary.  Does that mean we should find a way to send a Vary:
 Accept-Encoding:  null,gzip,deflate or something?  Is there a 'no
 compression' accept-encoding header?


 On Dec 2, 2008, at 10:36 PM, Per Buer wrote:

 Jeff Anderson skrev:
 It looks like if the first requested page is for an uncompressed  
 page
 varnish will only deliver the uncompressed page from cache even if a
 compressed page is requested.

 As long as you don't Vary: on the Accept-Encoding I guess that is
 expected.  Varnish does not understand the Accept-Encoding header.

 (..)
 What could be causing this?  The only way to fix appears to be to  
 add
 the lines below:

 sub vcl_hash {
     if (req.http.Accept-Encoding ~ "gzip" ||
         req.http.Accept-Encoding ~ "deflate") {
         set req.hash += req.http.Accept-Encoding;
     }
 }

 Either use the fix you suggested or add a Vary: Accept-Encoding on  
 the
 backend.

 -- 
 Per Buer - Leder Infrastruktur og Drift - Redpill Linpro
 Telefon: 21 54 41 21 - Mobil: 958 39 117
 http://linpro.no/ | http://redpill.se/





Re: Conditional GET (was Re: caching using ETags to vary the content)

2008-11-12 Thread Ricardo Newbery

On Nov 4, 2008, at 2:19 PM, Miles wrote:

 Ryan Tomayko wrote:
 On 11/4/08 12:51 PM, Miles wrote:
 I know varnish doesn't do If-None-Match, but I don't think that is a
 problem in this scheme.

 I'm curious to understand why Varnish doesn't do validation /  
 conditional GET.
 Has If-Modified-Since/If-None-Match support been considered and  
 rejected on
 merit or is it something that could theoretically be accepted into  
 the
 project? Has it just not received any real interest?

 Personally, I'd love to see support for conditional GET as this can
 significantly reduce backend resource use when the backend  
 generates cache
 validators upfront and 304's without generating the full response.

 Ryan

 AFAIK varnish does do if-modified-since, just not if-none-match

 Miles


Unless this has changed with 2.0, varnish will *respond* to an
If-Modified-Since (IMS) request with a 304 if there is a cached entry
that fails this condition, but varnish will neither *pass* the IMS
header to the backend (unless you customize the vcl) nor *generate* an
IMS request to the backend on its own.
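
The customization hinted at above can be as blunt as handing
conditional requests straight to the backend.  A sketch -- this
forfeits caching for those requests entirely, it only illustrates the
idea:

    sub vcl_recv {
        # Let the backend answer conditional requests itself; pipe
        # forwards the If-Modified-Since header untouched and relays
        # any 304 response back to the client.
        if (req.http.If-Modified-Since || req.http.If-None-Match) {
            pipe;
        }
    }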

Ric





Re: zope-plone.vcl

2008-05-05 Thread Ricardo Newbery

On May 5, 2008, at 6:02 AM, Wichert Akkerman wrote:

 Stig Sandbeck Mathisen wrote:
 Are there any good reasons not to run Plone with the CacheFu (or
 CacheSetup) product installed?  Would a non-CacheFu example be of any
 use?

 CacheSetup monkeypatches a fair amount of things, which breaks some
 innocent third-party products.  In general my advice would be to never
 use CacheSetup unless you really need it.

 Wichert.



Breaking third-party products is certainly a risk, but to be fair, I'm  
aware of only a single product so far that conflicts with CacheSetup.

While you certainly don't absolutely need CacheFu to take some  
advantage of Varnish (or Squid), in general my advice is to always use  
CacheFu unless you've got a good reason not to.  But since CacheFu is  
not currently included in the base Plone install, any example vcl  
should probably also support the non-CacheFu case.  Fixing
http://varnish.projects.linpro.no/ticket/236 would help.  :-)

Ric





Re: zope-plone.vcl

2008-05-02 Thread Ricardo Newbery

On May 1, 2008, at 10:50 PM, Wichert Akkerman wrote:

 Ricardo Newbery wrote:

 On May 1, 2008, at 2:21 PM, Wichert Akkerman wrote:

 Previously Ricardo Newbery wrote:
 Just poking around the tracker and I noticed some activity on the
 example plone vcl.  http://varnish.projects.linpro.no/changeset/ 
 2634

 Just thought I would chime in that the example has issues.

 First of all, it's unnecessary to filter cookie-authenticated  
 requests
 as authenticated responses are already set with a past date Expires
 (although you need to set a default_ttl of zero seconds, 
 http://varnish.projects.linpro.no/ticket/236)

 Actually that is not true.  It holds for documents, but a quick test
 shows it does not for images.


 Pardon, can you elaborate?  What does not hold true?

 I realize that setting a default_ttl of zero seconds introduces  
 another problem in that items without explicit cache control would  
 not be cached.  That's why fixing the varnish Expires handling  
 would be better.

 Authenticated requests do not always get a past Expires-date in  
 their response. This appears to only be true for documents (like  
 ATDocument) but not for images (like ATImage).


Ah right... but I believe this is by design.  Images are usually not  
intended to be excluded from proxy caches.  In Plone, by default even  
if the images are restricted by their workflow state to authenticated  
requests, the response does not have any cache-control to exclude it  
from shared caches downstream (in my opinion, this is a bug).  And if  
you can't exclude it from downstream shared caches, it's rather  
pointless to exclude it from the reverse proxy cache.

The problem with the example zope-plone.vcl is that it excludes ALL  
cookie-authenticated responses -- even those inline images, css, and  
javascript files that otherwise would be cacheable in downstream  
caches -- making authenticated browsing unnecessarily taxing on the  
backend.
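
A sketch of the distinction being argued for: keep passing cookied
requests for pages, but let the static supporting resources come out of
the cache even when a login cookie is present (the extension list is
illustrative, not exhaustive):

    sub vcl_recv {
        if (req.http.Cookie) {
            if (req.url ~ "\.(gif|jpg|jpeg|png|css|js)$") {
                # Inline images, css and javascript: ignore the login
                # cookie so these stay cacheable.
                remove req.http.Cookie;
            } else {
                # Pages may genuinely differ per user; hand them to
                # the backend.
                pass;
            }
        }
    }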

Ric




Re: default_ttl applied even when Expires exist

2008-04-21 Thread Ricardo Newbery

On Apr 20, 2008, at 10:33 PM, Poul-Henning Kamp wrote:

 In message 8240BA9F-[EMAIL PROTECTED], Ricardo Newbery writes:

 I see in rfc2616.c that this behavior is intentional.  Varnish
 apparently assumes a clockless origin server if the Expires date
 is not in the future and then applies the default ttl.

 Regarding this behavior.  I would like to suggest to the Varnish
 developers that this logic seems faulty.  I guess it's reasonable to
 assume a bad backend clock if the Date header looks off... but the
 Expires header?

 That particular piece of code is taken pretty directly from RFC2616,
 with the addition of the default_ttl assumption.

 I'm not at all averse to changing this code, provided we can agree on
 what the correct heuristics should be.


Well, if I parse the pseudocode correctly, it seems to be claiming to
do the right thing.  But the actual code that follows adds something
extra to the heuristic, which results in slightly different behavior.

Using the 1.1.2 release, lines 84-86 in rfc2616.c,

    if (date && expires)
        retirement_age =
            max(0, min(retirement_age, Expires: - Date:))

But in lines 146-145, we have,

    if (h_date != 0 && h_expires != 0) {
        if (h_date < h_expires &&
            h_expires - h_date < retirement_age)
            retirement_age = h_expires - h_date;
    }

Which appears to impose an extra requirement that Expires must be  
greater than Date.  Fix that (and enforce a floor of 0) and it seems  
like we can interpret Expires with a date in the past correctly.

Ric





default_ttl applied even when Expires exist

2008-04-20 Thread Ricardo Newbery

Noticed some odd behavior.

On a page with an already-expired Expires header (Expires: Sat, 1 Jan
2000 00:00:00 GMT) and no other cache-control headers, a stock install
of Varnish 1.1.2 appears to be applying the built-in default_ttl of
120 seconds when instead the object should just expire immediately.
There is nothing in the vcl doing this, so it appears that Varnish is
simply ignoring the Expires header.

Can anyone else confirm?

Ric




Re: default_ttl applied even when Expires exist

2008-04-20 Thread Ricardo Newbery

On Apr 20, 2008, at 2:44 AM, Ricardo Newbery wrote:


 Noticed some odd behavior.

 On page with an already-expired Expires header (Expires: Sat, 1 Jan
 2000 00:00:00 GMT) and no other cache control headers, a stock install
 of Varnish 1.1.2 appears to be applying the built-in default_ttl of
 120 seconds when instead it should just immediately expire.  There is
 nothing in the vcl doing this so it appears that Varnish is just
 ignoring the Expires header.

 Can anyone else confirm?

 Ric



Answering my own question.

I see in rfc2616.c that this behavior is intentional.  Varnish  
apparently assumes a clockless origin server if the Expires date is  
not in the future and then applies the default ttl.

The solution to this -- assuming you can't change the backend behavior  
-- appears to be to manually set a default_ttl = 0.   Are there any  
potential issues with this solution?
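
One potential issue, also noted elsewhere in this thread, is that with
default_ttl at zero, responses carrying no freshness information at all
are no longer cached either.  A vcl_fetch stanza can hand those a short
lifetime explicitly -- a sketch, untested:

    sub vcl_fetch {
        # default_ttl = 0 makes a past Expires behave as "do not cache",
        # but it also zeroes the TTL of responses with no caching
        # headers at all; give those the old 120-second default back
        # by hand.
        if (!obj.http.Expires && !obj.http.Cache-Control) {
            set obj.ttl = 120s;
        }
    }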

Ric





Re: default_ttl applied even when Expires exist

2008-04-20 Thread Ricardo Newbery

On Apr 20, 2008, at 12:28 PM, Ricardo Newbery wrote:


 On Apr 20, 2008, at 2:44 AM, Ricardo Newbery wrote:


 Noticed some odd behavior.

 On page with an already-expired Expires header (Expires: Sat, 1 Jan
 2000 00:00:00 GMT) and no other cache control headers, a stock  
 install
 of Varnish 1.1.2 appears to be applying the built-in default_ttl of
 120 seconds when instead it should just immediately expire.  There is
 nothing in the vcl doing this so it appears that Varnish is just
 ignoring the Expires header.

 Can anyone else confirm?

 Ric



 Answering my own question.

 I see in rfc2616.c that this behavior is intentional.  Varnish  
 apparently assumes a clockless origin server if the Expires date  
 is not in the future and then applies the default ttl.

 The solution to this -- assuming you can't change the backend  
 behavior -- appears to be to manually set a default_ttl = 0.   Are  
 there any potential issues with this solution?

 Ric




Regarding this behavior: I would like to suggest to the Varnish
developers that this logic seems faulty.  I guess it's reasonable to
assume a bad backend clock if the Date header looks off... but the
Expires header?

At least one backend I'm familiar with uses an already-expired Expires
date as shorthand for "do not cache", and it seems that this is valid
behavior according to RFC 2616.

 From RFC 2616 (14.9.3),

Many HTTP/1.0 cache implementations will treat an Expires value that
is less than or equal to the response Date value as being equivalent
to the Cache-Control response directive "no-cache". If an HTTP/1.1
cache receives such a response, and the response does not include a
Cache-Control header field, it SHOULD consider the response to be
non-cacheable in order to retain compatibility with HTTP/1.0  
servers.

Even in the case of a clockless origin server, RFC 2616 allows for a  
past Expires date,

 From RFC 2616 (14.18.1),

Some origin server implementations might not have a clock available.
An origin server without a clock MUST NOT assign Expires or Last-
Modified values to a response, unless these values were associated
with the resource by a system or user with a reliable clock. It MAY
assign an Expires value that is known, at or before server
configuration time, to be in the past (this allows pre-expiration
of responses without storing separate Expires values for each
resource).

Ric





Re: Unprivileged user?

2008-04-15 Thread Ricardo Newbery

On Apr 14, 2008, at 11:03 PM, Per Andreas Buer wrote:

 Ricardo Newbery skrev:
 I'm trying to understand the purpose of the -u user option for
 varnishd.  It appears that even when starting up as root, and the
 child process dropping to nobody, Varnish is still saving and
 serving from cache even though nobody doesn't have read/write  
 access
 to the storage file owned by root.

 In Unix, if you drop privileges, you still have access to all your  
 open
 files. Access control happens when you open files. That should answer
 the rest of your questions too, I believe.


Hmm... maybe I'm missing something but this doesn't seem to answer the  
main question.  If, as you seem to imply, Varnish is opening any files  
it needs while it's still root, then what is the purpose of the -u  
user option?

Ric




Re: Unprivileged user?

2008-04-15 Thread Ricardo Newbery

On Apr 14, 2008, at 11:25 PM, Florian Engelhardt wrote:

 On Mon, 14 Apr 2008 23:20:11 -0700
 Ricardo Newbery [EMAIL PROTECTED] wrote:


 On Apr 14, 2008, at 11:03 PM, Per Andreas Buer wrote:

 Ricardo Newbery skrev:
 I'm trying to understand the purpose of the -u user option for
 varnishd.  It appears that even when starting up as root, and the
 child process dropping to nobody, Varnish is still saving and
 serving from cache even though nobody doesn't have read/write
 access
 to the storage file owned by root.

 In Unix, if you drop privileges, you still have access to all your
 open
 files. Access control happens when you open files. That should
 answer the rest of your questions too, I believe.

 Hmm... maybe I'm missing something but this doesn't seem to answer
 the main question.  If, as you seem to imply, Varnish is opening any
 files it needs while it's still root, then what is the purpose of
 the -u user option?

 That's the same thing in apache, mysql, ...
 Open every filehandle you need, then drop privileges.  In case the
 software is hacked, it cannot damage the system beyond the already
 opened file descriptors and whatever the unprivileged user can do.  If
 the daemon ran as root, an attacker could do anything to your computer.

 /Flo


Please reread my question.  I know why privileges are dropped.  That  
is not the question.

Ric




Re: Unprivileged user?

2008-04-15 Thread Ricardo Newbery

On Apr 15, 2008, at 12:15 AM, Poul-Henning Kamp wrote:

 Ricardo Newbery writes:

 I'm trying to understand the purpose of the -u user option for
 varnishd.  It appears that even when starting up as root, and the
 child process dropping to nobody, Varnish is still saving and
 serving from cache even though nobody doesn't have read/write  
 access
 to the storage file owned by root.

 The file is opened before the cache process drops to nobody, and in
 UNIX the access check is performed at open time and not at read/write
 time.


I must not be making myself clear.  Let me try again...

Assuming that "nobody" is an available user on your system, is
the -u user option for varnishd superfluous?

Ric





Unprivileged user?

2008-04-14 Thread Ricardo Newbery

I'm trying to understand the purpose of the -u user option for
varnishd.  It appears that even when starting up as root, with the
child process dropping to "nobody", Varnish is still saving to and
serving from the cache even though "nobody" doesn't have read/write
access to the storage file owned by root.

I'm guessing this is happening because Varnish is reading and writing
to memory instead of the file storage?  So I suppose my question is:
what functionality is missing if the effective user doesn't have
read/write privileges to the file storage?  Is the backing file only
accessed by the parent process?  And if so, what is the purpose of the
-u user option?

Ric




Re: cache empties itself?

2008-04-08 Thread Ricardo Newbery

On Apr 8, 2008, at 8:26 AM, DHF wrote:

 Ricardo Newbery wrote:
 Regarding the potential management overhead... this is not relevant  
 to the question of whether this strategy would increase your site's  
 performance.  Management overhead is a separate question, and not  
 an easy one to answer in the general case.  The overhead might be a  
 problem for some.  But I know in my own case, the overhead required  
 to manage this sort of thing is actually pretty trivial.
 How do you manage the split ttl's?  Do you send a purge after a page  
 has changed or have you crafted another way to force a revalidation  
 of cached objects?


Yes, a purge is sent after the page has changed.  For Plone, all of
this is easy to automate with the CacheFu add-on, although support
for adding a Surrogate-Control header (or whatever you use to
communicate the local ttl) requires some minor customization (about 5
lines of code).
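
The Varnish side of that purge is the standard PURGE-method handler;
roughly like this (a sketch of the common recipe, with a placeholder
ACL -- note that purge_url() treats its argument as a regex):

    acl purgers {
        "localhost";
    }

    sub vcl_recv {
        if (req.request == "PURGE") {
            if (!client.ip ~ purgers) {
                error 405 "Not allowed.";
            }
            # Invalidate every cached variant matching this URL.
            purge_url(req.url);
            error 200 "Purged.";
        }
    }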

Ric




Re: Two New HTTP Caching Extensions

2008-04-08 Thread Ricardo Newbery

On Apr 7, 2008, at 3:18 PM, Jon Drukman wrote:

 Poul-Henning Kamp wrote:
 In message [EMAIL PROTECTED], Sam Quigley writes:
 ...just thought I'd point out another seemingly-nifty thing the  
 Squid
 folks are working on:

 http://www.mnot.net/cache_channels/
 and
 http://www.mnot.net/blog/2008/01/04/cache_channels

 Interesting to see what hoops they try to jump through these days...


 I just got through working at Yahoo and they have valid reasons to  
 want
 all these behaviors.  The thing I didn't like about the cache channel
 implementation is that it involves squid polling an RSS feed every few
 seconds to determine which bits of the cache to invalidate.

 I'm looking at launching a small site for a client and the
 stale-while-revalidate/stale-on-error functionality is fairly  
 critical.
  I want to go with varnish, though. Front end cache server in India,
 pulling content from the USA... lots of potential for slow/dead
 connections back to the origin, so it would be great if Varnish would
 serve stale content in this eventuality.

 -jsd-


+1 on stale-while-revalidate.  I found this one to be real handy.
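
Varnish's grace mechanism (in recent versions) covers a slice of this:
it can serve a recently expired object while another request is busy
fetching a fresh copy.  A minimal sketch, assuming a grace-capable
version; it does not cover the stale-on-error half:

    sub vcl_recv {
        # Accept objects up to 30 seconds past their TTL.
        set req.grace = 30s;
    }

    sub vcl_fetch {
        # Keep expired objects around long enough to be eligible.
        set obj.grace = 30s;
    }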

Ric




Re: cache empties itself?

2008-04-04 Thread Ricardo Newbery

On Apr 4, 2008, at 2:50 AM, Michael S. Fischer wrote:

 On Thu, Apr 3, 2008 at 8:59 PM, Ricardo Newbery [EMAIL PROTECTED] 
  wrote:

 Well, first of all you're setting up a false dichotomy.  Not  
 everything
 fits neatly into your apparent definitions of dynamic versus  
 static.  Your
 definitions appear to exclude the use case where you have cacheable  
 content
 that is subject to change at unpredictable intervals but which is  
 otherwise
 fairly static for some length of time.

 In my experience, you almost never need a caching proxy for this
 purpose.  Most modern web servers are perfectly capable of serving
 static content at wire speed.  Moreover, if your origin servers have a
 reasonable amount of RAM and the working set size is relatively small,
 the static objects are already likely to be in the buffer cache.  In a
 scenario such as this, having caching proxies upstream for these sorts
 of objects can actually be *worse* in terms of performance -- consider
 the wasted time processing a cache miss for content that's already
 cached downstream.


Again, static content isn't only the stuff that is served from  
filesystems in the classic static web server scenario.  There are  
plenty of dynamic applications that process content from a database --
applying skins and compositing multiple elements into a single page  
while filtering every element or otherwise applying special processing  
based on a user's access privileges.  An example of this is a dynamic  
content management system like Plone or Drupal.  In many cases, these  
dynamic responses are fairly static for some period of time but  
there is still a definite performance hit, especially under load.

Ric




Re: cache empties itself?

2008-04-04 Thread Ricardo Newbery

On Apr 4, 2008, at 2:04 PM, Michael S. Fischer wrote:

 On Fri, Apr 4, 2008 at 11:05 AM, Ricardo Newbery [EMAIL PROTECTED] 
  wrote:

 Again, static content isn't only the stuff that is served from
 filesystems in the classic static web server scenario.  There are  
 plenty of
 dynamic applications that process content from database --  
 applying skins
 and compositing multiple elements into a single page while  
 filtering every
 element or otherwise applying special processing based on a user's  
 access
 privileges.  An example of this is a dynamic content management  
 system like
 Plone or Drupal.  In many cases, these dynamic responses are fairly
 static for some period of time but there is still a definite  
 performance
 hit, especially under load.

 If that's truly the case, then your CMS should be caching the output  
 locally.


Should be?  Why?  If you can provide this capability via a separate  
process like Varnish, then why should your CMS do this instead?  Am  
I missing some moral dimension to this issue?  ;-)

In any case, both of these examples, Plone and Drupal, can indeed  
cache the output locally but that is still not as fast as placing a  
dedicated cache server in front.  It's almost always faster to have a  
dedicated single-purpose process do something instead of cranking up  
the hefty machinery for requests that can be adequately served by the  
lighter process.

Ric




Re: cache empties itself?

2008-04-03 Thread Ricardo Newbery

On Apr 3, 2008, at 11:04 AM, Michael S. Fischer wrote:

 On Thu, Apr 3, 2008 at 10:58 AM, Sascha Ottolski [EMAIL PROTECTED]  
 wrote:
 and I don't wan't upstream caches or browsers to cache that long,  
 only
 varnish, so setting headers doesn't seem to fit.

 Why not?  Just curious.   If it's truly cachable content, it seems as
 though it would make sense (both for your performance and your
 bandwidth outlays) to let browsers cache.

 --Michael


Can't speak for the OP but a common use case is where you want an  
aggressive cache but still need to retain the ability to purge the  
cache when content changes.  As far as I know, there are only two ways  
to do this without contaminating downstream caches with potentially  
stale content... via special treatment in the varnish config (which is  
what the OP is trying to do) or using a special header that only your  
varnish instance will recognize (like Surrogate-Control, which as far  
as I know Varnish does not support out-of-the-box but Squid3 does).

Ric




Re: cache empties itself?

2008-04-03 Thread Ricardo Newbery

On Apr 3, 2008, at 12:45 PM, Michael S. Fischer wrote:

 On Thu, Apr 3, 2008 at 11:53 AM, Ricardo Newbery [EMAIL PROTECTED] 
  wrote:
 On Apr 3, 2008, at 11:04 AM, Michael S. Fischer wrote:
 On Thu, Apr 3, 2008 at 10:58 AM, Sascha Ottolski [EMAIL PROTECTED]  
 wrote:

 and I don't wan't upstream caches or browsers to cache that long,  
 only
 varnish, so setting headers doesn't seem to fit.


 Why not?  Just curious.   If it's truly cachable content, it seems  
 as
 though it would make sense (both for your performance and your
 bandwidth outlays) to let browsers cache.

 Can't speak for the OP but a common use case is where you want an
 aggressive cache but still need to retain the ability to purge the  
 cache
 when content changes.  As far as I know, there are only two ways to  
 do this
 without contaminating downstream caches with potentially stale  
 content...
 via special treatment in the varnish config (which is what the OP  
 is trying
 to do) or using a special header that only your varnish instance will
 recognize (like Surrogate-Control, which as far as I know Varnish  
 does not
 support out-of-the-box but Squid3 does).

 Seems to me that this is rather brittle and error-prone.

 - If a particular resource is truly dynamic, then it should not be
 cachable at all.
 - If a particular resource can be considered static (i.e. cachable),
 yet updateable, then it is *far* safer to version your URLs, as you
 have zero control over intermediate proxies.

 --Michael



If done correctly, this is neither brittle nor error-prone.  This is,
after all, the point of the Surrogate-Control header -- a way for your
backend to instruct your proxy (or surrogate, if you insist) how to
handle your content in a way that is invisible to intermediate proxies
not under your control.

While not as flexible as the Surrogate-Control header, you can do the  
same just with special stanzas in your varnish.vcl.  In fact, the vcl  
man page contains one example of how to do this for all objects to  
enforce a minimum ttl:

  sub vcl_fetch {
      if (obj.ttl < 120s) {
          set obj.ttl = 120s;
      }
  }

Or you can invent your own header... let's call it  X-Varnish-1day

  sub vcl_fetch {
      if (obj.http.X-Varnish-1day) {
          set obj.ttl = 86400s;
      }
  }

Neither of these two examples is unsafe, and both are invisible to
intermediate proxies.

With regards to URL versioning, this is indeed a powerful strategy --  
assuming your backend is capable of doing this.  But it's a strategy  
generally only appropriate for supporting resources like inline  
graphics, css, and javascript.  URL versioning is usually not  
appropriate for html pages or other primary resources that are  
intended to be reached directly by the end user and whose URLs must  
not change.

Ric









Re: cache empties itself?

2008-04-03 Thread Ricardo Newbery

On Apr 3, 2008, at 7:46 PM, Michael S. Fischer wrote:

 On Thu, Apr 3, 2008 at 7:37 PM, Ricardo Newbery [EMAIL PROTECTED] 
  wrote:

 URL versioning is usually not appropriate for html
 pages or other primary resources that are intended to be reached  
 directly by
 the end user and whose URLs must not change.

 Back to square one.  Are these latter resources dynamic, or are they  
 static?

 - If they are dynamic, neither your own proxies nor upstream proxies
 should be caching the content.
 - If they are static, then they should be cacheable for the same
 amount of time all the way upstream (modulo protected URLs).

 I've haven't yet seen a defensible need for varying cache lifetimes,
 depending on the proximity of the proxy to the origin server, as this
 request seems to be.  Of course, I'm open to being convinced otherwise
 :-)


Well, first of all you're setting up a false dichotomy.  Not  
everything fits neatly into your apparent definitions of dynamic  
versus static.  Your definitions appear to exclude the use case where  
you have cacheable content that is subject to change at unpredictable  
intervals but which is otherwise fairly static for some length of  
time.

Sometimes, in such a case, serving stale content for some time after  
an edit is an acceptable compromise between performance and freshness  
but often it is not.   And sometimes, impacting overall performance by  
hitting the backend for every such request is also undesirable.

Thankfully, those are not the only choices.  With a combination of  
PURGE requests and something like Surrogate-Control (or hardcoded  
behavior in your reverse-proxy config), you can still ensure immediate
freshness (or whatever level of freshness you require) without forcing  
your backend to do all the work.

Ric





Authenticate or Authorization?

2008-03-27 Thread Ricardo Newbery


In the default vcl, we have the following test...

  if (req.http.Authenticate || req.http.Cookie) {
  pass;
  }


What issues an Authenticate header?  Was this supposed to be  
Authorization?
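
For comparison, here is the same stanza keyed on the request header
that browsers actually send with HTTP authentication -- a sketch,
assuming Authorization was indeed the intent:

    sub vcl_recv {
        # Authorization carries Basic/Digest credentials on the
        # request; pass such requests so private responses are neither
        # served from nor stored in the cache.
        if (req.http.Authorization || req.http.Cookie) {
            pass;
        }
    }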

Ric





Re: Authenticate or Authorization?

2008-03-27 Thread Ricardo Newbery

On Mar 27, 2008, at 5:50 PM, Cherife Li wrote:

 On 03/28/08 06:47, Ricardo Newbery wrote:
 In the default vcl, we have the following test...
  if (req.http.Authenticate || req.http.Cookie) {
  pass;
  }
 What issues an Authenticate header?  Was this supposed to be   
 Authorization?
 I'm also wondering whether this http.Authenticate means the
 Proxy-Authenticate, Proxy-Authorization, or WWW-Authenticate headers
 defined in RFC 2616.


WWW-Authenticate and Proxy-Authenticate are response headers, not  
request headers.  And they are supposed to accompany a 401 or 407  
response, neither of which should be cacheable in any event.

Proxy-Authorization is a request header but it would only be sent by a  
browser if Varnish first requested it with a 407 response, which I'm  
pretty sure Varnish does not do.

Ric




what if a header I'm testing is missing?

2008-03-21 Thread Ricardo Newbery

This is a minor thing but I'm wondering if I'm making an incorrect  
assumption.

In my vcl file, I have lines similar to the following...

     if (req.http.Cookie && req.http.Cookie ~ "(__ac=|_ZopeId=)") {
         pass;
     }

and I'm wondering if the first part of this is unnecessary.  For  
example, what happens if I have this...


     if (req.http.Cookie ~ "(__ac=|_ZopeId=)") {
         pass;
     }

but no Cookie header is present in the request.  Is Varnish flexible  
enough to realize that the test fails without throwing an error?

Ric






Re: Specification out of date?

2008-03-21 Thread Ricardo Newbery

On Mar 21, 2008, at 11:45 AM, Ricardo Newbery wrote:


 On Mar 21, 2008, at 5:08 AM, Dag-Erling Smørgrav wrote:

 Ricardo Newbery [EMAIL PROTECTED] writes:
 Dag-Erling Smørgrav [EMAIL PROTECTED] writes:
 I still don't understand why you want to go from hit to fetch.
 Just pass it.
 Because a pass will not store the response in cache when it otherwise
 should if it contains a public token.

 Dude, it's already in the cache.  That's how you ended up in vcl_hit
 in the first place.


 Doesn't matter.  An authenticated request should not pull from cache
 *unless* the public token is present.


Also, if the authenticated response *does* contain a public token,  
then it should replace the version in cache.

Note, I've already acknowledged that this second part of the logic  
might be flawed.  If *any* response contains a 'public' token, then  
theoretically *all* responses to the same request should contain the  
token (ignoring transient differences caused by backend changes).  So  
conversely, if we don't find the public token in the cached version,
then it may be okay to assume that responses to any subsequent request
will continue to be non-public.

Ric




Re: Specification out of date?

2008-03-20 Thread Ricardo Newbery

On Mar 20, 2008, at 2:15 AM, Dag-Erling Smørgrav wrote:

 Ricardo Newbery [EMAIL PROTECTED] writes:
 [...]

 Yes, the spec is two years out of date.


Right.  That much was apparent.  My question again is shouldn't this  
document be updated?  And is there still an intent to implement any of  
this?


 If you want Varnish to obey Cache-Control, it is trivial to  
 implement in
 VCL.


Well... perhaps.

I think I can implement 'no-cache' and 'private' with the following  
stanza in vcl_fetch:

     if (obj.http.Cache-Control ~ "(no-cache|private)") {
         pass;
     }

But this behavior is trivial to duplicate in Varnish with just
s-maxage=0 so there is probably no advantage to this unless my backend
can't set s-maxage for some reason.

I'm actually more interested in trying to reproduce the semantics of  
the 'public' token.  But I'm having trouble figuring out how to  
implement this one in vcl.  In the default vcl, authenticated requests  
are passed through before any cache check or backend fetch is  
attempted.  If I rearrange this a bit so that the authenticate test  
comes later, I think I run into a vcl limitation.

For example, the following seems like it should work:

1) Remove from vcl_recv the following...

  if (req.http.Authenticate || req.http.Cookie) {
  pass;
  }

2) Add to vcl_hit the following (after the !obj.cacheable test)...

  if (obj.http.Cache-Control ~ "public") {
  deliver;
  }
  if (req.http.Authenticate) {
  fetch;
  }

3) Add to vcl_fetch the following (after the other tests)...

  if (obj.http.Cache-Control ~ "public") {
  insert;
  }
  if (req.http.Authenticate) {
  pass;
  }

But the vcl man page appears to tell me that 'fetch' is not a valid  
keyword in vcl_hit so, if I believe the docs, then this is not going  
to work.

Do you have any suggestions on how to implement the 'public' token in  
vcl?

Ric






Re: Specification out of date?

2008-03-20 Thread Ricardo Newbery

On Mar 20, 2008, at 6:07 AM, Dag-Erling Smørgrav wrote:

 Ricardo Newbery [EMAIL PROTECTED] writes:
 If an authenticated request comes in and I have a valid cached copy,
 Varnish should not return the cached copy *unless* the copy contains
 a 'public' token.  It's not enough that Varnish previously tested for
 the public token before insertion as the previous request may have
 been a regular non-authenticated request which should be cached
 regardless.  So I need to test for the public token before both
 insertion and delivery from cache.

 I still don't understand why you want to go from hit to fetch.  Just
 pass it.


Because a pass will not store the response in cache when it otherwise  
should if it contains a public token.

Hmm, perhaps I'm making some error in logic.

If an item is in the cache and it doesn't have a 'public' token, then  
can I safely assume that authenticated version of the same item will  
also not contain a 'public' token?  My first thought was that I can't  
make this assumption.  But it's late now and my thinking is getting  
fuzzy.  I'll have to pick this up again later.

But if I tentatively accept this assumption for now, then do you see  
any problem with the same solution but with a 'pass' instead of  
'fetch'?  Like so...

1) Remove from vcl_recv the following...

  if (req.http.Authenticate || req.http.Cookie) {
  pass;
  }

2) Add to vcl_hit the following (after the !obj.cacheable test)...

  if (obj.http.Cache-Control ~ "public") {
  deliver;
  }
  if (req.http.Authenticate) {
  pass;
  }

3) Add to vcl_fetch the following (after the other tests)...

  if (obj.http.Cache-Control ~ "public") {
  insert;
  }
  if (req.http.Authenticate) {
  pass;
  }






!obj.cacheable passes?

2008-03-19 Thread Ricardo Newbery

I'm looking at the default vcl and I see the following stanza:

  sub vcl_hit {
  if (!obj.cacheable) {
  pass;
  }
  deliver;

According to the vcl man page:

  obj.cacheable
      True if the request resulted in a cacheable response.  A response
      is considered cacheable if it is valid (see above), the HTTP
      status code is 200, 203, 300, 301, 302, 404 or 410 and it has a
      non-zero time-to-live when Expires and Cache-Control headers are
      taken into account.


Something about this seems odd.  Perhaps someone can clear it up for me.

We drop into vcl_hit if the object is found in the cache -- before  
we attempt to fetch from the backend.  And a pass of course doesn't  
cache the response.  Why do we not attempt to cache the response if  
the copy in our cache is not cacheable?  Couldn't a subsequent  
response otherwise be cacheable?  In other words, shouldn't this  
stanza instead be something like this:

  sub vcl_hit {
  if (!obj.cacheable) {
  fetch;
  }
  deliver;
  }

And let vcl_fetch determine whether the new copy should be inserted  
into the cache?

If I understand correctly, 'vcl_hit' cannot currently be terminated  
with 'fetch'.   Why is that?

Ric




hash with Accept-Encoding

2008-02-02 Thread Ricardo Newbery

I came across this line in an example vcl which confused me...

sub vcl_hash {
 set req.hash += req.http.Accept-Encoding;
}


This line seemed superfluous to me since it was my impression that  
varnish already took care of this automatically as long as the Vary  
header was set correctly.  Is this not the case?

Ric




Re: Varnish Hash

2007-12-06 Thread Ricardo Newbery

On Dec 6, 2007, at 2:34 PM, Poul-Henning Kamp wrote:

 In message [EMAIL PROTECTED], Erik writes:
 Just to make this clear, does varnish identify an object like this  
 in vcl_hash?

 sub vcl_hash {
 set req.hash += req.url;
 set req.hash += req.http.host;
 hash;
 }

 Well, mostly.  That is the primary identification, but each match
 can have multiple different objects, depending on the Vary header
 and ttl.


Apologies for butting into this thread...

Multiple objects depending on ttl?  Can you elaborate?

Also, how are purges done when variations exist?  I'm guessing all  
variations get purged.  Is this correct?

Ric




Re: how to test varnish with port 8080

2007-11-05 Thread Ricardo Newbery

On Nov 5, 2007, at 10:48 AM, Damien Wetzel wrote:

 Hello,
 I'm trying varnish on a 64-bit machine on which port 80 is already
 in use, so I use port 8080 to talk to varnish.
 I wondered if someone knew a way to tell firefox to use port 8080 by
 default, to spare me the pain of adding :8080 to all my requests.
 Many thanks for any ideas :)
 Damien


Use a Redirect, ProxyPass, or RewriteRule in the Apache config
(assuming Apache is the one hogging port 80) to send selected
requests to port 8080.

Ric




Re: apache+zope+varnish

2007-09-24 Thread Ricardo Newbery

On Sep 24, 2007, at 8:57 AM, [EMAIL PROTECTED] wrote:

 Hi,

 My configuration:
 I have an Apache server with mod_proxy enabled, listening on port 80
 and redirecting to 9080, where my Plone site listens.

 And I know that varnish is the best solution for caching.
 How do I integrate varnish into this architecture?  (What port should
 varnish listen on in my architecture?  How do I tell Apache to cache
 with varnish? ...)

 I began installing varnish from source, but I have some trouble
 understanding how to put all these technologies together (I'm French
 and ...)

 Thanks for your answer
 (I've read the architecture notes and I'm starting to read the man
 pages and other docs, but a user guide would be very useful.)
 Long life to varnish -- it will become a standard...


If you're using Plone, I suggest looking at CacheFu
(http://plone.org/products/cachefu).  Even if you're not using Plone,
CacheFu generates some beta Varnish configs that might be useful as a
reference.

Another one to look at is plone.recipe.varnish
(http://pypi.python.org/pypi/plone.recipe.varnish), which despite the
name is NOT specific to Plone.
Ric




Re: apache+zope+varnish

2007-09-24 Thread Ricardo Newbery

On Sep 24, 2007, at 12:18 PM, jean-marc pouchoulon wrote:


 bonjour,


 And i know that varnish is the best solution to cache
 How integrated varnish in this achitecture ? (What port must listen
 varnish in my architecture? How specify to apache to cache with
 varnish? ...)


 Put varnish in front of apache.  Apache doesn't have to cache
 anything; just use it to load-balance to the zeo clients.

 I begin to install varnish with the source but i've some problem to
 understand how implemented all this technologies (i'm french  
 and ...)


 I'm French too and working for the same big entity ( :-D )
 ...and we use varnish with zope/cps.
 I did some slides on varnish and presented them to other systems
 administrators in June.  If you want them and our config, you can
 contact me.

 Jean-marc Pouchoulon
 Rectorat de Montpellier


Use cases differ, of course, but I'm partial to the other way around:
put Apache in front of Varnish and Varnish in front of Zope.  And if
you need load balancing, insert something like Pound between Varnish
and the ZEO clients (although there is some work in Varnish trunk to
add load balancing).  This configuration frees up Apache to serve
other stuff besides the Varnish-cached content.

Ric




Re: 1.1.1 progress

2007-08-13 Thread Ricardo Newbery

On Aug 12, 2007, at 2:48 PM, Martin Aspeli wrote:

 Dag-Erling Smørgrav wrote:
 Martin Aspeli [EMAIL PROTECTED] writes:
 Are there any OS X fixes in the 1.1 branch?

 Some, yes.  At least, I hope the -flat_namespace hack is no longer
 required, but I don't have a Mac to test on right now.

 I just tried it, doing the following:

   $ svn co http://varnish.projects.linpro.no/svn/branches/1.1
   $ ./autogen.sh
   $ ./configure.sh --prefix=/path/to/install/dir
   $ make
   $ make install

 And it works! I no longer have to do the libtool patch from
 http://varnish.projects.linpro.no/ticket/118.

 Note that my environment is maybe not 100% normal, in that I have  
 GNU
 versions of various tools (including libtool) installed via MacPorts.
 But I definitely had to do the workaround before, and I no longer  
 do, so
 I assume it's fixed.

 If others can confirm, you may be able to close 118 now.

 Thanks!
 Martin


Not so fast.  I still needed to update automake as per the  
instructions on http://varnish.projects.linpro.no/wiki/Installation  
(slightly modified since the autogen.sh file is now a little different).

Also, I think you meant  ./configure, not ./configure.sh

Running OS X 10.4.10 on an Intel Core Duo.  Mostly stock with just a  
couple of Fink and MacPorts installed tools.

Ric





Re: Small varnish 1.1 test with openrealty and joomla.

2007-07-31 Thread Ricardo Newbery

On Jul 31, 2007, at 10:16 AM, [EMAIL PROTECTED] wrote:

 Quoting Dag-Erling Smørgrav [EMAIL PROTECTED]:

 [EMAIL PROTECTED] writes:
 Quoting Dag-Erling Smørgrav [EMAIL PROTECTED]:
 The example you mention illustrates how to cache multiple  
 virtual hosts
 served by *separate* backends.  If all your virtual hosts are on  
 the
 same backend, you shouldn't need to do anything.
 That is what I thought so it must be something in my apache vhost
 configuration.  It really doesn't make sense yet, to me.

 Is Varnish passing the correct Host: header to Apache?

 AFAIK, it is passing what I am/was telling it in default.vcl.  The
 localhost trumps the original named base vhost for apache, and that is
 what my configuration file was asking for.  So it is correct, but not
 what I'm trying to do ;)

 backend default {
     set backend.host = "127.0.0.1";
     set backend.port = "8080";
 }

 What I am trying to do is to get it to pass something like:

set backend.host = req.http.host;

 Which, from my limited perspective, would be a solution for this vhost
 issue although there are probably others and hopefully better and/or
 simpler options.

 Thanks again, DES, for spending time on this somewhat trivial issue
 that I'm having with my learning curve.


The following doesn't work?

sub vcl_recv {
    set backend.host = req.http.Host;
}

or if it's still missing the Host header, does the following work:

sub vcl_miss {
    set bereq.http.Host = req.http.Host;
}

and ditto for vcl_pipe and vcl_pass
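
Spelled out, the ditto would be (same idea, untested):

    sub vcl_pipe {
        set bereq.http.Host = req.http.Host;
    }

    sub vcl_pass {
        set bereq.http.Host = req.http.Host;
    }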

Ric

