According to Ken Williams:
> >Another option is to set up whatever handler you want, on a development
> >or staging server (i.e., not the live one), and grab the pages with
> >lynx -dump or GET or an LWP script, and write them to the proper places
> >in the filesystem where the live server can access them. With a little
> >planning, this can be incorporated into a cron job that runs nightly
> >(or hourly, whatever) for stuff that is updated regularly but is
> >composed of discernible chunks.
>
> I've used this before and it works well. One disadvantage is that Luis
> would have to move all his existing scripts to different places, and fix
> all the file-path references that might break as a result. Since Luis
> says he wants a cache on the front end, a front-end cache like squid
> seems like the better fit.
>
> Putting squid in front of an Apache server used to be very popular - has
> it fallen out of favor? Most of the answers given in this thread seem
> to be more of the roll-your-own-cache variety.
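For concreteness, the fetch-and-write approach Ken describes might look
roughly like the following LWP sketch run from cron. The staging host,
the page list, and the target directory are made-up placeholders, not
anything from the thread:

    #!/usr/bin/perl
    # Pull pre-rendered pages from a staging server and drop them where
    # the live server can serve them as static files. Host, pages, and
    # docroot below are hypothetical.
    use strict;
    use LWP::Simple;   # exports mirror()

    my $staging = 'http://staging.example.com';
    my $docroot = '/var/www/html/static';
    my @pages   = qw(index.html news.html products.html);

    for my $page (@pages) {
        # mirror() uses If-Modified-Since, so unchanged pages are left
        # alone; it returns the HTTP status (200 rewritten, 304 unchanged).
        my $status = mirror("$staging/$page", "$docroot/$page");
        warn "fetch of $page failed with status $status\n"
            unless $status == 200 || $status == 304;
    }

A crontab entry along the lines of "0 3 * * * /usr/local/bin/pull-pages"
would then refresh the static copies nightly.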
It really depends on what you are doing. The real problem with
letting the front end decide when a cached copy needs to be refreshed
is that it usually guesses wrong. If the back end can generate
predictably correct Expires: or Cache-Control: headers, then
squid can mostly get it right. This will also make remote
caches work correctly. The trouble is that you generally
don't know when a dynamically generated page is going to
change. Also, squid will pull a fresh copy from the back
end whenever the user hits the 'reload' button, which tends
to be pretty often on dynamic pages that change frequently.
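As a rough illustration of the back end generating those headers itself,
here is a minimal sketch in mod_perl 1.x style; the package name and the
ten-minute lifetime are assumptions, not anything from this thread:

    # Hypothetical mod_perl 1.x content handler for a page we know
    # stays valid for ten minutes.
    package My::Cacheable;
    use strict;
    use Apache::Constants qw(OK);
    use HTTP::Date qw(time2str);   # ships with LWP, formats HTTP dates

    sub handler {
        my $r = shift;
        my $lifetime = 600;   # seconds the page can safely be cached

        $r->content_type('text/html');
        # These headers let squid (and any remote cache) serve the page
        # without touching the back end until the lifetime runs out.
        $r->header_out('Cache-Control' => "max-age=$lifetime");
        $r->header_out('Expires'       => time2str(time + $lifetime));
        $r->send_http_header;

        $r->print("<p>Generated ", scalar localtime, "</p>\n");
        return OK;
    }
    1;

Even with headers like these, a browser reload usually sends
Cache-Control: no-cache, which squid passes through to the back end
unless it is configured to override it - which is the behavior
described above.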
If you just want to control how often some expensive operation
runs, you could schedule runs that generate HTML snippets which
are then #included into *.shtml pages, turning the per-request
work into a cheap operation.
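As a sketch of that last idea, a cron job could rebuild just the
expensive fragment and leave the rest of the page alone; the snippet
path and the get_expensive_report() routine below are hypothetical
placeholders:

    #!/usr/bin/perl
    # Regenerate the expensive fragment on a schedule so the .shtml
    # page that #includes it stays cheap to serve.
    use strict;

    my $snippet = '/var/www/html/snippets/report.html';

    # Stand-in for the slow part (big query, report rendering, etc.).
    my $html = get_expensive_report();

    # Write to a temp file and rename so the server never includes a
    # half-written fragment.
    open my $fh, '>', "$snippet.tmp" or die "open $snippet.tmp: $!";
    print $fh $html;
    close $fh or die "close $snippet.tmp: $!";
    rename "$snippet.tmp", $snippet or die "rename: $!";

    sub get_expensive_report {
        return "<p>Report generated " . scalar(localtime) . "</p>\n";
    }

The *.shtml page then just contains an ordinary SSI directive such as
<!--#include virtual="/snippets/report.html" --> where the expensive
output used to be.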
Les Mikesell
[EMAIL PROTECTED]