at a time earlier than now, darren chamberlain wrote:
> 
> Write a handler (or cgi script, or registry script, or NSAPI plugin, or PHP
> page) that handles 404 Errors, generates the (static) page, and writes it to
> the location in the file system where the requested page should live. The
> next time it is called, it will be treated like any other HTML file request.
> The fastest way to cache pages is to have them be regular HTML.

 i like this idea and i've heard of several people implementing it
 successfully. 
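 a minimal sketch of how that could look under mod_perl 1 -- the package
 name and My::Generator::build_page() are stand-ins for whatever actually
 renders the page:

   # httpd.conf:
   #   ErrorDocument 404 /regen
   #   <Location /regen>
   #     SetHandler perl-script
   #     PerlHandler My::Write404
   #   </Location>
   package My::Write404;
   use strict;
   use Apache::Constants qw(OK SERVER_ERROR);

   sub handler {
       my $r = shift;
       # ErrorDocument fires as an internal redirect; the URI the
       # client asked for is on the previous request
       my $uri  = $r->prev ? $r->prev->uri : $r->uri;
       my $file = $r->document_root . $uri;

       my $html = My::Generator::build_page($uri);   # hypothetical

       # write the static copy so the next hit is a plain file request
       open my $fh, '>', $file or do {
           $r->log_error("can't write $file: $!");
           return SERVER_ERROR;
       };
       print $fh $html;
       close $fh;

       # ... and serve it to the client that triggered the miss
       $r->content_type('text/html');
       $r->send_http_header;
       $r->print($html);
       return OK;
   }
   1;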

 so the interface to the caching code would be accessible anywhere, right?

 it could be fairly simple... i think this sounds like a feasible project. it
 has a huge advantage over Squid or other external proxies, because the
 validation logic doesn't have to fit the HTTP model.

 $cache->expire($key);
 $cache->store($key, $value, \%validate_options);
        - Mason already has some nice code to attach validation to the cache
          entry.

 what else do you need externally?
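 to make the shape concrete, here's a rough sketch of that interface, keyed
 by URI path and writing straight into the document root. the package name,
 constructor, and option handling are all made up:

   package My::PageCache;   # hypothetical name
   use strict;
   use File::Basename qw(dirname);
   use File::Path qw(mkpath);

   sub new {
       my ($class, %args) = @_;
       # root is the document root; meta holds validation options per
       # key (in-memory here; real code would persist them somewhere)
       return bless { root => $args{root}, meta => {} }, $class;
   }

   # store(key, value, {validate_options}): write the rendered page
   # where Apache will find it, and keep the validation options around
   sub store {
       my ($self, $key, $value, $opts) = @_;
       my $file = "$self->{root}/$key";
       mkpath(dirname($file));
       open my $fh, '>', $file or die "store $key: $!";
       print $fh $value;
       close $fh;
       $self->{meta}{$key} = $opts || {};
   }

   # expire(key): unlink the static copy so the next hit regenerates it
   sub expire {
       my ($self, $key) = @_;
       unlink "$self->{root}/$key";
       delete $self->{meta}{$key};
   }

   # note: no fetch() needed -- Apache serves the written file directly

   1;

 so a call like $cache->store('news/today.html', $html, { expires => time +
 3600 }) drops the file where the next request finds plain HTML.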

 is it possible that Apache::Session could be used to handle storage? that way
 the caching code would be able to handle just about anything, and the
 back-end storage is already generalized. if you used Session::FileStore, you
 could just set the Directory to be the document root, and the session_id to
 the file name of the dynamic (and now static) page.
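
 an untested sketch of that idea, using the File-backed store
 (Apache::Session::File, which takes Directory and LockDirectory options).
 two wrinkles worth noting: Apache::Session serializes its entries
 (Storable by default), so what lands on disk isn't raw HTML, and it
 generates its own session ids rather than taking a filename:

   use strict;
   use Apache::Session::File;

   my $html = "<html>...</html>";   # stand-in for the rendered page

   my %page;
   tie %page, 'Apache::Session::File', undef, {
       Directory     => '/usr/local/apache/htdocs/cache',   # made-up path
       LockDirectory => '/var/lock/apache-session',
   };
   $page{html}    = $html;
   $page{expires} = time + 3600;    # validation data rides along
   my $id = $page{_session_id};     # the generated id names the file
   untie %page;                     # flushes the entry to disk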

 Aaron

> 
> Another option is to set up whatever handler you want, on a development or
> staging server (i.e., not the live one), and grab the pages 
> with lynx -dump or GET or an LWP script, and write them to the proper
> places in the filesystem where the live server can access them.
> With a little planning, this can be incorporated
> into a cron job that runs nightly (or hourly, whatever) for stuff that is
> updated regularly but is composed of discernable chunks.
> 
> Good luck.
> 
> (darren)
> 
> --
> Of course God is a vi user. If he used emacs, he'd still be waiting for it to
> load on the seventh day.
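
 for what it's worth, the lynx/GET/LWP variant quoted above can be a few
 lines of LWP::Simple in the cron job -- the staging host, docroot, and
 page list here are placeholders:

   #!/usr/bin/perl
   use strict;
   use LWP::Simple qw(mirror);

   my $staging = 'http://staging.example.com';
   my $docroot = '/usr/local/apache/htdocs';
   my @pages   = ('/index.html', '/news/today.html');

   for my $path (@pages) {
       # mirror() sends If-Modified-Since, so unchanged pages cost
       # only a cheap 304 round trip
       my $rc = mirror("$staging$path", "$docroot$path");
       warn "mirror of $path returned $rc\n" if $rc >= 400;
   }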
