Heck, no, you're not alone.  I've been doing various half-assed forms of caching
at Techspex.com for a couple of years, and it'd be great to do it right and
integrate it with the server for once instead of hoping that the code covers all
the bases.  Which it does now, of course.

There's a *lot* of this kind of thing that AOLserver would be excellent at, if I
just had the time.  The good news is I have a nibble for a large implementation
where I may be able to specify the platform -- if this works out, then I'm *very*
willing to do this or pay somebody else to do it the way I want to see it.  But
I'm real picky.

Failing that, eventually I hope to have the time to address this.

Oh, by the way, for really expensive dynamic content (I have one section which is
hierarchically based and takes a *loooong* time to generate), regenerating the
cache on a hit is often prohibitive.  I did things that way for a while, finally
gave up on it, and wrote code that rewrites the static HTML whenever the database
changes (fortunately, in this case the database changes infrequently).  That
would have to be an option for such a caching system.
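
For what it's worth, the rewrite-on-database-change approach can be sketched
roughly like this (Python as neutral pseudocode -- the paths, the section names,
and the render step are all made up for illustration, not my actual code):

```python
import os
import tempfile

def render_page(section):
    # Stand-in for the expensive hierarchical generation step.
    return "<html><body>%s</body></html>" % section

def regenerate_static_pages(sections, docroot):
    """Called from a database-change hook, NOT on a page hit:
    rewrite the static HTML for each affected section."""
    for section in sections:
        html = render_page(section)
        path = os.path.join(docroot, section + ".html")
        # Write to a temp file and rename into place, so a reader
        # never sees a half-written page.
        fd, tmp = tempfile.mkstemp(dir=docroot)
        with os.fdopen(fd, "w") as f:
            f.write(html)
        os.replace(tmp, path)

# On a (here infrequent) database change, regenerate what's affected:
os.makedirs("/tmp/docroot", exist_ok=True)
regenerate_static_pages(["machines", "vendors"], "/tmp/docroot")
```

The point is that generation cost is paid when the data changes, not when a
visitor happens to hit a stale page.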

In general, what you're looking at is a separation of content generation from
content serving, with much finer-grained control over who does what, when.  It's
hard to get it right, but if you have any kind of dynamic traffic at all it sure
does pay off.

I've never tried compression of content -- what kind of browser support does it
have?

Michael

Dossy wrote:

> On 2001.04.01, Daniel P. Stasinski <[EMAIL PROTECTED]> wrote:
> > I'm not against compressing content, in fact it is a feature i've been
> > waiting for, but when some of my static html files can be 500k or more,
> > compressing them over and over seems like a waste.  For completely dynamic
> > content, you could then compress on the fly.  Someone else described a
> > month or two ago that one other server keeps two versions of content, one
> > plain and one compressed.
>
> This reminds me of the discussion I wanted to raise for a while now.
>
> I'd really like to see caching implemented the way Vignette StoryServer
> does it.  Before people start arguing _against_ it, I just want to know
> what people estimate the work effort would be.
>
> For those who don't know, the architecture is basically:
>
> - client requests a URL
> - server looks to see if the page is cacheable
>   - if page is cacheable, and is cached, serve page out of cache
>   - if page is cacheable, and is not cached, generate the cached
>     page, then serve the page out of cache
>   - if the page is not cacheable, generate it dynamically and serve
>     the generated page
>   - if the page is static, treat the static page as though it were
>     actually the cached version of a dynamic page, and serve it out
>     of "cache"
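
That decision flow might be sketched like so (Python as illustrative
pseudocode; the file-based cache, helper names, and signatures are all
invented here, not StoryServer's actual API):

```python
import os

def serve(url, cache_dir, is_cacheable, generate):
    """Sketch of the flow above: `generate` renders a dynamic page,
    `is_cacheable` classifies the URL.  All names are invented."""
    cache_path = os.path.join(cache_dir, url.lstrip("/"))
    if is_cacheable(url):
        if not os.path.exists(cache_path):
            # Cacheable but not yet cached: generate once, store,
            # then fall through and serve out of cache.  A static
            # file dropped into cache_dir by hand takes this same
            # path and is served as if it were a cached page.
            os.makedirs(os.path.dirname(cache_path), exist_ok=True)
            with open(cache_path, "w") as f:
                f.write(generate(url))
        with open(cache_path) as f:
            return f.read()
    # Not cacheable: generate on every request.
    return generate(url)
```

Only the first request for a cacheable URL pays the generation cost; every
later request is a plain file read until the cache entry is cleared.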
>
> StoryServer gives you the ability to clear pages out of cache
> (effectively deleting the cache files).  Otherwise, the pages stay
> cached (even across restarts of the app. server).
>
> The gimmick is that the document root of the webserver doubles as
> the "cache directory" into which cached documents are stored.  The
> "templates" (the pages that get turned into cached documents) are
> stored elsewhere.  You can either manually place documents into
> the cache directory and they'll get served, or if the file doesn't
> exist in the cache directory, you look to see if a template exists
> and then go and evaluate it, and possibly cache it.
>
> Perhaps this kind of mechanism along with on-the-fly-compression of
> content could be really neat.
>
> Perhaps you have a large chunk of content and compression would be
> really advantageous.  Perhaps the content only changes on an hourly
> basis.  Perhaps you could create a template which pulls the content
> from wherever it lives.  Perhaps, when a user requests the template,
> it gets cached.  Perhaps, if a user requests a compressed version,
> the _compressed version_ gets cached.
>
> Once an hour, schedule a clear-cache of both the cached page and
> the compressed version of the cached page.  The very next request
> for each type (compressed or non-) will force the content to be
> generated (and possibly compressed) and saved to cache.  All other
> requests until the next clear-cache don't incur the overhead of
> on-the-fly compression -- they just serve the page out of cache
> just like any other static content.
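
A rough sketch of that hourly cycle, again in illustrative Python with
invented names (gzip standing in for whatever compression the server would
actually use):

```python
import gzip
import os

def cache_paths(url, cache_dir):
    plain = os.path.join(cache_dir, url.lstrip("/"))
    return plain, plain + ".gz"

def clear_cache(url, cache_dir):
    # Scheduled once an hour: drop both variants, so the next
    # request of each kind regenerates (and recompresses) exactly once.
    for path in cache_paths(url, cache_dir):
        if os.path.exists(path):
            os.remove(path)

def serve(url, cache_dir, generate, accepts_gzip):
    plain, gzipped = cache_paths(url, cache_dir)
    path = gzipped if accepts_gzip else plain
    if not os.path.exists(path):
        body = generate(url)
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        if accepts_gzip:
            with gzip.open(gzipped, "wb") as f:
                f.write(body.encode())
        else:
            with open(plain, "w") as f:
                f.write(body)
    # Every request until the next clear_cache is a plain file
    # read -- no generation, no recompression.
    with open(path, "rb") as f:
        return f.read()
```

Each variant is cached independently on its first request, so compressed and
uncompressed clients each pay the compression overhead at most once per hour.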
>
> IMHO, this is one of the big reasons why StoryServer actually wins
> in the web app. arena over AOLserver "out of the box" -- it's
> easy to make large pseudo-dynamic sites really scream, because
> of this very, very simple caching mechanism.  I know there's lots
> of ways of implementing _some sort_ of caching with AOLserver
> (ns_cache, nsv, etc.) but none this "simple" or "transparent" ...
> and I think it could be really useful.
>
> Anyone have any thoughts?  Am I alone on this one?
>
> - Dossy
>
> --
> Dossy Shiobara                       mail: [EMAIL PROTECTED]
> Panoptic Computer Network             web: http://www.panoptic.com/
