On Monday 04 February 2002 14:52, Steve Willer wrote:
> On Mon, 2002-02-04 at 14:34, Melvyn Sopacua wrote:
> > A while back, we had this discussion. I now see the benefit of different
> > cache stages, BUT - I still am very worried about the performance loss of
> > all those evaluations:
> > 1) To cache or not to cache?
> > 2) Dir structure and subsequent IO for cache stages
> > 3) Since you opened the can of worms, people will want dynamic caching,
> > i.e. "Stage x writes <axkit:cache value="On"/> for stage y", for instance
> > to allow cached DB queries.
>
> 1) This is an easy one: in the thing I'm playing with, the default is
> "no", even if there is a cache-load and cache-save engine configured in
> the pipeline. One of the pieces of code being executed in one of the
> steps (XSP, Embperl, what have you) would have to essentially set a
> global variable that says "please cache me, and use a time-to-live value
> of 15 minutes". As a result, the "cache or not to cache" code would say
> something like:
>     next if not session_param("pb_cacheme");
>
> 2) I'm currently thinking I'd have a pipeline spec that looks something
> like this (format here is "step_id engine_name href extraparams"):
>
> 1 engine  embperl _start.exml|SKIP
> 2 clear-pipeline
> 3 cache-load (hostname)/(request_uri)/(cookies)/ skipto=6
> 4 engine  xsp
> 5 cache-save (hostname)/(request_uri)/(cookies)/ (pb_cacheme) (pb_ttl)
> 6 done-if-subrequest
> 7 engine  xslt    (base).xsl|SKIP
> 8 engine  xslt    _directory.xsl|SKIP
> 9 engine  xslt    /kernel/_root_css(csslevel).xsl|/kernel/_root.xsl|SKIP
> 10 postprocess

I think what you need is the ability to attach "cache attributes" to anything
that's cached. At each step the caching engine can look for a cached version
with matching attributes. The attributes can be arbitrary name/value pairs.

Then running the entire pipeline becomes a matter of figuring out which
processor gets called next and which attributes/values it wants to use for its
validation, testing the cache for a hit, and then either processing or serving
the cached copy and moving on to the next step.
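
In rough Perl, the per-step loop could look something like this (the
cache_attributes, lookup, store and process calls are placeholders for
whatever the real interfaces end up being, not existing AxKit methods):

    sub run_step {
        my ($cache, $step, $input) = @_;

        # Arbitrary name/value pairs this step says its output varies on.
        my %attrs = $step->cache_attributes($input);

        # A hit with matching attributes means we can skip the processor.
        if (my $hit = $cache->lookup(\%attrs)) {
            return $hit;
        }

        my $output = $step->process($input);
        $cache->store(\%attrs, $output);
        return $output;
    }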

You could use directives to build your pipeline, something like:

AxCacheMatcher Foo::Bar /my/xsp/stylesheet.xsp

where Foo::Bar would just be a Perl hook that returns yea or nay.
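
Something like this, say (the match() name and its arguments are invented,
and session_param and csslevel are just borrowed from your examples above):

    package Foo::Bar;

    # Return 1 ("yea": the cached copy is usable for this request)
    # or 0 ("nay": re-run the processor).
    sub match {
        my ($r, $attrs) = @_;    # Apache request object, cached attributes
        return 0 if $r->method eq 'POST';
        return 0 if $attrs->{csslevel} ne session_param('csslevel');
        return 1;
    }

    1;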

On the flip side, when a processor is DONE generating content, it could call

AxCacheSetter Foo::Baz /my/xsp/stylesheet.xsp

which would create a Perl hash of name/value pairs suitable for that instance.
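
For instance (again just a sketch; the attributes() name is made up and the
attribute names are lifted from your pipeline spec):

    package Foo::Baz;

    # Called once the processor has produced its output; returns the
    # name/value pairs that describe this particular instance.
    sub attributes {
        my ($r) = @_;
        return {
            hostname    => $r->hostname,
            request_uri => $r->uri,
            csslevel    => session_param('csslevel'),
            pb_ttl      => session_param('pb_ttl') || 15 * 60,   # seconds
        };
    }

    1;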

All that is left for the core to figure out is how to store and retrieve
cached content (which you could do by using filenames created from an MD5 of
the attributes).
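
In other words, something along these lines (Digest::MD5 is real; the cache
directory and function names are arbitrary):

    use Digest::MD5 qw(md5_hex);

    # Build a stable cache filename by hashing the sorted name/value pairs.
    sub cache_file {
        my ($attrs) = @_;
        my $key = md5_hex(join "\0",
                          map { "$_=$attrs->{$_}" } sort keys %$attrs);
        return "/var/cache/axkit/$key";
    }

    sub cache_fetch {
        my ($attrs) = @_;
        my $file = cache_file($attrs);
        return unless -e $file;
        open my $fh, '<', $file or return;
        local $/;                    # slurp the whole cached document
        return scalar <$fh>;
    }
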
>
> The second argument in the cache-load and cache-save parameters is the
> Big Ugly String to be used for the cache key. You can just MD5 it or
> whatever to build a more or less unique key. The work involved in this
> is one regular expression to convert words in parens into session
> values, an extra bit to build a cachekey, and a stat() call.
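
For what it's worth, that expansion-plus-key step really is small. An
untested sketch, with session_param and the cache directory as stand-ins:

    use Digest::MD5 qw(md5_hex);

    # Expand "(hostname)/(request_uri)/(cookies)" into session values,
    # hash the Big Ugly String into a key, then stat() the cache file.
    my $spec = '(hostname)/(request_uri)/(cookies)';
    (my $bus = $spec) =~ s/\((\w+)\)/session_param($1)/ge;
    my $file = '/var/cache/axkit/' . md5_hex($bus);
    my $hit  = -e $file;
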
>
> 3) I don't know what that would do.
>
> One remark: The caching mechanism I had in mind was specifically
> targeted at supporting HTML::Mason-type caching, where you might have a
> page that has 5 "panels" in it, 3 of which are cached to various
> degrees. This requires that you have caching of the XSP result, but not
> the stuff afterwards. If you want to cache an entire page, lock, stock,
> and barrel, definitely just stick Squid in front of the web server.
>
> > The other things described below do not change depending on the visitor,
> > but depending on actions taken by the CMS.
> > But that's of course purely because we don't serve shopping carts or
> > other visitor-dependent data.
>
> Honestly, take a look at squid for this. http://www.squid-cache.org/ .
> It does a superb job of caching a web site in "http accelerator" mode.
> I've run some big sites off of it, including www.ftd.com and
> www.webpersonals.com .
>
> > An AXKit forum would be nice :->, but you'd need a solid XML database,
> > not a database that can 'work with XML'.
>
> Why? Maybe this is one of those larger issues that spawn off large
> articles on xml.com, but it seems to me that a forum message is just a
> record with a bunch of fields in it, which would turn into an XML
> container tag with a bunch of child tags only one level deep. There's no
> reason you couldn't just fetch all rows from an SQL db as an array of
> hashrefs, and automagically convert that to a shallow XML tree... which
> TaglibHelper lets you do, coincidentally.
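
For the record, the rows-to-shallow-XML conversion really is that mechanical.
Done by hand, ignoring what TaglibHelper automates and with made-up table and
column names, it's roughly:

    use DBI;

    # Fetch all forum messages as an array of hashrefs and emit a shallow
    # XML tree: one container tag, one child element per column.
    my $dbh = DBI->connect('dbi:SQLite:dbname=forum.db', '', '',
                           { RaiseError => 1 });
    my $sth = $dbh->prepare('SELECT id, author, subject, body FROM messages');
    $sth->execute;
    my $rows = $sth->fetchall_arrayref({});    # array of hashrefs

    print "<messages>\n";
    for my $row (@$rows) {
        print "  <message>\n";
        for my $col (sort keys %$row) {
            my $val = defined $row->{$col} ? $row->{$col} : '';
            $val =~ s/([&<>])/sprintf '&#%d;', ord $1/ge;   # minimal escaping
            print "    <$col>$val</$col>\n";
        }
        print "  </message>\n";
    }
    print "</messages>\n";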
