On Thu, Jan 16, 2003 at 06:33:52PM +0100, Honza Pazdziora wrote:
> On Thu, Jan 16, 2003 at 06:05:30AM -0600, Christopher L. Everett wrote:
> > 
> > Do AxKit and PageKit pay such close attention to caching because XML
> > processing is so deadly slow that one doesn't have a hope of reasonable
> > response times on a fast but lightly loaded server otherwise?  Or is
> > it because even a fast server would quickly be on its knees under
> > anything more than a light load?
> 
> It really pays to take any steps that will increase the throughput.
> And AxKit is well suited for caching because it has clear layers and
> interfaces between them. So I see AxKit doing caching not only for
> the performance, but also "just because it can". With messier
> approaches you cannot add caching so easily.
> 
> > With a MVC type architecture, would it make sense to have the Model
> > objects maintain the XML related to the content I want to serve as
> > static files so that a simple stat of the appropriate XML file tells
> > me if my cached HTML document is out of date?
> 
> Well, AxKit uses a filesystem cache, doesn't it?
> 
> It really depends on how much precision you need to achieve. If you
> run a website that lists cinema programs, it's just fine that your
> public will see the updated pages after five minutes, not immediately
> after they were changed by the data manager. Then you can really go
> with simply timing out the items in the cache.
> 
> If you need to do something more real-time, you might prefer the push
> approach of MVC (because pull involves too much processing anyway, as
> you have said), and then you have a small problem with MySQL. As it
> lacks trigger support, you will have to send the push invalidation
> from your applications. Which might or might not be a problem,
> depending on how many of them you have.
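
For illustration, here's a rough Perl sketch of that push invalidation:
the application drops the cache entry itself right after the write.
The table, the key scheme, and update_program() are made-up names for
this example, not from a real schema:

use strict;
use DBI;
use Cache::FileCache;

my $dbh   = DBI->connect('dbi:mysql:site', 'user', 'pass',
                         { RaiseError => 1 });
my $cache = Cache::FileCache->new({ namespace => 'pages' });

sub update_program {
    my ($cinema_id, $listing) = @_;
    $dbh->do('UPDATE programs SET listing = ? WHERE cinema_id = ?',
             undef, $listing, $cinema_id);
    # no triggers in MySQL, so push the invalidation from the app
    $cache->remove("program:$cinema_id");
}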

I have pages that update as often as every 15 seconds.  I just implement
mtime() and has_changed() properly in my custom Provider.pm subclasses, or
rely on the File provider's stat of the XML files.  Mostly users are
getting cached files.
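
For what it's worth, the mtime() side of that is just a stat.  A minimal
sketch of such a provider, subclassing the stock file provider (the
package name and the {file} slot are assumptions about my setup, not part
of the AxKit API):

package My::Provider;
use strict;
use base 'Apache::AxKit::Provider::File';

# Report the XML source's mtime so AxKit can compare it against the
# cached output; on a stat failure, return "now" to force a re-run.
sub mtime {
    my $self = shift;
    my @st = stat($self->{file});
    return @st ? $st[9] : time();
}

1;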

For XSPs that are no_cache(1), the code that generates the information that
gets sent through the taglib does its own caching, just as if it were a
plain mod_perl handler.  They use IPC::MM and Cache::Cache (usually the
file cache).
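
The shape of that, with Cache::Cache's Cache::FileCache (the 15-second
expiry matches my fastest pages; get_listings() is a stand-in for
whatever expensive call the taglib wraps):

use strict;
use Cache::FileCache;

my $cache = Cache::FileCache->new({
    namespace          => 'taglib_data',
    default_expires_in => 15,
});

sub fetch_listings {
    my ($key) = @_;
    my $data = $cache->get($key);
    unless (defined $data) {
        $data = get_listings($key);   # the slow part (hypothetical)
        $cache->set($key, $data);
    }
    return $data;
}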

I've fooled around with having the cache use different databases, but
finally decided it didn't make much of a difference, since the OS and disk
can be tuned effectively.  The standard rules apply: put the cache on its
own disk spindle, i.e. not on the same physical disk as your SQL database,
etc.  It makes a big difference, as you can see with vmstat, systat, etc.

The only trouble is cleaning up the ever-growing stale cache.  So I use
this simple script from my /etc/daily.local file, or you could run it from
cron (there's a sample crontab entry after the script).

It's similar to what OpenBSD uses for cleaning /tmp and /var/tmp in its
/etc/daily script.

Ed.

# cat /etc/clean_www.conf
CLEAN_WWW_DIRS="/u4/www/cache /var/www/temp"

# cat /usr/local/sbin/clean_www
#!/bin/sh -
# $Id: clean_www.sh,v 1.2 2003/01/03 00:18:27 entropic Exp $

# Config file location, overridable from the environment.
: ${CLEAN_WWW_CONF:=/etc/clean_www.conf}

# Remove files not accessed for over a day from the given directory,
# then prune directories left empty.  Skips symlinks.
clean_dir() {
    dir=$1
    echo "Removing scratch and junk files from '$dir':"
    if [ -d "$dir" -a ! -L "$dir" ]; then
        cd "$dir" && {
            find . ! -name . -atime +1 -execdir rm -f -- {} \;
            find . ! -name . -type d -mtime +1 -execdir rmdir -- {} \; \
                >/dev/null 2>&1; }
    fi
}

if [ -f "$CLEAN_WWW_CONF" ]; then
    . "$CLEAN_WWW_CONF"
fi

# Only proceed if the config actually gave us a directory list.
if [ "X${CLEAN_WWW_DIRS}" != X"" ]; then
    echo ""
    for cfg_dir in $CLEAN_WWW_DIRS; do
        clean_dir "${cfg_dir}"
    done
fi
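
And the cron alternative: a crontab entry along these lines would do
(the 03:30 run time is arbitrary):

# run the cache cleaner nightly at 03:30
30 3 * * * /bin/sh /usr/local/sbin/clean_www >/dev/null 2>&1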


