"michael d. ivey" said:

> On Mon, Nov 06, 2000 at 04:41:08PM +0000, Justin Mason wrote:
> >   1. instead of the users directly connecting to each .site-file host,
> >   sitescooper.org has a cron job which gets the timestamp on the file once
> >   a day and assembles an index.
> > 
> >   2. sitescooper (at most once a day) connects to sitescooper.org and gets
> >   the latest index.
> > 
> >   3. using its own site files and the index of "latest site files" it can
> >   work out which site files need updating.
> > 
> >   4. an alternative index download location can be used (command line
> >   arg?) in case sitescooper.org falls off the face of the net.
> 
> Will this still allow the (original) idea of having a .site file that
> redirects to another site file?

Yep -- the above mechanism just means that the overhead of checking for
new site files, across all sites, is a single HTTP operation.
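
To make step 1 concrete, here's a rough sketch of what the sitescooper.org
cron job could look like. It's purely illustrative (and in Python, though
sitescooper itself is Perl): the list of .site URLs, the HEAD-based
timestamp check, and the "url<TAB>mtime" index format are all assumptions,
not actual code.

import email.utils
import urllib.request

# Hypothetical list of registered .site file URLs.
SITE_FILE_URLS = [
    "http://example.com/sites/slashdot.site",
    "http://example.org/sites/freshmeat.site",
]

def remote_mtime(url):
    # Fetch only the headers and parse Last-Modified into a unix time.
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        last_modified = resp.headers["Last-Modified"]
    return int(email.utils.parsedate_to_datetime(last_modified).timestamp())

def build_index(path="site_file_index.txt"):
    # One "url<TAB>mtime" line per site file; clients diff against this.
    with open(path, "w") as out:
        for url in SITE_FILE_URLS:
            out.write("%s\t%d\n" % (url, remote_mtime(url)))

if __name__ == "__main__":
    build_index()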

With this setup, here's what it would mean for the content provider: once
the site file is updated, sitescooper.org would notice and record the
change in the manifest file. From then on, users would download the new
version to their local disk (once), and it wouldn't need to be
re-downloaded until the site file changed again.
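
And here's a matching sketch of the client side, steps 2 through 4, under
the same made-up index format; the local paths and the once-a-day stamp
file are likewise just assumptions:

import os
import time
import urllib.request

# Step 4: the index location can be overridden (command line arg?).
DEFAULT_INDEX_URL = "http://sitescooper.org/site_file_index.txt"  # hypothetical
LOCAL_DIR = "sites"                                # assumed local cache of .site files
STAMP = os.path.join(LOCAL_DIR, ".index_checked")  # when we last fetched the index
os.makedirs(LOCAL_DIR, exist_ok=True)

def fetch_index(index_url):
    # Parse the index: one "url<TAB>unix-mtime" line per site file.
    with urllib.request.urlopen(index_url) as resp:
        text = resp.read().decode()
    return {url: int(mtime)
            for url, mtime in (line.split("\t") for line in text.splitlines())}

def stale_site_files(index_url=DEFAULT_INDEX_URL):
    # Step 2: connect to sitescooper.org at most once a day.
    if os.path.exists(STAMP) and time.time() - os.path.getmtime(STAMP) < 86400:
        return []
    index = fetch_index(index_url)
    open(STAMP, "w").close()
    # Step 3: a local copy is stale if it's missing or older than the index says.
    stale = []
    for url, remote_mtime in index.items():
        local = os.path.join(LOCAL_DIR, os.path.basename(url))
        if not os.path.exists(local) or os.path.getmtime(local) < remote_mtime:
            stale.append(url)
    return stale

if __name__ == "__main__":
    for url in stale_site_files():
        print("needs update:", url)

The point is just that one small text file stands in for N per-site HTTP
checks; only the site files that actually changed get re-fetched.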

Does this fit with what you're thinking?

--j.
_______________________________________________
Sitescooper-talk mailing list
[EMAIL PROTECTED]
http://lists.sourceforge.net/mailman/listinfo/sitescooper-talk
