On Wed, Mar 12, 2014 at 6:40 AM, Richard Hipp <d...@sqlite.org> wrote:
> A new feature was recently added to Fossil that allows it to deny expensive
> requests (such as "blame" or "tarball" on a large repository) if the server
> load average is too high.  See
> http://www.fossil-scm.org/fossil/doc/tip/www/server.wiki#loadmgmt for
> further information.

Interesting.

> I am pleased to announce that this new feature has passed its first test.
>
> About three hours ago, a single user in Beijing began downloading multiple
> copies of the same System.Data.SQLite tarball.  As of this writing, he has
> so far attempted to download that one tarball 11,784 times (at last count -
> a rate of about one per second), and each request takes about 3.1 seconds of
> CPU time in order to compute the 80MB tarball.

> And if you have alternative suggestions about how to keep a light-weight
> host running smoothly under a massive Fossil request load, please post
> follow-up comments.

How sensible do you think it would be to have a (limited-size)
in-memory or on-disk cache holding the most recently requested
tarballs? That way a high-demand tarball (or other expensive asset)
would be computed only once and then served statically from the cache.
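To make the idea concrete, here is a minimal sketch (Python, purely
illustrative; the cache key, the size limit, and the build_tarball()
callback are my own placeholders, not anything Fossil exposes today):

    from collections import OrderedDict

    class TarballCache:
        """Keep the N most recently requested tarballs in memory (LRU)."""

        def __init__(self, max_entries=8):
            self.max_entries = max_entries
            self._cache = OrderedDict()   # key (e.g. artifact hash) -> tarball bytes

        def get(self, key, build_tarball):
            # Cache hit: move the entry to the "most recently used" end.
            if key in self._cache:
                self._cache.move_to_end(key)
                return self._cache[key]
            # Cache miss: pay the CPU cost once, then remember the result.
            data = build_tarball(key)
            self._cache[key] = data
            if len(self._cache) > self.max_entries:
                self._cache.popitem(last=False)   # evict least recently used
            return data

With something along those lines, the 11,784 identical requests above
would have cost one 3.1-second build plus 11,783 cheap serves from
memory.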

Note that I actually see this as a complement to the load-management
feature. The cache helps when demand is concentrated on a small number
of revisions, whereas load management kicks in and restricts load when
the accessed revisions are sufficiently random/spread out to negate
the cache (i.e., cause it to thrash).

Side note: the same benefit could be had by putting a regular web
cache (squid or the like) in front of the fossil server, but that
would require more work to set up and administer, and it might be a
problem for the truly dynamic parts of the fossil web UI. An
integrated cache covering only the assets that are expensive to
compute and yet (essentially) static does not have these issues.

I mentioned in-memory and disk above; I can see a two-level scheme
here: a smaller in-memory LRU cache for the really high-demand pieces,
and a larger disk cache for things not so much in demand at the
moment, but possibly in the future. The disk cache could actually be
much larger (disks are large and cheap these days), which would also
help against random-access attacks, since they become asymptotically
harder as the disk cache extends its net of quickly served assets
over time.
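A rough sketch of the two-level idea (again just illustrative; the
directory layout, size limits, and build callback are assumptions on
my part, not existing Fossil knobs):

    import os
    from collections import OrderedDict

    class TwoLevelCache:
        """Small in-memory LRU in front of a larger on-disk cache."""

        def __init__(self, cache_dir, mem_entries=4, disk_entries=200):
            self.cache_dir = cache_dir
            self.mem_entries = mem_entries
            self.disk_entries = disk_entries
            self.mem = OrderedDict()   # hottest assets, key -> bytes
            os.makedirs(cache_dir, exist_ok=True)

        def _path(self, key):
            # key is assumed filesystem-safe, e.g. an artifact hash
            return os.path.join(self.cache_dir, key)

        def get(self, key, build):
            # Level 1: memory.
            if key in self.mem:
                self.mem.move_to_end(key)
                return self.mem[key]
            # Level 2: disk.
            path = self._path(key)
            if os.path.exists(path):
                with open(path, 'rb') as f:
                    data = f.read()
            else:
                data = build(key)          # expensive: compute the tarball once
                with open(path, 'wb') as f:
                    f.write(data)
                self._trim_disk()
            # Promote into the in-memory level.
            self.mem[key] = data
            if len(self.mem) > self.mem_entries:
                self.mem.popitem(last=False)
            return data

        def _trim_disk(self):
            # Drop the oldest files once the disk level exceeds its limit.
            files = sorted((os.path.getmtime(p), p) for p in
                           (os.path.join(self.cache_dir, f)
                            for f in os.listdir(self.cache_dir)))
            for _, p in files[:max(0, len(files) - self.disk_entries)]:
                os.remove(p)

The disk level here evicts by file age rather than strict LRU, simply
because the filesystem already tracks timestamps for free; anything
that gets requested again is promoted back into the memory level.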



-- 
Andreas Kupries
Senior Tcl Developer
Code to Cloud: Smarter, Safer, Faster(tm)
F: 778.786.1133
andre...@activestate.com
http://www.activestate.com
Learn about Stackato for Private PaaS: http://www.activestate.com/stackato

EuroTcl'2014, July 12-13, Munich, GER
