On Mon, 11 Jul 2016 11:02:44 +1000 Curtis Maloney <[email protected]> wrote:
Hi Curtis

> Two things come to mind to contribute to this discussion.

Thanks for contributing :)

> 1. Early on, you talk of how many millions of pages you have...
> typically, caching is the tradeoff of only holding the most in-demand
> pages, not all of them... so don't worry if you can't manage to store
> all of them.

Have been thinking about this as well, but I still would like to try and
"cache them all" in this first iteration. If not all pages should be
cached, a memory-based approach with Cache2 would be the way to go; it
has all the bells and whistles I need.

> 2. Most file systems don't handle a large number of files in one dir
> efficiently [some use btrees or other indices, but still], so using a
> tree of directories based on either (a) a fixed prefix of the
> key/filename or (b) a hash of that, can often significantly speed things.

Correct, that is also what I try to do with my approach: using the
application's URI as a directory structure to spread files over various
directories *and* to make it easier to locate a certain page within the
cache.

> And another thought occurs... if you compress the content as you write
> it, uwsgi may be able to ship it compressed for you without any work.
> In fact, if you cache into the right place you can have static-serve do
> all the lookup for you [but not expiration...]

Compressing did not come to mind yet (Nginx is in front of this setup
and does gzipping at the moment), but it can make for a nice addition
later.

One thing that you made me notice, however, was expiration. Caching to
static files is all fine and fast, but with the current approach
(static:) I have no way of expiring a page after a set time ... have I?

> --
> Curtis

Cheers,
Tim

_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
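
P.S. The directory-splitting idea Curtis describes in point 2 could be
sketched roughly like this (a sketch only: the `cache_path` helper, the
cache root, and the two-level hash-prefix layout are illustrative
assumptions, not Tim's actual URI-based scheme):

```python
import hashlib
import os

CACHE_ROOT = "/var/cache/pages"  # hypothetical cache root


def cache_path(uri):
    """Map a request URI to a nested cache file path.

    Using the first two byte-pairs of the URI's MD5 digest as directory
    levels spreads files across up to 256 * 256 subdirectories, so no
    single directory accumulates millions of entries.
    """
    digest = hashlib.md5(uri.encode("utf-8")).hexdigest()
    return os.path.join(CACHE_ROOT, digest[:2], digest[2:4], digest)
```

Variant (b) in Curtis's mail; variant (a) would simply take a fixed
prefix of the key itself instead of hashing it.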
