Let's try again and hope the email doesn't escape this time.
> Thanks for contributing :)
>
>> 1. Early on, you talk of how many millions of pages you have...
>> typically, caching is the tradeoff of only holding the most in-demand
>> pages, not all of them... so don't worry if you can't manage to store
>> all of them.
>
> Have been thinking about this as well, but I would still like to try to
> "cache them all" in this first iteration. If not everything should be
> cached, a memory-based approach with Cache2 would be the way to go; it
> has all the bells and whistles I need.

Here is what I have seen one relatively large newspaper company do:

- Cache all generated pages in a database (MySQL in their case) using a
  custom middleware (a minimal sketch follows below).
- For the latest news (created within 24h), use a cache time of 1 minute,
  as journalists want to see corrected errors live right away.
- For older pages, use a cache time of 24h, as the news is old in any
  case and further changes are unlikely.
- The middleware simply stores the generated HTML in the database table,
  or serves a prestored cached text blob if one is available. Databases
  can easily scale to 20M+ rows of roughly 20 kB of content each.

--
Mikko Ohtamaa
http://opensourcehacker.com
http://twitter.com/moo9000
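
For the curious, here is a minimal sketch of what such a middleware
could look like as a plain WSGI wrapper. This is an illustration of the
idea, not the company's actual code: sqlite3 stands in for MySQL so the
example is self-contained, and the page_cache schema and the ttl_for()
policy hook are assumptions of mine.

import sqlite3
import time

# sqlite3 stands in for MySQL here; the schema is an assumption.
db = sqlite3.connect("page_cache.db", check_same_thread=False)
db.execute("""CREATE TABLE IF NOT EXISTS page_cache (
                  path    TEXT PRIMARY KEY,
                  body    BLOB NOT NULL,
                  expires REAL NOT NULL)""")


def ttl_for(path):
    # Hypothetical policy hook. A real implementation would look up the
    # article's creation date; here "fresh news" is faked from the URL.
    return 60 if path.startswith("/news/") else 24 * 3600


class PageCacheMiddleware(object):
    """Serve a prestored HTML blob if it is still fresh, otherwise run
    the wrapped app and store whatever it generated."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "/")
        row = db.execute(
            "SELECT body, expires FROM page_cache WHERE path = ?",
            (path,)).fetchone()
        if row is not None and row[1] > time.time():
            # Cache hit: give out the prestored HTML blob directly.
            start_response("200 OK", [("Content-Type", "text/html")])
            return [row[0]]

        # Cache miss: run the app and capture the generated body.
        captured = {}
        chunks = []

        def capture(status, headers, exc_info=None):
            captured["status"] = status
            captured["headers"] = list(headers)
            return chunks.append  # the rarely used write() callable

        chunks.extend(self.app(environ, capture))
        body = b"".join(chunks)
        if captured["status"].startswith("200"):
            db.execute(
                "INSERT OR REPLACE INTO page_cache VALUES (?, ?, ?)",
                (path, body, time.time() + ttl_for(path)))
            db.commit()
        start_response(captured["status"], captured["headers"])
        return [body]

The real policy decision would come from the article's creation date
rather than the URL; everything else (store the blob, serve the blob
while it is fresh) is the whole trick.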
