Two things come to mind to contribute to this discussion.

1. Early on, you talk of how many millions of pages you have... typically, caching is a tradeoff: you hold only the most in-demand pages, not all of them, so don't worry if you can't manage to store every last one.
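To make that concrete, here's a minimal sketch of a capped, least-recently-used page cache (the class and names are just illustrative, not anything from uwsgi itself):

```python
from collections import OrderedDict

class LRUPageCache:
    """Hold only the most recently requested pages, up to max_entries."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._pages = OrderedDict()  # key -> cached page body

    def get(self, key):
        if key not in self._pages:
            return None
        self._pages.move_to_end(key)  # mark as recently used
        return self._pages[key]

    def put(self, key, body):
        self._pages[key] = body
        self._pages.move_to_end(key)
        if len(self._pages) > self.max_entries:
            # evict the least recently used page instead of growing forever
            self._pages.popitem(last=False)
```

The point being: the eviction line is where "not all of them" happens, and it's cheap.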

2. Most file systems don't handle a large number of files in one dir efficiently [some use btrees or other indices, but still], so using a tree of directories based on either (a) a fixed prefix of the key/filename or (b) a hash of that can often speed things up significantly.
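Option (b) takes only a few lines; something like this (function names are mine, and the 2-level/2-char split is just one common choice):

```python
import hashlib
import os

def cache_path(root, key, levels=2, width=2):
    """Map a cache key to root/<h1>/<h2>/<digest> so files spread
    across many small directories instead of one huge one."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    # e.g. "5d41402a..." -> root/5d/41/5d41402a...
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, digest)

def write_cached(root, key, body):
    path = cache_path(root, key)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(body)
    return path
```

Two levels of two hex chars gives you up to 65,536 leaf directories, which keeps each one small even with millions of files.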

And another thought occurs... if you compress the content as you write it, uwsgi may be able to ship it compressed for you without any extra work. In fact, if you cache into the right place you can have static-serve do all the lookup for you [but not expiration...]
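The compress-on-write half is just stdlib gzip; whether uwsgi picks up the .gz siblings depends on how your static serving is configured, so treat this as a sketch of the write side only:

```python
import gzip

def write_compressed(path, body):
    """Compress the page once at write time, as a .gz sibling; a static
    server that understands pre-gzipped files can then ship it as-is."""
    with gzip.open(path + ".gz", "wb") as f:
        f.write(body)

def read_compressed(path):
    """Read the page back, decompressing (e.g. for clients that can't
    accept gzip)."""
    with gzip.open(path + ".gz", "rb") as f:
        return f.read()
```

You pay the CPU for compression once per write instead of once per request, which is usually the right trade for cached pages.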

--
Curtis

On 11/07/16 10:58, Tim van der Linden wrote:
> On Sun, 10 Jul 2016 17:44:44 -0700
> John Burk <[email protected]> wrote:
>
> Hi John
>
> Thanks for the reply.
>
> > Not a direct answer to your question, but another thing to avoid; with that
> > many files, be careful that your filesystem doesn't run out of inodes.  Had
> > that happen in scenarios with many many small tables in MySQL.
>
> A fair warning indeed, this bit me once before too :)
>
> I tried to prepare for that this time around when setting up the filesystem, it
> has roughly 190 million available inodes so I should be safe for some time to
> come.
>
> > John Burk
> > [email protected]
>
> Cheers,
> Tim
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
