On Sun, Nov 28, 2010 at 11:43:48PM +0100, Joerg Sonnenberger wrote:
> A radix tree is kind of a bad choice for this purpose. The easiest
> approach is most likely to have [a btree]
I would go with an expanding hash table of some kind: the table size is
2^n pages, and (hash & (2^n - 1)) tells you which page to look at. When
the table fills up, double the size and take one extra bit from the hash
value. With a good hash function you should get decent occupancy rates.
Expanding requires rewriting every page, but on systems where the number
of uids is more or less constant over time (which is most systems) that
won't happen very often; and remember that for 30,000 users the total
size of the quota data is only about 1M anyhow, so chugging it around
once in a while isn't a big deal.

However, I'm still not convinced the sparse file is really a serious
problem in practice.

-- 
David A. Holland
dholl...@netbsd.org