This patch implements support for using large memory pages on systems that support it (it must be enabled with -L on the command line). The main purpose of using large memory pages is to increase the amount of address space that can be accessed without causing a TLB miss (see http://en.wikipedia.org/wiki/Translation_lookaside_buffer for a description of the TLB). When large pages are used, the slab allocator allocates the total cache size during startup in one big malloc, instead of calling malloc each time we need a new slab page, in order to get the biggest pages available on the system. This also avoids another problem: since malloc uses internal mutex locking, we could block waiting for other threads calling malloc (and friends), and since access to the slab allocator is guarded by a single mutex, all access to the slab allocator would suffer from that.
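To make the idea more concrete, here is a rough sketch of how the startup path could look on Solaris. The names and structure are mine for illustration and are simplified compared to what the attached patch actually does:

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch only: advise the kernel to prefer the largest supported page
     * size for heap growth, then grab the entire cache in one allocation. */
    static void *preallocate_cache(size_t total_cache_size) {
    #if defined(__sun)
        size_t sizes[32];
        int n = getpagesizes(sizes, 32);       /* supported page sizes */
        if (n > 0) {
            size_t largest = sizes[0];
            for (int i = 1; i < n; ++i)
                if (sizes[i] > largest)
                    largest = sizes[i];

            struct memcntl_mha mha;
            mha.mha_cmd = MHA_MAPSIZE_BSSBRK;  /* preferred page size for heap/BSS */
            mha.mha_flags = 0;
            mha.mha_pagesize = largest;
            if (memcntl(NULL, 0, MC_HAT_ADVISE, (caddr_t)&mha, 0, 0) == -1)
                perror("memcntl");             /* warn and fall back to normal pages */
        }
    #endif
        /* One big allocation up front; slab pages are later carved out of
         * this block, so the slab allocator never calls malloc() (and never
         * contends on its internal lock) while serving requests. */
        return malloc(total_cache_size);
    }

The fallback matters for problem 1 below: even when the large-page advice fails, the single up-front allocation still removes the per-slab-page malloc calls.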

The tests remain to be written, and I am a bit unsure how to write them. There are at least two problems I see for a generic test:
1) Not all systems support multiple page sizes (or the code doesn't know how to enable them there). On those platforms a warning is printed if you start memcached with -L, but you still get the other benefit of the slab allocator grabbing all of its memory up front (the benefit being that malloc is never called when a slab class needs more space).
2) Your system may have been running for a long time, so the system memory may be too fragmented for the kernel to actually hand out large memory pages.

Verifying the behavior without -L is pretty simple: just start memcached and look at the memory footprint. If we start with -L, the memory footprint should be a little bit bigger than the memory requested with -m, but verifying that we actually got large pages is a bit more difficult (due to problem 2 above).

If we ignore problem 2 above, we could (at least on Solaris) look at the output of "pmap -s <memcached pid>" and compare the page sizes reported there with the ones returned by "pagesize -a".
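An alternative that could be automated in a test (instead of parsing pmap output) might be to ask the kernel directly from inside the process: on Solaris, meminfo(2) can report the page size backing a given virtual address. A rough sketch, not part of the patch, which only works for addresses that are resident in memory:

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Sketch only: return 1 if 'addr' is backed by a page larger than the
     * base page size, 0 if not, -1 if the information is unavailable.
     * The address has to have been touched so that the page is resident. */
    static int backed_by_large_page(void *addr) {
        uint64_t vaddr = (uint64_t)(uintptr_t)addr;
        uint_t request = MEMINFO_VPAGESIZE;   /* ask for the backing page size */
        uint64_t pagesize = 0;
        uint_t validity = 0;

        if (meminfo(&vaddr, 1, &request, 1, &pagesize, &validity) == -1)
            return -1;
        if (!(validity & 2))                  /* no page size info for this address */
            return -1;

        return pagesize > (uint64_t)sysconf(_SC_PAGESIZE) ? 1 : 0;
    }

That would still be Solaris-specific, so it doesn't solve the generic-test problem, but it avoids scraping command output.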

Comments anyone?

Trond

Attachment: largepage.diff.gz

