Currently the -L flag is only enabled if HAVE_GETPAGESIZES && HAVE_MEMCNTL
are both defined. I'm curious what the motivation for that restriction is.
In our experience, some of our memcache pools end up fragmenting memory
because the repeated allocation of 1MB slabs gets interleaved with all the
other hashtable and free-list allocations going on. We know we want to
allocate all of the cache memory up front, but there doesn't seem to be a
way to do that on a Linux system.
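To illustrate the behavior we want: one big allocation up front, with the
1MB slab pages carved out of it rather than each coming from its own
malloc(), so they can't interleave with the hashtable and free-list
allocations. This is only a rough sketch of the idea -- the function names
are made up and it is not the actual memcached source:

#include <stdlib.h>

#define POWER_BLOCK (1024 * 1024)     /* 1MB slab page */

static char  *mem_base    = NULL;     /* start of the preallocated arena */
static char  *mem_current = NULL;     /* next free byte in the arena */
static size_t mem_avail   = 0;        /* bytes remaining in the arena */

/* Grab the whole cache in one contiguous chunk at startup. */
static int preallocate_memory(size_t limit) {
    mem_base = malloc(limit);
    if (mem_base == NULL)
        return -1;                    /* caller falls back to on-demand malloc */
    mem_current = mem_base;
    mem_avail = limit;
    return 0;
}

/* Hand out 1MB slab pages from the arena instead of calling malloc()
 * for each one, so the pages stay contiguous. */
static void *allocate_slab_page(void) {
    if (mem_base == NULL)
        return malloc(POWER_BLOCK);   /* no arena: old per-slab behavior */
    if (mem_avail < POWER_BLOCK)
        return NULL;                  /* arena exhausted */
    void *page = mem_current;
    mem_current += POWER_BLOCK;
    mem_avail -= POWER_BLOCK;
    return page;
}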
To put it more concretely, here is a proposed change against stock 1.4.0
that makes -L do a contiguous preallocation even on machines without
getpagesizes()/memcntl() support. My memcached server doesn't seem to
crash with it applied, but I'm not sure that's much of a litmus test.
What are the pros and cons of doing something like this?
Thanks,
Mike
--- ../memcached-1.4.0-orig/memcached.c	2009-07-10 11:22:09.715629000 -0700
+++ memcached.c	2009-07-10 11:22:58.408580000 -0700
@@ -3761,13 +3761,11 @@
            "-f <factor>   chunk size growth factor (default: 1.25)\n"
            "-n <bytes>    minimum space allocated for key+value+flags (default: 48)\n"
-#if defined(HAVE_GETPAGESIZES) && defined(HAVE_MEMCNTL)
            "-L            Try to use large memory pages (if available). Increasing\n"
            "              the memory page size could reduce the number of TLB misses\n"
            "              and improve the performance. In order to get large pages\n"
            "              from the OS, memcached will allocate the total item-cache\n"
            "              in one large chunk.\n"
-#endif
            );
     printf("-D <char>     Use <char> as the delimiter between key prefixes and IDs.\n"
@@ -4082,10 +4080,9 @@
             break;
         case 'L' :
 #if defined(HAVE_GETPAGESIZES) && defined(HAVE_MEMCNTL)
-            if (enable_large_pages() == 0) {
-                preallocate = true;
-            }
+            enable_large_pages();
 #endif
+            preallocate = true;
             break;
         case 'C' :
             settings.use_cas = false;
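For anyone tracing where the flag goes: as far as I can tell from the
1.4.0 tree, main() simply forwards it to the slab subsystem once option
parsing is done, so flipping it in the 'L' case should be all that's
needed. Paraphrasing from memory, not a verbatim quote, so double-check
against the source:

    /* main(), after option parsing (paraphrased from 1.4.0): the
     * preallocate flag set by -L is forwarded to slabs_init(), whose
     * prealloc path malloc()s the entire cache in one chunk and carves
     * slab pages out of it, warning and falling back to per-slab
     * allocations if the big malloc() fails. */
    slabs_init(settings.maxbytes, settings.factor, preallocate);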