On Feb 22, 2008, at 6:38 AM, dormando wrote:
> Hey,
>
> Some comments:
>
> Minor bracket style... Most of the code is:
>
>     function (blah blah) {
>
> vs some of yours is:
>
>     function (blah blah)
>     {
Fixed.
> Also:
>
> +    if (prealloc) {
> +        /* Allocate everything in a big chunk with malloc */
> +        mem_base = malloc(mem_limit);
> +        if (mem_base != NULL) {
> +            mem_current = mem_base;
> +            mem_avail = mem_limit;
> +        } else {
> +            fprintf(stderr, "ABORT: Failed to allocate requested memory\n");
> +            exit(EXIT_FAILURE);
> +        }
> +    }
>
> ... probably shouldn't bomb out internally on failure.
Fixed. It will fall back to "incremental" allocation if it cannot allocate everything in one big chunk.
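Roughly, that fallback could look like the following. This is a minimal, self-contained sketch with names modelled on the quoted diff (`mem_base`, `mem_limit`, and so on), not the actual code in the attached patch:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

static void *mem_base = NULL;    /* start of the big chunk, or NULL */
static void *mem_current = NULL; /* next free byte inside the chunk */
static size_t mem_avail = 0;     /* bytes left in the chunk */

/* Try one contiguous malloc(); on failure, leave mem_base NULL so
 * later allocations use per-slab malloc() ("incremental" mode)
 * instead of aborting the whole process. */
static void slabs_preallocate(size_t mem_limit) {
    mem_base = malloc(mem_limit);
    if (mem_base != NULL) {
        mem_current = mem_base;
        mem_avail = mem_limit;
    } else {
        fprintf(stderr, "Warning: failed to allocate %zu bytes in one "
                "chunk; falling back to incremental allocation\n",
                mem_limit);
    }
}
```

The key design point is that a failed preallocation is now a warning rather than a fatal error.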
> Hrm. I think the flow might go over better if 'prealloc' is only set
> to true if 'enable_large_pages' returns true. (It's void now, but you
> get the idea.) If you're not trying to set up a large-page
> environment, you're more likely to fail a massive malloc() anyway.
Fixed.
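That gated flow could be sketched like this. Both function names here are illustrative stand-ins, not the patch's actual API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the platform hook: returns true only when
 * the process was actually switched to a large-page heap (e.g. via
 * memcntl() on Solaris). Assume no support on this platform. */
static bool enable_large_pages(void) {
    return false;
}

/* Only preallocate when large pages are actually in effect: a single
 * massive malloc() is likely to fail on an ordinary heap anyway. */
static bool should_preallocate(bool want_large_pages) {
    return want_large_pages && enable_large_pages();
}
```

With this shape, platforms without large-page support never attempt the big contiguous allocation in the first place.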
> Most of the text needs cleaning up, but I can do that. However, the
> biggest issue here is that the whole thing is Solaris specific :)
>
> So unless you're running Solaris, this essentially gives you an
> option which will most likely just cause memcached to fail on large
> systems. I don't actually know how to make this work under Linux
> offhand. Something something hugetlb. I can do the sysadmin-y parts
> and allocate the 2MB pages in the OS, but will have to read up on the
> programmer-y parts.
>
> What do you think about removing the option if built without the
> proper support? The contiguous allocation doesn't give all that much
> benefit otherwise.
Done.

Trond
Attachment: largepage.diff.gz (GNU Zip compressed data)
