I recall testing Vmalloc against libumem (perhaps not the latest version)
and a few other malloc implementations (Hoard, JEMalloc, MTmalloc,
TCmalloc) about a year ago while still at AT&T. The only other malloc that
was competitive with Vmalloc over a whole range of tests (in the list that
Glenn gave) was Google's TCmalloc. TCmalloc and Vmalloc edged each other
out on time depending on tests and platforms (Solaris vs Linux), but
Vmalloc usually won on space compaction.

Overall, I think the thread-preferred model of Vmalloc adapts better than
other thread-specific mallocs at managing all memory in the arena in an
environment where threads (and parallel processes) are not uniform in
their computational loads. This was proven out in the Daytona database
system, where Vmalloc was used to manage shared memory (shmem) across
processes doing rather different types of computations.

Phong


On Wed, Jan 22, 2014 at 2:46 AM, Glenn Fowler <glenn.s.fow...@gmail.com> wrote:

> Phong peruses the list
>
> the top comment in src/lib/libast/vmalloc/vmbest.c:
>
> /*      Best-fit allocation method extended for concurrency.
> **
> **      The memory arena of a region is organized as follows:
> **      1. Raw memory are (possibly noncontiguous) segments (Seg_t) obtained
> **         via Vmdisc_t.memoryf. Segments may be further partitioned into
> **         "superblocks" on requests from memory packs (more below). The
> **         collection of superblocks is called the "segment pool".
> **      2. A region consists of one or more packs. Each pack typically holds
> **         one or more superblocks from the segment pool. These blocks are
> **         partitioned into smaller blocks for allocation. General allocation
> **         is done in a best-fit manner with a splay tree holding free blocks
> **         by size. Caches of small blocks are kept to speed up allocation.
> **      3. Packs are created dynamically and kept in an array. An allocation
> **         request uses the ASO's ID of the calling thread/process to hash
> **         into this array and search for packs with available memory. Thus,
> **         allocation is thread-preferred but not thread-specific since
> **         all packs may be used by all threads/processes.
> **
> **      Written by Kiem-Phong Vo, 01/16/1994, 12/21/2012.
> */
>
> if you want to compare malloc implementation X against vmalloc w.r.t.
> multi thread/proc then look at these tests
>
> tcache-scratch.c
> tcache-thrash.c
> tcontent.c
> tmtmalloc.c
> tperform.c
> tsafemalloc.c
> tsignal.c
>
> in src/cmd/tests/vmalloc
>
> I believe the tests use just malloc()/free() when VMALLOC is not defined
> (i.e., non-ast builds).
> I'm sure the list would be interested in comparison results.
>
> to see how the tests are run for ast malloc:
>
> bin/package use
> cd tests
> nmake -n test.vmalloc
>
>
> On Mon, Jan 20, 2014 at 6:32 PM, Irek Szczesniak <iszczesn...@gmail.com> wrote:
>
>> Does anyone have Phong Vo <k...@research.att.com>'s new email address?
>> The following work (i.e. the per thread caching etc) in Illumos might
>> be very interesting...
>>
>> Forwarded conversation
>> Subject: [developer] 4489-4491 ptcumem and other libumem enhancements
>> ------------------------
>>
>> From: Robert Mustacchi <r...@joyent.com>
>> Date: Thu, Jan 16, 2014 at 3:59 AM
>> To: illumos Developer <develo...@lists.illumos.org>
>>
>>
>> This combines work done over the past few years at Joyent enhancing
>> libumem. It's broken down into two commits. One which adds 128k slabs
>> and makes VM_BESTFIT the default, the other which adds per thread
>> caching umem. For more on umem, see
>> http://dtrace.org/blogs/rm/2012/07/16/per-thread-caching-in-libumem/.
>>
>> https://us-east.manta.joyent.com/rmustacc/public/webrevs/4489/index.html
>>
>> Robert
>>
>
> _______________________________________________
> ast-developers mailing list
> ast-developers@lists.research.att.com
> http://lists.research.att.com/mailman/listinfo/ast-developers
>
>