# New Ticket Created by Mike Lambert
# Please include the string: [netlabs #642]
# in the subject line of all future correspondence about this issue.
# <URL: http://bugs6.perl.org/rt2/Ticket/Display.html?id=642 >
This patch, also derived from Peter's recent patch, ensures that the destination pool is between 1 and 8 times the size of the amount we actually need (which can be accurately determined via pool->reclaimable). Before, when it copied a pool, it never shrank the size of the pool. I've modified his patch to remove some unnecessary calculations.

                               before      after
gc_alloc_new.pbc               4.155999    3.756002
gc_alloc_reuse.pbc            16.574       9.423002
gc_generations.pbc             4.025       5.278002
gc_header_new.pbc              3.686       3.615
gc_header_reuse.pbc            5.577999    4.908003
gc_waves_headers.pbc           3.815002    3.675001
gc_waves_sizeable_data.pbc     8.383002    9.403999
gc_waves_sizeable_headers.pbc  5.668       6.268999

The results show better performance on gc_alloc_reuse (hell, all these patches show better performance there. ;) Worse performance is seen on the wave tests containing pool data and on gc_generations, mainly because those tests exceed the 8x factor in memory usage. The "after" version makes parrot follow the program's actual requirements more closely (within 8x), which unfortunately causes it to grow the pool more often. The "before" parrot's approach of keeping the original pool size lets it avoid growing the pool as often, since the largest block remains the largest block for the duration of the program. Aside from the fact that the old behavior never returns any memory, I also believe it to be the cause of some slowdowns, based on other evidence I've noticed. (Is calloc() time proportional to the size you request?)

Slightly better performance is seen in some of the tests (gc_alloc_new, gc_header_reuse, and gc_waves_headers), and honestly, I have no idea why. :)

I'm a bit dubious about the value of this code. While it doesn't provide much benefit as far as these tests go, one of Parrot's goals (unlike Perl 5) was to actually return memory to the system, so something like this is needed. Still, I think a better algorithm can be devised.
Mike Lambert

Index: resources.c
===================================================================
RCS file: /cvs/public/parrot/resources.c,v
retrieving revision 1.60
diff -u -r1.60 resources.c
--- resources.c	26 May 2002 20:20:08 -0000	1.60
+++ resources.c	29 May 2002 08:47:38 -0000
@@ -29,8 +29,14 @@
  * that must be available for reclamation before a compaction run will
  * be initiated. This parameter is stored in the per-pool structure,
  * and can therefore be modified for each pool if required.
+ * MINIMUM_MEMPOOL_SIZE is applied to the estimated non-reclaimable
+ * size to give the smallest size for the 'after' pool.
+ * MAXIMUM_MEMPOOL_SIZE is applied to the estimated non-reclaimable
+ * size to give the largest size for the 'after' pool.
  */
 #define RECLAMATION_FACTOR 0.20
+#define MINIMUM_MEMPOOL_SIZE 1
+#define MAXIMUM_MEMPOOL_SIZE 8

 /* Function prototypes for static functions */
 static void *mem_allocate(struct Parrot_Interp *interpreter, size_t *req_size,
@@ -821,6 +827,7 @@
 compact_string_pool(struct Parrot_Interp *interpreter,
                     struct Memory_Pool *pool)
 {
+    UINTVAL estimated_size, min_size, max_size;
     UINTVAL total_size;
     struct Memory_Block *new_block;     /* A pointer to our working block */
     char *cur_spot;             /* Where we're currently copying to */
@@ -840,6 +847,17 @@
     /* Find out how much memory we've used so far. We're guaranteed to
        use no more than this in our collection run */
     total_size = pool->total_allocated;
+    estimated_size = pool->total_allocated - pool->reclaimable;
+    min_size = ((UINTVAL)(estimated_size * MINIMUM_MEMPOOL_SIZE) +
+                pool->minimum_block_size-1);
+    max_size = ((UINTVAL)(estimated_size * MAXIMUM_MEMPOOL_SIZE) +
+                pool->minimum_block_size-1);
+    if (total_size < min_size) {
+        total_size = min_size;
+    }
+    if (total_size > max_size) {
+        total_size = max_size;
+    }

     /* Snag a block big enough for everything */
     new_block = alloc_new_block(interpreter, total_size, NULL);