On Sun, Jan 11, 2009 at 7:47 PM, John McCabe-Dansted <[email protected]> wrote:
>
> Seems solid on 32bit hardy. However, I am getting 26% more memory
> allocated than used (with regression.sh, attached):
>
> OrigDataSize:  221980 kB
> ComprDataSize: 129086 kB
> MemUsed:       163116 kB
>
> XvMalloc got within 12% of ideal in all your tests. I take it ideal is
> not the same as zero fragmentation?
>
"Ideal" allocator is same as zero fragmentation. However your particular case seems to have trigerred some bad case behavior. One such bad case is, if threre are too many compressed pages with size just > PAGE_SIZE/2 - in this case we will end up allocating 1 full 4k page for every compressed page. To analyze further, it will be great to have histogram exported to show count of pages compressed for various size ranges (I will *hopfully* do this soon). Also, effect of such bad cases can be reduced by using pages size > 4k - these will not be h/w large pages but just an abstraction layer over xvMalloc that provides GetPtr(), PutPtr() implementations for physically discontiguous pages (i.e. some way to *atomically* map two or more physically discontiguous pages to contiguous virtual memory locations). Data presented at xvMalloc wiki (http://code.google.com/p/compcache/wiki/xvMalloc) uses "randomized workload generator" where chances for each sized allocation is same - this seems to be hiding this bad behavior. We should be able to repro this bad behavior by setting "preferred size range" in area just > PAGE_SIZE / 2 - this should help checking solutions for this issue. Thanks, Nitin _______________________________________________ linux-mm-cc mailing list [email protected] http://lists.laptop.org/listinfo/linux-mm-cc
