Quoth Greg Ungerer:
> In this example it is better to malloc(1000) than 1000 malloc(1)'s.
> Lots of small allocations is slow and generally not good for
> fragmentation.
>
> "Small" in this case is a relative thing. Large allocations (especially
> say larger than about 64k) can be difficult for the kernel/mmap to
> satisfy - due to fragmentation, after the system has been running for
> a while. Allocations of around 4k should always be simple to allocate -
> up until you run out of memory anyway.
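To make that concrete, here is a minimal sketch of the pattern Greg is describing: one malloc(1000) carved into 1000 one-byte slots instead of 1000 separate malloc(1) calls. The names (slot_pool, slot_at, NSLOTS) are made up for the example, not anything from uClibc or the kernel.

    #include <stdlib.h>

    #define NSLOTS 1000

    static char *slot_pool;

    /* One allocation (one per-allocation header) instead of NSLOTS of them. */
    int slot_pool_init(void)
    {
            slot_pool = malloc(NSLOTS);
            return slot_pool ? 0 : -1;
    }

    /* Hand out slot i of the pool; no further allocator traffic. */
    char *slot_at(size_t i)
    {
            return (i < NSLOTS) ? slot_pool + i : NULL;
    }

Tearing it down is also a single free(slot_pool) rather than 1000 frees, which keeps the free list from getting chopped up.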
Especially since (at least the last time I checked) there's about 8 bytes of overhead per allocation, and once you go past 4k, allocations consume whole pages. And since the default allocator only hands out power-of-two numbers of pages, if you try to allocate exactly 64k you'll actually be asking for 64k+8 bytes => 128k (most of which will be wasted). (I'm not entirely sure whether the kernel will put 60k of it back on the free list, but I'm reasonably sure that even one byte over the line will consume a whole extra page.)
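Here is that rounding arithmetic as a standalone sketch. The 8-byte header and the power-of-two page rounding are as described above, and it only models the whole-page case (requests past 4k); the exact figures depend on the kernel version and whichever allocator is actually configured, so treat the numbers as illustrative.

    #include <stdio.h>

    /* Round a request up the way a power-of-two page allocator would,
     * assuming an 8-byte per-allocation header and 4k pages.  Only
     * meaningful for requests that already need at least a page. */
    static unsigned long rounded_alloc(unsigned long request)
    {
            unsigned long need = request + 8;   /* payload plus assumed header */
            unsigned long size = 4096;          /* smallest unit: one page */

            while (size < need)
                    size <<= 1;                 /* next power of two */
            return size;
    }

    int main(void)
    {
            printf("malloc(65536) uses %lu bytes\n", rounded_alloc(65536)); /* 131072 */
            printf("malloc(65528) uses %lu bytes\n", rounded_alloc(65528)); /* 65536  */
            return 0;
    }

So staying a few bytes under the power-of-two boundary (65528 rather than 65536, given the assumed 8-byte header) halves what the allocation actually consumes.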
