2013/6/15 Nick Coghlan <ncogh...@gmail.com>:
> The only reason for the small object allocator to exist is because
> operating system allocators generally aren't optimised for frequent
> allocation and deallocation of small objects. You can gain a *lot* of
> speed from handling those inside the application. As the allocations
> grow in size, though, the application level allocator just becomes
> useless overhead, so it's better to delegate those operations directly
> to the OS.
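[Editorial note: a rough sketch of the small/large dispatch Nick describes,
not CPython's actual pymalloc code. The 512-byte threshold is the figure
cited in the reply below for Python 3.4; small_object_alloc() and
object_malloc_sketch() are illustrative names, not real CPython functions.]

    #include <stdlib.h>

    /* Figure cited in the reply below for Python 3.4's small-object allocator. */
    #define SMALL_REQUEST_THRESHOLD 512

    /* Hypothetical stand-in for pymalloc's pool/arena allocator; it simply
       forwards to malloc() here so the sketch compiles. */
    static void *small_object_alloc(size_t size)
    {
        return malloc(size);
    }

    /* The dispatch Nick describes: small requests are served by the
       application-level allocator, larger ones are delegated to the OS. */
    static void *object_malloc_sketch(size_t size)
    {
        if (size <= SMALL_REQUEST_THRESHOLD)
            return small_object_alloc(size);
        return malloc(size);
    }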
Why not use PyObject_Malloc() for all allocations? PyObject_Malloc() falls
back to malloc() if the size is larger than a threshold (512 bytes in
Python 3.4). Are PyObject_Realloc() and PyObject_Free() more expensive than
realloc() and free() (when the memory was allocated by malloc)?

> The only question mark in my mind is over the GIL-free raw allocation
> APIs. I think it makes sense to at least conditionally define those as
> macros so an embedding application can redirect *just* the allocations
> made by the CPython runtime (rather than having to redefine
> malloc/realloc/free when building Python), but I don't believe the
> case has been adequately made for making the raw APIs configurable at
> runtime. Dropping that aspect would at least eliminate the
> PyMem_(Get|Set)RawAllocators() APIs.

PyMem_SetRawAllocators() is required for two use cases: using a custom
memory allocator (on an embedded device, or when Python is embedded in an
application) and installing a hook for debugging purposes. Without
PyMem_SetRawAllocators(), allocations made by PyMem_RawMalloc() would not
go to the same place as the rest of the "Python memory", nor be seen by
debug tools. It becomes worse with large allocations kept alive for a long
time.

Victor
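[Editorial note: a minimal sketch of the debug-hook use case Victor
mentions. The signature of PyMem_SetRawAllocators() is assumed here as
three malloc/realloc/free-style function pointers; the API actually
proposed in the PEP may differ, and hook_* / install_raw_debug_hooks()
are illustrative names.]

    #include <stdlib.h>

    /* Assumed prototype, for illustration only; the real declaration
       comes from the proposed CPython API and may differ. */
    void PyMem_SetRawAllocators(void *(*malloc_func)(size_t),
                                void *(*realloc_func)(void *, size_t),
                                void (*free_func)(void *));

    static size_t raw_bytes_requested = 0;

    static void *hook_malloc(size_t size)
    {
        raw_bytes_requested += size;   /* account for the raw allocation */
        return malloc(size);
    }

    static void *hook_realloc(void *ptr, size_t size)
    {
        raw_bytes_requested += size;
        return realloc(ptr, size);
    }

    static void hook_free(void *ptr)
    {
        free(ptr);
    }

    /* Installed before Py_Initialize(), the hooks would then see every
       PyMem_RawMalloc()/PyMem_RawRealloc()/PyMem_RawFree() call. */
    void install_raw_debug_hooks(void)
    {
        PyMem_SetRawAllocators(hook_malloc, hook_realloc, hook_free);
    }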