2017-06-01 22:16 GMT+02:00 Serhiy Storchaka :
> The issue [1] is still open. The patches were neither applied nor rejected. They
> expose a speedup in microbenchmarks, but it is not large: up to 40% for
> iterating over enumerate() and 5-7% for hard integer computations like
>
01.06.17 21:44, Larry Hastings writes:
> p.s. Speaking of freelists, at one point Serhiy had a patch adding a
> freelist for single- and I think two-digit ints. Right now the only int
> creation optimization we have is the array of constant "small ints"; if
> the int you're constructing isn't one of
On 06/01/2017 02:20 AM, Victor Stinner wrote:
> I would like to understand how private free lists are "so much"
> faster. In fact, I don't recall if someone *measured* the performance
> speedup of these free lists :-)
I have, recently, kind of by accident. When working on the Gilectomy I
turned
01.06.17 12:20, Victor Stinner writes:
> 2017-06-01 10:40 GMT+02:00 Antoine Pitrou :
>> This is already exactly how PyObject_Malloc() works. (...)
> Oh ok, good to know...
>> IMHO the main thing the private freelists have is that they're
>> *private* precisely, so they can avoid a
I thought pymalloc was a SLAB allocator.
What is the difference between a SLAB allocator and pymalloc?
On Thu, Jun 1, 2017 at 6:20 PM, Victor Stinner wrote:
> 2017-06-01 10:40 GMT+02:00 Antoine Pitrou :
>> This is already exactly how PyObject_Malloc()
> Oh ok, good to know...
>> IMHO the main thing the
>> private freelists have is that they're *private* precisely, so they can
>> avoid a couple of conditional branches.
Hi,
As you said, I think PyObject_Malloc() is fast enough.
But PyObject_Free() is somewhat complex.
Actually, there are some freelists (e.g. tuple, dict, frame) and
they improve performance significantly.
My "global unified freelist" idea is to unify them. The merit is:
* Unify
On Thu, 1 Jun 2017 09:57:04 +0200
Victor Stinner wrote:
>
> By the way, Naoki INADA also proposed a different idea:
>
> "Global freepool: Many types have their own freepool. Sharing a freepool
> can increase memory and cache efficiency. Add PyMem_FastFree(void*
> ptr,