On 15.10.2020 15:50, Victor Stinner wrote:
> On Wed, 14 Oct 2020 at 17:59, Antoine Pitrou wrote:
>> unpack-sequence is a micro-benchmark. (...)
>
> I suggest removing it.
>
> I removed other similar micro-benchmarks from pyperformance in the
> past, since they can easily be misunderstood and misleading. For
> curious people, I'm keeping a collection (...)

> Would it be possible instead to run git-bisect for only a _particular_
> benchmark? It seems that may be all that’s needed to track down particular
> regressions. Also, if e.g. git-bisect is used it wouldn’t be every e.g.
> 10th revision but rather O(log(n)) revisions.

That only works if there is a si(...)

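For illustration of the git-bisect idea, here is a rough sketch of what a
bisect driver could look like (the script name, threshold and build steps
are assumptions for the example, not existing pyperformance tooling):

    #!/usr/bin/env python3
    """Hypothetical driver for `git bisect run`, testing a single benchmark.

    Usage sketch:
        git bisect start <bad-commit> <good-commit>
        git bisect run python3 bisect_benchmark.py
    """
    import subprocess
    import sys

    import pyperf

    # Assumed cutoff separating "good" from "bad" timings, in microseconds.
    THRESHOLD_US = 95.0

    def main() -> int:
        # Rebuild CPython at the currently checked-out revision.
        subprocess.run(["make", "-j8"], check=True)
        # Run only the benchmark under investigation.
        subprocess.run(
            ["./python", "-m", "pyperformance", "run",
             "--benchmarks", "unpack_sequence", "-o", "result.json"],
            check=True,
        )
        bench = pyperf.BenchmarkSuite.load("result.json").get_benchmarks()[0]
        mean_us = bench.mean() * 1e6
        # Exit code 0 marks the revision "good" for git bisect, 1 "bad".
        return 0 if mean_us < THRESHOLD_US else 1

    if __name__ == "__main__":
        sys.exit(main())
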
On Wed, Oct 14, 2020 at 8:03 AM Pablo Galindo Salgado wrote:
> > Would it be possible to rerun the tests with the current
> > setup for say the last 1000 revisions or perhaps a subset of these
> > (e.g. every 10th revision) to try to binary search for the revision which
> > introduced the change?
>
> Ev(...)

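To put rough numbers on "every 10th revision" versus a pure binary search
(values illustrative): over 1000 revisions, sampling every 10th one costs
about 100 full benchmark runs, while bisecting needs only about
log2(1000) ≈ 10:

    import math

    revisions = 1000
    every_10th = revisions // 10                    # ~100 benchmark runs
    bisect_steps = math.ceil(math.log2(revisions))  # 10 runs
    print(every_10th, bisect_steps)                 # -> 100 10
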
On 14.10.2020 17:59, Antoine Pitrou wrote:
>
> On 14/10/2020 at 17:25, M.-A. Lemburg wrote:
>>
>> Well, there's a trend here:
>>
>> [...]
>>
>> Those two benchmarks were somewhat faster in Py3.7 and got slower in 3.8
>> and then again in 3.9, so this is more than just an artifact.
>
> unpack-se(...)

On 14/10/2020 at 17:25, M.-A. Lemburg wrote:
>
> Well, there's a trend here:
>
> [...]
>
> Those two benchmarks were somewhat faster in Py3.7 and got slower in 3.8
> and then again in 3.9, so this is more than just an artifact.

unpack-sequence is a micro-benchmark. It's useful if you want t(...)

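To make the "micro-benchmark" point concrete, an unpack-sequence-style
measurement boils down to timing one tiny statement. A minimal pyperf
sketch (the statement and benchmark name are illustrative, not the actual
pyperformance source):

    import pyperf

    runner = pyperf.Runner()
    runner.timeit(
        "unpack_sequence",
        stmt="a, b, c, d = seq",  # the single tiny operation being timed
        setup="seq = (1, 2, 3, 4)",
    )

A shift of a few nanoseconds per iteration shows up as a large relative
change on such a tight loop, which is part of why these numbers are easy
to misread.
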
I suggest limiting it to one "dot" per week, since CodeSpeed (the website
used to browse the benchmark results) is limited to about 50 dots (it can
display more if you only display a single benchmark), so weekly dots cover
roughly one year of history.

Previously, it was closer to one "dot" per month, which allowed displaying
a timeline over 5 years. In(...)

On 14.10.2020 16:14, Antoine Pitrou wrote:
> On 14/10/2020 at 15:16, Pablo Galindo Salgado wrote:
>> Hi!
>>
>> I have updated the branch benchmarks in the pyperformance server and now
>> they include 3.9. There are some benchmarks that are faster but on the
>> other hand some benchmarks are su(...)
>
> I wouldn't worry about a small regression on a micro- or mini-benchmark
> while the overall picture is stable.

Absolutely, I agree it's not something to *worry* about, but I think it
makes sense to investigate, as the possible fix may be trivial. Part of
the reason I wanted to recompute them was because th(...)