[issue38767] Replace Mersenne Twister RNG with a PCG family algorithm

2019-11-11 Thread Tim Peters


Tim Peters  added the comment:

Python is a general-purpose language, and as such I believe it's inappropriate 
for it to be "a leader" in adopting new PRNGs.  That's for area specialists to 
pioneer.

If NumPy switched, that would be a good reason to evaluate this again.  But for 
backward compatibility we'd probably still need to support the current Twister 
anyway (for programs that set the seed, we try to keep results bit-for-bit 
reproducible across releases).

Note that Python didn't switch _to_ the Twister before it was, in effect, a de 
facto standard across many scientific libraries.

Something not noted in the earlier report:  TOMS rejected the PCG paper, so it 
will probably never be published.  I don't much care, but those who do care 
about peer-reviewed publication may see that as a deal killer.

--
nosy: +tim.peters




[issue38626] small change at bisect_left function for easy understanding

2019-10-28 Thread Tim Peters


Tim Peters  added the comment:

As far as possible, CPython uses only __lt__ ("<") element comparisons for 
its order-sensitive algorithms.  This is documented for list.sort(), but the 
bisect and heapq modules strive to do the same.

The point is to minimize the number of comparison methods a new type needs to 
implement to participate (and "just one:  __lt__" is ideal).

Your change would require that they implement "__le__" too, so it is unlikely to be 
accepted for CPython.
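
To make this concrete, here's a small sketch (mine, not from the patch) of a 
type that already participates in bisect by defining nothing beyond __lt__:

    import bisect

    class Key:
        # defining only __lt__ is enough for sort/bisect/heapq ordering
        def __init__(self, value):
            self.value = value
        def __lt__(self, other):
            return self.value < other.value

    xs = [Key(1), Key(3), Key(5)]
    bisect.insort(xs, Key(4))          # uses only "<" comparisons
    print([k.value for k in xs])       # [1, 3, 4, 5]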

--
assignee: rhettinger -> 
nosy: +tim.peters -rhettinger




[issue38490] statistics: add covariance and Pearson's correlation

2019-10-17 Thread Tim Peters


Tim Peters  added the comment:

I'm in favor of adding all of this (covariance, coefficient, linear 
regression).  It's still at the level of elementary statistics, and even taught 
in watered down "business statistics" classes.  It's about the minimum that can 
be done beyond single-variable stats.

But I also think this defines the limit of what the core "should" include in 
this area.
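
For reference, a bare-bones pure-Python sketch of the two core quantities (the 
names and signatures here are illustrative only, not a proposed API):

    from math import fsum, sqrt

    def covariance(xs, ys):
        # sample covariance, with Bessel's n-1 correction
        n = len(xs)
        mx = fsum(xs) / n
        my = fsum(ys) / n
        return fsum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

    def correlation(xs, ys):
        # Pearson's r: covariance normalized by the standard deviations
        sx = sqrt(covariance(xs, xs))
        sy = sqrt(covariance(ys, ys))
        return covariance(xs, ys) / (sx * sy)

    print(correlation([1, 2, 3], [2, 4, 6]))   # 1.0 - a perfect linear relation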

--
nosy: +tim.peters




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-10-14 Thread Tim Peters


Tim Peters  added the comment:

> While Neil & I haven't thought of ways that can go wrong now
> beyond that a "surprise finalizer" may get run any number of
> times ...

Speaking of which, I no longer believe that's true.  Thanks to the usual layers 
of baffling renaming via macro after macro, I missed object.c's 
PyObject_CallFinalizerFromDealloc(), which both respects and sets the "am I 
finalized?" flag regardless of how, e.g., an object with a __del__ ends its 
life (and possibly resurrects).

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-10-14 Thread Tim Peters


Tim Peters  added the comment:

> I'm often amazed it works at all, let alone perfectly. ;-P

Indeed!  Every time I take a break from gc and come back, I burn another hour 
wondering why it doesn't recycle _everything_ ;-)

> But what happens if the GC doesn't see that WZ is trash?
> Then it will not be cleared.  Hang it off Y so the GC
> can't find it.

In that case it won't think C is trash either (a weakref holds a strong 
reference to its callback), so C won't be tp_clear'ed - if WZ isn't in the 
unreachable set, C can't be either.

> We can set things up so that Z is freed before WZ (e.g.
> WZ in a second and newer cycle).  Then, the callback
> might still run.

Since C hasn't been damaged, and nothing reachable from C would have been 
considered trash either, I don't see a problem.

> On further thought, this seems safe (but maybe broken) because
> of the handle_weakrefs() logic.  The GC will think WZ is going
> to outlive Z so it will call it before doing any tp_clear calls.

Don't think that applies.  As above, WZ isn't in the unreachable set, so gc 
knows nothing about it.

Z will eventually be reclaimed via refcounting side effect, C _will_ run, and 
then WZ, and then C, will be reclaimed by refcounting.

Or am I missing something?

--




[issue38379] finalizer resurrection in gc

2019-10-13 Thread Tim Peters


Tim Peters  added the comment:

Everything here has been addressed, so closing this.  zleak.py can apparently 
run forever now without leaking a byte :-)

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue38437] Set GC_DEBUG for debug builds of the interpreter

2019-10-10 Thread Tim Peters


Tim Peters  added the comment:

+1.  This code got quite brittle when they decided to fit two pointers, a fat 
integer, and 3 flags into a struct with room for only the two pointers ;-)  
It's a mine field now.  Enabling one of the few automated mine detectors is 
thoroughly sensible.

--
nosy: +tim.peters




[issue38379] finalizer resurrection in gc

2019-10-09 Thread Tim Peters


Tim Peters  added the comment:

I checked the stat fix into master, but GH failed to backport to 3.7 or 3.8 and 
I'm clueless.  More info in the PR.  Does someone else here know how to get a 
backport done?

--
stage: patch review -> backport needed
versions: +Python 3.7, Python 3.8




[issue38379] finalizer resurrection in gc

2019-10-09 Thread Tim Peters


Tim Peters  added the comment:


New changeset ecbf35f9335b0420cb8adfda6f299d6747a16515 by Tim Peters in branch 
'master':
bpo-38379:  don't claim objects are collected when they aren't (#16658)
https://github.com/python/cpython/commit/ecbf35f9335b0420cb8adfda6f299d6747a16515


--




[issue38379] finalizer resurrection in gc

2019-10-08 Thread Tim Peters


Tim Peters  added the comment:

PR 16658 aims to repair the stats reported.

--




[issue38379] finalizer resurrection in gc

2019-10-08 Thread Tim Peters


Change by Tim Peters :


--
keywords: +patch
pull_requests: +16241
stage: needs patch -> patch review
pull_request: https://github.com/python/cpython/pull/16658




[issue38382] statistics.harmonic_mean fails to raise error with negative input that follows a 0

2019-10-06 Thread Tim Peters


Tim Peters  added the comment:

I don't have a problem with the current behavior (early out on zero, even if 
later arguments are senseless).  So:

> * Just document that there is an early-out for zero.

--




[issue38379] finalizer resurrection in gc

2019-10-06 Thread Tim Peters


Tim Peters  added the comment:

Just noting that check_garbage() currently only determines which trash objects 
are now directly reachable from outside.  To be usable for the intended 
purpose, it would need to go on to compute which trash objects are reachable 
from those too.

Maybe a new function

static void
deduce_unreachable(PyGC_Head *base, PyGC_Head *unreachable)

that packaged the steps through move_unreachable().  The main function would 
use that on `young`, and check_garbage() on `unreachable` to find the _still_ 
unreachable objects.

I don't care about the expense.  Outside of zleak.py, the number of unreachable 
objects is usually small, so it's usually cheap to make another pass over just 
them.

--




[issue38379] finalizer resurrection in gc

2019-10-05 Thread Tim Peters


New submission from Tim Peters :

While people are thinking about gc, zleak.py shows a small bug, and a possible 
opportunity for improvement, in the way gc treats finalizers that resurrect 
objects.

The bug:  the stats keep claiming gc is collecting an enormous number of 
objects, but in fact it's not collecting any.  Objects in the unreachable set 
shouldn't add to the "collected" count unless they _are_ collected.

Output:

resurrecting
collect 202
gen 2 stats {'collections': 2, 'collected': 202, 'uncollectable': 0}
resurrecting
collect 404
gen 2 stats {'collections': 3, 'collected': 606, 'uncollectable': 0}
resurrecting
collect 606
gen 2 stats {'collections': 4, 'collected': 1212, 'uncollectable': 0}
resurrecting
collect 808
gen 2 stats {'collections': 5, 'collected': 2020, 'uncollectable': 0}
resurrecting
collect 1010
gen 2 stats {'collections': 6, 'collected': 3030, 'uncollectable': 0}
...

Memory use grows without bound, and collections take ever longer.

The opportunity:  if any finalizer resurrects anything, gc gives up.  But the 
process of computing whether anything was resurrected also determines which 
initially-trash objects are reachable from the risen dead.  Offhand I don't see 
why we couldn't proceed collecting what remains trash.  Then zleak.py would 
reclaim everything instead of nothing.

Sketch:  rather than just set a flag, check_garbage() could move now-reachable 
objects to the old generation (and, for each one moved, decrement the count of 
collected objects).  Then delete_garbage() could proceed on what remains in the 
unreachable list.

--
components: Interpreter Core
files: zleak.py
messages: 354020
nosy: nascheme, pitrou, tim.peters
priority: normal
severity: normal
stage: needs patch
status: open
title: finalizer resurrection in gc
type: behavior
versions: Python 3.9
Added file: https://bugs.python.org/file48644/zleak.py




[issue38373] List overallocation strategy

2019-10-04 Thread Tim Peters


Tim Peters  added the comment:

Don't know.  Define "the problem" ;-)  As soon as the allocation is over 512 
bytes (64 pointers), it's punted to the system malloc family.  Before then, do 
a relative handful of relatively small memcpy's really matter?

pymalloc is faster than system mallocs, and probably uses space more 
efficiently.  There are no pure wins here :-)

--




[issue38373] List overallocation strategy

2019-10-04 Thread Tim Peters


Tim Peters  added the comment:

WRT pymalloc, it will always copy on growing resize in this context.  A 
pymalloc pool is dedicated to blocks of the same size class, so if the size 
class increases (they're 16 bytes apart now), the data must be copied to a 
different pool (dedicated to blocks of the larger size class).
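
A toy illustration of that description (not pymalloc's actual code):

    # pymalloc size classes are 16 bytes apart; each pool serves one class,
    # so any growth that crosses a multiple of 16 moves (copies) the data
    # to a pool of the next class
    def size_class(nbytes):
        return ((nbytes - 1) // 16 + 1) * 16

    print(size_class(48), size_class(49))   # 48 64 - growing 48 -> 49 copies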

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-10-04 Thread Tim Peters


Tim Peters  added the comment:

BTW, the phrase "missing tp_traverse" is misleading.  If an object with a NULL 
tp_traverse appears in a gc generation, gc will blow up the next time that 
generation is collected.  That's always been so - gc doesn't check whether 
tp_traverse is NULL, it just calls it.  It's tp_clear that it checks, because 
that one is optional.

I don't recall any problem we've had with extensions implementing the gc 
protocol incorrectly or incompletely.  It's this issue's problem:  containers 
not participating in gc _at all_.

If we had a traditional mark-sweep collector, that would be massively 
catastrophic.  Miss a pointer and you can conclude a live object is actually 
trash, and tear it down while it's still in use.

Our problem is the opposite:  miss a pointer and we can conclude a trash object 
is actually live.  At the start, the worst that _could_ do is leak memory.  
It's the later introduction of time-to-die finalizers (weakref callbacks at 
first) that created the opportunity for segfaults:  the only 100% clearly safe 
way to run finalizers in cyclic trash is to force them to run _before_ anything 
at all is torn down by force (tp_clear).  But to run them in advance, we have 
to know the relevant objects _are_ trash.  Which we can't always know if 
containers don't always participate.

While Neil & I haven't thought of ways that can go wrong now beyond that a 
"surprise finalizer" may get run any number of times, that doesn't mean far 
worse things can't happen - just that they'll surprise us when they do :-)

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-10-03 Thread Tim Peters


Tim Peters  added the comment:

My understanding is that the CFFI types at issue don't even have 
Py_TPFLAGS_HAVE_GC.  They're completely invisible to gc.  As Armin says in the 
CFFI issue report (linked to earlier), he never got the impression from the 
docs that he needed to implement anything related to cyclic gc.

Since Armin is extremely capable, conscientious, and reasonable, that tells me 
our docs are lacking.

It was intended at the start that the worst that could happen if a type ignored 
the gc protocols was that memory may leak.  That story changed radically when 
weakrefs with callbacks were added - but nobody knew it at the time because the 
weakref design gave no thought to cyclic gc.  It's been driven by segfaults 
ever since ;-)

We're doing all we can to keep non-cooperating code "working", and will 
continue to do so.  But who knows?  The next segfault may be one we can't hack 
around.  It's fundamentally insane to expect any gc to work perfectly when it 
may be blind to what the object graph _is_.

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-10-03 Thread Tim Peters


Tim Peters  added the comment:

Loose ends.  Telegraphic because typing is hard.

1. Docs should be changed to encourage implementing the full gc protocol for 
"all" containers.  Spell out what can go wrong if they don't.  Be upfront about 
that history has, at times, proved us too optimistic about that ever since 
weakrefs were added.

2. Surprise finalizers (SF).  All weakrefs to trash objects were already 
cleared.  So if a SF can access trash, it must be via a chain of strong refs.  
But in that case, they wouldn't be trash to begin with (since they're reachable 
from the SF, which is a Surprise because we believed it wasn't trash).

So best optimistic guess is that only this can go wrong:  we promise that a 
finalizer will run at most once (even if the object is resurrected), but a SF 
neither records that it has been run nor recognizes that (if so) it's already 
been run.  IOW, a finalizer will be run at most once when it's not a surprise, 
but will run every time it is a SF.

3. get_objects() exposing invalid objects.  That's no good.  Not due to missing 
tp_traverse, but because the aggregate of trash objects revealed by tp_traverses 
exceeds the aggregate of those reclaimed by tp_clears and subsequent refcounting.

Must not move invalids to older generation.  Move to new internal (to 
delete_garbage) list instead.  That list should be empty at delete_garbage's 
end.  If not, I'd be happy to die with a fatal runtime error.  Else, e.g.,

- add to count of trash objects that could not be collected

- append them to gc.garbage, each as the sole attribute of a new (say) 
`_GCInvalid` container, whose repr/str only shows the address of the invalid 
object.  So the top-level objects in gc.garbage remain valid, but the daring 
can access the invalid objects indirectly.  At least their type pointers should 
be intact.

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-10-01 Thread Tim Peters


Tim Peters  added the comment:

Neil, about this comment:

#   - ct is not yet trash (it actually is but the GC doesn't know because of
# the missing tp_traverse method).

I believe gc should know ct is trash.  ct is in the cf list, and the latter 
does have tp_traverse.

What gc won't know is that `a` is trash, because `a` is attached to ct, and ct 
doesn't have tp_traverse.

It should blow up anyway :-)

Maybe with a simpler structure it would be easier to rearrange code to nudge 
the callback into getting cleared before use?

Z <- Y <- A <-> B -> WZ -> C

where WZ is a weakref to Z with callback C, and Y doesn't implement 
tp_traverse.  The only cycle is between A and B, which could just as well be 
the same object.  All the other stuff hangs off that cycle.

It's all trash, but we won't know in advance that Z is part of it.
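
In ordinary Python, the shape of that graph looks like the sketch below.  It's 
an illustration only:  pure-Python instances do implement tp_traverse, so a 
gc-invisible Y actually requires an extension type that skips the gc protocol.

    import weakref

    class Node:
        pass

    def c(wr):                        # the callback C
        print("callback fired")

    z = Node()                        # Z
    y = Node(); y.z = z               # Y -> Z  (Y is the gc-blind link)
    a = Node(); a.y = y               # A -> Y
    b = Node()
    a.b = b; b.a = a                  # the A <-> B cycle
    b.wz = weakref.ref(z, c)          # B -> WZ, a weakref to Z with callback C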

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-30 Thread Tim Peters

Tim Peters  added the comment:

Ɓukasz, all type objects have tp_clear slots, and always did.  The patch in 
question put something useful in the function object's tp_clear slot instead of 
leaving it NULL.  No interface, as such, changes either way.

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-30 Thread Tim Peters


Tim Peters  added the comment:

Yes, it's better to have tp_clear than not for a variety of reasons (including 
setting examples of best practice).

Best I can tell, the patch for BPO-33418 was reverted _only_ to worm around the 
crash in _this_ report.  That's no longer needed.  Or, if it is, we've missed 
something fundamental in the fix for this bug.  In which case, reverting the 
other patch is the only clear way to find that out quickly.  I favor getting it 
right.

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-30 Thread Tim Peters


Tim Peters  added the comment:

It's unclear to me whether BPO-33418 was a bug or a contrived annoyance :-)

If someone believes it was worth addressing, then what it did is the only way 
to fix it, so should be restored now.

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-30 Thread Tim Peters


Tim Peters  added the comment:

FWIW, I agree with Neil in all respects about the release:  his patch is the 
best approach, plugs segfaulting holes that have been there for many years, and 
the earlier patches aren't needed anymore.

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-30 Thread Tim Peters


Tim Peters  added the comment:

Neil, my brief msg 10 minutes before yours suggested the same thing (just clear 
the weakref), so it must be right ;-)

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-30 Thread Tim Peters


Tim Peters  added the comment:

Neil, how about this alternative:  leave the weakref implementation alone.  If 
we find a trash weakref, simply clear it instead.  That would prevent callbacks 
too, & would also prevent the weakref from being used to retrieve its 
possibly-trash-too referent.

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-30 Thread Tim Peters


Tim Peters  added the comment:

> Would the attached rough patch (gc_disable_wr_callback.txt)
> be a possible fix?  When we find W inside handle_weakrefs(),
> we mark it as trash and will not execute the callback.

It's semantically correct since we never wanted to execute a callback from a 
trash weakref to begin with.  The nature of the error violated that intent:  
the callback from a trash weakref occurs outside gcmodule, where it has no idea 
that the weakref is trash.  After your patch, it would know.

How effective is it?  Well, failure modes "like this" ;-)

If a callback function is trash, delete_garbage will eventually clear it.  But 
if the function is trash the weakref containing it must also be trash (if it 
weren't trash, neither would be the reachable-from-it callback function).  Then 
since the containing weakref is trash, your patch will find it and prevent the 
callback.

So this "should" fix the original problem, even if tp_clear were restored for 
function objects (which it probably should be).

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-29 Thread Tim Peters


Tim Peters  added the comment:

Ah, never mind my last comment - yes, handle_weakrefs will clear all weakrefs to 
the objects we know are trash.

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-29 Thread Tim Peters


Tim Peters  added the comment:

> I see that handle_weakrefs() calls _PyWeakref_ClearRef() and that
> will clear the weakref even if it doesn't have callback.  So, I
> think that takes care for the hole I was worried about.  I.e. a
> __del__ method could have a weakref to an non-valid object.
> However, because handle_weakrefs() will find that weakref, it will
> have been cleared when the __del__ method executes.

There's no problem with objects we _know_ are trash.  Their finalizers are all 
force-run before delete_garbage starts.

It's "surprise" finalizers, triggered by refcounts falling to 0 while 
delete_garbage is running.  They're invisible to handle_weakrefs.

However, to the extent that delete_garbage is effective in deallocating objects 
soon after clearing them, to that extent also will invalidated objects clear 
weak references to them as a matter of course.












'

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-29 Thread Tim Peters


Tim Peters  added the comment:

> Note that my flags show that W *is* in 'unreachable'.  It has
> to be otherwise F would not have tp_clear called on it.

Right!  W holds a strong reference to F, so if W were thought to be reachable, 
F would be too.  But F isn't.


> But when delete_garbage() gets done, it moves the object to 'old'.  I
> think that means gc.get_objects() called after the collection completes
> can reveal objects that have had their tp_clear called (if tp_clear
> didn't result in them being freed).

If the tp_clear implementations aren't complete enough to break all cycles, 
that could happen.

In a world that took cyclic trash "very seriously", it would move them to a new 
internal list instead, then at the end of the function die if that list isn't 
empty ("we ran all tp_clears but cyclic trash still remains:  your object 
implementations inherently leak memory").


> I think the difference is that non-weakref finalizers have strong
> references to objects that they can access when they run. 
> So, if we haven't found them, they will keep all the objects that
> they refer to alive as well (subtract_refs() cannot account for
> those refs).  So those objects will all be valid.

That's persuasive :-)  For essentially the same reason, if a "surprise" 
finalizer runs during delete_garbage, it can't resurrect (or in any other way 
access) anything we knew was trash.


> There seems a hole though.  Non-weakref finalizers could have a
> weakref (without callback) to something in the garbage set.  Then,
> when the finalizer runs during delete_garbage(), that finalizer
> code can see non-valid objects via the weakref.  I think this can
> only happen if there are missing/incomplete tp_traverse methods.

Which is what I mean by a "surprise" finalizer:  a trash object T with a 
finalizer but we don't KNOW T is trash.  If we know T is trash then we 
force-run its finalizer before delete_garbage starts, so it can only see valid 
objects.


> We can have finalizer code running during delete_garbage().

Except that was never the intent (quite the contrary!).  The more people have 
moved to demanding that gc be a full-featured all-purpose collector, the more 
important it becomes that types implement what such a thing needs.  At a bare 
minimum, any robust gc needs to be 100% sure of the difference between trash 
and non-trash, and tp_traverse is the one & only way we have to discover 
anything about the object graph.  "Almost all" types that can participate in 
cycles need to implement it.

Or suffer rare shutdown segfaults that somehow nobody ever managed to stumble 
into before ;-)

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-29 Thread Tim Peters


Tim Peters  added the comment:

Fleshing out something I left implicit:  if there's a trash object T with a 
finalizer but we don't KNOW it's trash, we won't force-run its finalizer before 
delete_garbage starts either.  Then, really the same thing:  we may tp_clear 
some piece of trash T's finalizer needs before enough cycles are broken that 
T's finalizer gets run as a routine consequence of T's refcount falling to 0.

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-29 Thread Tim Peters


Tim Peters  added the comment:

Sorry, this is very hard for me - broke an arm near the shoulder on Tuesday, 
and between bouts of pain and lack of good sleep, concentration is nearly 
impossible.  Typing with one hand just makes it worse :-(

We must know that F is trash, else we never would have called tp_clear(F).

We must NOT know CT is trash.  If we did know that, then we would have found W 
too, and then:

1. If we thought W was trash, we would have suppressed the callback.

2. If we thought W was not trash, we would have invoked the callback directly, 
long before delete_garbage even started (i.e., F would have been intact).

So if CT _is_ trash, we didn't know it.  If/when delete_garbage broke enough 
cycles that CT's refcount fell to 0, other parts of Python would have invoked 
W's callback as a matter of course, with F possibly already tp_clear'ed.

Neil, at a high level it's not REALLY surprising that gc can screw up if it 
can't tell the difference between trash & treasure ;-)  Long ago, though, it's 
possible the only real failure mode was leaking objects that were actually 
unreachable.

Offhand not worried about get_objects().  That returns objects from generation 
lists, but gc moves possible trash among temporary internal (not generation) 
lists as it goes along.

Am concerned that weakref callbacks may not be the only hole.  We force-run 
finalizers before ever calling tp_clear for the same reason we force-run 
weakref callbacks:  finalizers may also need to access trash objects while 
they're still sane.  Having finalizers on objects in trash cycles isn't special 
anymore (we used to refuse to run them).  But my brain is fried again for now 
;-)

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-28 Thread Tim Peters


Tim Peters  added the comment:

> call_func and remove are part of a reference cycle. A forced garbage
> collection breaks the cycle and removes the two objects, but they are
> not removed in the expected order:
>
> * first: call_func
> * then: remove
>
> The crash requires to destroy the objects in the reverse order

Victor, I'm not sure what you had in mind there, so please flesh it out if the 
following isn't clear?

In any case "like" the one you constructed, __del__  will always run first, 
with everything needed wholly intact.

Because tp_clear can leave an object in a useless (or worse) state, gc runs all 
finalizers and all weakref callbacks before tp_clear is run on anything.  
tp_clear is used only in delete_garbage, which is the last non-trivial thing gc 
does.  Before then, clears and deallocations happen only as side effects of 
refcounting semantics while running finalizers and callbacks.  So refcount 
precedence order is respected, and nothing will get torn down before its 
refcount hits 0.

At the time gc first calls tp_clear, the intent is that no non-trivial code 
will run again - just tp_clear implementations applying Py_CLEAR to objects' 
members, and deallocations happening as refcounts fall to 0.  No finalizers, no 
callbacks, none, ever ;-)

So what I saw your program doing wasn't just an accident relying on the order 
the objects happened to appear in the list.  Regardless of that order, gc 
forces __del__ to run before tp_clear is applied to anything, and the call_func 
instance is wholly intact when its __del__ runs.
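
A tiny sketch of that guarantee (not Victor's original program):  the finalizer 
below runs during gc.collect(), before anything in the cycle has been cleared, 
so the attribute it needs is still intact.

    import gc

    class CallFunc:
        def __del__(self):
            # gc runs finalizers before any tp_clear, so self.func
            # is still present and callable here
            print("__del__ sees:", self.func())

    def make_trash_cycle():
        obj = CallFunc()
        obj.func = lambda: "intact"
        obj.me = obj                  # a self-cycle: becomes cyclic trash
    make_trash_cycle()
    gc.collect()                      # prints "__del__ sees: intact"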

--




[issue38006] Crash in remove() weak reference callback of weakref.WeakValueDictionary at Python exit

2019-09-27 Thread Tim Peters


Tim Peters  added the comment:

tp_clear implementations are necessary to reclaim trash cycles.  They're always 
best practice for objects that may be in trash cycles.   tuples are just "cute 
rebels" that way ;-)

Best guess is that the (some) extension isn't playing by the rules.  A weakref 
callback shouldn't be called at all unless the weakref is reachable - but in 
that case the callback function is necessarily reachable too (revealed by a 
weakrefobject's tp_traverse), so delete_garbage() should never have called 
clear() on the callback function to begin with.

Note:  I don't believe the weakref design gave any thought to cyclic gc.  The 
PEP didn't even mention it.  I got dragged into it when Zope started 
segfaulting.  They don't play well together voluntarily, but they've been 
forced to coexist, and Zope long ago provoked every fundamental problem I'm 
aware of.

That "the rules" are subtle doesn't excuse not following them ;-)

--




[issue37812] Make implicit returns explicit in longobject.c (in CHECK_SMALL_INT)

2019-09-20 Thread Tim Peters


Tim Peters  added the comment:

Sorry, but there was nothing wrong with the CHECK_SMALL_INT macro, to my eyes, 
to begin with - except that it was burdened with an over-elaborate "do ... 
while(0)" wrapper.

Not all macros are _intended_ to be "cheap functions".  Like this one, some are 
intended to be merely textual replacement, to minimize tedious and error-prone 
redundant typing, in a specific file.  Even "bulletproofing them" with tricks 
like "do ... while(0)" cruft is counterproductive in such cases.  They're not 
_intended_ to capture abstractions, nor meant for general use.  This one was 
intended to check its argument and possibly return, exactly what its expansion 
did (after mentally tossing out the distracting "do while" noise).

Nobody competent to address logic errors in longobject.c was confused about 
that in the slightest.  Knowing the details of how small integers may be cached 
is part of what "competent" means in this context, and what the macro does is 
so simple it's no burden to learn or to use.

Is there an "aesthetic code cleanup" patch that's ever turned out well? ;-)

--
nosy: +tim.peters




[issue38200] Adding itertools.pairwise to the standard library?

2019-09-18 Thread Tim Peters


Tim Peters  added the comment:

There's an eternal culture clash here:  functional languages have a long 
history of building in just about everything of plausible use, regardless of 
how trivial to build on other stuff.  This started when LISP was barely 
released before (cadr x) was introduced as a shorthand for (car (cdr x)), and 
(caddr x) for (car (cdr (cdr x))), and so on.  More modern functional languages 
also supply (second x) and (third x) spellings (_and_ nth(2, x) and nth(3, x) 
spellings).

This one is harder to get right than those, but not hard at all.  But it's not 
coincidence that itertoolz[1] (note the trailing 'z') also supplies it, spelled 
`sliding_window(width, iterable)` there.  Working with finite difference 
algorithms is probably "the most obvious" use case for a width of 2.
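
For concreteness, the width-2 case is the classic recipe (essentially what the 
itertools docs show for pairwise):

    from itertools import tee

    def pairwise(iterable):
        # s -> (s0, s1), (s1, s2), (s2, s3), ...
        a, b = tee(iterable)
        next(b, None)
        return zip(a, b)

    print(list(pairwise("abcd")))     # [('a', 'b'), ('b', 'c'), ('c', 'd')]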

More esoterically, one of my favorite "technical indicators" for stock analysis 
is a dead simple 20-period simple moving average, which can be built very 
conveniently (although not very efficiently - but I don't usually care) by 
mapping a mean function over a sliding window of width 20.
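
A deque-based sketch of the general-width case, with that moving average as 
the usage example (sliding_window here just mimics itertoolz's signature - it 
isn't a stdlib function):

    from collections import deque
    from itertools import islice
    from statistics import fmean

    def sliding_window(width, iterable):
        # yield overlapping tuples of `width` consecutive items
        it = iter(iterable)
        window = deque(islice(it, width), maxlen=width)
        if len(window) == width:
            yield tuple(window)
        for x in it:
            window.append(x)
            yield tuple(window)

    closes = [float(i) for i in range(1, 41)]    # stand-in price series
    sma20 = [fmean(w) for w in sliding_window(20, closes)]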

BTW, if you want padding on each end, you can apply pairwise to `chain([first], 
iterable, [last])`.

A related function is breaking an iterable into _non_-overlapping chunks of a 
given width.  itertoolz spells that "partition".  For me that comes up more 
often than overlapping windows.

I like having these things around, but it's not a big deal.  Perhaps it would 
be an easier decision in Python if we gave up on believing that everything in 
itertools _must_ be coded in C.  In functional idioms sometimes speed isn't the 
point at all, but rather using conventional names for simple but compound 
functionality.  Like that "sliding window" is a concept in its own right.  If 
I'm _picturing_ an algorithm in terms of a sliding window, then - of course - 
the shortest distance to working code is to use a facility that already 
implements that concept.

Which is a long way of saying +0.

[1] https://toolz.readthedocs.io/en/latest/api.html

--




[issue38105] hash collision when hash(x) == -2 causes many calls to __eq__

2019-09-13 Thread Tim Peters


Tim Peters  added the comment:

Some results of the "add perturb shifted left 1 instead" approach.  These come 
from using an old pile of Python code I have that allows for easy investigation 
of different collision probe strategies.

- As expected, because it has no fixed points, there are no bad cases in a 
dict/set with only two elements, _unless_ you're searching for a third distinct 
element.  Ironically, if they all, e.g., have hash code -2, no problem at all:  
then 3 probes settle it.  But if they're "random" hash codes (hashes of strings 
are good enough for testing this), then every now and again it can take up to 
10 probes.  It all depends on accidents of how the various bits get fed in 
across perturb shifts.  However, with random hashes, searching for an element 
that's there never takes more than 2 probes in the new way (again because 
there's never a fixed point).  Under the current way I've seen it take as many 
as 6.

- On average, with random hashes and 2 elements in an 8-slot table, the average 
number of probes with a successful search fall from 1.07 to 1.06, and on an 
unsuccessful search from 1.33 to 1.30.  Theoretically ideal ("uniform hashing" 
- each object visits the slots in its own random permutation of slot order) are 
1.06/1.29.

- Random hashes in an 8-slot table:  average # probes for successful and 
failing searches

1 entry
current 1.00 1.14
new 1.00 1.12
ideal   1.00 1.12

2 entries
current 1.07 1.33
new 1.06 1.30
ideal   1.06 1.29

3 entries
current 1.16 1.60
new 1.14 1.56
ideal   1.14 1.50

4 entries
current 1.27 2.00
new 1.25 1.93
ideal   1.23 1.80

5 entries
current 1.42 2.66
new 1.38 2.56
ideal   1.34 2.25 

Note:  those aren't typos ;-)  For example, if there's only 1 entry, how can it 
take more than one probe for a failing search?  Easy!  The first probe hits the 
only entry, it doesn't match, and so it takes a second probe to be sure the new 
element isn't present.  In a perfect world, that would happen 12.5% of the time 
(1 in 8).  But in the presence of fixed points ("current"), it can take _more_ 
than 2.  The most probes I saw in "current" was 6 (it hit the only entry 5 
times in a row).

- That account ends at 5 entries because we don't let an 8-slot table contain 
more than 5 entries.

- Small as those differences are, they get smaller for larger tables.  By the 
time we get to 1024 slots, three digits aren't enough to distinguish "current" 
from "ideal".  Even at 256 slots, it's just 1-or-2 ULP wobble.

So that's the good side:

- Eliminates the "surprises" in this bug report.

- For more realistic ordinary cases ("random" hashes), for small tables it buys 
small but very consistent reductions in average probes needed for both 
successful and failing searches.  It also, in general (but not spelled out 
above), cuts the rare longest probe chains.

The bad side:

- For small tables we're talking about nanoseconds regardless.

- That adding a new shift is "free" relies on the fact that x86's LEA instruction 
supports implicitly shifting an addend, and on compilers exploiting that.

- Some cases will get worse.  That's just pragmatic fact.  The intuition here 
is that `perturb` is _trying_ to get high-order hash bits into play, and 
"current" gets 5 more into play on each iteration.  But "new" shifts perturb 
left 1 before adding, preventing them from having any effect at all on the last 
new bit.

This can have seemingly chaotic effects.  For example, with entries of the form 
i*1024 in a table with 2**13 slots and 5,461 entries ("full" - the most we 
allow),

current 3.09 5.52
new 3.22 4.78
ideal   1.65 3.00

So the change hurts successful lookups but helps failing ones.  For a full 
table with 16 slots, the reverse.  For a full table with 2**10 slots, it hurts 
both.

Bottom line?  You tell me :-)  It's just not compelling to me either way.  If I 
were writing the code from scratch, I'd do it.  But because of seemingly 
chaotic effects talked about near the end above, there's always a danger _some_ 
code important to someone will take a hit.  It's just as true that their code 
will enjoy an unexpected speed boost, but nobody screams about those ;-)

--




[issue24416] Have date.isocalendar() return a structseq instance

2019-09-13 Thread Tim Peters


Tim Peters  added the comment:

I agree with Raymond here:  using collections.namedtuple is fine in the pure 
Python version.  Since Raymond checked in doc changes to purge the phrase 
"struct sequences" (thanks again for that!), it's consistent with everything 
else now for the new datetime docs to say that this function returns a "named 
tuple" (the docs are now clear that there may be more than one implementation 
of such a thing, and that the only things you can _rely_ on are that they 
support named element access in addition to tuple methods).

--




[issue38105] hash collision when hash(x) == -2 causes many calls to __eq__

2019-09-12 Thread Tim Peters


Tim Peters  added the comment:

Following up, at least under Visual Studio for x86 "it's free" to change the 
code to add in `perturb` shifted left.  The C source:

perturb >>= PERTURB_SHIFT;
i = (i*5 + (perturb << 1) + 1) & mask;

compiles to this, where I added comments relating the instructions to the 
source code:

lea rax,[r14+r14*4]  # rax = i*5
shr r12,5# r12(perturb) >>= 5
lea r14,[r12*2+1]# r14(i) = (perturb << 1) + 1
add r14,rax  # r14(i) += i*5
and r14,rbx  # r14(i) &= mask

Good job!  There are actually fewer machine instructions than C operators.

As noted, this change would eliminate the possibility of fixed points from the 
start, so would eliminate the symptoms in this specific bug report.

Which I don't much care about ;-)  But I _do_ care about that, in general, the 
early stages of collision resolution for small tables can "frequently" visit 
slots more than once.  This is true even if the hashes aren't identical.  And 
it would remain true even with the change (cycles can still exist - that there 
are no fixed points just means there are no more cycles of the minimum possible 
length 1).

Improving that is worth some effort, but not much.  In general, hash codes 
aren't identical, and in small tables redundant slot visits just cost the time 
to read the hash code from cache and re-deduce that it's not equal to what 
we're looking for (__eq__ isn't typically called).

In larger tables redundant slot visits in the early stages are too rare to care 
about.

In small tables, it's worth something to guarantee that the first probe after a 
collision can't hit exactly the same slot again (in a random model, there's a 
12.5% chance of that happening now in the smallest tables - 12.5% is 1 in 8 - 
and this bug report contrived cases where the chance is 100% across a dozen 
iterations).

Anyone up a for a world of tedious benchmarking? ;-)

--




[issue38105] hash collision when hash(x) == -2 causes many calls to __eq__

2019-09-12 Thread Tim Peters


Tim Peters  added the comment:

A more principled change would be to replace instances of this:

i = (i*5 + perturb + 1) & mask;

with this:

i = (i*5 + (perturb << 1) + 1) & mask;

The latter spelling has no fixed points.  That's easy to see:  `(perturb << 1) 
+ 1` is necessarily odd, and then `i*5 + ODD` is even when `i` is odd, and vice 
versa.
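
A quick brute-force check of that parity argument (a sanity test, not the 
dictobject.c code):

    # the addend (perturb << 1) + 1 is odd, so i*5 + odd flips i's parity,
    # and masking with 2**power - 1 preserves parity: no fixed points
    for power in range(3, 8):
        mask = 2 ** power - 1
        for perturb in range(64):
            for i in range(mask + 1):
                assert i != (i * 5 + (perturb << 1) + 1) & mask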

I had hoped that the time for a new shift could overlap with the multiply 
latency, but no such luck.  At least Visual Studio didn't multiply to begin 
with:  it first computes `i*4 + perturb` cheaply with a single LEA instruction, 
then adds 1 (INC), then adds in `i` again.

I expect it would be able to fold in a "free" shifted add of perturb with 
another LEA, so the latter spelling isn't necessarily more expensive.  But I'm 
generally loathe to increase operation counts relying on clever compilers to 
exploit processor-specific tricks.

--




[issue38105] hash collision when hash(x) == -2 causes many calls to __eq__

2019-09-12 Thread Tim Peters


Tim Peters  added the comment:

Something that may be slightly worth pursuing:  in

j = (5*j + 1) % 2**power

to get a full-period sequence hitting every integer in range(2**power) exactly 
once, the multiplier (5) is a bit delicate (it has to be congruent to 1 mod 4), 
but the addend (1) only needs to be odd.  When the hash code has a long string 
of high-order 1 bits, the lower bits of `perturb` are all ones repeatedly 
across iterations, and a string of `power` one bits is congruent to -1 mod 
2**power, cancelling out the +1.
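
The full-period claim is easy to verify by brute force:

    # j = (5*j + 1) % 2**power visits every value in range(2**power)
    # exactly once before repeating (multiplier 5 is 1 mod 4, addend 1
    # is odd)
    for power in range(1, 13):
        m = 2 ** power
        j, seen = 0, set()
        for _ in range(m):
            seen.add(j)
            j = (5 * j + 1) % m
        assert len(seen) == m and j == 0   # back at the start: full period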

Which people stumbled into here:  all negative ints with small absolute value 
_do_ have a long string of high-order 1 bits (as did also my contrived 2**60 - 
2).

If we changed the addend to, say, +3 instead, it would still be possible to 
contrive cases as bad, but you'd be unlikely to stumble into them by accident, 
and they would be sensitive to the value of `power` (the table's current size). 
 For example, for a table size of 8 (the smallest we use), only ints ending 
with bits 101 are congruent to -3 mod 8.  So maximally "bad" hash codes would 
be of the form (bits, where "x" is "doesn't matter", and grouped into chunks of 
PERTURB_SHIFT (5) bits from the right):

... xx101 xx101 ... xx101 xx101 x
 
Provided changing +1 to +3 didn't slow anything down (you never know until you 
try!), that would be fine by me.  But I'm not bothered by the current behavior, 
so I won't be pursuing it.

--




[issue38105] hash collision when hash(x) == -2 causes many calls to __eq__

2019-09-11 Thread Tim Peters


Tim Peters  added the comment:

Note that you can contrive similar cases with positive hash codes too.  For 
example, force both hash codes to 2**60 - 2.
 
The salient points are that 0 * 5 is congruent to 0 mod any power of 2, while 
-2 * 5 = -10 is congruent to -2 mod 8, so they're fixed points of the `i = 5*i 
+ 1 + perturb` iteration _given that_ perturb is congruent to -1 (so cancels 
out the +1) until enough shifts have been done to get 0 bits into the lowermost 
bits retained by the mask (in which case perturb is no longer congruent to -1, 
and so no longer cancels out the +1).  Since PERTURB_SHIFT is 5, on a 64-bit 
box it can take 13 shifts before `perturb` is entirely cleared.  As internal 
comments note, it's only _then_ that the recurrence is guaranteed to visit 
every slot exactly once.
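
Here's the effect in miniature, simulating the probe recurrence for an 8-slot 
table:

    # hash code -2, viewed as unsigned 64-bit, is 0b111...10: probing
    # sticks at slot 6 while perturb's low bits remain all ones
    mask = 7
    h = (-2) % 2 ** 64
    i, perturb = h & mask, h
    for probe in range(15):
        perturb >>= 5                 # PERTURB_SHIFT
        i = (i * 5 + perturb + 1) & mask
        print(probe, i)               # slot 6 for the first dozen probes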

I don't care.  It would be nice if it could guarantee to visit every slot 
exactly once from the start - which it _would_ do if we didn't use `perturb` at 
all.  But using `perturb` is extremely valuable for avoiding quadratic-time 
behavior in large tables in bad real cases.  The drawback is that it can stick 
in a fixed point for 13 shifts on a 64-bit box (7 shifts on a 32-bit box).  But 
that's the worst it can do, and it's rare.

I don't believe it's worth a single cycle, overall, to avoid it.

--
nosy: +tim.peters




[issue38096] Clean up the "struct sequence" / "named tuple" docs

2019-09-11 Thread Tim Peters


Tim Peters  added the comment:

Paul, please heed what Raymond said:  it's not good to merge another core dev's 
PR unless they ask you to.  Besides what Raymond said, a core dev may well 
check in incomplete work for any number of reasons (e.g., to see how the 
automated test runs turn out).  When I do that, I add a "DO NOT MERGE" label 
just to be sure, but that really shouldn't be necessary.

Raymond, I added a review anyway, despite that it's already been merged.  One 
comment suggests repairing an obvious trivial typo.

--




[issue38060] precedence (relational, logical operator)not working with single value

2019-09-10 Thread Tim Peters


Tim Peters  added the comment:

I don't believe that would improve the docs, but suit yourself.  This is hardly 
a FAQ, but instead a peculiar case where, for whatever reason, someone is 
saying "I'm puzzled by what `or` does, but didn't read the docs for `or`".  
Most people confused by `or` almost certainly _do_ read the docs for `or` ;-)

As is, in the precedence table the word "or" is already a direct hyperlink to 
the section _about_ "or", and similarly for "and".  It can't be made any easier 
than that to find the whole story.

If you do want to pursue adding more cruft to the docs, note that the same 
would be needed for the `- if - else -` operator too.

--




[issue38096] Clean up the "struct sequence" / "named tuple" docs

2019-09-10 Thread Tim Peters


New submission from Tim Peters :

The Glossary has this entry:

"""
struct sequence
A tuple with named elements. Struct sequences expose an interface similar to 
named tuple in that elements can be accessed either by index or as an 
attribute. However, they do not have any of the named tuple methods like 
_make() or _asdict(). Examples of struct sequences include sys.float_info and 
the return value of os.stat().
"""

It's trying to document the internal CPython implementation `structseq` detail, 
a class of which can only be created from the C API (no Python interface is 
exposed).

But the glossary _also_ has an entry for "named tuple", and the docs just above 
are confusing that for the related but distinct `collections.namedtuple` 
facility, which does allow creating a named tuple class from Python, but is NOT 
the only kind of "named tuple".

This is a mess ;-)

I suggest:

- Throw away the "struct sequence" glossary entry, and remove all references to 
it from the docs.

- Use "named tuple" everywhere instead.

- Clarify the "named tuple" docs to make clear that it applies to any class 
that supports tuple methods and also named elements.

- Make clear that `collections.namedtuple` classes are "named tuples" classes, 
but not the only kind, and support a number of methods beyond what's required 
for "named tuples".

- Optionally, effectively expose `structseq` to Python code, so that users have 
one obvious way to create their own bare-bones named tuple types.

- I'd prefer to insist that for any named tuple class C,

  issubclass(C, tuple)
  
  is True.  That's true now of structseqs and collections.namedtuples, and is a 
sane way to make clear that they must support all tuple methods.
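
The proposed test already holds for both existing kinds (a quick check using 
nothing beyond the current stdlib):

    import collections, os, sys

    Point = collections.namedtuple("Point", "x y")
    assert issubclass(Point, tuple)                 # collections.namedtuple
    assert issubclass(type(sys.float_info), tuple)  # a structseq
    assert issubclass(type(os.stat(".")), tuple)    # also a structseq

    # named access and index access agree, as the glossary requires
    assert sys.float_info.epsilon == sys.float_info[8]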
  
I'd mark this as "newcomer friendly", except it's likely to become a 
bikeshedding nightmare ;-)

--
assignee: docs@python
components: Documentation
messages: 351742
nosy: docs@python, tim.peters
priority: normal
severity: normal
status: open
title: Clean up the "struct sequence" / "named tuple" docs
versions: Python 3.9




[issue38060] precedence (relational, logical operator)not working with single value

2019-09-10 Thread Tim Peters


Tim Peters  added the comment:

Ah, so you were expecting an error!  That helps.

But that's not how the language works, or how it's documented to work, as has 
been explained in quite some detail already.

In general, precedence _constrains_ evaluation order, but does not _define_ it. 
 In Python or any other language.  In

 9 or 7 > "str"

the precedence rules say the expression groups as

 9 or (7 > "str")

rather than as, say,

 (9 or 7) > "str"

but says nothing more than just that.  It's a very intentional - and documented 
- feature of "and" and "or" that they do NOT evaluate their right-hand operands 
at all if the result of evaluating the left-hand operand first is enough to 
settle the issue.  The precedence rules only define what the "left-hand" and 
"right-hand" operand expressions _are_.

I don't think the docs could be materially clearer about this.  For example, 
from section "6.11. Boolean operations":

"""
The expression `x or y` first evaluates x; if x is true, its value is returned; 
otherwise, y is evaluated and the resulting value is returned.
"""

Precedence rules alone are too feeble to capture that.
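
A quick session makes the grouping concrete:

    >>> 9 or 7 > "str"        # groups as 9 or (7 > "str") and short-circuits
    9
    >>> (9 or 7) > "str"      # forcing the other grouping runs the comparison
    Traceback (most recent call last):
      ...
    TypeError: '>' not supported between instances of 'int' and 'str'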

--




[issue38060] precedence (relational, logical operator)not working with single value

2019-09-09 Thread Tim Peters


Tim Peters  added the comment:

BTW, the docs also spell out that "and" and "or" ALWAYS evaluate their left 
operand before their right operand, and don't evaluate the right operand at all 
if the result of evaluating the left operand is true (for "or") or false (for 
"and").  So, e.g., 

result = (EXPR1) or (EXPR2)

acts like

result = EXPR1
if not bool(result):
    result = EXPR2

--

___
Python tracker 
<https://bugs.python.org/issue38060>
___



[issue38060] precedence (relational, logical operator) not working with single value

2019-09-09 Thread Tim Peters


Tim Peters  added the comment:

@sangeetamchauhan, the reply you got in the image you attached was in error - 
kind of.  Section "6.16. Operator precedence" defines Python's operator 
precedence:

https://docs.python.org/3/reference/expressions.html#index-92

"""
The following table summarizes the operator precedence in Python, from lowest 
precedence (least binding) to highest precedence (most binding).
"""

As you can see there, "and" and "or" are very near the top of the table, so 
bind very weakly - almost anything else on either side gets evaluated first.  
In particular, all comparison operators bind more tightly than "and" and "or".

It's the bitwise operators (| & ^) that bind more tightly than comparisons.

I asked at the start "What do you expect?" but you never answered.  You just 
keep repeating that it's wrong.  Sorry, but I still have no idea what you think 
"should" happen instead.

As I also said the start,

9 or 7 > "str"

groups as

9 or (7 > "str")

exactly as the operator precedence table says it should, and returns 9, exactly 
as the docs for "or" say it should do.

What about that do you think is wrong?  Please be very specific, and back your 
answer with a reference to what the docs actually say.

--

___
Python tracker 
<https://bugs.python.org/issue38060>
___



[issue38060] precedence (relational, logical operator) not working with single value

2019-09-08 Thread Tim Peters


Tim Peters  added the comment:

I'm sorry you're not satisfied with the answer, but I'm a bona fide expert on 
this and you're not going to get anyone to agree with your confusion here ;-)

But the bug tracker is not the right place for tutorials.  Please take this up 
on, e.g., the Python mailing list instead.  There is no bug here.

One hint:  in

EXPR1 or EXPR2

bool(EXPR1) is _always_ evaluated first.  It makes no difference at all to this 
whether EXPR1 is "x < y" or "1 + 2 + 3" or "9".  Trying to make a special case 
out of "a numerical value" is entirely in your own head:  the language does 
nothing of the sort.

   9 or (ANYTHING_AT_ALL)

always results in 9, for the same reason

   4+5 or (ANYTHING_AT_ALL)

always results in 9.  Whether the left-hand expression evaluating to 9 is a 
literal or a complex expression is irrelevant.

In the same way, e.g.,

   x = 9
and
   x = 4+5

both bind x to 9.  A numeric literal is just as much "an expression" as any 
other kind of expression.  "Single value" has nothing to do with this.

--

___
Python tracker 
<https://bugs.python.org/issue38060>
___



[issue38060] precedence (relational, logical operator) not working with single value

2019-09-08 Thread Tim Peters


Tim Peters  added the comment:

It's working fine.  What do you expect?  For example,

9 or 7 > "str"

groups as

9 or (7 > "str")

9 is evaluated for "truthiness" first, and since it's not 0 it's considered to 
be true.  That's enough to determine the result of "or", so (7 > "str") isn't 
evaluated at all.  9 is the result.

This is all working as designed and as documented, so I'm closing this report.

--
nosy: +tim.peters
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue38060>
___



[issue24416] Have date.isocalendar() return a structseq instance

2019-09-07 Thread Tim Peters


Change by Tim Peters :


--
assignee:  -> rhettinger

___
Python tracker 
<https://bugs.python.org/issue24416>
___



[issue24416] Have date.isocalendar() return a structseq instance

2019-09-06 Thread Tim Peters


Tim Peters  added the comment:

I favor making this a structseq, primarily based on Paul's attempt to find 
actual use cases, which showed that named member access would be most useful 
most often.  I have no intuition for that myself, because while I wrote the 
original functions here, I've never used them and never will ;-)  But if I ever 
did, I'd probably want the ISO week number, and would appreciate not having to 
remember which meaningless index means "week".

Don't care about moving pickles to older versions.

Don't care about the putative speed hit - it's still measured in nanoseconds, 
so can still create over a million per second.  In apps with a large number of 
dates, they're typically not _calculated_, but read up from a relatively 
snail-like database or network connection.  I doubt anyone would notice if 
`.isocalendar()` got 10x slower.

Note:  I'm unassigning myself, because I have nothing else to say :-)

--
assignee: tim.peters -> 

___
Python tracker 
<https://bugs.python.org/issue24416>
___



[issue38033] Use After Free: PyObject_Free (valgrind)

2019-09-04 Thread Tim Peters


Tim Peters  added the comment:

You're probably chasing ghosts ;-)  Please read about what needs to be done to 
use valgrind successfully with Python:

https://github.com/python/cpython/blob/master/Misc/README.valgrind

--
nosy: +tim.peters
title: Use After Free: PyObject_Free -> Use After Free: PyObject_Free (valgrind)

___
Python tracker 
<https://bugs.python.org/issue38033>
___



[issue38012] Python Function min is case sensitive?

2019-09-02 Thread Tim Peters


Tim Peters  added the comment:

> Also, THE min("Infinity") is priniting "I".  Practically,
> the minimum value is "e" right ?

Sorry, I have no idea what "practically" means to you.  It's just a fact that 
all uppercase ASCII letters compare less than all lowercase ASCII letters:

>>> 'I' < 'e'
True
>>> 'Z' < 'a'
True

This isn't just Python - it's an industry-wide standard.  Look at any reference 
for the ASCII character set; for example, here:

http://www.asciitable.com/

Try posting about what you're trying to accomplish on the Python mailing list?  
The issue tracker isn't the right place for tutorials.

--

___
Python tracker 
<https://bugs.python.org/issue38012>
___



[issue38012] Python Function min is case sensitive?

2019-09-02 Thread Tim Peters


Tim Peters  added the comment:

This has nothing in particular do with `min()`.  As strings, 'I' < 'i', and 'F' 
< 'I'.  For example,

>>> 'I' < 'i'
True

>>> sorted("InFinity")
['F', 'I', 'i', 'i', 'n', 'n', 't', 'y']
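
If a case-insensitive comparison is what's wanted, the `key` argument is the 
way to spell that:

>>> min("InFinity", key=str.casefold)
'F'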

That's all working as intended and as documented, so I'm closing this report.

--
nosy: +tim.peters -xtreak
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue38012>
___



[issue37992] Change datetime.MINYEAR to allow for negative years

2019-08-31 Thread Tim Peters


Tim Peters  added the comment:

This just isn't going to happen.  There's no agreement to be had.  For example, 
the proleptic Gregorian calendar _does_ have a "year 0", and so also does ISO 
8601.

Version 1.0 of the XML schema spec did not have a year 0, but _claimed_ to be 
compatible with ISO 8601 - which it wasn't in this respect.  So version 1.1 
added year 0, despite then being incompatible with version 1.0 of itself in 
this respect[1].

> I am not sure that adding this feature would be worth
> the support burden it would bring.

I am:  it wouldn't ;-)  

[1] https://en.wikipedia.org/wiki/Astronomical_year_numbering

--
nosy: +tim.peters

___
Python tracker 
<https://bugs.python.org/issue37992>
___



[issue37454] Clarify docs for math.log1p()

2019-08-23 Thread Tim Peters


Tim Peters  added the comment:

> Sometimes you guys make me feel dumb as a rock.

I expect we all do that favor for each other at times ;-)

--

___
Python tracker 
<https://bugs.python.org/issue37454>
___



[issue37810] ndiff reports incorrect location when diff strings contain tabs

2019-08-21 Thread Tim Peters


Change by Tim Peters :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue37810>
___



[issue37893] pow() should disallow inverse when modulus is +-1

2019-08-20 Thread Tim Peters


Tim Peters  added the comment:

I don't have a problem with the trivial ring - I wasn't being that high-minded 
;-)  I was testing a different inverse algorithm, and in the absence of errors 
checked that

minv(a, m) * a % m == 1

for various a and m >= 0.  Of course that failed using pow(a, -1, m) instead 
when m=1.  Offhand, I couldn't imagine a plausible use case for finding an 
inverse mod 1 - and still can't ;-)  In abstract algebra, sure - but for 
concrete numerical computation?  Oh well.

In any case, testing

(minv(a, m) * a - 1) % m == 0

instead appears to work for all non-error cases.
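
For concreteness, here's that check in the 3.8 spelling - pow(a, -1, m) 
raises ValueError when no inverse exists:

def check(a, m):
    inv = pow(a, -1, m)   # modular inverse, new in 3.8
    assert (inv * a - 1) % m == 0

check(3, 7)   # fine: the inverse is 5, and 5*3 - 1 == 14 == 2*7
check(2, 1)   # also passes today: pow(2, -1, 1) == 0, and (0*2 - 1) % 1 == 0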

--

___
Python tracker 
<https://bugs.python.org/issue37893>
___



[issue37893] pow() should disallow inverse when modulus is +-1

2019-08-20 Thread Tim Peters


Tim Peters  added the comment:

Mark, to save you the hassle, I'm closing this myself now.  Thanks for the 
feedback!

--
assignee:  -> tim.peters
resolution:  -> not a bug
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue37893>
___



[issue37893] pow() should disallow inverse when modulus is +-1

2019-08-20 Thread Tim Peters


Tim Peters  added the comment:

Yup, you have a point there! :-)  I guess I'm just not used to 0 being a 
multiplicative identity.

Don't know what other systems do.  Playing with Maxima, modulo 1 it seems to 
think 0 is the inverse of everything _except_ for 0.  `inv_mod(0, 1)` returns 
`false`.  Modulo -1, everything I tried returned `false`.

Which makes less sense to me.

Fine by me if you want to close this as "not a bug".

--

___
Python tracker 
<https://bugs.python.org/issue37893>
___



[issue37893] pow() should disallow inverse when modulus is +-1

2019-08-20 Thread Tim Peters


Tim Peters  added the comment:

@Batuhan, fine by me if you want to take this on!  It should be relatively 
easy.  But Mark wrote the code, so it's really up to him.  While I doubt this, 
he may even argue that it's working correctly already ;-)

--

___
Python tracker 
<https://bugs.python.org/issue37893>
___



[issue37893] pow() should disallow inverse when modulus is +-1

2019-08-20 Thread Tim Peters


New submission from Tim Peters :

For example, these should all raise ValueError instead:

>>> pow(2, -1, 1)
0
>>> pow(1, -1, 1)
0
>>> pow(0, -1, 1)
0
>>> pow(2, -1, -1)
0
>>> pow(1, -1, -1)
0
>>> pow(0, -1, -1)
0

--
components: Library (Lib)
messages: 350015
nosy: mark.dickinson, tim.peters
priority: normal
severity: normal
stage: needs patch
status: open
title: pow() should disallow inverse when modulus is +-1
type: behavior
versions: Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue37893>
___



[issue37863] Speed up hash(fractions.Fraction)

2019-08-18 Thread Tim Peters


Tim Peters  added the comment:

For posterity:

"Modular Inverse Algorithms Without Multiplications for Cryptographic 
Applications"
Laszlo Hars
https://link.springer.com/article/10.1155/ES/2006/32192

"""
On the considered computational platforms for operand lengths used in 
cryptography, the fastest presented modular inverse algorithms need about twice 
the time of modular multiplications, or
even less.
"""

Hars considered a great many algorithm variations, and wrote up about ten of 
the best.  Several things surprised me here.  Most surprising:  under some 
realistic assumptions, _the_ best turned out to be a relatively simple 
variation of Euclid extended GCD.  Nutshell:  during a step, let the difference 
between the bit lengths of the then-current numerator and denominator be `f`.  
Then look at a few leading bits to pick whichever of `s` in {f-1, f, f+1} will 
clear the largest handful of leading bits in `numerator - (denominator << s)` 
(this test is meant to be fast, not exact - it's a refinement of an easier 
variant that just picks s=f).  The "quotient" in this step is then 2**s, and 
the rest is shifting and adding/subtracting.  No division or multiplication is 
needed.

This has a larger expected iteration count than some other methods, but, like 
regular old Euclid GCD, needs relatively little code per iteration.
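
Here's a gcd-only sketch of the easier variant (the one that just picks s=f), 
as I read the paper - extending it to carry inverse coefficients is the part 
nobody gets for free:

def shift_gcd(a, b):
    # For nonnegative ints.  Each step subtracts a shifted copy of the
    # smaller number, chosen to knock out the larger number's leading bit.
    while b:
        if a < b:
            a, b = b, a
            continue
        s = a.bit_length() - b.bit_length()
        r = a - (b << s)
        if r < 0:             # overshot; s > 0 here, so back off one shift
            r = a - (b << (s - 1))
        a = r
    return a

The "quotient" at each step is a power of 2, so shifts and subtracts are all 
it needs.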

--

___
Python tracker 
<https://bugs.python.org/issue37863>
___



[issue37863] Speed up hash(fractions.Fraction)

2019-08-17 Thread Tim Peters


Tim Peters  added the comment:

Some random notes:

- 1425089352415399815 appears to be derived from using the golden ratio to 
contrive a worst case for the Euclid egcd method.  Which it's good at :-)  Even 
so, the current code runs well over twice as fast as when replacing the 
pow(that, -1, P) with pow(that, P-2, P).

- Using binary gcd on the same thing requires only 46 iterations - and, of 
course, no divisions at all.  So that could be a big win.  There's no possible 
way to get exponentiation to require less than 60 iterations, because it 
requires that many squarings just to reach the high bit of P-2.  However, I 
never finished working out how to extend binary gcd to return inverses too.

- All cases are "bad" for pow(whatever, P-2, P) because P-2 has 60 bits set.  
So we currently need 60 multiplies to account for those, in addition to 60 
squarings to reach P-2's high bit.  A significant speedup could probably have 
been gotten just by rewriting whatever**(P-2) as

(whatever ** 79511827903920481) ** 29

That is, express P-2 as its prime factorization.  There are 28 zero bits in the 
larger factor, so would save 28 multiply steps right there.  Don't know how far 
that may yet be from an optimal addition chain for P-2.

- The worst burden of the P-2-power method is that there's no convenient way to 
exploit that % P _can_ be very cheap, because P has a very special value.  The 
power method actually needs to divide by P on each step.  As currently coded, 
the egcd method never needs to divide by P (although _in_ the egcd part it 
divides narrower & narrower numbers < P).

- Something on my todo list forever was writing an internal routine for 
squaring, based on that (a+b)**2 = a**2 + 2ab + b**2.  That gives 
Karatsuba-like O() speedup but with simpler code, enough simpler that it could 
probably be profitably applied even to a relatively small argument.

Which of those do I intend to pursue?  Probably none :-(  But I confess to being 
_annoyed_ at giving up on extending binary gcd to compute an inverse, so I may 
at least do that in Python before I die ;-)

--

___
Python tracker 
<https://bugs.python.org/issue37863>
___



[issue29535] datetime hash is deterministic in some cases

2019-08-16 Thread Tim Peters


Tim Peters  added the comment:

I'm with Mark:  leave numeric hashes alone.  There's no reason to change them, 
and in addition to what Mark wrote it's a positively Good Thing that `hash(i) 
== i` for all sufficiently small ints.  Not only is that efficient to compute, 
it guarantees there are no collisions _at all_ in the common case of a dict 
indexed by a contiguous range of integers.
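
For example:

>>> [hash(i) for i in range(6)]
[0, 1, 2, 3, 4, 5]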

The _purpose_ of `hash()` in Python isn't to create an illusion of randomness, 
it's to support efficient dicts and sets.  Mucking with string hashes was a 
pragmatic hack to alleviate concerns about DOS attacks specific to 
string-indexed dicts.  A better technical solution to that would have been to 
introduce a different flavor of dict with guaranteed good worst-case behaviors, 
but, pragmatically, a great many users would never realize that's what they 
really wanted, and it wouldn't have helped pre-existing code at all.

But there's no reason to spread that hack beyond the use cases that needed it, 
and much reason not to.

--
nosy: +tim.peters

___
Python tracker 
<https://bugs.python.org/issue29535>
___



[issue37863] Speed up hash(fractions.Fraction)

2019-08-15 Thread Tim Peters


Tim Peters  added the comment:

Mark, I did just a little browsing on this.  It seems it's well known that egcd 
beats straightforward exponentiation for this purpose in arbitrary precision 
contexts, for reasons already sketched (egcd needs narrower arithmetic from the 
start, benefits from the division narrowing on each iteration, and since the 
quotient on each iteration usually fits in a single "digit" the multiplication 
by the quotient goes fast too).
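
For reference, the basic shape (a bare-bones sketch, not CPython's code):

def minv(a, m):
    # Extended Euclid: find x with (x * a) % m == 1, or raise.
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r:
        q = old_r // r                  # usually fits in one "digit"
        old_r, r = r, old_r - q * r     # operands narrow as we go
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("base is not invertible for the given modulus")
    return old_s % m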

But gonzo implementations switch back to exponentiation, using fancier 
primitives like Montgomery multiplication.

As usual, I'm not keen on bloating the code for "state of the art" giant int 
algorithms, but suit yourself!  The focus in this PR is dead simple spelling 
changes with relatively massive payoffs.

--

___
Python tracker 
<https://bugs.python.org/issue37863>
___



[issue37863] Speed up hash(fractions.Fraction)

2019-08-15 Thread Tim Peters


Tim Peters  added the comment:

Well, details matter ;-)  Division in Python is expensive.  In the 
exponentiation algorithm each reduction (in general) requires a 122-by-61 bit 
division.  In egcd, after it gets going nothing exceeds 61 bits, and across 
iterations the inputs to the division step get smaller each time around.

So, e.g., when Raymond tried a Fraction with denominator 5813, "almost all" the 
egcd divisions involved inputs each with a single internal Python "digit".  But 
"almost all" the raise-to-the-(P-2) divisions involve a numerator with 5 
internal digits and 3 in the denominator.  Big difference, even if the total 
number of divisions is about the same.

--

___
Python tracker 
<https://bugs.python.org/issue37863>
___



[issue37863] Speed up hash(fractions.Fraction)

2019-08-15 Thread Tim Peters


Tim Peters  added the comment:

Why I expected a major speedup from this:  the binary exponentiation routine 
(for "reasonably small" exponents) does 30 * ceiling(exponent.bit_length() / 
30) multiply-and-reduces, plus another for each bit set in the exponent.  
That's a major difference between exponents of bit lengths 61 
((P-2).bit_length()) and 1 ((1).bit_length()).  Indeed, I bet it would pay in 
`long_pow()` to add another test, under the `if (Py_SIZE(b) < 0)` branch, to 
skip the exponentiation part entirely when b is -1.  `long_invmod()` would be 
the end of it then.  Because I expect using an exponent of -1 for modular 
inverse will be overwhelmingly more common than using any other negative 
exponent with a modulus.

--

___
Python tracker 
<https://bugs.python.org/issue37863>
___



[issue37863] Speed hash(fractions.Fraction)

2019-08-14 Thread Tim Peters


New submission from Tim Peters :

Recording before I forget.  These are easy:

1. As the comments note, cache the hash code.

2. Use the new (in 3.8) pow(denominator, -1, modulus) to get the inverse 
instead of raising to the modulus-2 power.  Should be significantly faster.  If 
not, the new "-1" implementation should be changed ;-)  Will require catching 
ValueError in case the denom is a multiple of the modulus.

3. Instead of multiplying by the absolute value of the numerator, multiply by 
the hash of the absolute value of the numerator.  That changes the 
multiplication, and the subsequent modulus operation, from unbounded-length 
operations to short bounded-length ones.  Hashing the numerator on its own 
should be significantly faster, because the int hash doesn't require any 
multiplies or divides regardless of how large the numerator is.

None of those should change any computed results.
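
Items 2 and 3 combine roughly like so (a sketch of the documented rational 
hash rules using sys.hash_info names - not the exact Fraction code, and the 
real thing would also cache the result per item 1):

import sys
P = sys.hash_info.modulus       # 2**61 - 1 on 64-bit builds

def fraction_hash(numerator, denominator):
    try:
        dinv = pow(denominator, -1, P)          # item 2
    except ValueError:
        # denominator is a multiple of P
        return sys.hash_info.inf if numerator >= 0 else -sys.hash_info.inf
    result = hash(abs(numerator)) * dinv % P    # item 3: bounded-width ops
    if numerator < 0:
        result = -result
    return -2 if result == -1 else result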

--
messages: 349789
nosy: tim.peters
priority: low
severity: normal
stage: needs patch
status: open
title: Speed hash(fractions.Fraction)
type: performance
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue37863>
___



[issue37831] bool(~True) == True

2019-08-12 Thread Tim Peters


Tim Peters  added the comment:

BTW, I should clarify that I think the real "sin" here was making bool a 
subclass of int to begin with.  For example, there's no sane reason at all for 
bools to support division, and no reason for a distinct type not to define "~" 
any way it feels like.  Advertised inheritance is usually a Bad Idea ;-)

--

___
Python tracker 
<https://bugs.python.org/issue37831>
___



[issue37831] bool(~True) == True

2019-08-12 Thread Tim Peters


Tim Peters  added the comment:

I don't agree that "~" doesn't "work".  If people are reading it as "not", 
they're in error.  The Python docs say ~x

means

the bits of x inverted

and that's what it does.  There's no sense in which it was _intended_ to be 
logical negation, no more in Python than in C (C has the distinct unary prefix 
"!" operator for truthiness negation, and, as you note, Python has "not").

It's educational ;-) to learn how analogies between ints and bools can break 
down.  For logical negation in the sense they want here, it's not "~x" they 
want but "~x & 1" - that is, if someone insists on using an inappropriate 
operator.
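
Concretely:

>>> ~True, ~True & 1
(-2, 0)
>>> ~False, ~False & 1
(-1, 1)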

--

___
Python tracker 
<https://bugs.python.org/issue37831>
___



[issue37831] bool(~True) == True

2019-08-12 Thread Tim Peters


Tim Peters  added the comment:

Mark, isn't `int()` the obvious way "to convert an integer-like thing to an 
actual int"?

>>> int(True)
1
>>> int(False)
0

For the rest, I'm -True on making ~ do something magical for bools inconsistent 
with what it does for ints.  "is-a" is a promise.

--

___
Python tracker 
<https://bugs.python.org/issue37831>
___



[issue37807] Make hash() return a non-negative number

2019-08-11 Thread Tim Peters


Tim Peters  added the comment:

I agree:  we "shouldn't have" documented anything about hash codes beyond the 
invariants needed to guarantee they work for their intended purpose, chiefly 
that x == y implies hash(x) == hash(y).

Which leads to your other question ;-)  That invariant is tricky to maintain 
across a pile of distinct but comparable numeric types!  Whatever we return for 
an int needs also to be what's returned for a float that happens to have the 
same value, and whatever we return for a float needs also to be what's returned 
for a fraction.Fraction with the same value, and so on.

So Python uses a single numeric hashing algorithm defined on mathematical 
rationals, described here:

https://docs.python.org/3.8/library/stdtypes.html#hashing-of-numeric-types

Presumably that's documented so that people defining their own numeric types 
have a clue about how to implement compatible hash codes.

Anyway, that algorithm uses a large fixed prime P (different on 32- and 64-bit 
boxes), and uses operations modulo P.  That's why the full bit width isn't 
used.  And not just any prime P, it picks one for which computing the remainder 
mod P is especially efficient.  2**61 - 1 is as good as it gets on 64 bit boxes.
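
Both halves of that are easy to poke at:

>>> from fractions import Fraction
>>> hash(2.5) == hash(Fraction(5, 2))
True
>>> import sys
>>> sys.hash_info.modulus == 2**61 - 1   # on a 64-bit build
True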

I didn't pick this algorithm (I suspect Mark did), but I certainly approve of 
it.  It's clever and general enough to apply to any plausible subset-of-real 
numeric type short of the constructive reals (where equality isn't even 
theoretically decidable, so "x == y implies ..." can't get off the ground ;-) ).

--

___
Python tracker 
<https://bugs.python.org/issue37807>
___



[issue37807] Make hash() return a non-negative number

2019-08-10 Thread Tim Peters


Tim Peters  added the comment:

Well, I have no code that would benefit from this change.  What's the point?  
Sure, I use _parts_ of hash codes at times, but, e.g.,

index = the_hash_code & SOME_LOW_BIT_MASK

in Python couldn't care less about the sign of `the_hash_code` - it returns a 
non-negative int regardless.  The same applies if, e.g., SOME_LOW_BIT_MASK is 
`(1 << sys.hash_info.hash_bits) - 1`, giving a full-width non-negative hash 
without losing a bit of "entropy".  But I've never felt a need for the latter.  
Likewise for doing, e.g.,

index = the_hash_code % SOME_POSITIVE_PRIME

in Python.  % takes the sign (positive here) of the modulus in Python.
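
For a negative hash code, both spellings already deliver non-negative results:

>>> -7 % 5       # sign follows the modulus
3
>>> -7 & 0xff    # masking a negative int yields a non-negative int too
249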

So this _looks_ like pointlessly disruptive code churn to me.  What killer use 
case am I missing?

--

___
Python tracker 
<https://bugs.python.org/issue37807>
___



[issue37787] Minimum denormal or ** bug

2019-08-07 Thread Tim Peters


Tim Peters  added the comment:

Since this depends on the platform libm implementation of pow(), I'm closing 
this as "won't fix".

Steve, on the chance you're serious ;-) , there are implementations of the 
"constructive reals", which indeed act like infinite-precision floats.  But 
they tend to be very slow, and, e.g., testing two of them for equality is, in 
general, undecidable even in theory.  Here's one such:

https://www.chiark.greenend.org.uk/~sgtatham/spigot/

--
resolution:  -> wont fix
stage:  -> resolved
status: open -> closed
versions:  -Python 3.7

___
Python tracker 
<https://bugs.python.org/issue37787>
___



[issue37004] SequenceMatcher.ratio() noncommutativity not well-documented

2019-08-07 Thread Tim Peters


Change by Tim Peters :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed
versions: +Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue37004>
___



[issue37787] Minimum denormal or ** bug

2019-08-07 Thread Tim Peters


Tim Peters  added the comment:

Python delegates exponentiation with a Python float result to the platform C's 
double precision `pow()` function.  So this is just what the Windows C pow(2.0, 
-1075.0) returns.  All native floating point operations are subject various 
kinds of error, and this is one.

>>> import math
>>> math.pow(2.0, -1075.0)
5e-324
>>> math.pow(2.0, -1074.0) # same thing
5e-324

To avoid intermediate libm rounding errors, use ldexp instead:

>>> math.ldexp(1, -1074) # 2.0 ** -1074
5e-324
>>> math.ldexp(1, -1075) # 2.0 ** -1075
0.0

--
nosy: +tim.peters

___
Python tracker 
<https://bugs.python.org/issue37787>
___



[issue37727] error past 15 places

2019-07-30 Thread Tim Peters


Tim Peters  added the comment:

You'll see much the same in every programming language that supports your 
computer's floating-point hardware.  Start by reading this gentle introduction:

https://docs.python.org/3/tutorial/floatingpoint.html
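
The canonical two-liner - and the story is the same in C, Java, or anything 
else using binary doubles:

>>> 0.1 + 0.2
0.30000000000000004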

This bug tracker isn't a place for tutorials, though, and understanding 
floating-point deeply is a massive undertaking.

--
nosy: +tim.peters

___
Python tracker 
<https://bugs.python.org/issue37727>
___



[issue37537] Compute allocated blocks in _Py_GetAllocatedBlocks()

2019-07-10 Thread Tim Peters


Change by Tim Peters :


--
nosy: +tim.peters

___
Python tracker 
<https://bugs.python.org/issue37537>
___



[issue37543] Optimize pymalloc for non PGO build

2019-07-10 Thread Tim Peters


Change by Tim Peters :


--
nosy: +tim.peters

___
Python tracker 
<https://bugs.python.org/issue37543>
___



[issue37500] 3.8.0b2 no longer optimizes away "if 0:" ?

2019-07-05 Thread Tim Peters


Tim Peters  added the comment:

> we could say that it does not matter if
>
> def f():
>   if 0:
> yield
>
> should be or not a generator

Slippery slope arguments play better if they're made _before_ a decade has 
passed after the slope was fully greased.

There's nothing accidental about how `yield` behaves here.  I wrote the 
original generator PEP, and very deliberately added these to its doctests (in 
test_generators.py):

"""
>>> def f():
...     if 0:
...         yield
>>> type(f())
<class 'generator'>

>>> def f():
...     if 0:
...         yield 1
>>> type(f())
<class 'generator'>

>>> def f():
...     if "":
...         yield None
>>> type(f())
<class 'generator'>
"""

Any alternate implementation that decided that whether "it's a generator" 
depended on optimizations would likely fail at least one of those tests.  It 
was intended to be solely a compile-time decision, based purely on syntactic 
analysis.

So I've seen no reason to believe - or expect - that the damage here goes - or 
will ever go - deeper than that some dead code isn't raising compile-time 
errors in some rare cases (statements allowed only at function level being used 
in dead code outside function level).

Which should be fixed, if possible.  But, as damage goes - sorry! - it just 
seems minimal to me.

--

___
Python tracker 
<https://bugs.python.org/issue37500>
___



[issue37500] 3.8.0b2 no longer optimizes away "if 0:" ?

2019-07-05 Thread Tim Peters


Tim Peters  added the comment:

> Using jumps is not removing the optimization
> entirely, is just a weaker and more incomplete
> way of doing the same. 

Sorry, I'm afraid I have no idea what that means.  The generated code before 
and after was wildly different, as shown in Ned's original report here.  In 
what possible sense was his "if 0:" being "optimized" if it generated code to 
load 0 onto the stack, then POP_JUMP_IF_FALSE, and then jumped over all the 
code generated for the dead block?

The generated code after is nearly identical if I replace his "if 0:" with "if 
x:" (which the compiler takes to mean a global or builtin about whose 
truthiness it knows nothing at all).  Indeed, the only difference in the byte 
code is that instead of doing a LOAD_CONST to load 0, it does a LOAD_GLOBAL to 
load x.

So, to my eyes, absolutely nothing of the optimization remained.  At least not 
in the example Ned posted here.

--

___
Python tracker 
<https://bugs.python.org/issue37500>
___



[issue37500] 3.8.0b2 no longer optimizes away "if 0:" ?

2019-07-05 Thread Tim Peters


Tim Peters  added the comment:

> This is the expected result of fixing a bug that has been
> open since 2008

It's the expected result of fixing a bug _by_ eliminating the optimization 
entirely.  It's not an expected result of merely fixing the bug.  It's quite 
obviously _possible_ to note that

if 0:
    return

is forbidden at module level, yet not emit any bytecode for it.  Indeed, it's 
so obvious that everyone here did it in their head without even noticing they 
had done so ;-)

--
nosy: +tim.peters

___
Python tracker 
<https://bugs.python.org/issue37500>
___



[issue37500] 3.8.0b2 no longer optimizes away "if 0:" ?

2019-07-05 Thread Tim Peters


Tim Peters  added the comment:

There's "correctness" that matters and "correctness" that's merely pedantic ;-) 
 CPython has acted the current way for about 15 years (since 2.4 was released), 
and this is the first time anyone has raised an objection.  That's just not 
enough harm to justify tossing out well over a decade of "facts on the ground". 
 Practicality beats purity.

I, of course, have no objection to detecting syntax errors in dead code.  It's 
the disruptive _way_ that's being done that's objectionable.  I don't have a 
better way in mind, nor the bandwidth to try to dream one up.  But, at heart, 
I'd rather it never "get fixed" than get fixed at this cost.

--

___
Python tracker 
<https://bugs.python.org/issue37500>
___



[issue37500] 3.8.0b2 no longer optimizes away "if 0:" ?

2019-07-05 Thread Tim Peters


Tim Peters  added the comment:

I hate this change :-(  The code generated for something like this today:

def f():
    if 0:
        x = 1
    elif 0:
        x = 2
    elif 1:
        x = 3
    elif 0:
        x = 4
    else:
        x = 5
    print(x)

is the same as for:

def f():
    x = 3
    print(x)

No tests or jumps at all.  That made the optimization an extremely efficient, 
and convenient, way to write code with the _possibility_ of using different 
algorithms by merely flipping a 0 and 1 or two in the source code, with no 
runtime costs at all (no cycles consumed, no bytecode bloat, no useless 
unreferenced co_consts members, ...).  Also a zero-runtime-cost way to 
effectively comment out code blocks (e.g., I've often put expensive debug 
checking under an "if 1:" block).
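
Anyone who wants to see what their interpreter emits can look directly (the 
output differs across versions - which is the point here):

import dis

def f():
    if 0:
        x = 1
    print(x)

dis.dis(f)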

--
nosy: +tim.peters

___
Python tracker 
<https://bugs.python.org/issue37500>
___



[issue37454] Clarify docs for math.log1p()

2019-07-01 Thread Tim Peters


Tim Peters  added the comment:

Mark's analysis is spot-on - good eye :-)

Here under 3.7.3 [MSC v.1916 64 bit (AMD64)] on win32, in the original script 
it makes no difference at all for negative "small x" (where, as Mark said, `1 - 
random.random()` is exactly representable):

Counter({'equal': 50058, 'offset_log': 41936, 'regular_log': 8006})
Counter({'equal': 100000})

Changing to Mark's 1e-3 * random.random():

Counter({'offset_log': 99958, 'equal': 42})
Counter({'offset_log': 99884, 'equal': 116})

And going on to use 1e-6 instead:

Counter({'offset_log': 100000})
Counter({'offset_log': 100000})

--

___
Python tracker 
<https://bugs.python.org/issue37454>
___



[issue37434] Segfault in typeobject.c at _PyObject_GC_UNTRACK(type)

2019-06-29 Thread Tim Peters


Tim Peters  added the comment:

I haven't used protobuf, but it's _generally_ true that crashes that occur for 
the first time in the presence of C or C++ extension modules are due to subtle 
(or not so subtle) mistakes in using the sometimes-delicate Python C API.  So 
it's too soon to play "pin the blame on the donkey" here ;-)  But resolving it 
really needs someone who knows what protobuf is doing.

--
nosy: +tim.peters

___
Python tracker 
<https://bugs.python.org/issue37434>
___



[issue37448] obmalloc: radix tree for tracking arena address ranges

2019-06-29 Thread Tim Peters


Change by Tim Peters :


--
nosy: +tim.peters

___
Python tracker 
<https://bugs.python.org/issue37448>
___



[issue37295] Possible optimizations for math.comb()

2019-06-17 Thread Tim Peters


Tim Peters  added the comment:

In real life, I expect 99.999%+ of calls will be made with small arguments, so 
(1) is worth it.  I like Mark's suggestion to use uint64_t so the acceptable 
range doesn't depend on platform.  At least in the world I live in, 32-bit 
boxes are all but extinct anyway.
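
In Python, the small-argument fast path amounts to something like this (a 
hypothetical sketch - the C version would keep `result` in a uint64_t and 
fall back to the general code on overflow):

def comb_small(n, k):
    assert 0 <= k <= n
    k = min(k, n - k)
    result = 1
    for i in range(1, k + 1):
        # exact at every step: the partial product here is C(n-k+i, i)
        result = result * (n - k + i) // i
    return result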

I honestly wouldn't bother with more than that.  It's fun to optimize 
giant-integer algorithms with an ever-ballooning number of different clever 
approaches, but Python is an odd place for that.  People looking for blazing 
fast giant-int facilities generally want lots & lots of them, so are better 
steered toward, e.g., gmp.  That's its reason for existing.

For example, their implementation of binomial coefficients uses special 
division algorithms exploiting that the quotient is exact (no remainder):

https://gmplib.org/manual/Exact-Division.html#Exact-Division

There's just no end to potential speedups.  But in Python, I expect a vast 
majority of users will be happy if they get the right answer for the number of 
possible poker hands ;-)

Oh ya - some smart kid will file a bug report about the inability to interrupt 
the calculation of a billion-bit result, so (4) is inevitable.  Me, I'd wait 
for them to complain, and encourage _them_ to learn something useful by writing 
a patch to fix it :-)

--

___
Python tracker 
<https://bugs.python.org/issue37295>
___



[issue32846] Deletion of large sets of strings is extra slow

2019-06-15 Thread Tim Peters


Tim Peters  added the comment:

Thanks, Terry!  Based on your latest results, "quadratic time" isn't plausible 
here anymore, so I'm closing this.  Nasty cache effects certainly played a 
role, but they were just a flea on the dog ;-)

--
resolution:  -> fixed
stage: commit review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue32846>
___



[issue32846] Deletion of large sets of strings is extra slow

2019-06-14 Thread Tim Peters


Tim Peters  added the comment:

Raymond, please read my very recent comment one above yours.  A (overall) 
quadratic-time algorithm (O(A**2) where A is the number of arenas) in 
obmalloc.c is (to my eyes) probably the _primary_ cause of the sloth here.  
That's been fixed for 3.8, but I don't have enough RAM even to run Terry's test 
code to confirm it.

I can confirm that there's no quadratic-time behavior anymore deleting large 
sets of strings, but only until I run out of RAM.  Neither is the behavior 
linear, but it's much closer to linear than quadratic now.

If someone with more RAM can confirm this for larger sets, then fine by me if 
this is closed again.

--

___
Python tracker 
<https://bugs.python.org/issue32846>
___



[issue32846] Deletion of large sets of strings is extra slow

2019-06-14 Thread Tim Peters


Change by Tim Peters :


--
stage: resolved -> commit review

___
Python tracker 
<https://bugs.python.org/issue32846>
___



[issue32846] Deletion of large sets of strings is extra slow

2019-06-14 Thread Tim Peters


Tim Peters  added the comment:

Looks likely that the _major_ cause of the quadratic-time delete behavior was 
that obmalloc used a linear-time method to keep its linked list of 
usable arenas sorted in order of number of free pools.  When a pool became 
unused, its arena's count of free pools increased by one, and then order was 
restored by moving the arena "to the right" in the linked list, one slot at a 
time.

When there were only a few hundred arenas, nobody noticed.  But programs with 
thousands of arenas could suffer very noticeable sloth, and programs with 
hundreds of thousands could appear to hang (could require days to complete 
deleting).

This was fixed in issue #37029, using a constant-time method to restore sorted 
order, scheduled for release in Python 3.8.

--
stage:  -> resolved
versions: +Python 3.8 -Python 3.9

___
Python tracker 
<https://bugs.python.org/issue32846>
___



[issue37257] obmalloc: stop simple arena thrashing

2019-06-12 Thread Tim Peters


Change by Tim Peters :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue37257>
___



[issue37257] obmalloc: stop simple arena thrashing

2019-06-12 Thread Tim Peters


Tim Peters  added the comment:


New changeset d1c85a27ea9fe70163cad3443d5e534d94f08284 by Tim Peters in branch 
'master':
bpo-37257:  obmalloc:  stop simple arena thrashing (#14039)
https://github.com/python/cpython/commit/d1c85a27ea9fe70163cad3443d5e534d94f08284


--

___
Python tracker 
<https://bugs.python.org/issue37257>
___



[issue37257] obmalloc: stop simple arena thrashing

2019-06-12 Thread Tim Peters


Change by Tim Peters :


--
keywords: +patch
pull_requests: +13903
stage: test needed -> patch review
pull_request: https://github.com/python/cpython/pull/14039

___
Python tracker 
<https://bugs.python.org/issue37257>
___



[issue37257] obmalloc: stop simple arena thrashing

2019-06-12 Thread Tim Peters


Change by Tim Peters :


--
type:  -> performance

___
Python tracker 
<https://bugs.python.org/issue37257>
___



[issue37257] obmalloc: stop simple arena thrashing

2019-06-12 Thread Tim Peters


New submission from Tim Peters :

Scenario:  all arenas are fully used.  A program then runs a loop like:

while whatever:
    p = malloc(n)
    ...
    free(p)

At the top, a new arena has to be created, and a single object is taken out of 
a single pool.  At the bottom, that object is freed, so the arena is empty 
again, and so is returned to the system.  Which cycle continues so long as the 
loop runs.  Very expensive.

This is "something like" what happens in practice, and has been reported 
anecdotally for years, but I've never been clear on _exactly_ what programs 
were doing in such cases.  Neil S pointed out this recent report here:

https://mail.python.org/pipermail/python-dev/2019-February/156529.html

Which may or may not be relevant.  Inada?

The "fix" proposed there:

-if (nf == ao->ntotalpools) {
+if (nf == ao->ntotalpools && ao != usable_arenas) {

doesn't appeal to me, because it can lead to states where obmalloc never 
returns empty arenas, no matter how many pile up.  For example, picture a 
thousand arenas each with just one used pool.  The head of the arena list 
becomes free, so is left alone, but moves to the end of the list (which is 
sorted by number of free pools).  Then the new head of the list becomes free, 
and ditto.  On & on.  We're left with a list containing a thousand wholly 
unused arenas.

So I suggest instead:

+if (nf == ao->ntotalpools && ao->nextarena != NULL) {

That is, instead of exempting the head of the list, exempt the tail of the 
list.  In the example above, the first 999 arenas are returned to the system, 
but the last one remains in the list for reuse.  In general, the change would 
allow for at most one empty arena in the list.

We can't in general predict the future, but this would be enough to stop the 
thrashing in the specific scenario given at the top, with no apparent serious 
failure modes (potentially "wasting" one arena is minor).

--
assignee: tim.peters
components: Interpreter Core
messages: 345407
nosy: inada.naoki, nascheme, tim.peters
priority: normal
severity: normal
stage: test needed
status: open
title: obmalloc:  stop simple arena thrashing
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue37257>
___



[issue37211] obmalloc: eliminate limit on pool size

2019-06-09 Thread Tim Peters


Change by Tim Peters :


--
assignee:  -> tim.peters

___
Python tracker 
<https://bugs.python.org/issue37211>
___


