Heads up! I found my PSF Board voting info in my gmail spam folder
today; looks like it was mailed out this morning.
[Skip Montanaro ]
> I subscribe to the python/cpython stuff on GitHub. I find it basically
> impossible to follow because of the volume.
> ...
> How (if at all) do people deal with this firehose of email? Am I the
> only person dumb enough to have tried?
My observation is that, over time, all
[Barry Scott and Steve Dower share tips for convincing Visual Studio
to show assembler without recompiling the file]
Thanks, fellows! That mostly ;-) worked. Problem remaining is that
breakpoints just didn't work. They showed up "visually", and in the table
of set breakpoints, but code went
[Guido]
> I don't think there's a way to do a PGO build from Visual Studio; but
> a command prompt in the repo can do it using `PCbuild\build.bat --pgo`.
> Just be patient with it.
Thanks! That worked, and was easy, and gave me an executable that runs
"// 10" at supernatural speed.
Alas, Visual
[Tim, incidentally notes that passing 10 as the divisor to inplace_divrem1()
is "impossibly fast" on Windows, consuming less than a third of the time
needed when passing seemingly any other divisor]
[Mark Dickinson, discovers much the same is true under other, but not all,
Linux-y builds, due to the
[Gregory P. Smith ]
> ...
> That only appears true in default boring -O2 builds. Use
> `./configure --enable-optimizations` and the C version is *much* faster
> than your asm one...
>
> 250ns for C vs 370ns for your asm divl one using old gcc 9.3 on my
> zen3 when compiled using
[Mark Dickinson ]
>> Division may still be problematic.
Heh. I'm half convinced that heavy duty bigint packages are so often
written in assembler because their authors are driven insane by trying
to trick C compilers into generating "the obvious" machine
instructions needed.
An alternative to HW
>> The reason for digits being a multiple of 5 bits should be revisited vs
>> its original intent
> I added that. The only intent was to make it easier to implement
> bigint exponentiation ...
That said, I see the comments in longintrepr.h note a stronger constraint:
"""
the marshal code
[Gregory P. Smith ]
> The reason for digits being a multiple of 5 bits should be revisited vs
> its original intent
I added that. The only intent was to make it easier to implement
bigint exponentiation while viewing the exponent as being in
base 32 (so as to chew up 5 bits at a time).
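The "chew up 5 bits at a time" idea can be sketched as fixed-window modular exponentiation. This is an illustrative reconstruction, not CPython's actual long_pow code:

```python
def window_pow(a, e, m, w=5):
    """Left-to-right fixed-window exponentiation: consume w exponent
    bits per step, i.e. view the exponent as a base-2**w number."""
    table = [1] * (1 << w)                 # table[i] == a**i % m
    for i in range(1, 1 << w):
        table[i] = table[i - 1] * a % m
    result = 1
    nwin = (e.bit_length() + w - 1) // w   # number of w-bit windows
    for i in range(nwin - 1, -1, -1):
        for _ in range(w):                 # shift the accumulator left w bits
            result = result * result % m
        result = result * table[(e >> (i * w)) & ((1 << w) - 1)] % m
    return result

assert window_pow(3, 12345, 1000) == pow(3, 12345, 1000)
```

With w=5 the precomputed table has 32 entries, which is where "base 32" comes from.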
I started writing up a SortedDict use case I have, but it's very
elaborate and I expect it would just end with endless pointless
argument about other approaches I _could_ take. But I already know all
those ;-)
So let's look at something conceptually "dead easy" instead: priority
queues. They're a
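For reference, the "dead easy" stdlib approach uses the heapq module:

```python
import heapq

# A bare-bones priority queue: the heap invariant keeps the smallest
# (priority, item) pair at index 0.
pq = []
for prio, task in [(3, "low"), (1, "urgent"), (2, "normal")]:
    heapq.heappush(pq, (prio, task))

assert heapq.heappop(pq) == (1, "urgent")
assert heapq.heappop(pq) == (2, "normal")
```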
[Christopher Barker ]
> Earlier in the thread, we were pointed to multiple implementations.
>
> Is this particular one clearly the “best”[*]?
>
> If so, then sure.
>
> -CHB
>
> [*] best meaning “most appropriate for the stdlib”. A couple folks have
> already pointed to the quality of the code. But
[Bob Fang ]
> This is a modest proposal to consider having sorted containers
> (http://www.grantjenks.com/docs/sortedcontainers/) in standard library.
+1 from me, but if and only if Grant Jenks (its author) wants that too.
It's first-rate code in all respects, including that it's a fine
example
[Christopher Barker ]
> Maybe a stupid question:
>
> What are use cases for sorted dicts?
>
> I don’t think I’ve ever needed one.
For example, for some mappings with totally ordered keys, it can be
useful to ask for the value associated with a key that's not actually
there, because "close to the
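A minimal sketch of that kind of "nearest key" lookup using only the stdlib bisect module (sortedcontainers' SortedDict does this far more conveniently; the names here are made up for illustration):

```python
import bisect

keys = [1.0, 2.5, 4.0, 7.5]                       # kept sorted
values = {1.0: "a", 2.5: "b", 4.0: "c", 7.5: "d"}

def value_at_or_below(k):
    """Return the value for the largest key <= k."""
    i = bisect.bisect_right(keys, k)
    if i == 0:
        raise KeyError(k)
    return values[keys[i - 1]]

assert value_at_or_below(3.0) == "b"   # 2.5 is the largest key <= 3.0
```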
[Ethan Furman ]
> When is an empty container contained by a non-empty container?
That depends on how the non-empty container's type defines
__contains__. The "stringish" types (str, bytes, bytearray) work _very_
differently from others (list, set, tuple) in this respect.
t in x
for the latter
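A concrete illustration of the difference:

```python
# The stringish types do *substring* containment, so an empty string is
# "in" every string; the sequence types do *element* containment.
assert "" in "abc"
assert b"" in b"abc"
assert [] not in [1, 2, 3]      # [] is not an element of [1, 2, 3]
assert [] in [[], 1]            # here it is an element
assert () not in (1, 2)
```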
[Raymond Bisdorff ]
> I fully agree with your point. By default, all the components of the
> tuple should be used in the comparison.
>
> Yet, I was confused by the following result.
> >>> from operator import itemgetter
> >>> L = [(1, 'a'), (2, 'b'), (1, 'c'), (2, 'd'), (3, 'e')]
> >>>
[Raymond Bisdorff ]
> ...
> Please notice the following inconsistency in Python3.10.0 and before of
> a sort(reverse=True) result:
>
> >>> L = [(1, 'a'), (2, 'b'), (1, 'c'), (2, 'd'), (3, 'e')]
> >>> L.sort(reverse=True)
> >>> L
> [(3, 'e'), (2, 'd'), (2, 'b'), (1, 'c'), (1, 'a')]
Looks
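Both behaviors can be reproduced directly; an editorial sketch of the distinction at issue:

```python
# reverse=True compares whole tuples, so (2, 'd') sorts before (2, 'b'):
# ties on the first element are broken by comparing the second.
L = [(1, 'a'), (2, 'b'), (1, 'c'), (2, 'd'), (3, 'e')]
L.sort(reverse=True)
assert L == [(3, 'e'), (2, 'd'), (2, 'b'), (1, 'c'), (1, 'a')]

# Sorting on only the first element is stable, so 'b' stays before 'd':
from operator import itemgetter
M = [(1, 'a'), (2, 'b'), (1, 'c'), (2, 'd'), (3, 'e')]
M.sort(key=itemgetter(0), reverse=True)
assert M == [(3, 'e'), (2, 'b'), (2, 'd'), (1, 'a'), (1, 'c')]
```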
Sorry for the spam! A bunch of these were backed up in the moderation
queue. I used the UI to set the list to auto-discard future messages
from this address, but then clicked "Accept" in the mistaken sense of
"yes, accept my request to auto-nuke this clown". But it took "Accept"
to mean "sure
[Laurent Lyaudet ]
> ...
> My benchmarks could be improved but however I found that Shivers' sort
> and adaptive Shivers' sort (aka Jugé's sort) performs better than
> Tim's sort.
Cool! Could you move this to the issue report already open on this?
Replace list sorting merge_collapse()?
[me]
> If you want more active moderation, volunteer for the job. I'd happily
> give it up, and acknowledge that my laissez-faire moderation approach
> is out of style.
But, please, don't tell _me_ off-list that you volunteer. I want no
say in who would become a new moderator - I'm already doing
Various variations on:
> ... I am also considering unsubscribing if someone doesn't step in and stop
> the mess going on between Brett and Marco. ...
Overall, "me too!" pile-ons _are_ "the [bulk of the] mess" to most
list subscribers.
It will die out on its own in time. Dr. Brett should know by
[Marco Sulla ]
> I repeat, even the worst AI will understand from the context what I
> meant.
Amazingly enough, the truth value of a proposition does not increase
via repetition ;-)
>>> bool(True * 1_000_000_000)
True
>>> bool(False * 1_000_000_000)
False
> But let me do a very rude example:
>
[Marco Sulla ]
> It's the Netiquette, Chris. It's older than Internet. It's a gross
> violation of the Netiquette remarking grammatical or syntactical
> errors. I think that also the least advanced AI will understand what I
> meant.
As multiple people have said now, including me, they had no idea
[Marco Sulla ]
> Oh, this is enough. The sense of the phrase was very clear and you all
> have understood it.
Sincerely, I have no idea what "I pretend your immediate excuses."
means, in or out of context.
> Remarking grammatical errors is a gross violation
> of the Netiquette. I ask
Sorry, all! This post was pure spam - I clicked the wrong button on
the moderator UI. The list has already been set to auto-reject any
future posts from this member.
On Mon, Aug 9, 2021 at 10:51 AM ridhimaortiz--- via Python-Dev
wrote:
>
> It is really nice post. https://bit.ly/3fsxwwl
>
[Ethan Furman]
> A question [1] has arisen about the viability of `random.SystemRandom` in
> Pythons before and after the secrets module was introduced
> (3.5 I think) -- specifically
>
> does it give independent and uniform discrete distribution for
> cryptographic purposes across
[Dan Stromberg ]
> ...
> Timsort added the innovation of making mergesort in-place, plus a little
> (though already common) O(n^2) sorting for small sublists.
Actually, both were already very common in mergesorts. "timsort" is
much more a work of engineering than of insight ;-) That is, it
FYI, I just force-unsubscribed this member (Hoi Lam Poon) from
python-dev. Normally I don't do things like that, since, e.g., we have
no way to know whether the sender address was spoofed in emails we
get. But in this case Hoi's name has come up several times as the
sender of individual spam, and
I'm guessing it's time to fiddle local CPython clones to account for
master->main renaming now?
If so, I've seen two blobs of instructions, which are very similar but
not identical:
Blob 1 ("origin"):
"""
You just need to update your local clone after the branch name changes.
From the local
[Julien Danjou]
> ...
> Supposedly PyObject_Malloc() returns some memory space to store a
> PyObject. If that was true all the time, that would allow anyone to
> introspect the allocated memory and understand why it's being used.
>
> Unfortunately, this is not the case. Objects whose types are
[Tim]
> ...
> Alas, the higher preprocessing costs leave the current PR slower in "too
> many" cases too, especially when the needle is short and found early
> in the haystack. Then any preprocessing cost approaches a pure waste
> of time.
But that was this morning. Since then, Dennis changed
[Tim]
>> Note that no "extra" storage is needed to exploit this. No character
>> lookups, no extra expenses in time or space of any kind. Just "if we
>> mismatch on the k'th try, we can jump ahead k positions".
[Antoine Pitrou ]
> Ok, so that means that on a N-character haystack, it'll always do
[Tim Peters, explains one of the new algorithm's surprisingly
effective moving parts]
[Chris Angelico ]
> Thank you, great explanation. Can this be added to the source code
> if/when this algorithm gets implemented?
No ;-) While I enjoy trying to make hard things clear(er),
I don't plan on making a series of these posts, just this one, to give
people _some_ insight into why the new algorithm gets systematic
benefits the current algorithm can't. It splits the needle into two
pieces, u and v, very carefully selected by subtle linear-time needle
preprocessing (and it's
[Marco Sulla]
> Excuse me if I intrude in an algorithm that I have not understood, but
> the new optimization can be applied to regexps too?
The algorithm is limited to searching for fixed strings.
However, _part_ of our regexp implementation (the bit that looks ahead
for a fixed string) will
[Guido]
> I am not able to dream up any hard cases -- like other posters,
> my own use of substring search is usually looking for a short
> string in a relatively short piece of text. I doubt even the current
> optimizations matter to my uses.
I should have responded to this part differently.
[Dennis Sweeney ]
> Here's my attempt at some heuristic motivation:
Thanks, Dennis! It helps. One gloss:
>
> The key insight though is that the worst strings are still
> "periodic enough", and if we have two different patterns going on,
> then we can intentionally split them apart.
The
[Guido]
> The key seems to be:
Except none of that quoted text (which I'll skip repeating) gives the
slightest clue as to _why_ it may be an improvement. So you split the
needle into two pieces. So what? What's the _point_? Why would
someone even imagine that might help?
Why is one half then
[Steven D'Aprano ]
> Perhaps this is a silly suggestion, but could we offer this as an
> external function in the stdlib rather than a string method?
>
> Leave it up to the user to decide whether or not their data best suits
> the find method or the new search function. It sounds like we can offer
[Guido]
> Maybe someone reading this can finish the Wikipedia page on
> Two-Way Search? The code example trails off with a function with
> some incomprehensible remarks and then a TODO..
Yes, the Wikipedia page is worse than useless in its current state,
although some of the references it lists
Rest assured that Dennis is aware of that pragmatics may change for
shorter needles.
The code has always made a special-case of 1-character needles,
because it's impossible "even in theory" to improve over
straightforward brute force search then.
Say the length of the text to search is `t`, and
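For a 1-character needle every position may need to be examined, so nothing can beat a plain left-to-right scan (a sketch; at the C level the implementation can use memchr-style code):

```python
def find_char(text, ch):
    # Straightforward brute force: look at each position at most once.
    for i, c in enumerate(text):
        if c == ch:
            return i
    return -1

assert find_char("abcabc", "b") == 1 == "abcabc".find("b")
assert find_char("abc", "z") == -1
```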
Fredrik Lundh crafted our current string search algorithms, and
they've served us very well. They're nearly always as fast as
dumbest-possible brute force search, and sometimes much faster. This
was bought with some very cheap one-pass preprocessing of the pattern
(the substring to search _for_),
I'm surprised nobody has mentioned this: there are no "unboxed" types
in CPython - in effect, every object user code creates is allocated
from the heap. Even, e.g., integers and floats. So even non-contrived
code can create garbage at a ferocious rate. For example, think about
this simple
One microscopic point:
[Guido]
> ...
> (if `.x` is unacceptable, it’s unclear why `^x` would be any
> better),
As Python's self-appointed spokesperson for the elderly, there's one
very clear difference: a leading "." is - literally - one microscopic
point, all but invisible. A leading caret is
[Paul Moore ]
> (This is a genuine question, and I'm terrified of being yelled at for
> asking it, which gives an idea of the way this thread has gone - but I
> genuinely do want to know, to try to improve my own writing).
>
> What *is* the correct inclusive way to refer to an unidentified person
[Victor Stinner ]
> If someone continues to feed the PEP 8 discussion, would it be
> possible to change their account to require moderation for 1 day or
> maybe even up to 1 week? I know that Mailman 3 makes it possible.
I see no such capability. I could, for example, manually fiddle
things so
[Brett Cannon wrote:]
> Regardless of what side you fall on, I think we can agree that
> emotions are running very high at the moment. Nothing is going
> to change in at least the next 24 hours, so I am personally
> asking folks to step back for at least that long and think about:
>
> Is what you
>>> ... “it looks like it’s happening on
>>> python-dev too” to mean that the request was for both lists.
[Tim Peters]
>> It depends on who you want to annoy least ;-)
>>
>> If it's the position of the PSF that some kind(s) of messages must be
>> suppressed, then I'll need a mo
[Ernest W. Durbin III ]
> Reviewing, I may have misinterpreted the message from PSF Executive
> Director regarding the situation.
>
> It does appear that python-ideas moderators contacted postmaster@.
> Appears I misread a message saying “it looks like it’s happening on
> python-dev too” to mean
[Ernest W. Durbin III ]
> At the request of the list moderators of python-ideas and python-dev,
> both lists have been placed into emergency moderation mode. All
> new posts must be approved before landing on the list.
>
> When directed by the list moderators, this moderation will be disabled.
I
[Tim]
See reply to Glenn. Can you give an example of a dotted name that is
not a constant value pattern? An example of a non-dotted name that is?
If you can't do either (and I cannot), then that's simply what "if
[Rhodri James ]
>>> case long.chain.of.attributes:
[Tim]
>>
[Rhodri James ]
> I'm seriously going to maintain that I will forget the meaning of "case
> _:" quickly and regularly,
Actually, you won't - trust me ;-)
> just as I quickly and regularly forget to use
> "|" instead of "+" for set union. More accurately, I will quickly and
> regularly forget
[Taine Zhao ]
> "or" brings an intuition of the execution order of pattern matching, just
> like how people already know about "short-circuiting".
>
> "or" 's operator precedence also suggests the syntax of OR patterns.
>
> As we have "|" as an existing operator, it seems that there might be
>
[Ethan Furman ]
> "case _:" is easy to miss -- I missed it several times reading through the
> PEP.
As I said, I don't care about "shallow first impressions". I care
about how a thing hangs together _after_ climbing its learning curve -
which in this case is about a nanometer tall ;-)
You're
[Tim]
>> ".NAME" grated at first, but extends the idea that dotted names are
>> always constant value patterns to "if and only if". So it has mnemonic
>> value. When context alone can't distinguish whether a name is meant as
>> (in effect) an lvalue or an rvalue, no syntax decorations can prevent
You got everything right the first time ;-) The PEP is an extended
illustration of "although that way may not be obvious at first unless
you're Dutch".
I too thought "why not else:?" at first. But "case _:" covers it in
the one obvious way after grasping how general wildcard matches are.
For posterity, just recording best guesses for the other mysteries left hanging:
- PYTHONTRACEMALLOC didn't work for you because Victor's traceback
showed that Py_FinalizeEx was executing _PyImport_Fini, one statement
_after_ it disabled tracemalloc via _PyTraceMalloc_Fini.
- The address passed
[Skip Montanaro ]
> ...
> I thought setting PYTHONTRACEMALLOC should provoke some useful output,
> but I was confused into thinking I was (am?) still missing something
> because it continued to produce this message:
>
> Enable tracemalloc to get the memory block allocation traceback
Ah, I
[Skip Montanaro ]
> I've got a memory issue in my modified Python interpreter I'm trying
> to debug. Output at the end of the problematic unit test looks like this:
To my eyes, you left out the most important part ;-) A traceback
showing who made the fatal free() call to begin with.
In debug
[Tim]
>> PyObject_RichCompareBool(x, y, op) has a (valuable!) shortcut: if x
>> and y are the same object, then equality comparison returns True
>> and inequality False. No attempt is made to execute __eq__ or
>> __ne__ methods in those cases.
>> ...
>> If it's intended that Python-the-language
[Guido]
> Honestly that looked like a spammer.
I approved the message, and it looked like "probably spam" to me too.
But it may have just been a low-quality message, and the new moderator
UI still doesn't support adding custom text to a rejection message.
Under the old system, I _would_ have
[Terry Reedy ]
[& skipping all the parts I agree with]
> ...
> Covered by "For user-defined classes which do not define __contains__()
> but do define __iter__(), x in y is True if some value z, for which the
> expression x is z or x == z is true, is produced while iterating over y.
> " in
>
>
[Tim]
>> I think it needs more words, though, to flesh out what about this is
>> allowed by the language (as opposed to what CPython happens to do),
>> and to get closer to what Guido is trying to get at with his
>> "*implicit* calls". For example, it's at work here, but there's not a
>> built-in
[Terry Reedy ]
> ...
> It is, in the section on how to understand and use value comparison
> *operators* ('==', etc.).
> https://docs.python.org/3/reference/expressions.html#value-comparisons
>
> First "The default behavior for equality comparison (== and !=) is based
> on the identity of the
[Inada Naoki ]
> FWIW, (list|tuple).__eq__ and (list|tuple).__contains__ uses it too.
> It is very important to compare recursive sequences.
>
> >>> x = []
> >>> x.append(x)
> >>> y = [x]
> >>> z = [x]
> >>> y == z
> True
That's a visible consequence, but I'm afraid this too must be
considered an
PyObject_RichCompareBool(x, y, op) has a (valuable!) shortcut: if x
and y are the same object, then equality comparison returns True and
inequality False. No attempt is made to execute __eq__ or __ne__
methods in those cases.
This has visible consequences all over the place, but they don't
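The most familiar visible consequence involves NaN, which compares unequal to itself unless the identity shortcut kicks in:

```python
nan = float("nan")
assert nan != nan            # __eq__ says NaN is not equal to itself
assert nan in [nan]          # but list.__contains__ hits the identity shortcut
assert [nan] == [nan]        # list.__eq__ likewise tries identity first
assert [nan].count(nan) == 1
```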
[Serhiy Storchaka]
> This is not the only difference between '.17g' and repr().
>
> >>> '%.17g' % 1.23456789
> '1.23456788'
> >>> format(1.23456789, '.17g')
> '1.23456788'
> >>> repr(1.23456789)
> '1.23456789'
More amazingly ;-), repr() isn't even always the same as a %g format
[Pau Freixes ]
> Recently I've been facing a really weird bug where a Python program
> was randomly segfaulting during the finalization, the program was
> using some C extensions via Cython.
There's nothing general that can be said that would help. These
things require excruciating details to
[Larry]
> It's a lightweight abstract dependency graph. Its nodes are opaque,
> only required to be hashable. And it doesn't require that you give it
> all the nodes in strict dependency order.
>
> When you add a node, you can also optionally specify
> dependencies, and those dependencies aren't
[Larry]
> Here is the original description of my problem, from the original email in
> this thread. I considered this an adequate explanation of my problem
> at the time.
>> I do have a use case for this. In one project I maintain a "ready" list of
>> jobs; I need to iterate over it, but I also
[Nick Coghlan ]
> I took Larry's request a slightly different way: he has a use case where
> he wants order preservation (so built in sets aren't good), but combined
> with low cost duplicate identification and elimination and removal of
> arbitrary elements (so lists and collections.deque aren't
[Tim]
>> - I don't have a theory for why dict build time is _so_ much higher
>> than dict lookup time for the nasty keys.
To be clearer, in context this was meant to be _compared to_ the
situation for sets. These were the numbers:
11184810 nasty keys
dict build 23.32
dict lookup
[Tim]
> I know the theoretical number of probes for dicts, but not for sets
> anymore. The latter use a mix of probe strategies now, "randomish"
> jumps (same as for dicts) but also purely linear ("up by 1") probing
> to try to exploit L1 cache.
>
> It's not _apparent_ to me that the mix actually
>> Also, I believe that max "reasonable" integer range of no collision
>> is (-2305843009213693951, 2305843009213693951), ...
> Any range that does _not_ contain both -2 and -1 (-1 is an annoying
> special case, with hash(-1) == hash(-2) == -2), and spans no more than
> sys.hash_info.modulus
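Why -1 is annoying: CPython's C-level hash functions return -1 to signal an error, so hash(-1) is remapped to -2; and int hashes repeat modulo sys.hash_info.modulus:

```python
import sys

assert hash(-1) == hash(-2) == -2   # -1 is reserved as a C-level error code
m = sys.hash_info.modulus           # 2**61 - 1 on 64-bit builds
assert hash(m) == hash(0) == 0      # hash(n) == n % m for nonnegative n
assert hash(m + 1) == 1
```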
[Kyle]
> ...
> For some reason, I had assumed in the back of my head (without
> giving it much thought) that the average collision rate would be the
> same for set items and dict keys. Thanks for the useful information.
I know the theoretical number of probes for dicts, but not for sets
anymore.
Sorry! A previous attempt to reply got sent before I typed anything :-(
Very briefly:
> >>> timeit.timeit("set(i for i in range(1000))", number=100_000)
[and other examples using a range of integers]
The collision resolution strategy for sets evolved to be fancier than
for dicts, to reduce
>>> data structure that clearly describes what it does based on the name alone,
>>> IMO that's a million times better for readability purposes.
>>>
>>> Also, this is mostly speculation since I haven't ran any benchmarks for an
>>> OrderedSet implementation, but
[Inada Naoki ]
> I just meant the performance of the next(iter(D)) is the most critical part
> when you implement orderdset on top of the current dict and use it as a queue.
Which is a good point. I added a lot more, though, because Wes didn't
even mention queues in his question:
[Wes Turner ]
>> How slow and space-inefficient would it be to just implement the set methods
>> on top of dict?
[Inada Naoki ]
> Speed: Dict doesn't cache the position of the first item. Calling
> next(iter(D)) repeatedly is O(N) in worst case.
> ...
See also Raymond's (only) message in this
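Wes's question - set methods on top of dict - can be sketched in a few lines, since dict keys already preserve insertion order (a toy illustration, not a proposed implementation):

```python
class OrderedSet:
    """Toy insertion-ordered set backed by a dict (values unused)."""
    def __init__(self, iterable=()):
        self._d = dict.fromkeys(iterable)
    def add(self, x):
        self._d[x] = None
    def discard(self, x):
        self._d.pop(x, None)
    def __contains__(self, x):
        return x in self._d
    def __iter__(self):
        return iter(self._d)
    def __len__(self):
        return len(self._d)

s = OrderedSet([3, 1, 2, 1])
assert list(s) == [3, 1, 2]     # duplicates collapse, order preserved
```

Inada's caveat applies, though: after deleting early keys, next(iter(...)) on the underlying dict can be O(N).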
[Nick]
> I must admit that I was assuming without stating that a full OrderedSet
> implementation would support the MutableSequence interface.
Efficient access via index position too would be an enormous new
requirement. My bet: basic operations would need to change from O(1)
to O(log(N)).
[David Mertz ]
> It's not obvious to me that insertion order is even the most obvious or
> most commonly relevant sort order. I'm sure it is for Larry's program, but
> often a work queue might want some other order. Very often queues
> might instead, for example, have a priority number assigned
[Nick]
> I took Larry's request a slightly different way:
Sorry, I was unclear: by "use case" I had in mind what appeared to me
to be the overwhelming thrust of the _entirety_ of this thread so far,
not Larry's original request.
> he has a use case where he wants order preservation (so built in
[Nick Coghlan ]
> Starting with "collections.OrderedSet" seems like a reasonable idea,
> though - that way "like a built-in set, but insertion order preserving" will
> have an obvious and readily available answer, and it should also
> make performance comparisons easier.
Ya, I suggested starting
[Larry]
> "I don't care about performance" is not because I'm aching for Python to
> run my code slowly. It's because I'm 100% confident that the Python
> community will lovingly optimize the implementation.
I'm not ;-)
> So when I have my language designer hat on, I really don't concern
...
[Larry]
>> One prominent Python core developer** wanted this feature for years, and I
>> recall
>> them saying something like:
>>
>> Guido says, "When a programmer iterates over a dictionary and they see the
>> keys
>> shift around when the dictionary changes, they learn something!" To
[Larry]
> Didn't some paths also get slightly slower as a result of maintaining
> insertion order when mixing insertions and deletions?
I paid no attention at the time. But in going from "compact dict" to
"ordered dict", deletion all by itself got marginally cheaper. The
downside was the
[Tim]
>> If it's desired that "insertion order" be consistent across runs,
>> platforms, and releases, then what "insertion order" _means_ needs to
>> be rigorously defined & specified for all set operations. This was
>> comparatively trivial for dicts, because there are, e.g., no
>> commutative
[Petr Viktorin ]
> ...
> Originally, making dicts ordered was all about performance (or rather
> memory efficiency, which falls in the same bucket.) It wasn't added
> because it's better semantics-wise.
As I tried to flesh out a bit in a recent message, the original
"compact dict" idea got all
[Raymond]
> ...
> * The ordering we have for dicts uses a hash table that indexes into a
> sequence.
> That works reasonably well for typical dict operations but is unsuitable for
> set
> operations where some common use cases make interspersed additions
> and deletions (that is why the LRU
[Tim]
> BTW, what should
>
> {1, 2} | {3, 4, 5, 6, 7}
>
> return as ordered sets? Beats me ;-)
[Larry]
> The obvious answer is {1, 2, 3, 4, 5, 6, 7}.
Why? An obvious implementation that doesn't ignore performance entirely is:
def union(smaller, larger):
    if len(larger) <
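Filling in the idea behind that truncated sketch (hypothetical code, with a dict standing in for an ordered set): fold the smaller operand into a copy of the larger, so the result's order depends on which operand was bigger:

```python
def union(a, b):
    # Fold the smaller operand into a copy of the larger one.
    smaller, larger = (a, b) if len(a) <= len(b) else (b, a)
    result = dict.fromkeys(larger)      # dict as ordered-set stand-in
    for item in smaller:
        result[item] = None
    return list(result)

# The larger operand's elements come first -- not the "obvious" 1..7:
assert union([1, 2], [3, 4, 5, 6, 7]) == [3, 4, 5, 6, 7, 1, 2]
```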
[Guido]
> ...
> the language should not disappoint them, optimization opportunities be damned.
I would like to distinguish between two kinds of "optimization
opportunities": theoretical ones that may or may not be exploited
some day, and those that CPython has _already_ exploited.
That is, we
[Larry Hastings ]
> As of 3.7, dict objects are guaranteed to maintain insertion order. But set
> objects make no such guarantee, and AFAIK in practice they don't maintain
> insertion order either.
If they ever appear to, it's an accident you shouldn't rely on.
> Should they?
From Raymond,
[Skip Montanaro ]
> ...
> I don't think stable code which uses macros should be changed (though
> I see the INCREF/DECREF macros just call private inline functions, so
> some conversion has clearly been done). Still, in new code, shouldn't
> the use of macros for more than trivial use cases
Short course: a replacement for malloc for use in contexts that can't
"move memory" after an address is passed out, but want/need the
benefits of compactification anyway.
Key idea: if the allocator dedicates each OS page to requests of a
specific class, then consider two pages devoted to the
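A toy model of the "one size class per page" idea (illustrative only; the discussion is about a C-level allocator, and the names here are invented):

```python
PAGE_SIZE = 4096

class Pool:
    """One page dedicated to a single block size class."""
    def __init__(self, block_size):
        self.block_size = block_size
        # Every offset in the page starts out free.
        self.free = list(range(0, PAGE_SIZE, block_size))
    def alloc(self):
        return self.free.pop() if self.free else None
    def dealloc(self, offset):
        self.free.append(offset)

pool = Pool(512)
blocks = [pool.alloc() for _ in range(8)]   # page holds 8 blocks of 512 bytes
assert pool.alloc() is None                 # page exhausted
pool.dealloc(blocks[0])
assert pool.alloc() == blocks[0]            # freed block is immediately reusable
```

Because a page serves only one size class, a freed block can be handed out again with no per-block bookkeeping.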
[Tim]
> While python-dev has several "official" moderators, best I can tell
> I'm the only one who has reviewed these messages for years.
I should clarify that! That's not meant to be a dig at the other
moderators. I review everything because I'm retired and am near the
computer many hours
[This is about the mailing list, not about Python development]
That python-dev-owner has gotten two complaints about this message so
far suggests I should explain what's going on ;-)
New list members are automatically moderated. Their posts sit in a
moderation queue waiting for moderator
[Guido]
> I don't see how this debate can avoid a vote in the Steering Council.
FWIW, I found Nick's last post wholly persuasive: back off to
SyntaxError for now, and think about adding a more specific exception
later for _all_ cases (not just walrus) in which a scope conflict
isn't allowed
[Barry Warsaw ]
> bpo-37757: https://bugs.python.org/issue37757
Really couldn't care less whether it's TargetScopeError or
SyntaxError, but don't understand the only rationale given here for
preferring the latter:
> To me, “TargetScopeError” is pretty obscure and doesn’t give users an
> obvious
[Brett Cannon ]
> We probably need to update https://devguide.python.org/committing/ to
> have a step-by-step list of how to make a merge works and how to
> handle backports instead of the wall of text that we have. (It's already
> outdated anyway, e.g. `Misc/ACKS` really isn't important as git
[Mariatta ]
> - Since this is a 1st-time contributor, does it need a change to the ACKS
> file?
>
> I think the change is trivial enough, the misc/acks is not necessary.
>
> - Anything else?
>
>
> 1. Does it need to be backported? If so, please add the "needs backport to
> .." label.
>
> 2. Add the
https://github.com/python/cpython/pull/13482
is a simple doc change for difflib, which I approved some months ago.
But I don't know the current workflow well enough to finish it myself.
Like:
- Does something special need to be done for doc changes?
- Since this is a 1st-time contributor,