[Python-Dev] Re: Preventing Unicode-related gotchas (Was: pre-PEP: Unicode Security Considerations for Python)

2021-11-03 Thread Chris Jerdonek
On Tue, Nov 2, 2021 at 7:21 AM Petr Viktorin  wrote:

> That brings us to possible changes in Python in this  area, which is an
> interesting topic.


Is there a use case or need for allowing the comment-starting character “#”
to occur when text is still in the right-to-left direction? Disallowing
that would prevent Petr’s examples in which active code is displayed after
the comment mark, which to me seems to be one of the more egregious
examples. Or maybe this case is no worse than others and isn’t worth
singling out.
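To make the idea concrete, here is a rough sketch of the kind of check I have
in mind (unicodedata is real, but the exact set of bidi categories to treat as
"still right-to-left" is my guess rather than a careful reading of UAX #9):

    import unicodedata

    RTL_CLASSES = {"R", "AL", "RLE", "RLO", "RLI"}   # RTL letters and controls
    LTR_CLASSES = {"L", "PDF", "PDI"}                # back to LTR / pop embedding

    def comment_follows_rtl_text(line):
        """Rough check: does '#' appear while the preceding text is still in
        a right-to-left run?  Illustrative only -- the real bidi algorithm
        is considerably more involved."""
        rtl_active = False
        for ch in line:
            bidi = unicodedata.bidirectional(ch)
            if bidi in RTL_CLASSES:
                rtl_active = True
            elif bidi in LTR_CLASSES:
                rtl_active = False
            if ch == "#" and rtl_active:
                return True
        return False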

—Chris




>
> As for \0, can we ban all ASCII & C1 control characters except
> whitespace? I see no place for them in source code.
>
>
> For homoglyphs/confusables, should there be a SyntaxWarning when an
> identifier looks like ASCII but isn't?
>
> For right-to-left text: does anyone actually name identifiers in
> Hebrew/Arabic? AFAIK, we should allow a few non-printing
> "joiner"/"non-joiner" characters to make it possible to use all Arabic
> words. But it would be great to consult with users/teachers of the
> languages.
> Should Python run the bidi algorithm when parsing and disallow reordered
> tokens? Maybe optionally?
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/TGB377QWGIDPUWMAJSZLT22ERGPNZ5FZ/
> Code of Conduct: http://python.org/psf/codeofconduct/
>


[Python-Dev] Re: Python multithreading without the GIL

2021-10-08 Thread Chris Jerdonek
On Fri, Oct 8, 2021 at 8:11 AM Guido van Rossum  wrote:

> To be clear, Sam’s basic approach is a bit slower for single-threaded
> code, and he admits that.
>

Is it also slower even when running with PYTHONGIL=1? If it could be made
the same speed for single-threaded code when running in GIL-enabled mode,
that might be an easier intermediate target while still adding value.

—Chris


But to sweeten the pot he has also applied a bunch of unrelated speedups
> that make it faster in general, so that overall it’s always a win. But
> presumably we could upstream the latter easily, separately from the
> GIL-freeing part.
>
> On Fri, Oct 8, 2021 at 07:42 Łukasz Langa  wrote:
>
>>
>> > On 8 Oct 2021, at 10:13, Steven D'Aprano  wrote:
>> >
>> > Hi Sam,
>> >
>> > On Thu, Oct 07, 2021 at 03:52:56PM -0400, Sam Gross wrote:
>> >
>> >> I've been working on changes to CPython to allow it to run without the
>> >> global interpreter lock. I'd like to share a working proof-of-concept
>> that
>> >> can run without the GIL.
>> >
>> > Getting Python to run without the GIL has never been a major problem for
>> > CPython (and of course some other Python interpreters don't have a GIL
>> > at all).
>>
>> On the first page of Sam's design overview he references Gilectomy by
>> name.
>>
>> > Single threaded code is still, and always will be, an important part of
>> > Python's ecosystem. A lot of people would be annoyed if the cost of
>> > speeding up heavily threaded Python by a small percentage would be to
>> > slow down single-threaded Python by a large percentage.
>>
>> Quoting that same design document, Sam writes: "The new interpreter
>> (together with the GIL changes) is about 10% faster than CPython 3.9
>> on the single-threaded pyperformance benchmarks."
>>
>> - Ł
>> ___
>> Python-Dev mailing list -- python-dev@python.org
>> To unsubscribe send an email to python-dev-le...@python.org
>> https://mail.python.org/mailman3/lists/python-dev.python.org/
>> Message archived at
>> https://mail.python.org/archives/list/python-dev@python.org/message/JO7OQCHZKIFNKSXTTXT2JBCF5H47M7OO/
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
> --
> --Guido (mobile)
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/XQOOGKH5PIFBHJRK7W2LMX32DIGIH4KX/
> Code of Conduct: http://python.org/psf/codeofconduct/
>


[Python-Dev] Re: name for new Enum decorator

2021-06-03 Thread Chris Jerdonek
On Wed, Jun 2, 2021 at 8:04 PM Ethan Furman  wrote:

>
> Thank you everyone for your ideas!  Instead of adding another
> single-purpose decorator, I'm toying with the idea of
> adding a general purpose decorator that accepts instructions.  Something
> along the lines of:
> ...
> Thoughts?
>

I have a couple of comments. First, are these checks going to run at import
time? I ask because, in cases where the enums aren't generated dynamically, it
could make sense for efficiency to have the option of running them during
tests instead. The decorator would still be useful because it could tell the
test harness which checks to run.

My other question is whether this design would permit checks that take
arguments. For example, in the case of the "continuous" check, one might want
to be able to say "this Flag enum should span the first N bits," as opposed to
inferring N from the highest occurring bit. That way you can be sure you
didn't leave off any flags at the end.
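For example (a sketch only -- the spans_bits name and the exact decorator API
here are made up, not part of the actual proposal):

    from enum import Flag, auto

    def spans_bits(n):
        """Hypothetical parametrized check: bits 0..n-1 must all be covered
        by the named members."""
        def check(enum_cls):
            covered = 0
            for member in enum_cls.__members__.values():
                covered |= member.value
            missing = ((1 << n) - 1) & ~covered
            if missing:
                raise ValueError(
                    f"{enum_cls.__name__} is missing bits: {missing:#b}")
            return enum_cls
        return check

    @spans_bits(3)
    class Color(Flag):
        RED = auto()    # 0b001
        GREEN = auto()  # 0b010
        BLUE = auto()   # 0b100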

--Chris



>


[Python-Dev] Re: name for new Enum decorator

2021-06-02 Thread Chris Jerdonek
On Thu, May 27, 2021 at 8:30 PM Ethan Furman  wrote:

> But what if we have something like:
>
>  class Color(Flag):
>  RED = 1# 0001
>  BLUE = 4   # 0100
>  WHITE = 7  # 0111
>
> As you see, WHITE is an "alias" for a value that does not exist in the
> Flag (0010, or 2).  That seems like it's probably an error.


Are there use cases where you want to support gaps in the bits? If not,
your decorator could accept which flags should be spanned. That seems
useful, too. It would be a strictly stronger check. That would change the
kind of name you want, though.

Otherwise, some things that occur to me: factorable, factorizable,
factorized, decomposable. The thinking here is that each named member
should be capable of being decomposed / factored into individual values
that themselves have names.
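As a sketch of what such a check might look like (the decomposable name and
the decorator form are just for illustration, not an actual API):

    from enum import Flag

    def decomposable(enum_cls):
        """Hypothetical check: every member must decompose into the named
        single-bit members, so WHITE = 7 below is rejected because bit
        0b010 has no name of its own."""
        named_bits = 0
        for member in enum_cls.__members__.values():
            value = member.value
            if value and value & (value - 1) == 0:   # exactly one bit set
                named_bits |= value
        for name, member in enum_cls.__members__.items():
            if member.value & ~named_bits:
                raise ValueError(f"{name} uses bits with no single-bit name")
        return enum_cls

    @decomposable          # raises ValueError for WHITE
    class Color(Flag):
        RED = 1    # 0001
        BLUE = 4   # 0100
        WHITE = 7  # 0111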

--Chris



>


[Python-Dev] Re: PEP 659: Specializing Adaptive Interpreter

2021-05-13 Thread Chris Jerdonek
On Wed, May 12, 2021 at 10:48 AM Mark Shannon  wrote:

> ...
> For those of you aware of the recent releases of Cinder and Pyston,
> PEP 659 might look similar.
> It is similar, but I believe PEP 659 offers better interpreter
> performance and is more suitable to a collaborative, open-source
> development model.
>

I was curious what you meant by "is more suitable to a collaborative,
open-source
development model," but I didn't see it elaborated on in the PEP. If this
is indeed a selling point, it might be worth mentioning that and saying why.

--Chris


> As always, comments and suggestions are welcome.
>
> Cheers,
> Mark.
>
> Links:
>
> https://www.python.org/dev/peps/pep-0659/
> https://github.com/facebookincubator/cinder
> https://github.com/pyston/pyston
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/VKBV4X6ZEMRBALW7JNOZYI22KETR4F3R/
> Code of Conduct: http://python.org/psf/codeofconduct/
>


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-07 Thread Chris Jerdonek
On Fri, May 7, 2021 at 6:39 PM Steven D'Aprano  wrote:

> On Fri, May 07, 2021 at 06:02:51PM -0700, Chris Jerdonek wrote:
>
> > To know what compression methods might be effective, I’m wondering if it
> > could be useful to see separate histograms of, say, the start column
> number
> > and width over the code base. Or for people that really want to dig in,
> > maybe access to the set of all pairs could help. (E.g. maybe a histogram
> of
> > pairs could also reveal something.)
>
> I think this is over-analysing. Do we need to micro-optimize the
> compression algorithm? Let's make the choice simple: live with the size
> increase, or swap to LZ4 compression as Antoine suggested. Analysis
> paralysis is a real risk here.
>
> If there are implementations which cannot support either (MicroPython?)
> they should be free to continue doing things the old way. In other
> words, "fine grained error messages" should be a quality of
> implementation feature rather than a language guarantee.
>
> I understand that the plan is to make this feature optional in any case,
> to allow third-party tools to catch up.
>
> If people really want to do that histogram analysis so that they can
> optimize the choice of compression algorithm, of course they are free to
> do so. But the PEP authors should not feel that they are obliged to do
> so, and we should avoid the temptation to bikeshed over compressors.
>

I'm not sure why you're sounding so negative. Pablo asked for ideas in his
first message to the list:

On Fri, May 7, 2021 at 2:53 PM Pablo Galindo Salgado 
wrote:

> Does anyone see a better way to encode this information **without
> complicating a lot the implementation**?
>

Maybe a large gain can be made with a simple tweak to how the pair is
encoded, but there's no way to know without seeing the distribution. Also,
my reply wasn't about the pyc files on disk but about their representation
in memory, which Pablo later said may be the main concern. So it's not
compression algorithms like LZ4 so much as a method of encoding.
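For anyone who wants to gather that distribution themselves, here is a rough
sketch. It relies on dis.Instruction.positions, which only exists in CPython
3.11+ (i.e. after this feature landed), and the input file name is made up:

    import dis
    from collections import Counter

    def column_histogram(code):
        """Tally (start column, width) pairs for every instruction,
        recursing into nested code objects."""
        counts = Counter()
        for instr in dis.get_instructions(code):
            pos = instr.positions
            if pos and pos.col_offset is not None and pos.end_col_offset is not None:
                counts[(pos.col_offset, pos.end_col_offset - pos.col_offset)] += 1
        for const in code.co_consts:
            if hasattr(const, "co_code"):        # nested code object
                counts += column_histogram(const)
        return counts

    with open("some_module.py") as f:            # hypothetical input file
        code = compile(f.read(), "some_module.py", "exec")
    print(column_histogram(code).most_common(10))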

--Chris


>
> (For what it's worth, I like this proposed feature, I don't care about a
> 20-25% increase in pyc file size, but if this leads to adding LZ4
> compression to the stdlib, I like it even more :-)
>
>
> --
> Steve
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/6H2XSRMARU4SX4WRMIO2M4MI4EQASPBC/
> Code of Conduct: http://python.org/psf/codeofconduct/
>

>


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-07 Thread Chris Jerdonek
On Fri, May 7, 2021 at 5:44 PM Pablo Galindo Salgado 
wrote:

> Some update on the numbers. We have made some draft implementation to
> corroborate the
> numbers with some more realistic tests and seems that our original
> calculations were wrong.
> The actual increase in size is quite bigger than previously advertised:
>
> Using bytes object to encode the final object and marshalling that to disk
> (so using uint8_t) as the underlying
> type:
>
> BEFORE:
>
> ❯ ./python -m compileall -r 1000 Lib > /dev/null
> ❯ du -h Lib -c --max-depth=0
> 70M Lib
> 70M total
>
> AFTER:
> ❯ ./python -m compileall -r 1000 Lib > /dev/null
> ❯ du -h Lib -c --max-depth=0
> 76M Lib
> 76M total
>
> So that's an increase of 8.56 % over the original value. This is storing
> the start offset and end offset with no compression
> whatsoever.
>

To know what compression methods might be effective, I’m wondering if it
could be useful to see separate histograms of, say, the start column number
and width over the code base. Or for people that really want to dig in,
maybe access to the set of all pairs could help. (E.g. maybe a histogram of
pairs could also reveal something.)

—Chris



> On Fri, 7 May 2021 at 22:45, Pablo Galindo Salgado 
> wrote:
>
>> Hi there,
>>
>> We are preparing a PEP and we would like to start some early discussion
>> about one of the main aspects of the PEP.
>>
>> The work we are preparing is to allow the interpreter to produce more
>> fine-grained error messages, pointing to
>> the source associated to the instructions that are failing. For example:
>>
>> Traceback (most recent call last):
>>
>>   File "test.py", line 14, in 
>>
>> lel3(x)
>>
>> ^^^
>>
>>   File "test.py", line 12, in lel3
>>
>> return lel2(x) / 23
>>
>>^^^
>>
>>   File "test.py", line 9, in lel2
>>
>> return 25 + lel(x) + lel(x)
>>
>> ^^
>>
>>   File "test.py", line 6, in lel
>>
>> return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)
>>
>>  ^
>>
>> TypeError: 'NoneType' object is not subscriptable
>>
>> The cost of this is having the start column number and end column number
>> information for every bytecode instruction
>> and this is what we want to discuss (there is also some stack cost to
>> re-raise exceptions but that's not a big problem in
>> any case). Given that column numbers are not very big compared with line
>> numbers, we plan to store these as unsigned chars
>> or unsigned shorts. We ran some experiments over the standard library and
>> we found that the overhead of all pyc files is:
>>
>> * If we use shorts, the total overhead is ~3% (total size 28MB and the
>> extra size is 0.88 MB).
>> * If we use chars. the total overhead is ~1.5% (total size 28 MB and the
>> extra size is 0.44MB).
>>
>> One of the disadvantages of using chars is that we can only report
>> columns from 1 to 255 so if an error happens in a column
>> bigger than that then we would have to exclude it (and not show the
>> highlighting) for that frame. Unsigned short will allow
>> the values to go from 0 to 65535.
>>
>> Unfortunately these numbers are not easily compressible, as every
>> instruction would have very different offsets.
>>
>> There is also the possibility of not doing this based on some build flag
>> on when using -O to allow users to opt out, but given the fact
>> that these numbers can be quite useful to other tools like coverage
>> measuring tools, tracers, profilers and the such adding conditional
>> logic to many places would complicate the implementation considerably and
>> will potentially reduce the usability of those tools so we prefer
>> not to have the conditional logic. We believe this is extra cost is very
>> much worth the better error reporting but we understand and respect
>> other points of view.
>>
>> Does anyone see a better way to encode this information **without
>> complicating a lot the implementation**? What are people thoughts on the
>> feature?
>>
>> Thanks in advance,
>>
>> Regards from cloudy London,
>> Pablo Galindo Salgado
>>
>> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/QDEKMTZRMPEKPFFBPCGUYWLLR43A6M6U/
> Code of Conduct: http://python.org/psf/codeofconduct/
>


[Python-Dev] Re: PEP 654: Exception Groups and except* [REPOST]

2021-04-05 Thread Chris Jerdonek
On Mon, Apr 5, 2021 at 3:07 AM Nathaniel Smith  wrote:

> - Recording pre-empted exceptions: This is another type of metadata that
> would be useful to print along with the traceback. It's non-obvious and a
> bit hard to explain, but multiple trio users have complained about this, so
> I assume it will bite asyncio users too as soon as TaskGroups are added.
> The situation is, you have a parent task P and two child tasks C1 and C2:
>
> P
>/ \
>   C1  C2
>
>   C1 terminates with an unhandled exception E1, so in order to continue
> unwinding, the nursery/taskgroup in P cancels C2. But, C2 was itself in the
> middle of unwinding another, different exception E2 (so e.g. the
> cancellation arrived during a `finally` block). E2 gets replaced with a
> `Cancelled` exception whose __context__=E2, and that exception unwinds out
> of C2 and the nursery/taskgroup in P catches the `Cancelled` and discards
> it, then re-raises E1 so it can continue unwinding.
>
>   The problem here is that E2 gets "lost" -- there's no record of it in
> the final output. Basically E1 replaced it. And that can be bad: for
> example, if the two children are interacting with each other, then E2 might
> be the actual error that broke the program, and E1 is some exception
> complaining that the connection to C2 was lost. If you have two exceptions
> that are triggered from the same underlying event, it's a race which one
> survives.
>

This point reminded me again of this issue in the tracker ("Problems with
recursive automatic exception chaining" from 2013):
https://bugs.python.org/issue18861
I'm not sure if it's exactly the same, but you can see that a couple of the
later comments there talk about "exception trees" and other types of
annotations.

If that issue were addressed after ExceptionGroups were introduced, does
that mean there would then be two types of exception-related trees layered
over each other (e.g. groups of trees, trees of groups, etc)? It makes me
wonder if there's a more general tree structure that could accommodate both
use cases...
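(For a minimal picture of the purely linear chaining that both of these
proposals are pushing against -- plain exceptions here, no trio or asyncio
involved:)

    try:
        try:
            raise ValueError("E2: the error the child was already unwinding")
        finally:
            raise KeyError("the Cancelled-like exception that replaces it")
    except KeyError as exc:
        assert isinstance(exc.__context__, ValueError)
        # E2 is still linked here, but __context__ gives only a single linear
        # chain per exception -- there is no standard place today to record a
        # *tree* of pre-empted exceptions.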

--Chris

  This is conceptually similar to the way an exception in an 'except' block
> used to cause exceptions to be lost, so we added __context__ to avoid that.
> And just like for __context__, it would be nice if we could attach some
> info to E1 recording that E2 had happened and then got preempted. But I
> don't see how we can reuse __context__ itself for this, because it's a
> somewhat different relationship: __context__ means that an exception
> happened in the handler for another exception, while in this case you might
> have multiple preempted exceptions, and they're associated with particular
> points in the stack trace where the preemption occurred.
>
>   This is a complex issue and maybe we should call it out-of-scope for the
> first version of ExceptionGroups. But I mention it because it's a second
> place where adding some extra annotations to the traceback info would be
> useful, and maybe we can keep it simple by adding some minimal hooks in the
> core traceback machinery and let libraries like trio/asyncio handle the
> complicated parts?
>


>


[Python-Dev] Re: New sys.module_names attribute in Python 3.10: list of all stdlib modules

2021-01-26 Thread Chris Jerdonek
On Mon, Jan 25, 2021 at 10:23 PM Random832  wrote:

> On Mon, Jan 25, 2021, at 18:44, Chris Jerdonek wrote:
> > But to issue a warning when a standard module is being overridden like
> > I was suggesting, wouldn’t you also need to know whether the name of
> > the module being imported is a standard name, which is what
> > sys.module_names provides?
>
> I don't think the warning would be only useful for stdlib modules... has
> any thought been given to warning when a module being imported from the
> current directory / script directory is the same as an installed package?
>

Related to this, I wonder if another application of sys.stdlib_module_names
could be for installers: when installing a new distribution, a warning could be
issued if it attempts to install a package whose name is already in
sys.stdlib_module_names. I don't know off-hand what happens if one tries to do
that today.
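A sketch of what such an installer-side check could look like (the helper name
and its arguments are made up; sys.stdlib_module_names is the real 3.10
attribute):

    import sys
    import warnings

    def warn_if_shadows_stdlib(dist_name, provided_modules):
        """Warn when a distribution provides a top-level module whose name is
        also a standard-library name."""
        stdlib = getattr(sys, "stdlib_module_names", frozenset())
        for name in provided_modules:
            top_level = name.partition(".")[0]
            if top_level in stdlib:
                warnings.warn(
                    f"{dist_name} installs {top_level!r}, which shadows a "
                    f"standard-library module of the same name")

    warn_if_shadows_stdlib("example-dist", ["email", "example_pkg"])  # hypothetical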

--Chris



>


[Python-Dev] Re: New sys.module_names attribute in Python 3.10: list of all stdlib modules

2021-01-25 Thread Chris Jerdonek
On Mon, Jan 25, 2021 at 2:05 PM Victor Stinner  wrote:

> On Mon, Jan 25, 2021 at 6:37 PM Chris Jerdonek 
> wrote:
> > On Mon, Jan 25, 2021 at 7:51 AM Ivan Pozdeev via Python-Dev <
> python-dev@python.org> wrote:
> >>
> >> Just _names_? There's a recurring error case when a 3rd-party module
> overrides a standard one if it happens to have the same name. If you
> >> filter such a module out, you're shooting yourself in the foot...
> >
> > Would another use case be to support issuing a warning if a third-party
> module is imported whose name matches a standard one? A related use case
> would be to build on this and define a function that accepts an already
> imported module and return whether it is from the standard library. Unlike,
> the module_names attribute, this function would reflect the reality of the
> underlying module, and so not have false positives as with doing a name
> check alone.
>
> This is a different use case which requires a different solution.
> sys.module_names solve some specific use cases (that I listed in my
> first email).
>
> In Python 3.9, you can already check if a module __file__ is in the
> sysconfig.get_paths()['stdlib'] directory. You don't need to modify
> Python for that.


But to issue a warning when a standard module is being overridden like I
was suggesting, wouldn’t you also need to know whether the name of the
module being imported is a standard name, which is what sys.module_names
provides?
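Something like the following rough sketch is what I have in mind. The name
check uses the new attribute (which ended up being called
sys.stdlib_module_names), and the file check is the sysconfig approach you
describe; details like lib-dynload, frozen modules, and Windows drive quirks
are hand-waved here:

    import os.path
    import sys
    import sysconfig

    def shadows_stdlib_name(module):
        """Rough check: the module's *name* is a standard name, but its
        __file__ lives outside the stdlib directory, so something else is
        being imported under a stdlib name."""
        if module.__name__ not in getattr(sys, "stdlib_module_names", ()):
            return False
        filename = getattr(module, "__file__", None)
        if filename is None:      # built-in or frozen: genuinely the stdlib's
            return False
        stdlib_dir = sysconfig.get_paths()["stdlib"]
        common = os.path.commonpath([stdlib_dir, os.path.abspath(filename)])
        return common != stdlib_dir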

—Chris




If you also would like to check if an *extension* module comes from
> the stdlib, you need to get the "lib-dynload" directory. I failed to
> find a programmatic way to get this directory, maybe new API would be
> needed for that.
>
> Victor
> --
> Night gathers, and now my watch begins. It shall not end until my death.
>


[Python-Dev] Re: New sys.module_names attribute in Python 3.10: list of all stdlib modules

2021-01-25 Thread Chris Jerdonek
On Mon, Jan 25, 2021 at 7:51 AM Ivan Pozdeev via Python-Dev <
python-dev@python.org> wrote:

> Just _names_? There's a recurring error case when a 3rd-party module
> overrides a standard one if it happens to have the same name. If you
> filter such a module out, you're shooting yourself in the foot...


Would another use case be to support issuing a warning if a third-party
module is imported whose name matches a standard one? A related use case
would be to build on this and define a function that accepts an already
imported module and returns whether it is from the standard library. Unlike
the module_names attribute, this function would reflect the reality of the
underlying module, and so would not have the false positives that a name check
alone can give.

—Chris



>
> On 25.01.2021 16:03, Victor Stinner wrote:
> > Hi,
> >
> > I just added a new sys.module_names attribute, list (technically a
> > frozenset) of all stdlib module names:
> > https://bugs.python.org/issue42955
> >
> > There are multiple use cases:
> >
> > * Group stdlib imports when reformatting a Python file,
> > * Exclude stdlib imports when computing dependencies.
> > * Exclude stdlib modules when listing extension modules on crash or
> > fatal error, only list 3rd party extension (already implemented in
> > master, see bpo-42923 ;-)).
> > * Exclude stdlib modules when tracing the execution of a program using
> > the trace module.
> > * Detect typo and suggest a fix: ImportError("No module named maths.
> > Did you mean 'math'?",) (test the nice friendly-traceback project!).
> >
> > Example:
> >
> > >>> 'asyncio' in sys.module_names
> > True
> > >>> 'numpy' in sys.module_names
> > False
> >
> > >>> len(sys.module_names)
> > 312
> > >>> type(sys.module_names)
> > <class 'frozenset'>
> >
> > >>> sorted(sys.module_names)[:10]
> > ['__future__', '_abc', '_aix_support', '_ast', '_asyncio', '_bisect',
> > '_blake2', '_bootsubprocess', '_bz2', '_codecs']
> > >>> sorted(sys.module_names)[-10:]
> > ['xml.dom', 'xml.etree', 'xml.parsers', 'xml.sax', 'xmlrpc', 'zipapp',
> > 'zipfile', 'zipimport', 'zlib', 'zoneinfo']
> >
> > The list is opinionated and defined by its documentation:
> >
> > A frozenset of strings containing the names of standard library
> > modules.
> >
> > It is the same on all platforms. Modules which are not available on
> > some platforms and modules disabled at Python build are also listed.
> > All module kinds are listed: pure Python, built-in, frozen and
> > extension modules. Test modules are excluded.
> >
> > For packages, only sub-packages are listed, not sub-modules. For
> > example, ``concurrent`` package and ``concurrent.futures``
> > sub-package are listed, but not ``concurrent.futures.base``
> > sub-module.
> >
> > See also the :attr:`sys.builtin_module_names` list.
> >
> > The design (especially, the fact of having the same list on all
> > platforms) comes from the use cases list above. For example, running
> > isort should produce the same output on any platform, and not depend
> > if the Python stdlib was splitted into multiple packages on Linux
> > (which is done by most popular Linux distributions).
> >
> > The list is generated by the Tools/scripts/generate_module_names.py
> script:
> >
> https://github.com/python/cpython/blob/master/Tools/scripts/generate_module_names.py
> >
> > When you add a new module, you must run "make regen-module-names,
> > otherwise a pre-commit check will fail on your PR ;-) The list of
> > Windows extensions is currently hardcoded in the script (contributions
> > are welcomed to discover them, since the list is short and evolves
> > rarely, I didn't feel the need to spend time that on that).
> >
> > Currently (Python 3.10.0a4+), there are 312 names in sys.module_names,
> > stored in Python/module_names.h:
> > https://github.com/python/cpython/blob/master/Python/module_names.h
> >
> > It was decided to include "helper" modules like "_aix_support" which
> > is used by sysconfig. But test modules like _testcapi are excluded to
> > make the list shorter (it's rare to run the CPython test suite outside
> > Python).
> >
> > There are 83 private modules, name starting with an underscore
> > (exclude _abc but also __future__):
> >
> > >>> len([name for name in sys.module_names if not name.startswith('_')])
> > 229
> >
> > This new attribute may help to define "what is the Python stdlib" ;-)
> >
> > Victor
>
> --
> Regards,
> Ivan
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/KCJDHKOKCN5343VVA3DC7RAGNUGWNKZY/
> Code of Conduct: http://python.org/psf/codeofconduct/
>

[Python-Dev] Re: Please explain how to migrate when a function is removed, thanks ;-)

2021-01-20 Thread Chris Jerdonek
I don't know if this is already covered in the discussion and in our
processes, but in addition to documenting the instructions in the release
in which things break, I think it would also be good to include such
instructions in any earlier release in which the thing is merely
deprecated. In those cases, the instructions would be the same, but they
would describe how to make the deprecation warning go away rather than how to
unbreak things. That way, people have the information to make their code
forward-compatible earlier, as opposed to waiting for the breaking release.
Personally, I like to address DeprecationWarnings when I first see them, as
opposed to waiting until later.
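(For what it's worth, the mechanics of seeing them early are simple enough to
put in any project's CI -- e.g. run the test suite under
-W error::DeprecationWarning, or scope it to your own imports:)

    import warnings

    with warnings.catch_warnings():
        warnings.simplefilter("error", DeprecationWarning)
        import myproject  # hypothetical module: fail loudly if importing it warns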

Is there / would it make sense to have a section analogous to "Porting to
Python X" that covers "Make All DeprecationWarnings Go Away in X"? If we
had such a section, the "Porting to" section could be constructed by
copying the relevant bits from that section in the previous release.

--Chris


On Wed, Jan 20, 2021 at 4:47 PM Kyle Stanley  wrote:

> Thanks for bringing attention to this, Victor, and to Ken Jin (GH:
> Fidget-Spinner) for the PR. I've just completed reviewing and merging the
> PR, so hopefully anyone affected will now have a more clear idea of how to
> migrate their asyncio code to 3.10. Having the porting method explicitly
> documented certainly helps to smooth the version transition process and
> reduce headaches. :-)
>
> Going forward, I'll try to make more of an active effort to ensure any
> potentially incompatible changes I'm involved with include a clear method
> of porting documented in their respective whatsnew. It can be easy to
> forget at times that a seemingly minor fix which is intuitively clear to
> the authors of a change may not be as clear to those not involved with it,
> regardless of how difficult the fix actually is.
>
> On Tue, Jan 19, 2021 at 12:03 PM Victor Stinner 
> wrote:
>
>> A PR was proposed to document the removal of the loop parameter:
>> https://github.com/python/cpython/pull/24256
>>
>> Victor
>>
>> On Tue, Jan 19, 2021 at 1:34 PM Victor Stinner 
>> wrote:
>> >
>> > Hi,
>> >
>> > We are working on upgrading Python from 3.9 to 3.10 in Fedora and we
>> > are facing many Python 3.10 incompatible changes. Most changes were
>> > planned with a deprecation period, but, as usual, projects are not
>> > tested with DeprecationWarning treated as errors, and so tons of
>> > packages now fail to build.
>> >
>> > I'm not here to talk about DeprecationWarning, but how we communicate
>> > on incompatible changes made on purpose. For example, What's New in
>> > Python 3.8 announces: "In asyncio, the explicit passing of a loop
>> > argument has been deprecated and will be removed in version 3.10". As
>> > expected, the parameter was removed in Python 3.10 (bpo-42392), but I
>> > cannot see anything in What's New in Python 3.10. The problem is that
>> > I don't know how to fix projects broken by this change. I'm not
>> > complaining about this specific change, but more generally.
>> >
>> > I strongly suggest to well document incompatible changes. The bare
>> > minimum is to mention them in "Porting to Python 3.10" section:
>> > https://docs.python.org/dev/whatsnew/3.10.html#porting-to-python-3-10
>> >
>> > A link to the bpo sometimes helps to understand how to port code. But
>> > I would really appreciate if authors of incompatible changes would
>> > explain how to add Python 3.10 support to existing projects, without
>> > losing support for older Python versions. Not just "this function is
>> > now removed, good luck!" :-)
>> >
>> > I didn't touch asyncio for at least 1 year, so I don't know what
>> > happens if I remove a loop argument. Does an application remain
>> > compatible with Python 3.6 without passing loop?
>> >
>> > I know that I made multiple incompatible changes myself and I'm sure
>> > that the documentation should probably be enhanced, but I also expect
>> > others to help on that! ;-)
>> >
>> > Please think about people who have to port 4000 Python projects to
>> Python 3.10!
>> >
>> > ---
>> >
>> > The best would be to ship a tool which adds Python 3.10 support to
>> > existing projects without losing support with Python 3.6. Maybe
>> > something like https://github.com/asottile/pyupgrade could be used for
>> > that? pyupgrade looks more specific to the Python syntax, than the
>> > usage of the stdlib.
>> >
>> > I wrote such tool to add Python 3.10 support to C extensions without
>> > losing support with Python 2.7. It relies on a header file
>> > (pythoncapi_compat.h) which provides new C API functions on old Python
>> > versions.
>> > https://github.com/pythoncapi/pythoncapi_compat
>> >
>> > For example, it replaces "obj->ob_refcnt = refcnt;" with
>> > "Py_SET_REFCNT(obj, refcnt);" and provides Py_SET_REFCNT() to Python
>> > 3.8 and older (function added to Python 3.9)
>> >
>> > Victor
>> > --
>> > Night gathers, and now my watch begins. It shall not end until my death.
>>
>>
>>

[Python-Dev] Re: PEP 640: Unused variable syntax.

2020-10-20 Thread Chris Jerdonek
On Mon, Oct 19, 2020 at 3:11 PM Thomas Wouters  wrote:

> PEP: 640
> Title: Unused variable syntax
> Author: Thomas Wouters 
>
...

> In Python it is somewhat common to need to do an assignment without
> actually
> needing the result. Conventionally, people use either ``"_"`` or a name
> such
> as ``"unused"`` (or with ``"unused"`` as a prefix) for this. It's most
> common in *unpacking assignments*::
>

Many times when I'm not using an assignment target, I still like to give it a
descriptive name. The reason is that it lets me see what value I'm not
using. It helps to document and confirm my understanding of the value being
unpacked. It also lets you toggle easily between using and not using a
value if you're working on the code.

To illustrate, I might do this--

scheme, _netloc, _path, params, query, fragment = urlparse(url)

instead of this--

scheme, _, _, params, query, fragment = urlparse(url)

So I'd prefer a scheme that allows including a name (either by prefixing or
some other method), or at least doesn't preclude such an extension in the
future.

--Chris


[Python-Dev] Re: [python-committers] Re: Performance benchmarks for 3.9

2020-10-14 Thread Chris Jerdonek
On Wed, Oct 14, 2020 at 8:03 AM Pablo Galindo Salgado
wrote:

> > Would it be possible rerun the tests with the current
> setup for say the last 1000 revisions or perhaps a subset of these
> (e.g. every 10th revision) to try to binary search for the revision which
> introduced the change ?
>
> Every run takes 1-2 h so doing 1000 would be certainly time-consuming :)
>

Would it be possible instead to run git-bisect for only a _particular_
benchmark? That may be all that's needed to track down a particular
regression. Also, with git-bisect it wouldn't be every 10th revision but
rather O(log(n)) revisions.

—Chris




That's why from now on I am trying to invest in daily builds for master,
> so we can answer that exact question if we detect regressions in the
> future.
>
>
> On Wed, 14 Oct 2020 at 15:04, M.-A. Lemburg  wrote:
>
>> On 14.10.2020 16:00, Pablo Galindo Salgado wrote:
>> >> Would it be possible to get the data for older runs back, so that
>> > it's easier to find the changes which caused the slowdown ?
>> >
>> > Unfortunately no. The reasons are that that data was misleading because
>> > different points were computed with a different version of
>> pyperformance and
>> > therefore with different packages (and therefore different code). So
>> the points
>> > could not be compared among themselves.
>> >
>> > Also, past data didn't include 3.9 commits because the data gathering
>> was not
>> > automated and it didn't run in a long time :(
>>
>> Make sense.
>>
>> Would it be possible rerun the tests with the current
>> setup for say the last 1000 revisions or perhaps a subset of these
>> (e.g. every 10th revision) to try to binary search for the revision which
>> introduced the change ?
>>
>> > On Wed, 14 Oct 2020 at 14:57, M.-A. Lemburg wrote:
>> >
>> > Hi Pablo,
>> >
>> > thanks for pointing this out.
>> >
>> > Would it be possible to get the data for older runs back, so that
>> > it's easier to find the changes which caused the slowdown ?
>> >
>> > Going to the timeline, it seems that the system only has data
>> > for Oct 14 (today):
>> >
>> >
>> https://speed.python.org/timeline/#/?exe=12=regex_dna=1=1000=off=on=on=none
>> >
>> > In addition to unpack_sequence, the regex_dna test has slowed
>> > down a lot compared to Py3.8.
>> >
>> >
>> https://github.com/python/pyperformance/blob/master/pyperformance/benchmarks/bm_unpack_sequence.py
>> >
>> https://github.com/python/pyperformance/blob/master/pyperformance/benchmarks/bm_regex_dna.py
>> >
>> > Thanks.
>> >
>> > On 14.10.2020 15:16, Pablo Galindo Salgado wrote:
>> > > Hi!
>> > >
>> > > I have updated the branch benchmarks in the pyperformance server
>> and now they
>> > > include 3.9. There are
>> > > some benchmarks that are faster but on the other hand some
>> benchmarks are
>> > > substantially slower, pointing
>> > > at a possible performance regression in 3.9 in some aspects. In
>> particular
>> > some
>> > > tests like "unpack sequence" are
>> > > almost 20% slower. As there are some other tests were 3.9 is
>> faster, is
>> > not fair
>> > > to conclude that 3.9 is slower, but
>> > > this is something we should look into in my opinion.
>> > >
>> > > You can check these benchmarks I am talking about by:
>> > >
>> > > * Go here: https://speed.python.org/comparison/
>> > > * In the left bar, select "lto-pgo latest in branch '3.9'" and
>> "lto-pgo latest
>> > > in branch '3.8'"
>> > > * To better read the plot, I would recommend to select a
>> "Normalization"
>> > to the
>> > > 3.8 branch (this is in the top part of the page)
>> > >and to check the "horizontal" checkbox.
>> > >
>> > > These benchmarks are very stable: I have executed them several
>> times over the
>> > > weekend yielding the same results and,
>> > > more importantly, they are being executed on a server specially
>> prepared to
>> > > running reproducible benchmarks: CPU affinity,
>> > > CPU isolation, CPU pinning for NUMA nodes, CPU frequency is
>> fixed, CPU
>> > governor
>> > > set to performance mode, IRQ affinity is
>> > > disabled for the benchmarking CPU nodes...etc so you can trust
>> these numbers.
>> > >
>> > > I kindly suggest for everyone interested in trying to improve the
>> 3.9 (and
>> > > master) performance, to review these benchmarks
>> > > and try to identify the problems and fix them or to find what
>> changes
>> > introduced
>> > > the regressions in the first place. All benchmarks
>> > > are the ones being executed by the pyperformance suite
>> > > (https://github.com/python/pyperformance) so you can execute them
>> > > locally if you need to.
>> > >
>> > > ---
>> > >
>> > > On a related note, I am also working on the speed.python.org

[Python-Dev] Re: PEP 626: Precise line numbers for debugging and other tools.

2020-07-25 Thread Chris Jerdonek
On Sat, Jul 25, 2020 at 12:17 PM Jim J. Jewett  wrote:

> I certainly understand saying "this change isn't important enough to
> justify a change."
>
> But it sounds as though you are saying the benefit is irrelevant;


Jim, if you include what you’re replying to in your own message (like I’m
doing here), it will be easier for people to tell who / what you’re
replying to. I wasn’t able to tell what your last few messages were in
reply to.

—Chris


it is just inherently too expensive to ask programs that are already
> dealing with internals and trying to optimize performance to make a
> mechanical change from:
> code.magic_attrname
> to:
> magicdict[code]
>
> What have I missed?
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/TDCJFNHIAFEH5NIBEPP2GFP4C2BYR2DP/
> Code of Conduct: http://python.org/psf/codeofconduct/
>


[Python-Dev] Re: Intended invariants for signals in CPython

2020-06-25 Thread Chris Jerdonek
On Wed, Jun 24, 2020 at 5:15 PM Yonatan Zunger via Python-Dev <
python-dev@python.org> wrote:

> That said, the meta-question still applies: Are there things which are
> generally intended *not* to be interruptible by signals, and if so, is
> there some consistent way of indicating this?
>

Yonatan, Nathaniel Smith wrote an interesting post a few years ago that
includes some background about signal handling:
https://vorpus.org/blog/control-c-handling-in-python-and-trio/
Have you seen that?

--Chris


>


[Python-Dev] Re: The Anti-PEP

2020-06-25 Thread Chris Jerdonek
On Thu, Jun 25, 2020 at 11:52 AM Brett Cannon  wrote:

> On Thu, Jun 25, 2020 at 5:45 AM Antoine Pitrou 
> wrote:
>
>> I don't think this really works.  A PEP has to present a consistent
>> view of the world, and works as a cohesive whole.  Arguments against a
>> PEP don't form a PEP in themselves, they don't need to be consistent
>> with each other; they merely oppose a particular set of propositions.
>> So an "anti-PEP" would not be anything like a PEP; it would just be a
>> list of assorted arguments.
>>
>
> I agree, and that's what the Rejected Ideas section is supposed to capture.
>

When I read the description of Rejected Ideas in PEP 1, it seems to be meant
more for rejected ideas that are still in line with the overall PEP and its
motivation. It seems like what Mark is suggesting would fit
better in a separate "Arguments Against" section. I guess it would be
possible to include "reject the PEP" as a rejected idea or each individual
argument against as its own rejected "idea," but it would seem a little
weird to me to organize it that way.

I do see that PEP 1 says about the Rationale section:

The rationale should provide evidence of consensus within the community and
> discuss important objections or concerns raised during discussion.


But what Mark is suggesting might be too large for the Rationale section.

--Chris


> If a PEP is not keeping a record of what is discussed, including opposing
> views which the PEP is choosing not to accept, then that's a deficiency in
> the PEP and should be fixed. And if people feel their opposing view was not
> recorded properly, then that should be brought up.
>
>


[Python-Dev] Re: PEP 622: Structural Pattern Matching

2020-06-24 Thread Chris Jerdonek
On Tue, Jun 23, 2020 at 9:12 AM Guido van Rossum  wrote:

> I'm happy to present a new PEP for the python-dev community to review.
> This is joint work with Brandt Bucher, Tobias Kohn, Ivan Levkivskyi and
> Talin.
>
...
>
I'll mostly let the PEP speak for itself:
> - Published: https://www.python.org/dev/peps/pep-0622/ (*)
> - Source: https://github.com/python/peps/blob/master/pep-0622.rst
>

I have an exploratory question. In this section:

The alternatives may bind variables, as long as each alternative binds the
> same set of variables (excluding _). For example:
> match something:
> ...
> case Foo(arg=x) | Bar(arg=x):  # Valid, both arms bind 'x'
> ...
> ...


Tweaking the above example slightly, would there be a way to modify the
following so that, if the second alternative matched, then 'x' would have
the value, say, None assigned to it?

match something:
> ...
> case Foo(arg=x) | Bar() (syntax assigning, say, None to x?)
> ...
> ...


That would let Bar be handled by the Foo case even if Bar doesn't take an
argument. I'm not sure if this would ever be needed, but it's something I
was wondering. I didn't see this covered but could have missed it.
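(For comparison, here is how it seems this would have to be written without
such a syntax -- a separate arm that supplies the default by hand. Foo, Bar,
and handle are just stand-ins, and this uses the match syntax as proposed:)

    class Foo:
        def __init__(self, arg):
            self.arg = arg

    class Bar:
        pass

    def handle(x):
        print("handling", x)

    def process(something):
        match something:
            case Foo(arg=x) | Bar(arg=x):   # both alternatives bind 'x'
                handle(x)
            case Bar():                     # Bar without 'arg': bind the default by hand
                handle(None)

    process(Foo(42))   # handling 42
    process(Bar())     # handling None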

--Chris




>


[Python-Dev] Re: Cycles in the __context__ chain

2020-06-15 Thread Chris Jerdonek
On Sun, Jun 14, 2020 at 9:19 AM Serhiy Storchaka 
wrote:

> It is possible to create a loop by setting the __context__ attribute of
> the raised exception, either explicitly, or implicitly, using "raise ...
> from ...".


I think we should separate the questions of what to do when (1) setting the
context explicitly (e.g. setting the __context__ attribute) and (2) setting
it implicitly (using e.g. the raise syntax).

When setting it explicitly, I think it's acceptable to lose the previous
context (because it's being done explicitly). But when done implicitly,
there's an argument that we should make an effort to preserve the previous
context somewhere, so information isn't lost.

I also think we should be open to the option of allowing cycles to exist,
and not artificially breaking them. There are a few reasons for this.
First, cycles in the chain can already exist in the current code. I believe
Python's traceback-formatting code already has logic to look for cycles, so
it won't cause hangs there. The reason for the most recent hang was
different: _PyErr_SetObject() has separate logic to look for cycles, and
that cycle-detection logic has a bug that can cause hangs. Finally, if we
preserve cycles, we can improve Python's traceback-formatting code to
display that the exception chain has a cycle. This is ideal IMO because the
user will learn of the issue and we won't destroy information.
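(A tiny illustration of that last point: an explicit cycle, created by hand
since that is the simplest way to get one. As noted above, the formatter
already tracks exceptions it has seen, so this prints the chain once instead
of looping forever:)

    import traceback

    a = ValueError("A")
    b = KeyError("B")
    a.__context__ = b
    b.__context__ = a        # cycle: A -> B -> A

    print("".join(traceback.format_exception(ValueError, a, None)))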

I think it's important that we aim for a solution where the user is able to
learn if they have an issue with their exception-handling code (e.g. code
that creates cycles). This means we shouldn't silently try to alter or
"fix" the chain. Rather, we should display that there is a cycle (e.g. in
the traceback-formatting code). A couple more options would be to (1) issue
a warning if we are doing any artificial cycle breaking, and (2) insert a
new placeholder exception where the cycle starts, with the exception string
containing information about the cycle that was broken, so the user learns
about it and can fix it.

--Chris



On Mon, Jun 15, 2020 at 12:42 AM Dennis Sweeney 
wrote:

> Worth noting is that there is an existing loop-breaking mechanism,
> but only for the newest exception being raised. In particular, option (4)
> is actually the current behavior if the the most recent exception
> participates in a cycle:
>
> Python 3.9.0b1
> >>> A, B, C, D, E = map(Exception, "ABCDE")
> >>> A.__context__ = B
> >>> B.__context__ = C
> >>> C.__context__ = D
> >>> D.__context__ = E
> >>> try:
> ... raise A
> ... except Exception:
> ... raise C
> ...
> Exception: B
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "", line 2, in 
> Exception: A
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "", line 4, in 
> Exception: C
>
> This cycle-breaking is not due to any magic in the
> ``PyException_SetContext()``,
> which is currently a basic one-liner, but instead comes from
> ``_PyErr_SetObject`` in errors.c, which has something along the lines of:
>
> def _PyErr_SetObject(new_exc):
>     top = existing_topmost_exc()
>
>     if top is None:
>         # no context
>         set_top_exception(new_exc)
>         return
>
>     # convert new_exc class to instance if applicable.
>     ...
>
>     if top is new_exc:
>         # already on top
>         return
>
>     e = top
>     while True:
>         context = e.__context__
>         if context is None:
>             # no loop
>             break
>         if context is new_exc:
>             # unlink the existing exception
>             e.__context__ = None
>             break
>         e = context
>
>     new_exc.__context__ = top
>     set_top_exception(new_exc)
>
> The only trouble comes about when there is a "rho-shaped" linked list,
> in which we have a cycle not involving the new exception being raised.
> For instance,
>
> Raising A on top of (B -> C -> D -> C -> D -> C -> ...)
> results in an infinite loop.
>
> Two possible fixes would be to either (I) use a magical ``__context__``
> setter to ensure that there is never a rho-shaped sequence, or (II)
> allow arbitrary ``__context__`` graphs and then correctly handle
> rho-shaped sequences in ``_PyErr_SetObject`` (i.e. at raise-time).
>
> Fix type (I) could result in surprising things like:
>
> >>> A = Exception()
> >>> A.__context__ = A
> >>> A.__context__ is None
> True
>
> so I propose fix type (II). This PR is such a fix:
> https://github.com/python/cpython/pull/20539
>
> It basically extends the existing behavior (4) to the rho-shaped case.
>
> It also prevents the cycle-detecting logic from sitting in two places
> (both _PyErr_SetObject and PyException_SetContext) and does not make any
> 

[Python-Dev] Re: Can we stop adding to the C API, please?

2020-06-04 Thread Chris Jerdonek
On Wed, Jun 3, 2020 at 6:09 AM Mark Shannon  wrote:

> Also, can we remove all the new API functions added in 3.9 before the
> release and it is too late?
>

I think it would be helpful to open an issue that lists the 40 new
functions, so people could more easily review them before 3.9 is released.
Only a few were discussed in this thread. Also, if a new function is
"private" (has a "_" prefix), is there still a concern?

--Chris


[Python-Dev] Re: PEP 618: Add Optional Length-Checking To zip

2020-05-15 Thread Chris Jerdonek
Here’s another advantage of having a separate function that I didn’t see
acknowledged in the PEP:

If strict behavior is a better default for a zip-like function than
non-strict, then choosing a new function would let you realize that better
default. In contrast, by adding a new argument to the existing function,
the function you use will forever have the less preferred default.

In terms of which is the better default, I would say strict is better because
errors can't pass silently: if errors do occur, you can always change the flag,
but you would be doing that explicitly.

—Chris

On Fri, May 15, 2020 at 6:57 AM Paul Ganssle  wrote:

> I'm on the fence about using a separate function vs. a keyword argument
> (I think there is merit to both), but one thing to note about the
> separate function suggestion is that it makes it easier to write
> backwards compatible code that doesn't rely on version checking. With
> `itertools.zip_strict`, you can do some graceful degradation like so:
>
> try:
>     from itertools import zip_strict
> except ImportError:
>     zip_strict = zip
>
> Or provide fallback easily:
>
> try:
>     from itertools import zip_strict
> except ImportError:
>     def zip_strict(*args):
>         yield from zip(*args)
>         for arg in args:
>             if next(arg, None):
>                 raise ValueError("At least one input terminated early.")
>
> There's an alternate pattern for the kwarg-only approach, which is to
> just try it and see:
>
> try:
>     zip(strict=True)
>     HAS_ZIP_STRICT = True
> except TypeError:
>     HAS_ZIP_STRICT = False
>
> But I would say it's considerably less idiomatic.
>
> Just food for thought here. In the long run this doesn't matter, because
> eventually 3.9 will fall out of everyone's support matrices and these
> workarounds will become obsolete anyway.
>
> Best,
> Paul
>
> On 5/15/20 5:20 AM, Stephen J. Turnbull wrote:
> > Brandt Bucher writes:
> >
> >  > Still agreed. But I think they would be *better* served by the
> >  > proposed keyword argument.
> >  >
> >  > This whole sub-thread of discussion has left me very confused. Was
> >  > anything unclear in the PEP's phrasing here?
> >
> > I thought it was quite clear.  Those of us who disagree simply
> > disagree.  We prefer to provide it as a separate function.  Just move
> > on, please; you're not going to convince us, and we're not going to
> > convince you.  Leave it to the PEP Delegate or Steering Council.
> >
> >  > I wouldn't confuse "can" and "should" here.
> >
> > You do exactly that in arguing for your preferred design, though.
> >
> > We could implement the strictness test with an argument to the zip
> > builtin function, but I don't think we should.  I still can't think of
> > a concrete use case for it from my own experience.  Of course I
> > believe concrete use cases exist, but that introspection makes me
> > suspicious of the claim that this should be a builtin feature, with
> > what is to my taste an ugly API.
> >
> > Again, I don't expect to convince you, and you shouldn't expect to
> > convince me, at least not without more concrete and persuasive use
> > cases than I've seen so far.
> >
> > Steve
> > ___
> > Python-Dev mailing list -- python-dev@python.org
> > To unsubscribe send an email to python-dev-le...@python.org
> > https://mail.python.org/mailman3/lists/python-dev.python.org/
> > Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/6NQZIDVMGPXA5QJWTKWJFZUUUAYQAOH4/
> > Code of Conduct: http://python.org/psf/codeofconduct/
>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/A4UGQRMKUZDBHEE4AFJ4PL6AUUTAPF7N/
> Code of Conduct: http://python.org/psf/codeofconduct/
>


[Python-Dev] Re: Issues with import_fresh_module

2020-05-07 Thread Chris Jerdonek
To expand on my earlier comment about changing the module under test to
make your testing easier, asyncio is one library that has lots of tests of
different combinations of its C and Python implementations being used
together.

As far as I know, it doesn't use import_fresh_module or similar hackery.
Instead it exposes a private way of getting at the parallel Python
implementation:
https://github.com/python/cpython/blob/b7a78ca74ab539943ab11b5c4c9cfab7f5b7ff5a/Lib/asyncio/futures.py#L271-L272
This is the kind of thing I was suggesting. (It might require more setup
than this in your case.)
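
A minimal sketch of that pattern for a hypothetical module (the pure Python
implementation is defined first, the C accelerator is swapped in afterwards,
and the Python version stays reachable for tests):

    # mymodule.py (hypothetical names)
    class Widget:
        """Pure Python implementation."""

    # Keep the Python version importable for tests, then prefer the C
    # accelerator when it is available.
    _PyWidget = Widget
    try:
        from _mymodule_speedups import Widget as _CWidget
    except ImportError:
        pass
    else:
        Widget = _CWidget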

--Chris


On Thu, May 7, 2020 at 11:33 AM Brett Cannon  wrote:

> Maybe an initialization/import side-effect bug which is triggered if the
> module is imported twice?
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/25XFYLISP53DRZX2UI7ADYC3JC2V2NVG/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/LTMDTYKYL7IVTPISSFVUSX7355GI4QOX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Issues with import_fresh_module

2020-05-06 Thread Chris Jerdonek
Have you also considered changes to the modules under test that might make
it easier for both implementations to exist and be tested side-by-side (so
with fewer hacks on the testing side)?

—Chris

On Wed, May 6, 2020 at 2:33 PM Paul Ganssle  wrote:

> Thanks for the suggestion.
>
> I think I tried something similar for tests that involved an environment
> variable and found that it doesn't play nicely with coverage.py *at all*.
>
> Also, I will have to solve this problem at some point anyway because the
> property tests for the module (not currently included in the PR) include
> tests that have the C and pure Python version running side-by-side, which
> would be hard to achieve with subinterpreters.
>
> On 5/6/20 4:51 PM, Nathaniel Smith wrote:
>
> On Wed, May 6, 2020 at 7:52 AM Paul Ganssle  
>  wrote:
>
> As part of PEP 399, an idiom for testing both C and pure Python versions of a 
> library is suggested, making use of import_fresh_module.
>
> Unfortunately, I'm finding that this is not amazingly robust. We have this 
> issue: https://bugs.python.org/issue40058, where the tester for datetime 
> needs to do some funky manipulations to the state of sys.modules for reasons 
> that are now somewhat unclear, and still sys.modules is apparently left in a 
> bad state.
>
> When implementing PEP 615, I ran into similar issues and found it very 
> difficult to get two independent instances of the same module – one with the 
> C extension blocked and one with it intact. I ended up manually importing the 
> C and Python extensions and grafting them onto two "fresh" imports with 
> nothing blocked.
>
> When I've had to deal with similar issues in the past, I've given up
> on messing with sys.modules and just had one test spawn a subprocess
> to do the import+run the actual tests. It's a big hammer, but the nice
> thing about big hammers is that there's no subtle issues, either they
> smash the thing or they don't.
>
> But, I don't know how awkward that would be to fit into Python's
> unittest system, if you have lots of tests you need to run this way.
>
> -n
>
>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/H4TWK574BEUDVY4MGTSFJ5OKD4OVOWZZ/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/SLYLON2KLYCRYRWKY773MSZASJ7LC5JP/
Code of Conduct: http://python.org/psf/codeofconduct/
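
One way to set up the subprocess approach Nathaniel mentions above, sketched
with hypothetical module and environment-variable names:

    import os
    import subprocess
    import sys
    import unittest

    class PurePythonImplementationTest(unittest.TestCase):
        def test_without_c_extension(self):
            # Run the real tests in a child interpreter where the C
            # accelerator is blocked (via an env var the module would
            # have to check at import time).
            env = dict(os.environ, MYMODULE_FORCE_PURE_PYTHON="1")
            proc = subprocess.run(
                [sys.executable, "-m", "unittest", "mypackage.tests.test_core"],
                env=env, capture_output=True, text=True,
            )
            self.assertEqual(proc.returncode, 0, proc.stderr)

As the quoted messages note, this is a big hammer, and blocking the
accelerator through the environment can complicate coverage measurement.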


Re: [Python-Dev] Lost sight

2019-01-19 Thread Chris Jerdonek
Hi Serhiy,

That's terrible and sounds frightening. Were you able to get medical
care to get a diagnosis and treatment if needed?

We all hope your condition improves.

--Chris

On Sat, Jan 19, 2019 at 2:14 AM Serhiy Storchaka  wrote:
>
> I have virtually completely lost the sight of my right eye (and the loss
> is quickly progresses) and the sight of my left eye is weak. That is why
> my activity as a core developer was decreased significantly at recent
> time. My apologies to those who are waiting for my review. I will do it
> slowly.
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Interested in serving on Steering Council

2019-01-09 Thread Chris Jerdonek
Just to close (or continue) the loop on this thread, I just nominated
David for the steering council:
https://discuss.python.org/t/steering-council-nomination-david-mertz/647

Thanks for stepping forward with your interest and willingness to serve, David!

--Chris

On Fri, Jan 4, 2019 at 12:37 PM Antoine Pitrou  wrote:
>
>
> Hi David,
>
> On Fri, 4 Jan 2019 15:24:20 -0500
> David Mertz  wrote:
> >
> > I've been part of the Python community since 1998, but really active in it
> > since about 2001.  During the early 2000s, I wrote a large number of widely
> > read articles promoting Python, often delving into explaining semi-obscure
> > features and language design issues.  Most of these were with in my column
> > _Charming Python_.  I believe that several changes in Python itself—such as
> > making coroutines easier to use and the design of metaclasses and class
> > decorators—were significantly influenced by things I wrote on the topics.
> > [snip]
>
> Those are useful things to know, thank you.
>
> > If the core developers feel that the overwhelming qualification for the
> > Steering Committee is familiarity with the C code base of CPython, then
> > indeed I am not the best candidate for that.
>
> Obviously not the overwhelming qualification (though at least _some_ of
> the committee members would have to be familiar with the C code base, I
> think).
>
> > If language design issues are
> > more important—and especially if thinking about Python's place among users
> > and industry are important, then I think I'm a very good candidate for the
> > role.
>
> That, but I think also familiarity with the development and
> contribution process, will definitely play a role.  In other words, if
> some external candidate gets elected I would hope they take the time to
> become familiar with how things work in that regard, and try to
> contribute themselves (not necessarily to make important contributions
> to the codebase but to understand the daily routine).
>
> Regards
>
> Antoine.
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] short-circuiting runtime errors/exceptions in python debugger.

2018-10-29 Thread Chris Jerdonek
I have another idea on this. What about starting the program, and then a few
minutes later, starting the same program a second time? If the first program
errors, you could examine the second one, which is a little bit behind.
Before starting the second one, perhaps you could even make a copy of it and
pause the copy for a few minutes before restarting, in case the second one
also errors out, and so on. This would be useful only in the case of
deterministic errors.

--Chris


On Mon, Oct 29, 2018 at 11:59 AM Chris Jerdonek 
wrote:

> A simpler feature that could possibly help him (assuming there isn't any
> external state to deal with) would be the ability to save everything at a
> certain point in time, and then resume it later. He could rig things up to
> save the state e.g. after every hour: 1 hour, 2 hours, etc. Then if an
> error occurs after 2.5 hours, he could at least start resuming after 2
> hours. This could be viewed as a cheap form of a reverse debugger, because
> a reverse debugger has to save the state at every point in time, not just
> at a few select points.
>
> --Chris
>
> On Mon, Oct 29, 2018 at 9:51 AM Armin Rigo  wrote:
>
>> Hi,
>>
>> On Sat, 27 Oct 2018 at 01:50, Steven D'Aprano 
>> wrote:
>> > [...]
>> > > So I was wondering if it would be possible to keep that context around
>> > > if you are in the debugger and rewind the execution point to before
>> > > the statement was triggered.
>> >
>> > I think what you are looking for is a reverse debugger[1] also known as
>> > a time-travel debugger.
>>
>> I think it's a bit different.  A reverse debugger is here to debug
>> complex conditions in a (static) program.  What Ed is looking for is a
>> way to catch easy failures, fix the obviously faulty line, and
>> continue running the program.
>>
>> Of course I can't help but mention to Ed that this is precisely the
>> kind of easy failures that are found by *testing* your code,
>> particularly if that's code that only runs after hours of other code
>> has executed.  *Never* trust yourself to write correct code if you
>> don't know that it is correct after waiting for hours.
>>
>> But assuming that you really, really are allergic to tests, then what
>> you're looking for reminds me of long-ago Python experiments with
>> resumable exceptions and patching code at runtime.  Both topics are
>> abandoned now.  Resumable exceptions was a cool hack of the
>> interpreter that nobody really found a use for (AFAIR); patching code
>> at runtime comes with a pile of messes---it only works in the simple
>> cases, but there is no general solution for that.
>>
>>
>> A bientôt,
>>
>> Armin.
>> ___
>> Python-Dev mailing list
>> Python-Dev@python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
>>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] short-circuiting runtime errors/exceptions in python debugger.

2018-10-29 Thread Chris Jerdonek
A simpler feature that could possibly help him (assuming there isn't any
external state to deal with) would be the ability to save everything at a
certain point in time, and then resume it later. He could rig things up to
save the state e.g. after every hour: 1 hour, 2 hours, etc. Then if an
error occurs after 2.5 hours, he could at least start resuming after 2
hours. This could be viewed as a cheap form of a reverse debugger, because
a reverse debugger has to save the state at every point in time, not just
at a few select points.

--Chris

On Mon, Oct 29, 2018 at 9:51 AM Armin Rigo  wrote:

> Hi,
>
> On Sat, 27 Oct 2018 at 01:50, Steven D'Aprano  wrote:
> > [...]
> > > So I was wondering if it would be possible to keep that context around
> > > if you are in the debugger and rewind the execution point to before
> > > the statement was triggered.
> >
> > I think what you are looking for is a reverse debugger[1] also known as
> > a time-travel debugger.
>
> I think it's a bit different.  A reverse debugger is here to debug
> complex conditions in a (static) program.  What Ed is looking for is a
> way to catch easy failures, fix the obviously faulty line, and
> continue running the program.
>
> Of course I can't help but mention to Ed that this is precisely the
> kind of easy failures that are found by *testing* your code,
> particularly if that's code that only runs after hours of other code
> has executed.  *Never* trust yourself to write correct code if you
> don't know that it is correct after waiting for hours.
>
> But assuming that you really, really are allergic to tests, then what
> you're looking for reminds me of long-ago Python experiments with
> resumable exceptions and patching code at runtime.  Both topics are
> abandoned now.  Resumable exceptions was a cool hack of the
> interpreter that nobody really found a use for (AFAIR); patching code
> at runtime comes with a pile of messes---it only works in the simple
> cases, but there is no general solution for that.
>
>
> A bientôt,
>
> Armin.
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] bpo-34837: Multiprocessing.Pool API Extension - Pass Data to Workers w/o Globals

2018-10-18 Thread Chris Jerdonek
On Thu, Oct 18, 2018 at 9:11 AM Michael Selik  wrote:
> On Thu, Oct 18, 2018 at 8:35 AM Sean Harrington  wrote:
>> Further, let me pivot on my idea of __qualname__...we can use the `id` of 
>> `func` as the cache key to address your concern, and store this `id` on the 
>> `task` tuple (i.e. an integer in-lieu of the `func` previously stored there).
>
>
> Possible. Does the Pool keep a reference to the passed function in the main 
> process? If not, couldn't the garbage collector free that memory location and 
> a new function could replace it? Then it could have the same qualname and id 
> in CPython. Edge case, for sure. Worse, it'd be hard to reproduce as it'd be 
> dependent on the vagaries of memory allocation.

I'm not following this thread closely, but I just wanted to point out
that __qualname__ won't necessarily be an attribute of the object if
the API accepts any callable. (I happen to be following an issue on
the tracker where this came up.)
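
For example, a perfectly good callable can lack the attribute entirely
(a toy illustration):

    class Doubler:
        def __call__(self, x):
            return 2 * x

    d = Doubler()
    print(callable(d))                 # True
    print(hasattr(d, "__qualname__"))  # False: instances don't get one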

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Arbitrary non-identifier string keys when using **kwargs

2018-10-10 Thread Chris Jerdonek
On Tue, Oct 9, 2018 at 8:55 PM Guido van Rossum  wrote:
> On Tue, Oct 9, 2018 at 7:49 PM Chris Jerdonek  
> wrote:
>> On Tue, Oct 9, 2018 at 7:13 PM Benjamin Peterson  wrote:
>> > Can anyone think of a situation where it would be advantageous for an 
>> > implementation to reject non-identifier string kwargs? I can't.
>>
>> One possibility is that it could foreclose certain security bugs from
>> happening. For example, if someone has an API that accepts **kwargs,
>> they might have the mistaken assumption that the keys are identifiers
>> without special characters like ";" etc, and so they could make the
>> mistake of thinking they don't need to escape / sanitize them.
>
>
> Hm, that's not an entirely unreasonable concern. How would an attacker get 
> such keys *into* the dict?

I was just thinking json. It could be a config-file type situation, or
a web API that accepts json.

For example, there are JSON-RPC implementations in Python:
https://pypi.org/project/json-rpc/
that translate json dicts directly into **kwargs:
https://github.com/pavlov99/json-rpc/blob/f1b4e5e96661efd4026cb6143dc3acd75c6c4682/jsonrpc/manager.py#L112

On the server side, the application could be doing something like
assuming that the kwargs are e.g. column names paired with values to
construct a string in SQL or in some other language or format.
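
A small illustration of the concern (hypothetical handler; CPython currently
lets non-identifier string keys through **kwargs):

    def update_row(**kwargs):
        # Naive: assumes every key is a well-behaved column name.
        clause = ", ".join("{} = ?".format(column) for column in kwargs)
        return "UPDATE accounts SET " + clause

    payload = {"name; DROP TABLE accounts; --": "x"}  # e.g. decoded from JSON
    print(update_row(**payload))  # the hostile "column name" reaches the SQL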

--Chris



> One possible scenario would be something that parses a traditional web query 
> string into a dict, passes it down through **kwds, and then turns it back 
> into another query string without proper quoting. But the most common (and 
> easiest) way to turn a dict into a query string is calling urlencode(), which 
> quotes unsafe characters.
>
> I think we needn't rush this (and when in doubt, status quo wins, esp. when 
> there's no BDFL :-).
>
> --
> --Guido van Rossum (python.org/~guido)
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Arbitrary non-identifier string keys when using **kwargs

2018-10-09 Thread Chris Jerdonek
On Tue, Oct 9, 2018 at 7:13 PM Benjamin Peterson  wrote:
> On Tue, Oct 9, 2018, at 17:14, Barry Warsaw wrote:
> > On Oct 9, 2018, at 16:21, Steven D'Aprano  wrote:
> > >
> > > On Tue, Oct 09, 2018 at 10:26:50AM -0700, Guido van Rossum wrote:
> > >> My feeling is that limiting it to strings is fine, but checking those
> > >> strings for resembling identifiers is pointless and wasteful.
> > >
> > > Sure. The question is, do we have to support uses where people
> > > intentionally smuggle non-identifier strings as keys via **kwargs?
> >
> > I would not be in favor of that.  I think it doesn’t make sense to be
> > able to smuggle those in via **kwargs when it’s not supported by
> > Python’s grammar/syntax.
>
> Can anyone think of a situation where it would be advantageous for an 
> implementation to reject non-identifier string kwargs? I can't.

One possibility is that it could foreclose certain security bugs from
happening. For example, if someone has an API that accepts **kwargs,
they might have the mistaken assumption that the keys are identifiers
without special characters like ";" etc, and so they could make the
mistake of thinking they don't need to escape / sanitize them.

--Chris

>
> I agree with Guido—banning it would be too much trouble for no benefit.
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Communication channels

2018-10-01 Thread Chris Jerdonek
Another one is GitHub (and the bug tracker, for that matter). For
example, I believe here is where the discussion took place that led to
the initial draft of PEP 582 re: recognizing a local __packages__
directory:
https://github.com/kushaldas/peps/pull/1

The PEP was posted here:
https://github.com/python/peps/pull/776
https://www.python.org/dev/peps/pep-0582/

To my knowledge it hasn't been discussed on python-dev yet.

Also, if you are trying to be complete, another communication channel
is in-person events and conferences, etc.

--Chris


On Mon, Oct 1, 2018 at 4:21 AM Victor Stinner  wrote:
>
> Hi,
>
> Last months, new communication channels appear. This is just a
> reminder that they exist:
>
> * Zulip: https://python.zulipchat.com/ (exist since 1 year?)
> * Discourse: http://discuss.python.org/ (I'm not sure if it's fully
> official yet ;-))
> * IRC: #python-dev on FreeNode, only for development *of* Python,
> mostly to discuss bugs and pull requests
> * Mailing lists: python-ideas, python-dev, etc.
>
> Some core developers are also active on Twitter. Some ideas were first
> discussed on Twitter. You may want to follow some of them. Incomplete
> list of core devs that I follow:
>
> * Barry Warsaw: https://twitter.com/pumpichank
> * Brett Cannon: https://twitter.com/brettsky
> * Guido van Rossum: https://twitter.com/gvanrossum
> * Łukasz Langa: https://twitter.com/llanga
> * Mariatta: https://twitter.com/Mariatta
> * Serhiy Storchaka: https://twitter.com/SerhiyStorchaka
> * Yury Selivanov: https://twitter.com/1st1
> * ... the full list is very long, and I'm too lazy to complete it :-)
> Maybe someone has already a list more complete than mine.
>
> I hope that I didn't miss an important communication channel :-)
>
> Victor
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Confused on git commit tree about Lib/datetime.py

2018-08-01 Thread Chris Jerdonek
FWIW, it looks like this is the first (earliest) merge commit that
caused the v2.7.4 line to contain
cf86e368ebd17e10f68306ebad314eea31daaa1e:

$ git show -q d26b658f1433a28b611906c078f47bc804a63dd1
commit d26b658f1433a28b611906c078f47bc804a63dd1
Merge: 2d639d5665 f8b9dfd9a1
Author: Georg Brandl 
Date:   Sat Aug 11 11:08:04 2012 +0200

Graft a89d654adaa2 from 3.2 branch. Fixes #15620.

--Chris


On Tue, Jul 31, 2018 at 11:53 PM, Chris Angelico  wrote:
> On Wed, Aug 1, 2018 at 1:16 PM, Jeffrey Zhang  wrote:
>> I found an interesting issue when checking the Lib/datetime.py implementation
>> in python3
>>
>> This patch is introduced by cf86e368ebd17e10f68306ebad314eea31daaa1e [0].
>> But if you
>> check the github page[0], or using git tag --contains, you will find v2.7.x
>> includes this commit too.
>>
>> $ git tag --contains cf86e368ebd17e10f68306ebad314eea31daaa1e
>> 3.2
>> v2.7.10
>> v2.7.10rc1
>> v2.7.11
>> v2.7.11rc1
>> ...
>>
>> whereas, if you check the v2.7.x code base, nothing is found
>>
>> $ git log v2.7.4 -- Lib/datetime.py
>> 
>>
>> I guess it may be a git tool bug, or the commit tree is messed up. Could
>> anyone explain this situation?
>
> I suppose you could say that the commit tree is "messed up", in a
> sense, but it's not truly messed up, just a little odd. It's a
> consequence of the way merges have been done in the CPython
> repository. Nothing is actually broken, except for the ability to
> track down a commit the way you're doing.
>
> ChrisA
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tests failing on Windows with TESTFN

2018-07-30 Thread Chris Jerdonek
On Mon, Jul 30, 2018 at 8:46 AM, Tim Golden  wrote:
> On 30/07/2018 16:41, Nick Coghlan wrote:
>>
>> On 29 July 2018 at 03:20, Tim Golden  wrote:
>>>
>>> I think that was my starting point: rather than develop increasingly
>>> involved and still somewhat brittle mechanisms, why not do what you'd
>>> naturally do with a new test and use tempfile? I was expecting someone to
>>> come forward to highlight some particular reason why the TESTFN approach
>>> is
>>> superior, but apart from a reference to the possibly cost of creating a
>>> new
>>> temporary file per test, no-one really has.
>>
>>
>> For higher level modules, "just use tempfile to create a new temporary
>> directory, then unlink it at the end" is typically going to be a good
>> answer (modulo the current cleanup issues that Jeremy is reporting,
>> but ideally those will be fixed rather than avoided, either by
>> improving the way the module is being used, or fixing any underlying
>> defects).

If there's a desire to use tempfile, another option is to have
tempfile create the temp files inside the temporary directory the test
harness creates specifically for testing -- using the "dir" argument
to many of tempfile's functions.

Here is where the process-specific temp directory is created for
testing (inside test.libregrtest.main):
https://github.com/python/cpython/blob/9045199c5aaeac9b52537581be127d999b5944ee/Lib/test/libregrtest/main.py#L511

This would also facilitate the clean-up of any leftover temp files.

Again, I think it would be best to use any tempfile functions behind
one or more test-support functions, so the choice of location, etc.
can be changed centrally without needing to modify code everywhere.
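
A sketch of such a helper (hypothetical name and location), leaning on
tempfile's "dir" argument so files land somewhere the harness already
cleans up:

    import os
    import tempfile

    def make_test_file(suffix="", dir=None):
        # Hypothetical test.support helper: create a unique file inside the
        # given directory (e.g. the per-process directory the harness made),
        # so leftovers disappear when that directory is removed.
        fd, path = tempfile.mkstemp(suffix=suffix, dir=dir or os.getcwd())
        os.close(fd)
        return path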

--Chris

>>
>> For lower level modules though, adding a test suite dependency on
>> tempfile introduces a non-trivial chance of adding an operational
>> dependency from a module's test suite back to the module itself. When
>> that happens, module regressions may show up as secondary failures in
>> tempfile that then need to be debugged, rather than as specific unit
>> test failures that point you towards exactly what you broke.
>>
>> Cheers,
>> Nick.
>>
>
> Thanks Nick; I hadn't thought about the possible interdependency issue.
>
> I think for the moment my approach will be to switch to support.unlink
> wherever possible to start with. Before introducing other (eg tempfile)
> changes, this should at least narrow the issues down. I've made a start on
> that (before inadvertently blowing away all the changes since my
> hours-previous commit!)
>
> If further changes are necessary then I'll probably look case-by-case to see
> whether a tempfile or some other solution would help.
>
> That said, that's potentially quite a lot of change -- at least in terms of
> files changed if not strictly of functionality. So I'm thinking of
> trickle-feeding the changes through as people will understandably baulk at a
> patchbomb (PR-bomb?) hitting the codebase all at once.
>
> TJG
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tests failing on Windows with TESTFN

2018-07-28 Thread Chris Jerdonek
On Sat, Jul 28, 2018 at 5:40 PM Brett Cannon  wrote:

>
> On Sat, Jul 28, 2018, 15:13 eryk sun,  wrote:
>
>> On Sat, Jul 28, 2018 at 9:17 PM, Jeremy Kloth 
>> wrote:
>> >
>> > *PLEASE*, don't use tempfile to create files/directories in tests.  It
>> > is unfriendly to (Windows) buildbots.  The current approach of
>> > directory-per-process ensures no test turds are left behind, whereas
>> > the tempfile solution slowly fills up my buildbot.  Windows doesn't
>> > natively clean out the temp directory.
>>
>> FYI, Windows 10 storage sense (under system->storage) can be
>> configured to delete temporary files on a schedule. Of course that
>> doesn't help with older systems.
>>
>
> If Windows doesn't clean up its temp directory on a regular basis then
> that doesn't suggest to me not to use tempfile, but instead that the use of
> tempfile still needs to clean up after itself. And if there is a lacking
> feature in tempfile then we should add it instead of a avoiding the module.
>

Regardless of whether the tempfile or TESTFN approach is used, I think it
would be best for a few reasons if the choice is abstracted behind a
uniquely named test function (e.g. make_test_file if not already used).

—Chris



> -Brett
>
> ___
>> Python-Dev mailing list
>> Python-Dev@python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>>
> Unsubscribe:
>> https://mail.python.org/mailman/options/python-dev/brett%40python.org
>>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] bpo-34239: Convert test_bz2 to use tempfile (#8485)

2018-07-28 Thread Chris Jerdonek
On Thu, Jul 26, 2018 at 2:05 PM, Tim Golden  wrote:
> https://github.com/python/cpython/commit/6a62e1d365934de82ff7c634981b3fbf218b4d5f
> commit: 6a62e1d365934de82ff7c634981b3fbf218b4d5f
> branch: master
> author: Tim Golden 
> committer: GitHub 
> date: 2018-07-26T22:05:00+01:00
> summary:
>
> bpo-34239: Convert test_bz2 to use tempfile (#8485)
>
> * bpo-34239: Convert test_bz2 to use tempfile
>
> test_bz2 currently uses the test.support.TESTFN functionality which creates a 
> temporary file local to the test directory named around the pid.
>
> This can give rise to race conditions where tests are competing with each 
> other to delete and recreate the file.

Per the other thread--
https://mail.python.org/pipermail/python-dev/2018-July/154762.html
this seems like a wrong statement of the problem as tests are properly
cleaning up after themselves. The leading hypothesis is that unrelated
Windows processes are delaying the deletion (e.g. virus scanners).

--Chris

>
> This change converts the tests to use tempfile.mkstemp which gives a 
> different file every time from the system's temp area
>
> files:
> M Lib/test/test_bz2.py
>
> diff --git a/Lib/test/test_bz2.py b/Lib/test/test_bz2.py
> index 003497f28b16..e62729a5a2f8 100644
> --- a/Lib/test/test_bz2.py
> +++ b/Lib/test/test_bz2.py
> @@ -6,6 +6,7 @@
>  import os
>  import pickle
>  import glob
> +import tempfile
>  import pathlib
>  import random
>  import shutil
> @@ -76,11 +77,14 @@ class BaseTest(unittest.TestCase):
>      BIG_DATA = bz2.compress(BIG_TEXT, compresslevel=1)
>
>      def setUp(self):
> -        self.filename = support.TESTFN
> +        fd, self.filename = tempfile.mkstemp()
> +        os.close(fd)
>
>      def tearDown(self):
> -        if os.path.isfile(self.filename):
> +        try:
>              os.unlink(self.filename)
> +        except FileNotFoundError:
> +            pass
>
>
>  class BZ2FileTest(BaseTest):
>
> ___
> Python-checkins mailing list
> python-check...@python.org
> https://mail.python.org/mailman/listinfo/python-checkins
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tests failing on Windows with TESTFN

2018-07-28 Thread Chris Jerdonek
On Sat, Jul 28, 2018 at 10:20 AM, Tim Golden  wrote:
>
> Here's the thing. The TESTFN approach creates a directory per process
> test_python_ and some variant of @test__tmp inside that directory.

I filed an issue some years back about this (still open):
https://bugs.python.org/issue15305
The pid is unnecessarily being used twice. Once the directory is
created for the process, there shouldn't be a need to disambiguate
within the directory further.

> But the same filename in the same directory is used for every test in that
> process. Now, leaving aside the particular mechanism by which Windows
> processes might be holding locks which prevent removal or re-addition, that
> already seems like an odd choice.
>
> I think that was my starting point: rather than develop increasingly
> involved and still somewhat brittle mechanisms, why not do what you'd
> naturally do with a new test and use tempfile? I was expecting someone to
> come forward to highlight some particular reason why the TESTFN approach is
> superior, but apart from a reference to the possibly cost of creating a new
> temporary file per test, no-one really has.

I think there is value in having the files used during test runs in a
known location. How about a middle ground where, once the
process-specific directory is created in a known location, unique
files are created within that directory (e.g. using a random suffix,
or a perhaps a suffix that is a function of the test name / test id)?

This would address both the issue you're experiencing, and, if the
directory is deleted at the end of the test run, it would address the
issue Jeremy mentions of growing leftover files. There could also be a
check at the end of each test run making sure that the directory is
empty (to check for tests that aren't cleaning up after themselves).

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tests failing on Windows with TESTFN

2018-07-27 Thread Chris Jerdonek
On Fri, Jul 27, 2018 at 6:41 AM, Giampaolo Rodola'  wrote:
>
> TESTFN being a global name, it appears not suited for parallel testing.

It was designed for parallel testing though:

# Disambiguate TESTFN for parallel testing, while letting it remain a valid
# module name.
TESTFN = "{}_{}_tmp".format(TESTFN, os.getpid())

https://github.com/python/cpython/blob/aee632dfbb0abbc0d2bcc988c43a736afd568c55/Lib/test/support/__init__.py#L807-L809

> Windows makes this more noticeable than POSIX as it's more strict, but
> accessing the same file from 2 unit tests is technically a problem
> regardless of the platform. I don't think that means we should ditch
> TESTFN in favor of tempfile.mktemp() though. Instead the file cleanup
> functions (support.unlink() and support.rmtree()) may be more clever
> and (important) they should always be used in setUp / tearDown. For
> instance, the traceback you pasted refers to a test class which
> doesn't do this.

The "test_file" test method referenced in the traceback calls
os.remove(TESTFN) in finally blocks preceding its calls to
open(TESTFN, "wb"), and inspecting the method shows that it must have
been able to open TESTFN earlier in the method (the same test method
uses TESTFN multiple times):

https://github.com/python/cpython/blob/aee632dfbb0abbc0d2bcc988c43a736afd568c55/Lib/test/test_urllib2.py#L811-L830

So I think one should investigate what can be causing the error / how
it can be happening.

TESTFN uses the pid of the process, so it doesn't seem like another
test case could be interfering and opening the same TESTFN while the
"test_file" test method is running. On Stack Overflow, there are some
comments suggesting that in some cases os.remove() doesn't always
complete right away (e.g. because of anti-malware software briefly
holding a second reference).
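
If a transient second reference like that is the cause, one mitigation
(similar in spirit to what the quoted message below describes for psutil)
is to retry the deletion briefly before giving up; a rough sketch:

    import os
    import time

    def unlink_with_retry(path, timeout=1.0):
        # Retry for up to `timeout` seconds in case another process (e.g. a
        # virus scanner) is briefly holding the file open on Windows.
        deadline = time.monotonic() + timeout
        while True:
            try:
                os.unlink(path)
                return
            except FileNotFoundError:
                return
            except OSError:
                if time.monotonic() >= deadline:
                    raise
                time.sleep(0.01)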

--Chris


>
> In psutil I've had occasional Windows failures like this for years
> then I got tired of it and came up with this:
> https://github.com/giampaolo/psutil/blob/1b09b5fff78f705dfb42458726ff9789c26f6f21/psutil/tests/__init__.py#L686
> ...which basically aggressively retries os.unlink or shutil.rmtree for
> 1 sec in case of (any) error, and I haven't had this problem since
> then.
>
> I suppose test.support's unlink() and rmtree() can do something
> similar, maybe just by using a better exception handling, and all unit
> tests should use them in setUp / tearDown. I think this will diminish
> the occasional failures on Windows, although not completely.
>
> --
> Giampaolo - http://grodola.blogspot.com
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tests failing on Windows with TESTFN

2018-07-26 Thread Chris Jerdonek
On Wed, Jul 25, 2018 at 8:07 AM, Tim Golden  wrote:
> One problem is that certain tests use support.TESTFN (a local directory
> constructed from the pid) for output files etc. However this can cause
> issues on Windows when recreating the folders / files for multiple tests,
> especially when running in parallel.
>
> Here's an example on my laptop deliberately running 3 tests with -j0 which I
> know will generate an error about one time in three:
>
> C:\work-in-progress\cpython>python -mtest -j0 test_urllib2 test_bz2
> test_importlib
>
> Running Debug|Win32 interpreter...
> Run tests in parallel using 6 child processes
> 0:00:23 [1/3/1] test_urllib2 failed
> test test_urllib2 failed -- Traceback (most recent call last):
>   File "C:\work-in-progress\cpython\lib\test\test_urllib2.py", line 821, in
> test_file
> f = open(TESTFN, "wb")
> PermissionError: [Errno 13] Permission denied: '@test_15564_tmp'
>
> Although these errors are both intermittent and fairly easily spotted, the
> effect is that I rarely get a clean test run when I'm applying a patch.
>
> I started to address this years ago but things stalled. I'm happy to pick
> this up again and have another go, but I wanted to ask first whether there
> was any objection to my converting tests to using tempfile functions which
> should avoid the problem?

Do you know what's causing the issue on Windows? I thought TESTFN was
designed to work for parallel testing, so it would surprise me if
there was a problem with it. Alternatively, if TESTFN should be okay,
I wonder if it's an issue with another test or tests not cleaning up
after itself correctly, in which case it seems like this is an
opportunity to track down and fix that issue. Switching to something
else would just serve to hide / mask the issue with those other tests.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Accepting PEP 572, Assignment Expressions

2018-07-12 Thread Chris Jerdonek
(status := "Accepted") and "Congratulations!" ;-) (hope I did that right,
but I can't try it yet!)

Thanks for hanging in there, Guido, and for your patience with everyone
during the discussions. I'm glad you're still with us!

--Chris



On Wed, Jul 11, 2018 at 5:10 PM, Guido van Rossum  wrote:

> As anticippated, after a final round of feedback I am hereby accepting PEP
> 572, Assignment Expressions: https://www.python.org/dev/peps/pep-0572/
>
> Thanks to everyone who participated in the discussion or sent a PR.
>
> Below is a list of changes since the last post (
> https://mail.python.org/pipermail/python-dev/2018-July/154557.html) --
> they are mostly cosmetic so I won't post the doc again, but if you want to
> go over them in detail, here's the history of the file on GitHub:
> https://github.com/python/peps/commits/master/pep-0572.rst, and here's a
> diff since the last posting: https://github.com/python/peps
> /compare/26e6f61f...master (sadly it's repo-wide -- you can click on
> Files changed and then navigate to pep-0572.rst).
>
>- Tweaked the example at line 95-100 to use result = ... rather than return
>... so as to make a different rewrite less feasible
>- Replaced the weak "2-arg iter" example with Giampaolo Rodola's while
>chunk := file.read(8192): process(chunk)
>- *Added prohibition of unparenthesized assignment expressions in
>annotations and lambdas*
>- Clarified that TargetScopeError is a *new* subclass of SyntaxError
>- Clarified the text forbidding assignment to comprehension loop
>control variables
>- Clarified that the prohibition on := with annotation applies to
>*inline* annotation (i.e. they cannot be syntactically combined in the
>same expression)
>- Added conditional expressions to the things := binds less tightly
>than
>- Dropped section "This could be used to create ugly code"
>- Clarified the example in Appendix C
>
> Now on to the implementation work! (Maybe I'll sprint on this at the
> core-dev sprint in September.)
>
> --
> --Guido van Rossum (python.org/~guido)
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> chris.jerdonek%40gmail.com
>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Microsoft to acquire GitHub for $7.5 b

2018-06-06 Thread Chris Jerdonek
On Tue, Jun 5, 2018 at 7:03 PM Ivan Pozdeev via Python-Dev <
python-dev@python.org> wrote:

> On 05.06.2018 17:28, Martin Gainty wrote:
>
> who owns the Data hosted on Github?
>
> Github Author?
> Microsoft?
>
>
> Martin
>
>
> https://help.github.com/articles/github-terms-of-service/#d-user-generated-content
> :
>
> "*You own content you create, but you allow us certain rights to it, so
> that we can display and share the content you post. You still have control
> over your content, and responsibility for it, and the rights you grant us
> are limited to those we need to provide the service.*
>

Is the “service” they provide (and what it needs) allowed to change over
time, so that the rights granted can expand? The definition of “service” in
their document is—


   1. The “Service” refers to the applications, software, products, and
   services provided by GitHub.

—Chris

* We have the right to remove content or close Accounts if we need to."*
>
>
> --
> *From:* Python-Dev 
>  on behalf of M.-A.
> Lemburg  
> *Sent:* Tuesday, June 5, 2018 7:54 AM
> *To:* Antoine Pitrou; python-dev@python.org
> *Subject:* Re: [Python-Dev] Microsoft to acquire GitHub for $7.5 billion
>
> Something that may change is the way they treat Github
> accounts; after all, MS is very much a sales-driven company.
>
> But then there's always the possibility to move to Gitlab
> as an alternative (hosted or run on PSF VMs), so I wouldn't
> worry too much.
>
> Do note, however, that the value in Github is not so much with
> the products they have, but with the data. Their databases
> know more about IT developers than anyone else, and the fact
> that Github is put under the AI umbrella at MS should tell
> us something :-)
>
>
> On 04.06.2018 19:02, Antoine Pitrou wrote:
> >
> > That's true, but Microsoft has a lot of stakes in the ecosystem.
> > For example, since it has its own CI service that it tries to promote
> > (VSTS), is it in Microsoft's best interest to polish and improve
> > integrations with other CI services?
> >
> > Regards
> >
> > Antoine.
> >
> >
> > On Mon, 4 Jun 2018 09:06:28 -0700
> > Guido van Rossum   wrote:
> >> On Mon, Jun 4, 2018 at 8:40 AM, Antoine Pitrou 
>  wrote:
> >>
> >>>
> >>> On Mon, 4 Jun 2018 17:03:27 +0200
> >>> Victor Stinner   wrote:
> 
>  At this point, I have no opinion about the event :-) I just guess that
>  it should make GitHub more sustainable since Microsoft is a big
>  company with money and interest in GitHub. I'm also confident that
>  nothing will change soon. IMHO there is no need to worry about
>  anything.
> >>>
> >>> It does spell uncertainty on the long term.  While there is no need to
> >>> worry for now, I think it gives a different colour to the debate about
> >>> moving issues to Github.
> >>>
> >>
> >> I don't see how this *increases* the uncertainty. Surely if GitHub had
> >> remained independent there would have been be similar concerns about
> how it
> >> would make enough money to stay in business.
> >>
> >
> > ___
> > Python-Dev mailing list
> > Python-Dev@python.org
> > https://mail.python.org/mailman/listinfo/python-dev
>
>
> > Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mal%40egenix.com
> >
>
> --
> Marc-Andre Lemburg
> eGenix.com
>
> Professional Python Services directly from the Experts (#1, Jun 05 2018)
> >>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
> >>> Python Database Interfaces ...   http://products.egenix.com/
> >>> Plone/Zope Database Interfaces ...   http://zope.egenix.com/
> 
>
> ::: We implement business ideas - efficiently in both time and costs :::
>
>eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
> D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
>Registered at Amtsgericht Duesseldorf: HRB 46611
>http://www.egenix.com/company/contact/
>   http://www.malemburg.com/
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mgainty%40hotmail.com
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
>
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru
>
>
> --
> Regards,
> Ivan
>
> ___
> Python-Dev mailing list
> 

Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-15 Thread Chris Jerdonek
What does “no release at all” mean? If it’s not released, how would people
use it?

—Chris

On Tue, May 15, 2018 at 9:36 PM Alex Walters 
wrote:

> In the spirit of learning why there is a fence across the road before I
> tear
> it down out of ignorance [1], I'd like to know the rationale behind source
> only releases of cpython.  I have an opinion on their utility and perhaps
> an
> idea about changing them, but I'd like to know why they are done (as
> opposed
> to source+binary releases or no release at all) before I head over to
> python-ideas.  Is this documented somewhere where my google-fu can't find
> it?
>
>
> [1]: https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python startup time

2018-05-03 Thread Chris Jerdonek
FYI, a lot of these ideas were discussed back in September and October of
2017 on this list if you search the subject lines for "startup" e.g.
starting here and here:
https://mail.python.org/pipermail/python-dev/2017-September/149150.html
https://mail.python.org/pipermail/python-dev/2017-October/149670.html

At the end Guido kicked (at least part of) the discussion back to
python-ideas.

--Chris


On Thu, May 3, 2018 at 5:55 PM, Chris Angelico  wrote:

> On Fri, May 4, 2018 at 10:43 AM, Gregory P. Smith  wrote:
> > I'd also like to see this concept somehow extended to decorators so that
> the
> > results of the decoration can be captured in the compiled pyc rather than
> > requiring execution at import time.  I realize that limits what
> decorators
> > can do, but the evil things they could do that this would eliminate are
> > things they just shouldn't be doing in most situations.  meaning: there
> > would probably be two types of decorators... colons seem to be all the
> rage
> > these days so we could add an @: operator for that. :P ... Along with a
> from
> > __future__ import to change the behavior or all decorators in a file from
> > runtime to compile time by default.
> >
> > from __future__ import compile_time_decorators  # we'd be unlikely to
> ever
> > change the default and break things, __future__ seems wrong
> >
> > @this_happens_at_compile_time(3)
> > def ...
> >
> > @:this_waits_until_runtime(5)
> > def ...
> >
> > Just a not-so-wild idea, no idea if this should become a PEP for 3.8.
> (the
> > : syntax is a joke - i'd prefer @@ so it looks like eyeballs)
>
> At this point, we're squarely in python-ideas territory, but there are
> some possibilities. Imagine popping this line of code at the bottom of
> your file:
>
> import importlib; importlib.freeze_module()
>
> as a declaration that the dictionary for this module is now locked in
> and can be dumped out in whatever form is most efficient. Effectively,
> you're stating that you do not need any sort of dynamism (that call
> could be easily disabled for testing), and that, if the optimization
> breaks anything, you accept responsibility for it.
>
> How this would be implemented, I'm not sure, but that's no different
> from the @: idea.
>
> ChrisA
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> chris.jerdonek%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572: Assignment Expressions

2018-05-01 Thread Chris Jerdonek
On Tue, May 1, 2018 at 2:14 AM, Steve Holden <st...@holdenweb.com> wrote:
> On Tue, May 1, 2018 at 3:36 AM, Chris Jerdonek <chris.jerdo...@gmail.com>
> wrote:
>>
>> On Thu, Apr 26, 2018 at 10:33 AM, Sven R. Kunze <srku...@mail.de> wrote:
>> > On 25.04.2018 01:19, Steven D'Aprano wrote:
>> >>
>> >> Sorry, gcd(diff, n) is not the "perfect name", and I will tell you that
>> >> sometimes g is better. [...]
>> >
>> > We were talking about the real-world code snippet of Tim (as a
>> > justification
>> > of := ) and alternative rewritings of it without resorting to new
>> > syntax.
>>
>> Apologies if this idea has already been discussed (I might have missed
>> the relevant email), but thinking back to Tim's earlier example--
>>
>> if (diff := x - x_base) and (g := gcd(diff, n)) > 1:
>>     return g
>>
>> it occurs to me this could be implemented with current syntax using a
>> pattern like the following:
>>
>> stashed = [None]
>>
>> def stash(x):
>>     stashed[0] = x
>>     return x
>>
>> if stash(x - x_base) and stash(gcd(stashed[0], n)) > 1:
>>     return stashed[0]
>>
>> There are many variations to this idea, obviously. For example, one
>> could allow passing a "name" to stash(), or combine stash / stashed
>> into a single, callable object that allows setting and reading from
>> its store. I wonder if one of them could be made into a worthwhile
>> pattern or API..
>
> I hope you don't think this recasting is in any way less confusing to a
> beginner than an inline assignment. This is language abuse!

I didn't make any claims that it wouldn't be confusing (especially as
is). It was just an _idea_. I mentioned it because (1) it uses current
syntax, (2) it doesn't require intermediate assignments or extra
indents in the main body of code, (3) it doesn't even require choosing
intermediate names, and (4) I didn't see it mentioned in any of the
previous discussion. All three of the first points have been major
sources of discussion in the thread. So I thought it might be of
interest.

> In any case, what advantages would it have over simply declaring "stashed"
> as a global inside the function and omitting the confusing subscripting?

Right. Like I said, there are many variations. I just listed one to
convey the general idea.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572: Assignment Expressions

2018-04-30 Thread Chris Jerdonek
On Thu, Apr 26, 2018 at 10:33 AM, Sven R. Kunze  wrote:
> On 25.04.2018 01:19, Steven D'Aprano wrote:
>>
>> Sorry, gcd(diff, n) is not the "perfect name", and I will tell you that
>> sometimes g is better. [...]
>
> We were talking about the real-world code snippet of Tim (as a justification
> of := ) and alternative rewritings of it without resorting to new syntax.

Apologies if this idea has already been discussed (I might have missed
the relevant email), but thinking back to Tim's earlier example--

if (diff := x - x_base) and (g := gcd(diff, n)) > 1:
    return g

it occurs to me this could be implemented with current syntax using a
pattern like the following:

stashed = [None]

def stash(x):
    stashed[0] = x
    return x

if stash(x - x_base) and stash(gcd(stashed[0], n)) > 1:
    return stashed[0]

There are many variations to this idea, obviously. For example, one
could allow passing a "name" to stash(), or combine stash / stashed
into a single, callable object that allows setting and reading from
its store. I wonder if one of them could be made into a worthwhile
pattern or API..

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Order of positional and keyword arguments

2018-04-27 Thread Chris Jerdonek
On Thu, Apr 26, 2018 at 12:25 PM, Serhiy Storchaka  wrote:
> f(b=2, *[1]) is surprising in two ways:
>
> 1. Argument values are passed not in order. The first value is assigned to
> the second parameter, and the second value is assigned to the first
> parameter.
>
> 2. Argument values are evaluated not from left to right. This contradicts
> the general rule that expressions are evaluated from left to right (with few
> known exceptions).
>
> I never seen the form `f(b=2, *[1])` in practice (though the language
> reference contains an explicit example for it), and it looks weird to me. I
> don't see reasons of writing `f(b=2, *[1])` instead of more natural `f(*[1],
> b=2)`. I propose to disallow it.
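
A small demonstration of both surprises (hypothetical trace helper; on
current CPython the starred list is evaluated before the keyword value, and
the lone positional value binds to the first parameter):

    def trace(label, value):
        print("evaluating", label)
        return value

    def f(a, b):
        print("a =", a, "b =", b)

    f(b=trace("b", 2), *trace("*args", [1]))
    # evaluating *args
    # evaluating b
    # a = 1 b = 2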

Coincidentally, I recently came across and reviewed a PR to Django
that proposed exactly this, or at least something very similar. They
proposed changing--

def create_cursor(self, name=None):
to--
def create_cursor(self, name=None, *args, **kwargs):

https://github.com/django/django/pull/9674/files#diff-53fcf3ac0535307033e0cfabb85c5301R173

--Chris



>
> This will also make the grammar simpler. Current grammar:
>
>    argument_list: `positional_arguments` ["," `starred_and_keywords`]
>                 :   ["," `keywords_arguments`]
>                 : | `starred_and_keywords` ["," `keywords_arguments`]
>                 : | `keywords_arguments`
>    positional_arguments: ["*"] `expression` ("," ["*"] `expression`)*
>    starred_and_keywords: ("*" `expression` | `keyword_item`)
>                        : ("," "*" `expression` | "," `keyword_item`)*
>    keywords_arguments: (`keyword_item` | "**" `expression`)
>                      : ("," `keyword_item` | "," "**" `expression`)*
>    keyword_item: `identifier` "=" `expression`
>
> Proposed grammar:
>
>argument_list: `positional_arguments` ["," `keywords_arguments`]
> : | `keywords_arguments`
>positional_arguments: ["*"] `expression` ("," ["*"] `expression`)*
>keywords_arguments: `keyword_argument` ("," `keyword_argument`)*
>keyword_argument: `identifier` "=" `expression` | "**" `expression`
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572: Assignment Expressions

2018-04-23 Thread Chris Jerdonek
On Mon, Apr 23, 2018 at 4:54 PM, Greg Ewing  wrote:
> Tim Peters wrote:
>
>> if (diff := x - x_base) and (g := gcd(diff, n)) > 1:
>>     return g
>
>
> My problem with this is -- how do you read such code out loud?

It could be--

"if diff, which we let equal x - x_base, and g, which ..." or
"if diff, which we set equal to x - x_base, and g, which " or
"if diff, which we define to be x - x_base, and g, which " or
"if diff, which we define as x - x_base, and g, which ." etc.

--Chris



>
> From my Pascal days I'm used to reading ":=" as "becomes". So
> this says:
>
>"If diff becomes x - base and g becomes gcd(diff, n) is
> greater than or equal to 1 then return g."
>
> But "diff becomes x - base" is not what we're testing! That
> makes it sound like the result of x - base may or may not
> get assigned to diff, which is not what's happening at all.
>
> The "as" variant makes more sense when you read it as an
> English sentence:
>
>if ((x - x_base) as diff) and ...
>
>"If x - x_base (and by the way, I'm going to call that
> diff so I can refer to it later) is not zero ..."
>
> --
> Greg
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572: Assignment Expressions

2018-04-17 Thread Chris Jerdonek
On Tue, Apr 17, 2018 at 12:46 AM, Chris Angelico  wrote:
>
> Having survived four rounds in the boxing ring at python-ideas, PEP
> 572 is now ready to enter the arena of python-dev. I'll let the
> proposal speak for itself. Be aware that the reference implementation
> currently has a few test failures, which I'm still working on, but to
> my knowledge nothing will prevent the proposal itself from being
> successfully implemented.

Very interesting / exciting, thanks!

> Augmented assignment is not supported in expression form::
>
> >>> x +:= 1
>   File "", line 1
> x +:= 1
> ^
> SyntaxError: invalid syntax

Can you include in the PEP a brief rationale for not accepting this
form? In particular, is the intent never to support it, or is the
intent to expressly allow adding it at a later date (e.g. after
getting experience with the simpler form, etc)?

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Subtle difference between f-strings and str.format()

2018-03-30 Thread Chris Jerdonek
On Fri, Mar 30, 2018 at 4:41 AM, Nick Coghlan  wrote:
> On 30 March 2018 at 21:16, Nathaniel Smith  wrote:
>> And bool(obj) does always return True or False; if you define a
>> __bool__ method that returns something else then bool rejects it and
>> raises TypeError. So bool(bool(obj)) is already indistinguishable from
>> bool(obj).
>>
>> However, the naive implementation of 'if a and True:' doesn't call
>> bool(bool(a)), it calls bool(a) twice, and this *is* distinguishable
>> by user code, at least in principle.
>
> For example:
>
> >>> class FlipFlop:
> ... _state = False
> ... def __bool__(self):
> ... result = self._state
> ... self._state = not result
> ... return result
> ...
> >>> toggle = FlipFlop()
> >>> bool(toggle)
> False
> >>> bool(toggle)
> True
> >>> bool(toggle)
> False
> >>> bool(toggle) and bool(toggle)
> False
> >>> toggle and toggle
> <__main__.FlipFlop object at 0x7f35293604e0>
> >>> bool(toggle and toggle)
> True
>
> So the general principle is that __bool__ implementations shouldn't do
> anything that will change the result of the next call to __bool__, or
> else weirdness is going to result.

I don't think this way of stating it is general enough. For example,
you could have a nondeterministic implementation of __bool__ that
doesn't itself carry any state (e.g. flipping the result with some
probability), but the next call could nevertheless still return a
different result. So I think Nathaniel's way of stating it is probably
better:

> If we want to change the language spec, I guess it would be with text
> like: "if bool(obj) would be called twice in immediate succession,
> with no other code in between, then the interpreter may assume that
> both calls would return the same value and elide one of them".
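
For concreteness, the stateless-but-nondeterministic case I have in mind
is something like the following sketch (the class name is made up):

    import random

    class CoinFlip:
        # Carries no state at all, yet two back-to-back bool() calls can
        # still disagree -- so "don't mutate state in __bool__" isn't
        # quite the whole rule.
        def __bool__(self):
            return random.random() < 0.5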

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 574 -- Pickle protocol 5 with out-of-band data

2018-03-29 Thread Chris Jerdonek
On Wed, Mar 28, 2018 at 6:15 PM, Nathaniel Smith  wrote:
> On Wed, Mar 28, 2018 at 1:03 PM, Serhiy Storchaka  wrote:
>> 28.03.18 21:39, Antoine Pitrou wrote:
>>> I'd like to submit this PEP for discussion.  It is quite specialized
>>> and the main target audience of the proposed changes is
>>> users and authors of applications/libraries transferring large amounts
>>> of data (read: the scientific computing & data science ecosystems).
>>
>> Currently I'm working on porting some features from cloudpickle to the
>> stdlib. For these of them which can't or shouldn't be implemented in the
>> general purpose library (like serializing local functions by serializing
>> their code objects, because it is not portable) I want to add hooks that
>> would allow to implement them in cloudpickle using official API. This would
>> allow cloudpickle to utilize C implementation of the pickler and unpickler.
>
> There's obviously some tension here between pickle's use as a
> persistent storage format, and its use as a transient wire format. For
> the former, you definitely can't store code objects because there's no
> forwards- or backwards-compatibility guarantee for bytecode. But for
> the latter, transmitting bytecode is totally fine, because all you
> care about is whether it can be decoded once, right now, by some peer
> process whose python version you can control -- that's why cloudpickle
> exists.

Is it really true you'll always be able to control the Python version
on the other side? Even if they're internal services, it seems like
there could be times / reasons preventing you from upgrading the
environment of all of your services at the same rate. Or did you mean
to say "often" all you care about ...?

--Chris



>
> Would it make sense to have a special pickle version that the
> transient wire format users could opt into, that only promises
> compatibility within a given 3.X release cycle? Like version=-2 or
> version=pickle.NONPORTABLE or something?
>
> (This is orthogonal to Antoine's PEP.)
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2.7 -- bugfix or security before EOL?

2018-03-14 Thread Chris Jerdonek
Oh, that makes your original email make much more sense (at least to me). I
also interpreted it to mean you were interested in extending the EOL date
out further, rather than pointing out that it should probably already have
been switched from “bugfix” to “security” status.

—Chris

On Wed, Mar 14, 2018 at 8:46 AM Michael Scott Cuthbert 
wrote:

> > it still is in the time period before
> > EOL that other recent versions have gone to security only.
>
> Again, not relevant.
>
> You might want to read http://python3statement.org/. 
> 
>
> I’m guessing my first message was unclear or able to be misunderstood in
> some part — I’m one of the frequent contributors to python3statement.org
> and have moved my own Python projects to Py3 only (the main one, music21,
> gets its 3.4+-only release this Saturday).  I have NO desire to prolong the
> 2.7 pain.
>
> What I am referring to is the number of “needs backport to 2.7” tags for
> non-security-related bug-fixes in the issue tracker. (
> https://github.com/python/cpython/pulls?q=is%3Apr+is%3Aopen+label%3A%22needs+backport+to+2.7%22
> )
> My question was between now and 1 Jan 2020 should we still be fixing things
> in 2.7 that we’re not fixing in 3.5, or leave 2.7 in a security-only mode
> for the next 21 months?  Looking at what has been closed recently, without
> getting a bpo for actually backporting, it appears that we’re sort of doing
> this in practice anyhow.
>
> Thanks! and even if my message was read differently than I intended, glad
> that it had a good effect.
>
> Michael Cuthbert (https://music21-mit.blogspot.com)
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] LibreSSL support

2018-01-18 Thread Chris Jerdonek
On Thu, Jan 18, 2018 at 7:34 AM Christian Heimes 
wrote:

> On 2018-01-16 21:17, Christian Heimes wrote:
> We have two options until LibreSSL has addressed the issue:
>
> 1) Make the SSL module more secure, simpler and standard conform
> 2) Support LibreSSL
>
> I started a vote on Twitter [4]. So far most people prefer security.


It’s not exactly the most balanced (neutral) presentation of a ballot
question though. :)

—Chris


>
> Christian
>
> [1] https://bugs.python.org/issue31399
> [2] https://github.com/pyca/cryptography/issues/3247
> [3] https://github.com/libressl-portable/portable/issues/381
> [4] https://twitter.com/reaperhulk/status/953991843565490176
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 567 pre v3

2018-01-10 Thread Chris Jerdonek
On Wed, Jan 10, 2018 at 10:58 PM, Yury Selivanov
<yselivanov...@gmail.com> wrote:
> On Thu, Jan 11, 2018 at 10:35 AM, Chris Jerdonek
> <chris.jerdo...@gmail.com> wrote:
>> On Mon, Jan 8, 2018 at 11:02 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>> Right now, the set of valid states for a ContextVar are: it can hold
>>> any Python object, or it can be undefined. However, the only way it
>>> can be in the "undefined" state is in a new Context where it has never
>>> had a value; once it leaves the undefined state, it can never return
>>> to it.
>>
>> I know Yury responded to one aspect of this point later on in the
>> thread. However, in terms of describing the possible states without
>> reference to the internal Context mappings, IIUC, wouldn't it be more
>> accurate to view a ContextVar as a stack of values rather than just
>> the binary "holding an object or not"? This is to reflect the number
>> of times set() has been called (and so the number of times reset()
>> would need to be called to "empty" the ContextVar).
>
>
> But why do you want to think of ContextVar as a stack of values?  Or
> as something that is holding even one value?

I was primarily responding to Nathaniel's comment about how to
describe or talk about the state and not necessarily advocating that
view.

But to your question, like it or not, I think the API encourages this
way of thinking because the get() method is on the ContextVar itself,
and so it's the ContextVar which is doing the looking up rather than
just fulfilling the role of a key name. The API brings to mind other
containers and things holding values like dict.get(), queue.get(),
BytesIO.getvalue(), and container type's object.__getitem__(), etc. So
I think one will need to be prepared for many or most users having
this conception with the current API. (I think renaming to something
like ContextVar.lookup() or even ContextVar.value() would go a long
way towards dispelling that, but Guido said earlier in the thread that
he likes the shorter name.)

> Do Python variables hold/envelope objects they reference?  No, they
> don't.  They are simple names and are used to lookup objects in
> globals/locals dicts.  ContextVars are very similar!  They are *keys*
> in Context objects—that is it.

Python variables don't hold the objects. But the analogy also doesn't
quite match because variables also don't have get() methods. It's
Python which is doing the looking up in that case rather than the
variable itself. With ContextVars, it's serving both roles of name and
thing doing the looking up.

This is one reason why I suggested several days ago that I thought
something like contextvars.get(key) (where key is a ContextVar) would
be a less confusing API. That way the ContextVar / ContextKey(?) would
only be acting as a key and not also be responsible for doing the
lookup and knowing about what is containing it.

--Chris


>
> ContextVar.default is returned by ContextVar.get() when it cannot find
> the value for the context variable in the current Context object.  If
> ContextVar.default was not provided, a LookupError is raised.
>
> The reason why this is simpler for regular variables is because they
> have a dedicated syntax.  Instead of writing
>
> print(globals()['some_variable'])
>
> we simply write
>
> print(some_variable)
>
> Similarly for context variables, we could have written:
>
>print(copy_context()[var])
>
> But instead we use a ContextVar.get():
>
>print(var.get())
>
> If we had a syntax support for context variables, it would be like this:
>
>context var
>print(var)   # Lookups 'var' in the current context
>
> Although I very much doubt that we would *ever* want to have a
> dedicated syntax for context variables (they are very niche and are
> only needed in some very special cases), I hope that this line of
> thinking would help to clear the waters.
>
> Yury
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 567 pre v3

2018-01-10 Thread Chris Jerdonek
On Mon, Jan 8, 2018 at 11:02 PM, Nathaniel Smith  wrote:
> Right now, the set of valid states for a ContextVar are: it can hold
> any Python object, or it can be undefined. However, the only way it
> can be in the "undefined" state is in a new Context where it has never
> had a value; once it leaves the undefined state, it can never return
> to it.

I know Yury responded to one aspect of this point later on in the
thread. However, in terms of describing the possible states without
reference to the internal Context mappings, IIUC, wouldn't it be more
accurate to view a ContextVar as a stack of values rather than just
the binary "holding an object or not"? This is to reflect the number
of times set() has been called (and so the number of times reset()
would need to be called to "empty" the ContextVar).
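
To make the "stack" framing concrete, here is a minimal sketch using the
set()/reset() API from the PEP (this is just the reference
implementation's behavior as I understand it, nothing new):

    import contextvars

    var = contextvars.ContextVar("var")

    t1 = var.set("outer")      # first set()
    t2 = var.set("inner")      # second set()
    assert var.get() == "inner"

    var.reset(t2)              # undoes the most recent set()
    assert var.get() == "outer"

    var.reset(t1)              # a second reset() is needed before...
    try:
        var.get()              # ...get() goes back to raising LookupError
    except LookupError:
        pass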

--Chris



>
> This makes me itch. It's very weird to have a mutable variable with a
> valid state that you can't reach by mutating it. I see two
> self-consistent ways to make me stop itching: (a) double-down on
> undefined as being part of ContextVar's domain, or (b) reduce the
> domain so that undefined is never a valid state.
>
> # Option 1
>
> In the first approach, we conceptualize ContextVar as being a
> container that either holds a value or is empty (and then there's one
> of these containers for each context). We also want to be able to
> define an initial value that the container takes on when a new context
> materializes, because that's really convenient. And then after that we
> provide ways to get the value (if present), or control the value
> (either set it to a particular value or unset it). So something like:
>
> var1 = ContextVar("var1")  # no initial value
> var2 = ContextVar("var2", initial_value="hello")
>
> with assert_raises(SomeError):
> var1.get()
> # get's default lets us give a different outcome in cases where it
> # would otherwise raise
> assert var1.get(None) is None
> assert var2.get() == "hello"
> # If get() doesn't raise, then the argument is ignored
> assert var2.get(None) == "hello"
>
> # We can set to arbitrary values
> for var in [var1, var2]:
> var.set("new value")
> assert var.get() == "new value"
>
> # We can unset again, so get() will raise
> for var in [var1, var2]:
> var.unset()
> with assert_raises(SomeError):
> var.get()
> assert var.get(None) is None
>
> To fulfill all that, we need an implementation like:
>
> MISSING = make_sentinel()
>
> class ContextVar:
> def __init__(self, name, *, initial_value=MISSING):
> self.name = name
> self.initial_value = initial_value
>
> def set(self, value):
> if value is MISSING: raise TypeError
> current_context()._dict[self] = value
> # Token handling elided because it's orthogonal to this issue
> return Token(...)
>
> def unset(self):
> current_context()._dict[self] = MISSING
> # Token handling elided because it's orthogonal to this issue
> return Token(...)
>
> def get(self, default=_NOT_GIVEN):
> value = current_context().get(self, self.initial_value)
> if value is MISSING:
> if default is _NOT_GIVEN:
> raise ...
> else:
> return default
> else:
> return value
>
> Note that the implementation here is somewhat tricky and non-obvious.
> In particular, to preserve the illusion of a simple container with an
> optional initial value, we have to encode a logically undefined
> ContextVar as one that has Context[var] set to MISSING, and a missing
> entry in Context encodes the presence of the inital value. If we
> defined unset() as 'del current_context._dict[self]', then we'd have:
>
> var2.unset()
> assert var2.get() is None
>
> which would be very surprising to users who just want to think about
> ContextVars and ignore all that stuff about Contexts. This, in turn,
> means that we need to expose the MISSING sentinel in general, because
> anyone introspecting Context objects directly needs to know how to
> recognize this magic value to interpret things correctly.
>
> AFAICT this is the minimum complexity required to get a complete and
> internally-consistent set of operations for a ContextVar that's
> conceptualized as being a container that either holds an arbitrary
> value or is empty.
>
> # Option 2
>
> The other complete and coherent conceptualization I see is to say that
> a ContextVar always holds a value. If we eliminate the "unset" state
> entirely, then there's no "missing unset method" -- there just isn't
> any concept of an unset value in the first place, so there's nothing
> to miss. This idea shows up in lots of types in Python, actually --
> e.g. for any exception object, obj.__context__ is always defined. Its
> value might be None, but it has a value. In this approach,
> ContextVar's are similar.
>
> To fulfill all that, we need an implementation like:
>
> class ContextVar:
> # Or maybe 

Re: [Python-Dev] PEP 567 v2

2018-01-05 Thread Chris Jerdonek
On Fri, Jan 5, 2018 at 8:29 AM, Guido van Rossum  wrote:
> On Fri, Jan 5, 2018 at 2:05 AM, Victor Stinner 
> wrote:
>>
>> Currently, Context.get(var) returns None when "var in context" is false.
>> That's surprising and different than var.get(), especially when var has a
>> default value.
>
> I don't see the problem. Context.get() is inherited from Mapping.get(); if
> you want it to raise use Context.__getitem__() (i.e. ctx[var]). Lots of
> classes define get() methods with various behaviors. Context.get() and
> ContextVar.get() are just different -- ContextVar is not a Mapping.

One thing that I think could be contributing to confusion around the
proposed API is that there is a circular relationship between Context
and ContextVar, e.g. ContextVar.get() does a lookup in the current
Context with "self" (the ContextVar object) as a key

Also, it's the "keys" (the ContextVar objects) that have the get()
method that should be used rather than the container object (the
Context). This gives the confusing *feeling* of a mapping of mappings.
This is different from how the containers people are most familiar
with work -- like dict.

Is there a reason ContextVar needs to be exposed publicly at all?  For
example, the API could use string keys like contextvars.get(name) or
Context.get(name) (class method). There could be separate functions to
initialize keys with desired default values, etc (internally creating
ContextVars as needed).

If the issue is key collisions, it seems like this could be handled by
namespacing or using objects (namespaced by modules) instead of
strings.
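
Purely for illustration (none of these names are proposed anywhere --
they're made up, and the toy implementation just wraps the PEP's
ContextVar internally), the string-keyed shape I mean is roughly:

    import contextvars

    _vars = {}   # dotted string name -> ContextVar, created on registration

    def register(name, *, default=None):
        _vars[name] = contextvars.ContextVar(name, default=default)

    def set_value(name, value):
        _vars[name].set(value)

    def get_value(name):
        return _vars[name].get()

    register("myapp.request_id", default=None)
    set_value("myapp.request_id", "abc123")
    assert get_value("myapp.request_id") == "abc123"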

Maybe this approach was ruled out early on in discussions, but I don't
see it mentioned in the PEP.

--Chris



>
> --
> --Guido van Rossum (python.org/~guido)
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 567 v2

2017-12-28 Thread Chris Jerdonek
I have a couple basic questions around how this API could be used in
practice. Both of my questions are for the Python API as applied to Tasks
in asyncio.

1) Would this API support looking up the value of a context variable for
**another** Task? For example, if you're managing multiple tasks using
asyncio.wait() and there is an exception in some task, you might want to
examine and report the value of a context variable for that task.

2) Would an appropriate use of this API be to assign a unique task id to
each task? Or can that be handled more simply? I'm wondering because I
recently thought this would be useful, and it doesn't seem like asyncio
means for one to subclass Task (though I could be wrong).
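
For concreteness, the kind of thing I have in mind for (2) is sketched
below, assuming the PEP's API and that the event loop gives each Task its
own copy of the context (the names here are just for illustration):

    import asyncio
    import contextvars
    import itertools

    task_id = contextvars.ContextVar("task_id")
    _next_id = itertools.count(1)

    async def worker():
        task_id.set(next(_next_id))   # visible only in this task's context
        await asyncio.sleep(0)
        return task_id.get()          # still this task's own id

    async def main():
        # The ids don't clobber each other even though the coroutines
        # interleave, because each Task runs in its own context copy.
        print(await asyncio.gather(worker(), worker(), worker()))

    asyncio.run(main())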

Thanks,
--Chris


On Wed, Dec 27, 2017 at 10:08 PM, Yury Selivanov 
wrote:

> This is a second version of PEP 567.
>
> A few things have changed:
>
> 1. I now have a reference implementation:
> https://github.com/python/cpython/pull/5027
>
> 2. The C API was updated to match the implementation.
>
> 3. The get_context() function was renamed to copy_context() to better
> reflect what it is really doing.
>
> 4. Few clarifications/edits here and there to address earlier feedback.
>
>
> Yury
>
>
> PEP: 567
> Title: Context Variables
> Version: $Revision$
> Last-Modified: $Date$
> Author: Yury Selivanov 
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 12-Dec-2017
> Python-Version: 3.7
> Post-History: 12-Dec-2017, 28-Dec-2017
>
>
> Abstract
> ========
>
> This PEP proposes a new ``contextvars`` module and a set of new
> CPython C APIs to support context variables.  This concept is
> similar to thread-local storage (TLS), but, unlike TLS, it also allows
> correctly keeping track of values per asynchronous task, e.g.
> ``asyncio.Task``.
>
> This proposal is a simplified version of :pep:`550`.  The key
> difference is that this PEP is concerned only with solving the case
> for asynchronous tasks, not for generators.  There are no proposed
> modifications to any built-in types or to the interpreter.
>
> This proposal is not strictly related to Python Context Managers.
> Although it does provide a mechanism that can be used by Context
> Managers to store their state.
>
>
> Rationale
> =========
>
> Thread-local variables are insufficient for asynchronous tasks that
> execute concurrently in the same OS thread.  Any context manager that
> saves and restores a context value using ``threading.local()`` will
> have its context values bleed to other code unexpectedly when used
> in async/await code.
>
> A few examples where having a working context local storage for
> asynchronous code is desirable:
>
> * Context managers like ``decimal`` contexts and ``numpy.errstate``.
>
> * Request-related data, such as security tokens and request
>   data in web applications, language context for ``gettext``, etc.
>
> * Profiling, tracing, and logging in large code bases.
>
>
> Introduction
> ============
>
> The PEP proposes a new mechanism for managing context variables.
> The key classes involved in this mechanism are ``contextvars.Context``
> and ``contextvars.ContextVar``.  The PEP also proposes some policies
> for using the mechanism around asynchronous tasks.
>
> The proposed mechanism for accessing context variables uses the
> ``ContextVar`` class.  A module (such as ``decimal``) that wishes to
> store a context variable should:
>
> * declare a module-global variable holding a ``ContextVar`` to
>   serve as a key;
>
> * access the current value via the ``get()`` method on the
>   key variable;
>
> * modify the current value via the ``set()`` method on the
>   key variable.
>
> The notion of "current value" deserves special consideration:
> different asynchronous tasks that exist and execute concurrently
> may have different values for the same key.  This idea is well-known
> from thread-local storage but in this case the locality of the value is
> not necessarily bound to a thread.  Instead, there is the notion of the
> "current ``Context``" which is stored in thread-local storage, and
> is accessed via ``contextvars.copy_context()`` function.
> Manipulation of the current ``Context`` is the responsibility of the
> task framework, e.g. asyncio.
>
> A ``Context`` is conceptually a read-only mapping, implemented using
> an immutable dictionary.  The ``ContextVar.get()`` method does a
> lookup in the current ``Context`` with ``self`` as a key, raising a
> ``LookupError``  or returning a default value specified in
> the constructor.
>
> The ``ContextVar.set(value)`` method clones the current ``Context``,
> assigns the ``value`` to it with ``self`` as a key, and sets the
> new ``Context`` as the new current ``Context``.
>
>
> Specification
> =============
>
> A new standard library module ``contextvars`` is added with the
> following APIs:
>
> 1. ``copy_context() -> Context`` function is used to get a copy of
>the current ``Context`` object for the current 

Re: [Python-Dev] Tricky way of of creating a generator via a comprehension expression

2017-11-24 Thread Chris Jerdonek
On Fri, Nov 24, 2017 at 5:06 PM, Nathaniel Smith  wrote:
> On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum  wrote:
>> The more I hear about this topic, the more I think that `await`, `yield` and
>> `yield from` should all be banned from occurring in all comprehensions and
>> generator expressions. That's not much different from disallowing `return`
>> or `break`.
>
> I would say that banning `yield` and `yield from` is like banning
> `return` and `break`, but banning `await` is like banning function
> calls.

I agree. I was going to make the point earlier in the thread that
using "await" can mostly just be thought of as a delayed function
call, but it didn't seem at risk of getting banned until Guido's
comment so I didn't say anything (and there were too many comments
anyways).

I think it's in a different category for that reason. It's much easier
to reason about than, say, "yield" and "yield from".
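
A small example of the kind of code I mean, with a stand-in coroutine in
place of real I/O:

    import asyncio

    async def fetch(url):              # stand-in for a real I/O call
        await asyncio.sleep(0)
        return "body of " + url

    async def sizes(urls):
        # The await here reads like an ordinary (if delayed) function call...
        return [len(await fetch(u)) for u in urls]

    async def sizes_loop(urls):
        # ...and is equivalent to the explicit loop:
        results = []
        for u in urls:
            results.append(len(await fetch(u)))
        return results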

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 559 - built-in noop()

2017-11-23 Thread Chris Jerdonek
On Wed, Nov 22, 2017 at 4:32 PM, Victor Stinner 
wrote:

> Aha, contextlib.nullcontext() was just added, cool!
>

So is this equivalent to--

from contextlib import contextmanager

@contextmanager
def yielding(x):
    yield x

I thought we were against adding one-line functions?

--Chris



>
> https://github.com/python/cpython/commit/0784a2e5b174d2dbf7b144d480559e
> 650c5cf64c
> https://bugs.python.org/issue10049
>
> Victor
>
> 2017-09-09 21:54 GMT+02:00 Victor Stinner :
> > I always wanted this feature (no kidding).
> >
> > Would it be possible to add support for the context manager?
> >
> > with noop(): ...
> >
> > Maybe noop can be an instance of:
> >
> > class Noop:
> >   def __enter__(self, *args, **kw): return self
> >   def __exit__(self, *args): pass
> >   def __call__(self, *args, **kw): return self
> >
> > Victor
> >
> > Le 9 sept. 2017 11:48 AM, "Barry Warsaw"  a écrit :
> >>
> >> I couldn’t resist one more PEP from the Core sprint.  I won’t reveal
> where
> >> or how this one came to me.
> >>
> >> -Barry
> >>
> >> PEP: 559
> >> Title: Built-in noop()
> >> Author: Barry Warsaw 
> >> Status: Draft
> >> Type: Standards Track
> >> Content-Type: text/x-rst
> >> Created: 2017-09-08
> >> Python-Version: 3.7
> >> Post-History: 2017-09-09
> >>
> >>
> >> Abstract
> >> ========
> >>
> >> This PEP proposes adding a new built-in function called ``noop()`` which
> >> does
> >> nothing but return ``None``.
> >>
> >>
> >> Rationale
> >> =========
> >>
> >> It is trivial to implement a no-op function in Python.  It's so easy in
> >> fact
> >> that many people do it many times over and over again.  It would be
> useful
> >> in
> >> many cases to have a common built-in function that does nothing.
> >>
> >> One use case would be for PEP 553, where you could set the breakpoint
> >> environment variable to the following in order to effectively disable
> it::
> >>
> >> $ setenv PYTHONBREAKPOINT=noop
> >>
> >>
> >> Implementation
> >> ==============
> >>
> >> The Python equivalent of the ``noop()`` function is exactly::
> >>
> >> def noop(*args, **kws):
> >> return None
> >>
> >> The C built-in implementation is available as a pull request.
> >>
> >>
> >> Rejected alternatives
> >> =====================
> >>
> >> ``noop()`` returns something
> >> ----------------------------
> >>
> >> YAGNI.
> >>
> >> This is rejected because it complicates the semantics.  For example, if
> >> you
> >> always return both ``*args`` and ``**kws``, what do you return when none
> >> of
> >> those are given?  Returning a tuple of ``((), {})`` is kind of ugly, but
> >> provides consistency.  But you might also want to just return ``None``
> >> since
> >> that's also conceptually what the function was passed.
> >>
> >> Or, what if you pass in exactly one positional argument, e.g.
> ``noop(7)``.
> >> Do
> >> you return ``7`` or ``((7,), {})``?  And so on.
> >>
> >> The author claims that you won't ever need the return value of
> ``noop()``
> >> so
> >> it will always return ``None``.
> >>
> >> Coghlin's Dialogs (edited for formatting):
> >>
> >> My counterargument to this would be ``map(noop, iterable)``,
> >> ``sorted(iterable, key=noop)``, etc. (``filter``, ``max``, and
> >> ``min`` all accept callables that accept a single argument, as do
> >> many of the itertools operations).
> >>
> >> Making ``noop()`` a useful default function in those cases just
> >> needs the definition to be::
> >>
> >>def noop(*args, **kwds):
> >>return args[0] if args else None
> >>
> >> The counterargument to the counterargument is that using ``None``
> >> as the default in all these cases is going to be faster, since it
> >> lets the algorithm skip the callback entirely, rather than calling
> >> it and having it do nothing useful.
> >>
> >>
> >> Copyright
> >> =========
> >>
> >> This document has been placed in the public domain.
> >>
> >>
> >> ..
> >>Local Variables:
> >>mode: indented-text
> >>indent-tabs-mode: nil
> >>sentence-end-double-space: t
> >>fill-column: 70
> >>coding: utf-8
> >>End:
> >>
> >>
> >> ___
> >> Python-Dev mailing list
> >> Python-Dev@python.org
> >> https://mail.python.org/mailman/listinfo/python-dev
> >> Unsubscribe:
> >> https://mail.python.org/mailman/options/python-dev/
> victor.stinner%40gmail.com
> >>
> >
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> chris.jerdonek%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Guarantee ordered dict literals in v3.7?

2017-11-06 Thread Chris Jerdonek
On Mon, Nov 6, 2017 at 4:11 AM Nick Coghlan  wrote:

> Here's a more-complicated-than-a-doctest-for-a-dict-repo, but still
> fairly straightforward, example regarding the "insertion ordering
> dictionaries are easier to use correctly" argument:
>
> import json
> data = {"a":1, "b":2, "c":3}
> rendered = json.dumps(data)
> data2 = json.loads(rendered)
> rendered2 = json.dumps(data2)
> # JSON round trip
> assert data == data2, "JSON round trip failed"
> # Dict round trip
> assert rendered == rendered2, "dict round trip failed"
>
> Both of those assertions will always pass in CPython 3.6, as well as
> in PyPy, because their dict implementations are insertion ordered,
> which means the iteration order on the dictionaries is always "a",
> "b", "c".
>
> If you try it on 3.5 though, you should fairly consistently see that
> last assertion fail, since there's nothing in 3.5 that ensures that
> data and data2 will iterate over their keys in the same order.
>
> You can make that code implementation independent (and sufficiently
> version dependent to pass both assertions) by using OrderedDict:
>
> from collections import OrderedDict
> import json
> data = OrderedDict(a=1, b=2, c=3)
> rendered = json.dumps(data)
> data2 = json.loads(rendered, object_pairs_hook=OrderedDict)
> rendered2 = json.dumps(data2)
> # JSON round trip
> assert data == data2, "JSON round trip failed"
> # Dict round trip
> assert rendered == rendered2, "dict round trip failed"
>
> However, despite the way this code looks, the serialised key order
> *might not* be "a, b, c" on 3.5 and earlier (it will be on 3.6+, since
> that already requires that kwarg order be preserved).
>
> So the formally correct version independent code that reliably ensures
> that the key order in the JSON file is always "a, b, c" looks like
> this:
>
> from collections import OrderedDict
> import json
> data = OrderedDict((("a",1), ("b",2), ("c",3)))
> rendered = json.dumps(data)
> data2 = json.loads(rendered, object_pairs_hook=OrderedDict)
> rendered2 = json.dumps(data2)
> # JSON round trip
> assert data == data2, "JSON round trip failed"
> # Dict round trip
> assert rendered == rendered2, "dict round trip failed"
> # Key order
> assert "".join(data) == "".join(data2) == "abc", "key order failed"
>
> Getting from the "Works on CPython 3.6+ but is technically
> non-portable" state to a fully portable correct implementation that
> ensures a particular key order in the JSON file thus currently
> requires the following changes:


Nick, it seems like this is more complicated than it needs to be. You can
just pass sort_keys=True to json.dump() / json.dumps(). I use it for tests
and human-readability all the time.
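
For example, a minimal version of the round trip above using sort_keys:

    import json

    data = {"c": 3, "a": 1, "b": 2}
    rendered = json.dumps(data, sort_keys=True)
    data2 = json.loads(rendered)
    rendered2 = json.dumps(data2, sort_keys=True)

    assert data == data2                 # JSON round trip
    assert rendered == rendered2         # dict round trip
    assert rendered == '{"a": 1, "b": 2, "c": 3}'   # independent of dict order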

—Chris


>
> - don't use a dict display, use collections.OrderedDict
> - make sure to set object_pairs_hook when using json.loads
> - don't use kwargs to OrderedDict, use a sequence of 2-tuples
>
> For 3.6, we've already said that we want the last constraint to age
> out, such that the middle version of the code also ensures a
> particular key order.
>
> The proposal is that in 3.7 we retroactively declare that the first,
> most obvious, version of this code should in fact reliably pass all
> three assertions.
>
> Failing that, the proposal is that we instead change the dict
> iteration implementation such that the dict round trip will start
> failing reasonably consistently again (the same as it did in 3.5), so
> that folks realise almost immediately that they still need
> collections.OrderedDict instead of the builtin dict.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] bpo-5001: More-informative multiprocessing error messages (#3079)

2017-08-30 Thread Chris Jerdonek
> https://github.com/python/cpython/commit/bd73e72b4a9f019be514954b1d40e64dc3a5e81c
> commit: bd73e72b4a9f019be514954b1d40e64dc3a5e81c
> branch: master
> author: Allen W. Smith, Ph.D 
> committer: Antoine Pitrou 
> date: 2017-08-30T00:52:18+02:00
> summary:
>
> bpo-5001: More-informative multiprocessing error messages (#3079)
> ...
> @@ -254,8 +256,8 @@ def _setup_queues(self):
>  def apply(self, func, args=(), kwds={}):
>  '''
>  Equivalent of `func(*args, **kwds)`.
> +Pool must be running.
>  '''
> -assert self._state == RUN

Also, this wasn't replaced with anything.
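
Presumably it should get the same treatment as the other asserts in this
PR; e.g., hypothetically, something along these lines (not what the
commit actually does):

    if self._state != RUN:
        raise ValueError("Pool not running")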

--Chris

>  return self.apply_async(func, args, kwds).get()
>
>  def map(self, func, iterable, chunksize=None):
> @@ -307,6 +309,10 @@ def imap(self, func, iterable, chunksize=1):
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] bpo-5001: More-informative multiprocessing error messages (#3079)

2017-08-30 Thread Chris Jerdonek
https://github.com/python/cpython/commit/bd73e72b4a9f019be514954b1d40e64dc3a5e81c
> commit: bd73e72b4a9f019be514954b1d40e64dc3a5e81c
> branch: master
> author: Allen W. Smith, Ph.D 
> committer: Antoine Pitrou 
> date: 2017-08-30T00:52:18+02:00
> summary:
>
> ...
> @@ -307,6 +309,10 @@ def imap(self, func, iterable, chunksize=1):
>  ))
>  return result
>  else:
> +if chunksize < 1:
> +raise ValueError(
> +"Chunksize must be 1+, not {0:n}".format(
> +chunksize))
>  assert chunksize > 1

It looks like removing this assert statement was missed.

--Chris

>  task_batches = Pool._get_tasks(func, iterable, chunksize)
>  result = IMapIterator(self._cache)
> @@ -334,7 +340,9 @@ def imap_unordered(self, func, iterable, chunksize=1):
>  ))
>  return result
>  else:
> -assert chunksize > 1
> +if chunksize < 1:
> +raise ValueError(
> +"Chunksize must be 1+, not {0!r}".format(chunksize))
>  task_batches = Pool._get_tasks(func, iterable, chunksize)
>  result = IMapUnorderedIterator(self._cache)
>  self._taskqueue.put(
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Pep 550 and None/masking

2017-08-27 Thread Chris Jerdonek
Hi Jim, it seems like each time you reply you change the subject line and
start a new thread. Very few others are doing this (e.g. Yury when
releasing a new version). Would it be possible for you to preserve the
threading like others?

--Chris


On Sun, Aug 27, 2017 at 9:08 AM Jim J. Jewett  wrote:

> Does setting an ImplicitScopeVar to None set the value to None, or just
> remove it?
>
> If it removes it, does that effectively unmask a previously masked value?
>
> If it really sets to None, then is there a way to explicitly unmask
> previously masked values?
>
> Perhaps the initial constructor should require an initial value
>  (defaulting to None) and the docs should give examples both for using a
> sensible default value and for using a special "unset" marker.
>
> -jJ
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] startup time repeated? why not daemon

2017-07-23 Thread Chris Jerdonek
On Sun, Jul 23, 2017 at 5:57 AM, Victor Stinner
<victor.stin...@gmail.com> wrote:
> We already did that. See _bootlocale for example. (Maybe also
> _collections_abc?)

I was asking more in the context of recommended practices for
third-party developers, as Nick mentioned earlier, because it's not a
strategy I've ever seen mentioned (and common practice is to group
only by functionality).

It's good to know re: locale and collections though. Incidentally,
from the issue thread it doesn't look like _bootlocale was motivated
primarily by startup time, but _collections_abc was:

locale: http://bugs.python.org/issue9548
collections.abc: http://bugs.python.org/issue19218

--Chris

>
> Victor
>
> Le 22 juil. 2017 07:20, "Chris Jerdonek" <chris.jerdo...@gmail.com> a écrit
> :
>>
>> On Fri, Jul 21, 2017 at 9:52 AM, Brett Cannon <br...@python.org> wrote:
>> > On Thu, 20 Jul 2017 at 22:11 Chris Jerdonek <chris.jerdo...@gmail.com>
>> > wrote:
>> >> On Thu, Jul 20, 2017 at 8:49 PM, Nick Coghlan <ncogh...@gmail.com>
>> >> wrote:
>> >> > ...
>> >> > * Lazy loading can have a significant impact on startup time, as it
>> >> > means you don't have to pay for the cost of finding and loading
>> >> > modules that you don't actually end up using on that particular run
>> >
>> > It should be mentioned that I have started designing an API to make
>> > using
>> > lazy loading much easier in Python 3.7 (i.e. "calling a single function"
>> > easier), but I still have to write the tests and such before I propose a
>> > patch and it will still be mainly for apps that know what they are doing
>> > since lazy loading makes debugging import errors harder.
>> > ...
>> >> > However, if we're going to recommend them as good practices for 3rd
>> >> > party developers looking to optimise the startup time of their Python
>> >> > applications, then it makes sense for us to embrace them for the
>> >> > standard library as well, rather than having our first reaction be to
>> >> > write more hand-crafted C code.
>> >>
>> >> Are there any good write-ups of best practices and techniques in this
>> >> area for applications (other than obvious things like avoiding
>> >> unnecessary imports)? I'm thinking of things like how to structure
>> >> your project, things to look for, developer tools that might help, and
>> >> perhaps third-party runtime libraries?
>> >
>> > Nothing beyond "profile your application" and "don't do stuff during
>> > import
>> > as a side-effect" that I'm aware of.
>>
>> One "project structure" idea of the sort I had in mind is to move
>> frequently used functions in a module into their own module. This way
>> the most common paths of execution don't load unneeded functions.
>> Following this line of reasoning could lead to grouping functions in
>> an application by when they're needed instead of by what they do,
>> which is different from what we normally see. I don't recall seeing
>> advice like this anywhere, so maybe the trade-offs aren't worth it.
>> Thoughts?
>>
>> --Chris
>>
>>
>> >
>> > -Brett
>> >
>> >>
>> >>
>> >> --Chris
>> >>
>> >>
>> >>
>> >> >
>> >> > On that last point, it's also worth keeping in mind that we have a
>> >> > much harder time finding new C-level contributors than we do new
>> >> > Python-level ones, and have every reason to expect that problem to
>> >> > get
>> >> > worse over time rather than better (since writing and maintaining
>> >> > handcrafted C code is likely to go the way of writing and maintaining
>> >> > handcrafted assembly code as a skillset: while it will still be
>> >> > genuinely necessary in some contexts, it will also be an increasingly
>> >> > niche technical specialty).
>> >> >
>> >> > Starting to migrate to using Cython for our acceleration modules
>> >> > instead of plain C should thus prove to be a win for everyone:
>> >> >
>> >> > - Cython structurally avoids a lot of typical bugs that arise in
>> >> > hand-coded extensions (e.g. refcount bugs)
>> >> > - by design, it's much easier to mentally switch between Python &
>> >> > Cython than it is between Python & C

Re: [Python-Dev] startup time repeated? why not daemon

2017-07-21 Thread Chris Jerdonek
On Fri, Jul 21, 2017 at 9:52 AM, Brett Cannon <br...@python.org> wrote:
> On Thu, 20 Jul 2017 at 22:11 Chris Jerdonek <chris.jerdo...@gmail.com>
> wrote:
>> On Thu, Jul 20, 2017 at 8:49 PM, Nick Coghlan <ncogh...@gmail.com> wrote:
>> > ...
>> > * Lazy loading can have a significant impact on startup time, as it
>> > means you don't have to pay for the cost of finding and loading
>> > modules that you don't actually end up using on that particular run
>
> It should be mentioned that I have started designing an API to make using
> lazy loading much easier in Python 3.7 (i.e. "calling a single function"
> easier), but I still have to write the tests and such before I propose a
> patch and it will still be mainly for apps that know what they are doing
> since lazy loading makes debugging import errors harder.
> ...
>> > However, if we're going to recommend them as good practices for 3rd
>> > party developers looking to optimise the startup time of their Python
>> > applications, then it makes sense for us to embrace them for the
>> > standard library as well, rather than having our first reaction be to
>> > write more hand-crafted C code.
>>
>> Are there any good write-ups of best practices and techniques in this
>> area for applications (other than obvious things like avoiding
>> unnecessary imports)? I'm thinking of things like how to structure
>> your project, things to look for, developer tools that might help, and
>> perhaps third-party runtime libraries?
>
> Nothing beyond "profile your application" and "don't do stuff during import
> as a side-effect" that I'm aware of.

One "project structure" idea of the sort I had in mind is to move
frequently used functions in a module into their own module. This way
the most common paths of execution don't load unneeded functions.
Following this line of reasoning could lead to grouping functions in
an application by when they're needed instead of by what they do,
which is different from what we normally see. I don't recall seeing
advice like this anywhere, so maybe the trade-offs aren't worth it.
Thoughts?
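
A sketch of the kind of layout I mean (every name here is made up; it's
just to show the shape of the idea):

    # mypkg/_startup.py   -- the handful of functions every run touches
    # mypkg/_reporting.py -- heavy, rarely used code paths
    #
    # mypkg/__init__.py:
    from mypkg._startup import main          # cheap: imported on every run

    def export_report(path):
        # Deferred import: the cost of loading the heavy module is only
        # paid on the runs that actually call this function.
        from mypkg._reporting import build_report
        return build_report(path)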

--Chris


>
> -Brett
>
>>
>>
>> --Chris
>>
>>
>>
>> >
>> > On that last point, it's also worth keeping in mind that we have a
>> > much harder time finding new C-level contributors than we do new
>> > Python-level ones, and have every reason to expect that problem to get
>> > worse over time rather than better (since writing and maintaining
>> > handcrafted C code is likely to go the way of writing and maintaining
>> > handcrafted assembly code as a skillset: while it will still be
>> > genuinely necessary in some contexts, it will also be an increasingly
>> > niche technical specialty).
>> >
>> > Starting to migrate to using Cython for our acceleration modules
>> > instead of plain C should thus prove to be a win for everyone:
>> >
>> > - Cython structurally avoids a lot of typical bugs that arise in
>> > hand-coded extensions (e.g. refcount bugs)
>> > - by design, it's much easier to mentally switch between Python &
>> > Cython than it is between Python & C
>> > - Cython accelerated modules are easier to adapt to other interpeter
>> > implementations than handcrafted C modules
>> > - keeping Python modules and their C accelerated counterparts in sync
>> > will be easier, as they'll mostly be using the same code
>> > - we'd be able to start writing C API test cases in Cython rather than
>> > in handcrafted C (which currently mostly translates to only testing
>> > them indirectly)
>> > - CPython's own test suite would naturally help test Cython
>> > compatibility with any C API updates
>> > - we'd have an inherent incentive to help enhance Cython to take
>> > advantage of new C API features
>> >
>> > The are some genuine downsides in increasing the complexity of
>> > bootstrapping CPython when all you're starting with is a VCS clone and
>> > a C compiler, but those complications are ultimately no worse than
>> > those we already have with Argument Clinic, and hence amenable to the
>> > same solution: if we need to, we can check in the generated C files in
>> > order to make bootstrapping easier.
>> >
>> > Cheers,
>> > Nick.
>> >
>> > --
>> > Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>> > ___
>> > Python-Dev mailing list
>> >

Re: [Python-Dev] startup time repeated? why not daemon

2017-07-20 Thread Chris Jerdonek
On Thu, Jul 20, 2017 at 8:49 PM, Nick Coghlan  wrote:
> ...
> * Lazy loading can have a significant impact on startup time, as it
> means you don't have to pay for the cost of finding and loading
> modules that you don't actually end up using on that particular run
>
> We've historically resisted adopting these techniques for the standard
> library because they *do* make things more complicated *and* harder to
> debug relative to plain old eagerly imported dynamic Python code.
> However, if we're going to recommend them as good practices for 3rd
> party developers looking to optimise the startup time of their Python
> applications, then it makes sense for us to embrace them for the
> standard library as well, rather than having our first reaction be to
> write more hand-crafted C code.

Are there any good write-ups of best practices and techniques in this
area for applications (other than obvious things like avoiding
unnecessary imports)? I'm thinking of things like how to structure
your project, things to look for, developer tools that might help, and
perhaps third-party runtime libraries?

--Chris



>
> On that last point, it's also worth keeping in mind that we have a
> much harder time finding new C-level contributors than we do new
> Python-level ones, and have every reason to expect that problem to get
> worse over time rather than better (since writing and maintaining
> handcrafted C code is likely to go the way of writing and maintaining
> handcrafted assembly code as a skillset: while it will still be
> genuinely necessary in some contexts, it will also be an increasingly
> niche technical specialty).
>
> Starting to migrate to using Cython for our acceleration modules
> instead of plain C should thus prove to be a win for everyone:
>
> - Cython structurally avoids a lot of typical bugs that arise in
> hand-coded extensions (e.g. refcount bugs)
> - by design, it's much easier to mentally switch between Python &
> Cython than it is between Python & C
> - Cython accelerated modules are easier to adapt to other interpeter
> implementations than handcrafted C modules
> - keeping Python modules and their C accelerated counterparts in sync
> will be easier, as they'll mostly be using the same code
> - we'd be able to start writing C API test cases in Cython rather than
> in handcrafted C (which currently mostly translates to only testing
> them indirectly)
> - CPython's own test suite would naturally help test Cython
> compatibility with any C API updates
> - we'd have an inherent incentive to help enhance Cython to take
> advantage of new C API features
>
> The are some genuine downsides in increasing the complexity of
> bootstrapping CPython when all you're starting with is a VCS clone and
> a C compiler, but those complications are ultimately no worse than
> those we already have with Argument Clinic, and hence amenable to the
> same solution: if we need to, we can check in the generated C files in
> order to make bootstrapping easier.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Star the CPython GitHub project if you like Python!

2017-07-04 Thread Chris Jerdonek
Great work, Victor! It seems like this would be an easy thing to
mention at the beginning of conference talks and meetup presentations,
and also something to ask coworkers if you work at a company that uses
Python (e.g. on workplace Slack channels, etc).

--Chris



On Tue, Jul 4, 2017 at 6:15 AM, Victor Stinner  wrote:
> 4 days later, we got +2,389 new stars, thank you! (8,539 => 10,928)
>
> Python moved from the 11th place to the 9th, before Elixir and Julia.
>
> Python is still behind Ruby (12,511) and PHP (12,318), but it's
> already much better than before!
>
> Victor
>
> 2017-06-30 15:59 GMT+02:00 Victor Stinner :
>> Hi,
>>
>> GitHub has a showcase page of hosted programming languages:
>>
>>https://github.com/showcases/programming-languages
>>
>> Python is only #11 with 8,539 stars, behind PHP and Ruby!
>>
>> Hey, you should "like" ("star"?) the CPython project if you like Python!
>>
>>https://github.com/python/cpython/
>>Click on "Star" at the top right.
>>
>> Thank you!
>> Victor
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [python-committers] Failed to build select

2016-08-08 Thread Chris Jerdonek
On Mon, Aug 8, 2016 at 8:59 AM, Ned Deily <n...@python.org> wrote:
> On Aug 8, 2016, at 02:45, Steven D'Aprano <st...@pearwood.info> wrote:
>>
>> Could not find platform dependent libraries <exec_prefix>
>> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
>> Could not find platform dependent libraries <exec_prefix>
>> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
>
> On Aug 8, 2016, at 03:25, Chris Jerdonek <chris.jerdo...@gmail.com> wrote:
>> FWIW, I would be interested in learning more about the above warning
>> (its significance, how it can be addressed, whether it can be ignored,
>> etc). I also get this message when installing 3.5.2 from source on
>> Ubuntu 14.04.
>
> Those messages are harmless and are generated by the Makefile steps that 
> update Importlib's bootstrap files, Python/importlib.h and 
> Python/importlib_external.h.  See http://bugs.python.org/issue14928 for the 
> origins of this.  It should be possible to fix the Makefile to suppress those 
> messages.  I suggest you open an issue about it.

I created an issue for this here: http://bugs.python.org/issue27713

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New security-sig mailling list

2016-06-20 Thread Chris Jerdonek
On Mon, Jun 20, 2016 at 12:24 PM, Ethan Furman  wrote:
>
> has been created:
>
>   https://mail.python.org/mailman/listinfo/security-sig
>
> The purpose of this list is to discuss security-related enhancements to 
> Python while having as little impact on backwards compatibility as possible.

I would recommend clarifying the relationship of the SIG to the Python
Security Response Team ( https://www.python.org/news/security ), or at
least clarifying that the SIG is different from the PSRT (and that
security reports should not be sent to the SIG).

--Chris


>
> Once a proposal is ready it will be presented to Python Dev.
>
> (This text is subject to change. ;)
>
> --
> ~Ethan~
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-10 Thread Chris Jerdonek
On Fri, Jun 10, 2016 at 11:29 AM, David Mertz  wrote:
> This is fairly academic, since I do not anticipate needing to do this
> myself, but I have a specific question.  I'll assume that Python 3.5.2 will
> go back to the 2.6-3.4 behavior in which os.urandom() never blocks on Linux.
> Moreover, I understand that the case where the insecure bits might be
> returned are limited to Python scripts that run on system initialization on
> Linux.
>
> If I *were* someone who needed to write a Linux system initialization script
> using Python 3.5.2, what would the code look like.  I think for this use
> case, requiring something with a little bit of "code smell" is fine, but I
> kinda hope it exists at all.

Good question.  And going back to Larry's original e-mail, where he said--

On Thu, Jun 9, 2016 at 4:25 AM, Larry Hastings  wrote:
> THE PROBLEM
> ...
> The issue author had already identified the cause: CPython was blocking on
> getrandom() in order to initialize hash randomization.  On this fresh
> virtual machine the entropy pool started out uninitialized.  And since the
> only thing running on the machine was CPython, and since CPython was blocked
> on initialization, the entropy pool was initializing very, very slowly.

it seems to me that you'd want such a solution to have code that
causes the initialization of the entropy pool to be sped up so that it
happens as quickly as possible (if that is even possible).  Is it
possible? (E.g. by causing the machine to start doing things other
than just CPython?)
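
For example, a rough sketch of what such an init script might do under the
restored 3.5.2 behavior (this is Linux-specific via /proc, and the threshold
and polling interval are arbitrary):

    import os
    import time

    def wait_for_entropy(minimum_bits=128, poll_interval=0.1):
        # Heuristic only: poll the kernel's entropy estimate until it looks
        # healthy, so os.urandom() below is unlikely to return weak bits.
        while True:
            with open("/proc/sys/kernel/random/entropy_avail") as f:
                if int(f.read()) >= minimum_bits:
                    return
            time.sleep(poll_interval)

    wait_for_entropy()
    key = os.urandom(32)  # never blocks with the 2.6-3.4 behavior restored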

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Adding NewType() to PEP 484

2016-05-28 Thread Chris Jerdonek
On Fri, May 27, 2016 at 9:26 PM, Guido van Rossum  wrote:
> We discussed this over dinner at PyCon, some ideas we came up with:
>
> - Dependent types, harking back to a similar concept in Ada
> (https://en.wikibooks.org/wiki/Ada_Programming/Type_System#Derived_types)
> which in that language is also spelled with "new".
>
> - New type
>
> - Distinguished Type
>
> - Distinguished Subtype
>
> - Distinguished Type Alias
>
> - Distinguished Alias
>
> - BoatyMcBoatType

Some more suggestions:

- Cloned Type (or Type Clone)

- Copied Type (or Type Copy)

- Named Type

- Renamed Type

- Twin Type

- Wrapped Type

- Doppelganger Type (not serious)

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Typo in PEP-0423

2015-12-23 Thread Chris Jerdonek
On Tue, Dec 22, 2015 at 4:35 PM, Benjamin Peterson  wrote:
> We've played around with robots.txt, but it's still useful for old docs
> to be indexed (e.g., for removed features); we just need to figure
> out how to get them deprioritized in results. I wonder if <link ref="canonical"> in the old docs would help.

Yes, this is probably the correct approach (though it's rel="canonical"):

https://support.google.com/webmasters/answer/139066?hl=en

It's always been an inconvenience when Google displays the docs for
different, old versions (3.2, 3.3, etc) -- seemingly at random, and
sometimes instead of the newest version.  Fortunately, this seems to
be improving over time.

By using rel="canonical", you would have control over this and can
signal to Google to display only the newest, stable version of a given
doc.  This would probably have other positive benefits like
consolidating the "search juice" onto one page, so it's no longer
spread thinly across multiple versions.  There would still be a
question of how you want to handle 2 versus 3.
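
For example, each old page's head could carry something like this (the href
here is only a hypothetical target; it would point at the newest stable
version of the same document):

    <link rel="canonical" href="https://docs.python.org/3/library/os.html">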

--Chris


>
> On Sat, Dec 19, 2015, at 11:02, A.M. Kuchling wrote:
>> On Sat, Dec 19, 2015 at 08:55:26PM +1000, Nick Coghlan wrote:
>> > Even once the new docs are in place, getting them to the top of search
>> > of results ahead of archived material that may be years out of date is
>> > likely to still be a challenge - for example, even considering just
>> > the legacy distutils docs, the "3.1" and "2" docs appear ...
>>
>> We probably need to update https://docs.python.org/robots.txt, which
>> currently contains:
>>
>> # Prevent development and old documentation from showing up in search
>> results.
>> User-agent: *
>> # Disallow: /dev
>> Disallow: /release
>>
>> The intent was to allow the latest version of the docs to be crawled.
>> Unfortunately, with the current hierarchy we'd have to disallow each
>> version, e.g.
>>
>> Disallow: /2.6/*
>> Disallow: /3.0/*
>> Disallow: /3.1/*
>>
>> And we'd need to update it for each new major release.
>>
>> --amk
>> ___
>> Python-Dev mailing list
>> Python-Dev@python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> https://mail.python.org/mailman/options/python-dev/benjamin%40python.org
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Rationale behind lazy map/filter

2015-10-13 Thread Chris Jerdonek
On Tue, Oct 13, 2015 at 8:26 AM, Random832  wrote:
> "R. David Murray"  writes:
>
>> On Tue, 13 Oct 2015 14:59:56 +0300, Stefan Mihaila
>>  wrote:
>>> Maybe it's just python2 habits, but I assume I'm not the only one
>>> carelessly thinking that "iterating over an input a second time will
>>> result in the same thing as the first time (or raise an error)".
>>
>> This is the way iterators have always worked.
>
> It does raise the question though of what working code it would actually
> break to have "exhausted" iterators raise an error if you try to iterate
> them again rather than silently yield no items.

What about cases where not all of the elements of the iterator are
known at the outset?  For example, you might have a collection of
pending tasks that you periodically loop through and process.
Changing the behavior would result in an error when checking for more
tasks instead of no tasks.
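
A minimal sketch of that pattern (the task names are made up):

    def process(task):
        print("processing", task)

    tasks = iter(["job-a", "job-b"])   # hypothetical queue of pending tasks
    for task in tasks:
        process(task)                  # drains the iterator

    # A later periodic check: today this loop simply finds nothing to do;
    # under the proposed change it would raise instead.
    for task in tasks:
        process(task)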

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Merging Jython code into standard Lib [was Re: Python Language Summit at PyCon: Agenda]

2013-02-28 Thread Chris Jerdonek
On Thu, Feb 28, 2013 at 1:30 AM, Antoine Pitrou solip...@pitrou.net wrote:
 On Wed, 27 Feb 2013 11:33:30 -0800,
 fwierzbi...@gmail.com fwierzbi...@gmail.com wrote:

 There are a couple of spots that might be more controversial. For
 example, Jython has a file Lib/zlib.py that implements zlib in terms
 of the existing Java support for zlib. I do wonder if such a file is
 acceptable in CPython's Lib since its 195 lines of code would be
 entirely skipped by CPython.

 That's a bit annoying. How will we know that the code still works, even
 though our buildbots don't exercise it?
 Also, what happens if the code doesn't work anymore?

Agreed on those problems.  Would it be possible to use a design
pattern in these cases so the Jython-only code wouldn't need to be
part of the CPython repo?  A naive example would be refactoring zlib
to allow subclassing in the way that Jython needs, and then Jython
could subclass in its own repo.  CPython could have tests to check the
subclass contract that Jython needs.
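
A minimal sketch of the general shape (the names are hypothetical, not the
actual zlib API):

    # In CPython's Lib/: a base class documents the hooks a port may override.
    class _CompressorBase:
        def compress(self, data):
            return self._raw_compress(data)   # shared, CPython-tested behavior

        def _raw_compress(self, data):        # hook for alternative back ends
            raise NotImplementedError

    # In Jython's own repo, not CPython's:
    class _JavaCompressor(_CompressorBase):
        def _raw_compress(self, data):
            ...  # delegate to java.util.zip here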

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Fwd: PEP 426 is now the draft spec for distribution metadata 2.0

2013-02-20 Thread Chris Jerdonek
 On Wednesday, February 20, 2013 at 2:48 AM, Chris Jerdonek wrote:

 I meant that bringing distlib into http://hg.python.org/cpython/ would
 give it more visibility to core devs and others that already keep an
 eye on python-checkins (the mailing list). And I think seeing the
 Sphinx-processed docs integrated and cross-referenced with
 http://docs.python.org/dev/ will help people understand better what
 has been done and how it fits in with the rest of CPython -- which I
 think would be useful to the community. It may also encourage
 involvement (e.g. by being part of the main tracker).

On Tue, Feb 19, 2013 at 11:53 PM, Donald Stufft donald.stu...@gmail.com wrote:
 On the other hand it makes contributing to it more annoying since it
 does not have pull requests, unless it was just a mirror.

Maybe just the finished/production-ready pieces could be added as they
are ready, with the main development happening outside.  My
understanding of distlib is that it's a collection of independent,
bite-sized pieces of functionality, which could lend itself well to
such a process.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Fwd: PEP 426 is now the draft spec for distribution metadata 2.0

2013-02-19 Thread Chris Jerdonek
On Tue, Feb 19, 2013 at 2:28 AM, Nick Coghlan ncogh...@gmail.com wrote:
 On Tue, Feb 19, 2013 at 7:37 PM, M.-A. Lemburg m...@egenix.com wrote:
 On 17.02.2013 11:11, Nick Coghlan wrote:
 I'm not against modernizing the format, but given that version 1.2
 has been out for around 8 years now, without much following,
 I think we need to make the implementation bit a requirement
 before accepting the PEP.

 It is being implemented in distlib, and the (short!) appendix to the
 PEP itself shows how to read the format using the standard library's
 email module.

Maybe this is already stated somewhere, but is there a plan for when
distlib will be brought into the repository?  Is there a reason not to
do it now?  It seems it would have more visibility that way (e.g.
people could see it as part of the development version of the online
docs and it would be in check-ins as are PEP edits), and its status
relative to Python would be clearer.

--Chris


 v2.0 is designed to fix many of the issues that prevented the adoption
 of v1.2, including tweaks to the standardised version scheme and the
 addition of a formal extension mechanism to avoid the ad hoc
 extensions that occurred with earlier metadata versions.

 Cheers,
 Nick.

 --
 Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 http://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe: 
 http://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Fwd: PEP 426 is now the draft spec for distribution metadata 2.0

2013-02-19 Thread Chris Jerdonek
On Tue, Feb 19, 2013 at 3:16 PM, Daniel Holth dho...@gmail.com wrote:
 Sorry, Chris must have meant http://hg.python.org/distlib/ . I was
 struggling to imagine a world where that is more visible than something on
 bitbucket.

I meant that bringing distlib into http://hg.python.org/cpython/ would
give it more visibility to core devs and others that already keep an
eye on python-checkins (the mailing list).  And I think seeing the
Sphinx-processed docs integrated and cross-referenced with
http://docs.python.org/dev/ will help people understand better what
has been done and how it fits in with the rest of CPython -- which I
think would be useful to the community.  It may also encourage
involvement (e.g. by being part of the main tracker).

In asking about the plan for doing this, I was thinking of the
following remark by Nick:

On Tue, Feb 19, 2013 at 5:40 AM, Nick Coghlan ncogh...@gmail.com wrote:
 On Tue, Feb 19, 2013 at 11:23 PM, M.-A. Lemburg m...@egenix.com wrote:

 Hmm, what is distlib and where does it live ?

 As part of the post-mortem of packaging's removal from Python 3.3,
 several subcomponents were identified as stable and useful. distlib is
 those subcomponents extracted into a separate repository by Vinay
 Sajip.

 It will be proposed as the standard library infrastructure for
 building packaging related tools, while distutils will become purely a
 build system and have nothing to do with installing software directly
 (except perhaps on developer machines).

My question was basically whether there was a tentative plan for when
it (or completed parts of it) will be proposed (e.g. when a certain
amount of functionality is completed, etc).  It's better not to do
this at the last minute if 3.4 is the plan (as I think was attempted
with packaging but for 3.3).

On Tue, Feb 19, 2013 at 6:40 PM, Steven D'Aprano st...@pearwood.info wrote:

 I keep hearing people say that the stdlib is not important, but I don't
 think
 that is true. There are lots of people who have problems with anything not
 in
 the standard library.

 - Beginners often have difficulty (due to inexperience, lack of confidence
 or
   knowledge) in *finding*, let alone installing and using, packages that
 aren't
   in the standard library.

 - To people in the Linux world, adding anything outside of your distro's
   packaging system is a nuisance. No matter how easy your packaging library
   makes it, you now have two sorts of packages: first-class packages that
   your distro will automatically update for you, and second-class ones that
   aren't.

 - People working in restrictive corporate systems often have to jump through
   flaming hoops before installing software.

I would also add that for people new to writing Python modules and
that want to distribute them, it's hard to evaluate what they are
supposed to use (distutils, setuptools, distribute, bento, etc).
Just a day or two ago, this exact question was asked on the Distutils
mailing list with subject Confusion of a hobby programmer.  Code not
being in the standard library creates an extra mental hurdle to
overcome.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] BDFL delegation for PEP 426 + distutils freeze

2013-02-06 Thread Chris Jerdonek
On Sun, Feb 3, 2013 at 11:40 AM, Chris Jerdonek
chris.jerdo...@gmail.com wrote:
 On Sun, Feb 3, 2013 at 10:33 AM, Éric Araujo mer...@netwok.org wrote:
 On 03/02/2013 07:48, Antoine Pitrou wrote:
 I vote for removing the distutils is frozen principle.
 I’ve also been thinking about that.  There have been two exceptions to
 the freeze, for ABI flags in extension module names and for pycache
 directories.  When the stable ABI was added and MvL wanted to change
 distutils (I don’t know to do what exactly), Tarek stood firm on the
 freeze and asked for any improvement to go into distutils2, and after
 MvL said that he would not contibute to an outside project, we merged d2
 into the stdlib.  Namespace packages did not impact distutils either.
 Now that we’ve removed packaging from the stdlib, we have two Python
 features that are not supported in the standard packaging system, and I
 agree that it is a bad thing for our users.

 I’d like to propose a reformulation of the freeze:

 This could be common knowledge, but is the current formulation of the
 freeze spelled out somewhere?

I asked this earlier, but didn't see a response.  Is the freeze stated
somewhere like in a PEP?  If not, can someone state it precisely (e.g.
what's allowed to change and what's not)?

Thanks,
--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] BDFL delegation for PEP 426 + distutils freeze

2013-02-03 Thread Chris Jerdonek
On Sun, Feb 3, 2013 at 10:33 AM, Éric Araujo mer...@netwok.org wrote:
 On 03/02/2013 07:48, Antoine Pitrou wrote:
 I vote for removing the distutils is frozen principle.
 I’ve also been thinking about that.  There have been two exceptions to
 the freeze, for ABI flags in extension module names and for pycache
 directories.  When the stable ABI was added and MvL wanted to change
 distutils (I don’t know to do what exactly), Tarek stood firm on the
 freeze and asked for any improvement to go into distutils2, and after
 MvL said that he would not contibute to an outside project, we merged d2
 into the stdlib.  Namespace packages did not impact distutils either.
 Now that we’ve removed packaging from the stdlib, we have two Python
 features that are not supported in the standard packaging system, and I
 agree that it is a bad thing for our users.

 I’d like to propose a reformulation of the freeze:

This could be common knowledge, but is the current formulation of the
freeze spelled out somewhere?

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] hg.python.org Mercurial upgrade

2013-01-23 Thread Chris Jerdonek
On Wed, Jan 23, 2013 at 6:16 PM, Ezio Melotti ezio.melo...@gmail.com wrote:
 On Wed, Jan 23, 2013 at 9:43 PM, Antoine Pitrou solip...@pitrou.net wrote:
 On Wed, 23 Jan 2013 20:41:11 +0100
 Amaury Forgeot d'Arc amaur...@gmail.com wrote:
 2013/1/22 Antoine Pitrou solip...@pitrou.net

  I've upgraded the Mercurial version on hg.python.org. If there any
  problems, don't hesitate to post here.
 

 I've noticed a display glitch with the hg viewer:
 http://hg.python.org/cpython/rev/6df0b4ed8617#l2.8
 There is a [#14591] link which causes the rest of the line to be shifted.

 Indeed. This is not because of the upgrade, but because of a new regexp
 Ezio asked me to insert in the Web UI configuration :-)


 FWIW this was an attempt to fix the links to issues in
 http://hg.python.org/cpython/.
 AFAIU the interhg extension used here to turn #12345 to links
 affects at least 3 places:
   1) the description of each changeset in the shortlog page (e.g.
 http://hg.python.org/cpython/);
   2) the description at the top of the rev page (e.g.
 http://hg.python.org/cpython/rev/6df0b4ed8617);
   3) the code in the diff/rev/annotate and possibly other pages
 (e.g. http://hg.python.org/cpython/rev/6df0b4ed8617#l2.6);
 With the previous solution, case 1 was broken, but links for cases 2-3
 worked fine. The problem is that in 1 the description is already a
 link, so the result ended up being something like
 '<a href="rev/...">Issue <a href="b.p.o/12345">#12345</a> is now fixed</a>'.
 With the new solution 1-2 work (the links are added/moved at the end),
 but it's glitched for case 3.
 Unless interhg provides a way to limit the replacement only to
 specific places and/or use different replacements for different
 places, we will either have to live with these glitches or come up
 with a proper fix done at the right level.

How does the above relate to this issue?

http://bugs.python.org/issue15919

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 433: Add cloexec argument to functions creating file descriptors

2013-01-13 Thread Chris Jerdonek
On Sun, Jan 13, 2013 at 2:40 AM, Charles-François Natali
cf.nat...@gmail.com wrote:
 Hello,

 PEP: 433
 Title: Add cloexec argument to functions creating file descriptors

 I'm not a native English speaker, but it seems to me that the correct
 wording should be parameter (part of the function
 definition/prototype, whereas argument refers to the actual value
 supplied).

Yes, this distinction is now reflected in our glossary as of a month
or two ago.  Let's try to be consistent with that. :)
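
For example (a made-up function, just to illustrate the glossary's
distinction):

    def connect(host, port=80):          # host and port are parameters
        ...

    connect("example.com", port=8080)    # "example.com" and 8080 are arguments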

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] Cron docs@dinsdale /home/docs/build-devguide

2013-01-10 Thread Chris Jerdonek
On Thu, Jan 10, 2013 at 11:35 PM, Cron Daemon r...@python.org wrote:
 /home/docs/devguide/documenting.rst:766: WARNING: term not in glossary: 
 bytecode

FYI, this warning is spurious (second time at least).  I made an issue
about it here:

http://bugs.python.org/issue16928

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Posting frequent spurious changes in bugtracker

2013-01-03 Thread Chris Jerdonek
On Thu, Jan 3, 2013 at 1:06 PM, Glenn Linderman v+pyt...@g.nevcal.com wrote:
 Jesus' suggestion of a hidden version field would help, but could be
 annoying for the case of someone writing a lengthy response, and having it
 discarded because the hidden version field is too old... so care would have
 to be taken to preserve such responses when doing the refresh...

I seem to recall that this already happens in certain circumstances.
If someone else posts a comment while you are composing a comment, the
user interface notifies you with a red box when you try to submit that
the thread has changed.  It prevents the post but does preserve the
text you are composing.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] Cron docs@dinsdale /home/docs/build-devguide

2012-12-29 Thread Chris Jerdonek
On Sat, Dec 29, 2012 at 12:05 PM, Cron Daemon r...@python.org wrote:
 /home/docs/devguide/documenting.rst:768: WARNING: term not in glossary: 
 bytecode

Why is this warning reported?  I can't reproduce on my system, and on
my system and in the published online docs, the term successfully
links to:

http://docs.python.org/3/glossary.html#term-bytecode

(in the section
http://docs.python.org/devguide/documenting.html#information-units )

--Chris



 ___
 Python-checkins mailing list
 python-check...@python.org
 http://mail.python.org/mailman/listinfo/python-checkins
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] cpython (merge 2.7 - 2.7): Null merge.

2012-12-27 Thread Chris Jerdonek
On Thu, Dec 27, 2012 at 12:05 PM, serhiy.storchaka
python-check...@python.org wrote:
 http://hg.python.org/cpython/rev/26eb2979465c
 changeset:   81094:26eb2979465c
 branch:  2.7
 parent:  81086:ccbb16719540
 parent:  81090:d3c81ef728ae
 user:Serhiy Storchaka storch...@gmail.com
 date:Thu Dec 27 22:00:12 2012 +0200
 summary:
   Null merge.

Great to see your first check-ins, Serhiy.  Congratulations!

I think for this case we usually say Merge heads, which is different
from the case of a null merge (i.e. where the diff is empty, for
example when registering that a 3.x commit should not be
forward-ported to a later version).

--Chris


 files:
   Lib/idlelib/EditorWindow.py |  2 +-
   Misc/NEWS   |  3 +++
   2 files changed, 4 insertions(+), 1 deletions(-)


 diff --git a/Lib/idlelib/EditorWindow.py b/Lib/idlelib/EditorWindow.py
 --- a/Lib/idlelib/EditorWindow.py
 +++ b/Lib/idlelib/EditorWindow.py
 @@ -1611,7 +1611,7 @@
  try:
  try:
  _tokenize.tokenize(self.readline, self.tokeneater)
 -except _tokenize.TokenError:
 +except (_tokenize.TokenError, SyntaxError):
  # since we cut off the tokenizer early, we can trigger
  # spurious errors
  pass
 diff --git a/Misc/NEWS b/Misc/NEWS
 --- a/Misc/NEWS
 +++ b/Misc/NEWS
 @@ -168,6 +168,9 @@
  Library
  ---

 +- Issue #16504: IDLE now catches SyntaxErrors raised by tokenizer. Patch by
 +  Roger Serwy.
 +
  - Issue #16702: test_urllib2_localnet tests now correctly ignores proxies for
localhost tests.


 --
 Repository URL: http://hg.python.org/cpython

 ___
 Python-checkins mailing list
 python-check...@python.org
 http://mail.python.org/mailman/listinfo/python-checkins

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] cpython (3.3): Issue #16045: add more unit tests for built-in int()

2012-12-23 Thread Chris Jerdonek
On Sun, Dec 23, 2012 at 12:03 PM, Terry Reedy tjre...@udel.edu wrote:

 +    # For example, PyPy 1.9.0 raised TypeError for these cases because it
 +    # expects x to be a string if base is given.
 +    @support.cpython_only
 +    def test_base_arg_with_no_x_arg(self):
 +        self.assertEquals(int(base=6), 0)
 +        # Even invalid bases don't raise an exception.
 +        self.assertEquals(int(base=1), 0)
 +        self.assertEquals(int(base=1000), 0)
 +        self.assertEquals(int(base='foo'), 0)


 I think the above behavior is buggy and should be changed rather than frozen
 into CPython with a test. According to the docs, PyPy does it right.

I support further discussion here.  (I did draft the patch, but it was
a first version.  I did not commit the patch.)

 The current online doc gives the signature as
 int(x=0)
 int(x, base=10) where x is s string

 The 3.3.0 docstring says
 When converting a string, use the optional base.  It is an error to supply
 a base when converting a non-string.

One way to partially explain CPython's behavior is that when base is
provided, the function behaves as if x defaults to '0' rather than 0.
This is similar to the behavior of str(), which defaults to b'' when
encoding or errors is provided, but otherwise defaults to '':

http://docs.python.org/dev/library/stdtypes.html#str
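
For example, here is roughly how this plays out in a CPython 3.3 session (the
int(base=6) case is the one the committed test asserts):

>>> int()
0
>>> int(base=6)      # as if x defaulted to '0' (or '')
0
>>> int('0', base=6)
0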

 Certainly, accepting any object as a base, violating The allowed values are
 0 and 2-36. just because giving a base is itself invalid is crazy.

For further background (and you can see this is the 2.7 commit),
int(base='foo') did raise TypeError in 2.7, but this particular case
was relaxed in Python 3.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] cpython (3.3): Issue #16045: add more unit tests for built-in int()

2012-12-23 Thread Chris Jerdonek
On Sun, Dec 23, 2012 at 6:19 PM, Terry Reedy tjre...@udel.edu wrote:
 On 12/23/2012 4:47 PM, Chris Jerdonek wrote:
 On Sun, Dec 23, 2012 at 12:03 PM, Terry Reedy tjre...@udel.edu wrote:

 +    # For example, PyPy 1.9.0 raised TypeError for these cases because it
 +    # expects x to be a string if base is given.
 +    @support.cpython_only
 +    def test_base_arg_with_no_x_arg(self):
 +        self.assertEquals(int(base=6), 0)
 +        # Even invalid bases don't raise an exception.
 +        self.assertEquals(int(base=1), 0)
 +        self.assertEquals(int(base=1000), 0)
 +        self.assertEquals(int(base='foo'), 0)

 I think the above behavior is buggy and should be changed rather than
 frozen
 into CPython with a test. According to the docs, PyPy does it right.

 In any case, the discrepancy between doc and behavior is a bug and should be
 fixed one way or the other way. Unlike int(), I do not see a realistic use
 case for int(base=x) that would make it anything other than a bug.

Just to be clear, I agree with you that something needs fixing (and
again, I did not commit the patch).  But I want to clarify a couple of
your responses to my points.


 One way to partially explain CPython's behavior is that when base is
 provided, the function behaves as if x defaults to '0' rather than 0.


 That explanation does not work. int('0', base = invalid) and int(x='0',
 base=invalid) raise TypeError or ValueError.

I was referring to the behavioral discrepancy between CPython
returning 0 for int(base=valid) and the part of the docstring you
quoted which says, It is an error to supply a base when converting a
non-string.  I wasn't justifying the case of int(base=invalid).
That's why I said partially explains.  The int(base=valid) case is
covered by the following line of the CPython-specific test that was
committed (which in PyPy raises TypeError):

+self.assertEquals(int(base=6), 0)

 If providing a value explicit
 changes behavior, then that value is not the default. To make '0' really be
 the base-present default, the doc and above behavior should be changed. Or,
 make '' the default and have int('', base=whatever) return 0 instead of
 raising. (This would be the actual parallel to the str case.)


 This is similar to the behavior of str(), which defaults to b'' when
 encoding or errors is provided, but otherwise defaults to '':

 This is different. Providing b'' explicitly has no effect.
 str(encoding=x, errors=y) and str(b'', encoding=x, errors=y) act the same.
 If x or y is not a string, both raise TypeError. (Unlike int and base.) A
 bad encoding string is ignored because the encoding lookup is not done
 unless there is something to encode. (This is why the ignore-base
 base-default should be '', not '0'.) A bad error specification is (I
 believe) ignored for any error-free bytes/encoding pair because, again, the
 lookup is only done when needed.

Again, I was referring to the valid case.  My point was that str()'s
object argument defaults to '' when encoding or errors isn't given,
and otherwise defaults to b''.  You can see that the object argument
defaults to '' in the simpler case here:

>>> str(), str(object=''), str(object=b'')
('', '', b'')

But when the encoding argument is given the default is different (it is b''):

>>> str(object='', encoding='utf-8')
TypeError: decoding str is not supported
>>> str(encoding='utf-8'), str(object=b'', encoding='utf-8')
('', '')

But again, these are clarifications of my comments.  I'm not
disagreeing with your larger point.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] hg.python.org down

2012-12-22 Thread Chris Jerdonek
On Sat, Dec 22, 2012 at 6:58 AM, Nick Coghlan ncogh...@gmail.com wrote:
 On Sun, Dec 23, 2012 at 12:14 AM, Stefan Krah ste...@bytereef.org wrote:
 Hi,

 hg.python.org seems to be unreachable (tested from various IP addresses).

 The docs build daemon started complaining on python-checkins about
 2:10 pm UTC (on the 22nd), so about the same time you noticed the
 issue.

For the record, it seems to be back up.  I don't know since when
precisely, but the last of the complaints on python-checkins seems to
have been about two hours ago.  (The complaints were happening every 5
minutes.)

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 3145 (With Contents)

2012-12-21 Thread Chris Jerdonek
On Fri, Dec 21, 2012 at 6:46 AM, Brett Cannon br...@python.org wrote:

 On Thu, Dec 20, 2012 at 7:35 PM, Chris Jerdonek chris.jerdo...@gmail.com
 wrote:

 I don't disagree that he shouldn't have cross-posted.  I was just
 pointing out that the language should be clarified.  What's confusing
 is that the current language implies that one shouldn't send any
 PEP-related e-mails to any mailing list other than peps@.  In
 particular, how can one discuss PEPs on python-dev or python-ideas
 without violating that language (e.g. this e-mail which is related to
 PEP 1)?  It is probably just a matter of clarifying what PEP-related
 means.


 I'm just not seeing the confusion, sorry. And we have never really had any
 confusion over this wording before. If you want to send a patch to tweak the
 wording to me more clear then please go ahead and I will consider it, but
 I'm not worried enough about it to try to come up with some rewording
 myself.

I uploaded a proposed patch to this issue:

http://bugs.python.org/issue16746

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 3145 (With Contents)

2012-12-20 Thread Chris Jerdonek
On Thu, Dec 20, 2012 at 12:18 PM, Brett Cannon br...@python.org wrote:

 And please do not CC the peps mailing list on discussions. It should only be
 used to mail in new PEPs or acceptable patches to PEPs.

PEP 1 should perhaps be clarified if the above is the case.
Currently, PEP 1 says all PEP-related e-mail should be sent there:

The PEP editors assign PEP numbers and change their status. Please
send all PEP-related email to p...@python.org (no cross-posting
please). Also see PEP Editor Responsibilities  Workflow below.

as well as:

A PEP editor must subscribe to the p...@python.org list. All
PEP-related correspondence should be sent (or CC'd) to
p...@python.org (but please do not cross-post!).

(Incidentally, the statement not to cross-post seems contradictory if
a PEP-related e-mail is also sent to python-dev, for example.)

--Chris



 On Wed, Dec 19, 2012 at 5:20 PM, anatoly techtonik techto...@gmail.com
 wrote:

 On Sun, Dec 9, 2012 at 7:17 AM, Gregory P. Smith g...@krypto.org wrote:

 I'm really not sure what this PEP is trying to get at given that it
 contains no examples and sounds from the descriptions to be adding a
 complicated api on top of something that already, IMNSHO, has too much it
 (subprocess.Popen).

 Regardless, any user can use the stdout/err/in file objects with their
 own code that handles them asynchronously (yes that can be painful but that
 is what is required for _any_ socket or pipe I/O you don't want to block
 on).


 And how to use stdout/stderr/in asynchronously in cross-platform manner?
 IIUC the problem is that every read is blocking.


 It sounds to me like this entire PEP could be written and released as a
 third party module on PyPI that offers a subprocess.Popen subclass adding
 some more convenient non-blocking APIs.  That's where I'd start if I were
 interested in this as a future feature.


 I've rewritten the PEP based on how do I understand the code. I don't know
 how to update it and how to comply with open documentation license, so I
 just attach it and add PEPs list to CC. Me too has a feeling that the PEP
 should be stripped of additional high level API until low level
 functionality is well understood and accepted.

 --
 anatoly t.

 ___
 Python-Dev mailing list
 Python-Dev@python.org
 http://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe:
 http://mail.python.org/mailman/options/python-dev/brett%40python.org



 ___
 Python-Dev mailing list
 Python-Dev@python.org
 http://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe:
 http://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 3145 (With Contents)

2012-12-20 Thread Chris Jerdonek
On Thu, Dec 20, 2012 at 1:12 PM, Brett Cannon br...@python.org wrote:

 On Thu, Dec 20, 2012 at 3:55 PM, Chris Jerdonek chris.jerdo...@gmail.com
 wrote:

 On Thu, Dec 20, 2012 at 12:18 PM, Brett Cannon br...@python.org wrote:
 
  And please do not CC the peps mailing list on discussions. It should
  only be
  used to mail in new PEPs or acceptable patches to PEPs.

 PEP 1 should perhaps be clarified if the above is the case.
 Currently, PEP 1 says all PEP-related e-mail should be sent there:

 The PEP editors assign PEP numbers and change their status. Please
 send all PEP-related email to p...@python.org (no cross-posting
 please). Also see PEP Editor Responsibilities  Workflow below.

 as well as:

 A PEP editor must subscribe to the p...@python.org list. All
 PEP-related correspondence should be sent (or CC'd) to
 p...@python.org (but please do not cross-post!).

 (Incidentally, the statement not to cross-post seems contradictory if
 a PEP-related e-mail is also sent to python-dev, for example.)


 But it very clearly states to NOT cross-post which is exactly what Anatoly
 did and that is what I take issue with the most. I personally don't see any
 confusion with the wording. It clearly states that if you are a PEP author
 you should mail the peps editors and NOT cross-post. If you are an editor,
 make sure any emailing you do with an individual CCs the list but do NOT
 cross-post.

I don't disagree that he shouldn't have cross-posted.  I was just
pointing out that the language should be clarified.  What's confusing
is that the current language implies that one shouldn't send any
PEP-related e-mails to any mailing list other than peps@.  In
particular, how can one discuss PEPs on python-dev or python-ideas
without violating that language (e.g. this e-mail which is related to
PEP 1)?  It is probably just a matter of clarifying what PEP-related
means.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Mercurial workflow question...

2012-12-13 Thread Chris Jerdonek
On Thu, Dec 13, 2012 at 6:48 PM, R. David Murray rdmur...@bitdance.com wrote:
 On Thu, 13 Dec 2012 20:21:24 -0500, Trent Nelson tr...@snakebite.org wrote:
 - Use a completely separate clone to house all the intermediate
   commits, then generate a diff once the final commit is ready,
   then apply that diff to the main cpython repo, then push that.
   This approach is fine, but it seems counter-intuitive to the
   whole concept of DVCS.

 Perhaps.  But that's exactly what I did with the email package changes
 for 3.3.

 You seem to have a tension between all those dirty little commits and
 clean history and the fact that a dvcs is designed to preserve all
 those commits...if you don't want those intermediate commits in the
 official repo, then why is a diff/patch a bad way to achieve that?

Right.  And you usually have to do this beforehand anyways to upload
your changes to the tracker for review.

Also, for the record (not that anyone has said anything to the
contrary), our dev guide says, You should collapse changesets of a
single feature or bugfix before pushing the result to the main
repository. The reason is that we don’t want the history to be full of
intermediate commits recording the private history of the person
working on a patch. If you are using the rebase extension, consider
adding the --collapse option to hg rebase. The collapse extension is
another choice.

(from http://docs.python.org/devguide/committing.html#working-with-mercurial )
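
A rough sketch of that workflow (assuming the rebase extension is enabled and
the work sits in your local clone on top of default):

    hg pull http://hg.python.org/cpython
    hg rebase --dest default --collapse   # squash the private commits into one
    hg push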

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Guido, Dropbox, and Python

2012-12-12 Thread Chris Jerdonek
On Dec 10, 2012, at 1:52 PM, Terry Reedy tjre...@udel.edu wrote:

 My question, Guido, is how this will affect Python development, and in 
 particular, your work on async. If not proprietary info, does or will Dropbox 
 use Python3?

I talked to some Dropbox people tonight, and they said they use 2.7
for the client and 2.5 for the server.  It is a project for them to
switch the server to using 2.7.

--Chris

Sent from my iPhone
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] hg annotate is broken on hg.python.org

2012-12-09 Thread Chris Jerdonek
On Sun, Dec 9, 2012 at 3:30 PM, anatoly techtonik techto...@gmail.com wrote:
 Just to let you know that annotate in hgweb is broken for Python sources.

 http://hg.python.org/cpython/annotate/692be1f9fa1d/Lib/distutils/tests/test_register.py

Maybe I'm missing something, but what's broken about it?  Also, in my
experience it's okay to file issues about hg.python.org on the main
tracker if you suspect something isn't right or you think should be
improved.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] type vs. class terminology

2012-11-25 Thread Chris Jerdonek
I would like to know when we should use class in the Python 3
documentation, and when we should use type.  Are these terms
synonymous in Python 3, and do we have a preference for which to use
and when?
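
For a bit of context, the interpreter itself treats the two interchangeably,
for example:

>>> class Spam:
...     pass
...
>>> type(Spam)                      # classes are instances of type
<class 'type'>
>>> type(3) is int is (3).__class__
True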

I'm sure this has been discussed before.  But if this terminology
issue has already been resolved, the resolution doesn't seem to be
reflected in the docs.  For example, the glossary entries for type and
class don't reference each other.

Thoughts?

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] devguide: Add instructions for handling merge conflicts during null merges.

2012-11-21 Thread Chris Jerdonek
On Wed, Nov 21, 2012 at 6:07 PM, chris.jerdonek
python-check...@python.org wrote:
 http://hg.python.org/devguide/rev/78a69b929ab7
 changeset:   573:78a69b929ab7
 user:Chris Jerdonek chris.jerdo...@gmail.com
 date:Wed Nov 21 18:04:35 2012 -0800
 summary:
   Add instructions for handling merge conflicts during null merges.

This was for issue #16517:

http://bugs.python.org/issue16517

--Chris



 files:
   committing.rst |  11 +--
   1 files changed, 9 insertions(+), 2 deletions(-)


 diff --git a/committing.rst b/committing.rst
 --- a/committing.rst
 +++ b/committing.rst
 @@ -306,17 +306,24 @@
 # Fix any conflicts; compile; run the test suite
 hg commit

 +.. index:: null merging
 +
  .. note::
 -   *If the patch shouldn't be ported* from Python 3.3 to Python 3.4, you must
 -   also make it explicit: merge the changes but revert them before 
 committing::
 +   If the patch should *not* be ported from Python 3.3 to Python 3.4, you 
 must
 +   also make this explicit by doing a *null merge*: merge the changes but
 +   revert them before committing::

hg update default
hg merge 3.3
hg revert -ar default
 +  hg resolve -am  # needed only if the merge created conflicts
hg commit

 This is necessary so that the merge gets recorded; otherwise, somebody
 else will have to make a decision about your patch when they try to merge.
 +   (Using a three-way merge tool generally makes the ``hg resolve`` step
 +   in the above unnecessary; also see `this bug report
  +   <http://bz.selenic.com/show_bug.cgi?id=2706>`__.)

  When you have finished your porting work (you can port several patches one
  after another in your local repository), you can push **all** outstanding

 --
 Repository URL: http://hg.python.org/devguide

 ___
 Python-checkins mailing list
 python-check...@python.org
 http://mail.python.org/mailman/listinfo/python-checkins

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Split unicodeobject.c into subfiles?

2012-11-17 Thread Chris Jerdonek
[Apologies for resurrecting a few-weeks old thread.]

On Thu, Oct 4, 2012 at 2:46 PM,  mar...@v.loewis.de wrote:

 Quoting Victor Stinner victor.stin...@gmail.com:

 I only see one argument against such refactoring: it will be harder to
 backport/forwardport bugfixes.

 I'm opposed for a different reason: I think it will be *harder* to maintain.
 The amount of code will not be reduced, but now you also need to guess what
 file some piece of functionality may be in. Instead of having my text editor
 (Emacs) search in one file, it will have to search across multiple files -
 but not across all open buffers, but only some of them (since I will have
 many other source files open as well).

 I really fail to see what problem people have with large source files.
 What is it that you want to do that can be done easier if it's multiple
 files?

One thing is browse or link to such code files on the web (e.g. from
within a tracker comment or from within our online documentation).
For example, today I was unable to open the following page from within
a browser to link to one of its lines on a tracker comment:

http://hg.python.org/cpython/file/27c20650aeab/Objects/unicodeobject.c

My laptop's fan simply turns on and the page hangs indefinitely while loading.

I don't think this point was ever mentioned.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Split unicodeobject.c into subfiles?

2012-11-17 Thread Chris Jerdonek
On Sat, Nov 17, 2012 at 10:55 AM, Chris Angelico ros...@gmail.com wrote:
 On Sun, Nov 18, 2012 at 5:47 AM, Chris Jerdonek
 chris.jerdo...@gmail.com wrote:
 On Thu, Oct 4, 2012 at 2:46 PM,  mar...@v.loewis.de wrote:
 I really fail to see what problem people have with large source files.
 What is it that you want to do that can be done easier if it's multiple
 files?

 One thing is browse or link to such code files on the web (e.g. from
 within a tracker comment or from within our online documentation).
 For example, today I was unable to open the following page from within
 a browser to link to one of its lines on a tracker comment:

 http://hg.python.org/cpython/file/27c20650aeab/Objects/unicodeobject.c

 My laptop's fan simply turns on and the page hangs indefinitely while 
 loading.

 Curious. This sounds like a web browser issue - I can pull it up in
 either Chrome or Firefox on Windows on my 2GHz/2GB RAM laptop with a
 visible pause, but not more than half a second.

I'm also using Chrome and on a fairly new Mac.  Perhaps.  I tried
again and it froze up several open *.python.org tabs (mail.python.org,
bugs.python.org, etc).  However, later it worked as you describe.  The
problem seems sporadic.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Failed issue tracker submission

2012-11-16 Thread Chris Jerdonek
I filed an issue in the meta tracker about e-mails like the below that
are sent to python-dev.  The issue link is here:

http://psf.upfronthosting.co.za/roundup/meta/issue492

--Chris


On Thu, Nov 15, 2012 at 6:39 PM, Python tracker
roundup-ad...@psf.upfronthosting.co.za wrote:


 An unexpected error occurred during the processing
 of your message. The tracker administrator is being
 notified.

 Return-Path: python-dev@python.org
 X-Original-To: rep...@bugs.python.org
 Delivered-To: roundup+trac...@psf.upfronthosting.co.za
 Received: from mail.python.org (mail.python.org [IPv6:2001:888:2000:d::a6])
 by psf.upfronthosting.co.za (Postfix) with ESMTPS id DFD211C98E
 for rep...@bugs.python.org; Fri, 16 Nov 2012 03:39:49 +0100 (CET)
 Received: from albatross.python.org (localhost [127.0.0.1])
 by mail.python.org (Postfix) with ESMTP id 3Y2kDj4Tk8zRb6
 for rep...@bugs.python.org; Fri, 16 Nov 2012 03:39:49 +0100 (CET)
 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=python.org; s=200901;
 t=1353033589; bh=H41Haza/UPvj9B16qvXQtqjlb22jRmazur238MqAPOE=;
 h=MIME-Version:Content-Type:Content-Transfer-Encoding:From:To:
  Subject:Message-Id:Date;
 b=yz1YIEtIYuM3H9V1Ok0YJ/SDvU8xtO+cFOApgDH8jPRdxRS6z/CSdZ96bSY1cNFM7
  4xxKY3LiUXipe439DPwEjxZr0HcsfX+2JdR4Pzf3OZmdopV6PEl8/twaIFoTHQMy8u
  dTwpZ7G4OJKCc2IeCq4e5Bl/TEvQvVT9NO8eoa3k=
 Received: from localhost (HELO mail.python.org) (127.0.0.1)
   by albatross.python.org with SMTP; 16 Nov 2012 03:39:49 +0100
 Received: from virt-7yvsjn.psf.osuosl.org (virt-7yvsjn.psf.osuosl.org 
 [140.211.10.72])
 by mail.python.org (Postfix) with ESMTP
 for rep...@bugs.python.org; Fri, 16 Nov 2012 03:39:49 +0100 (CET)
 MIME-Version: 1.0
 Content-Type: text/plain; charset=utf-8
 Content-Transfer-Encoding: base64
 From: python-dev@python.org
 To: rep...@bugs.python.org
 Subject: [issue15894]
 Message-Id: 3y2kdj4tk8z...@mail.python.org
 Date: Fri, 16 Nov 2012 03:39:49 +0100 (CET)

 TmV3IGNoYW5nZXNldCBiZDg1MzMxMWZmZTAgYnkgQnJldHQgQ2Fubm9uIGluIGJyYW5jaCAnZGVm
 YXVsdCc6Cklzc3VlICMxNTg5NDogRG9jdW1lbnQgd2h5IHdlIGRvbid0IHdvcnJ5IGFib3V0IHJl
 LWFjcXVpcmluZyB0aGUKaHR0cDovL2hnLnB5dGhvbi5vcmcvY3B5dGhvbi9yZXYvYmQ4NTMzMTFm
 ZmUwCg==

 ___
 Python-Dev mailing list
 Python-Dev@python.org
 http://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe: 
 http://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Failed issue tracker submission

2012-11-16 Thread Chris Jerdonek
On Fri, Nov 16, 2012 at 10:06 PM, R. David Murray rdmur...@bitdance.com wrote:
 On Sat, 17 Nov 2012 01:01:52 +0100, Antoine Pitrou solip...@pitrou.net 
 wrote:
 On Sat, 17 Nov 2012 00:47:33 +0100
 mar...@v.loewis.de wrote:
 
   Quoting Guido van Rossum gu...@python.org:
 
   But python-dev seems a poor place to spam with those errors.
 
  It's certainly not deliberate that they are sent. However, there
  are too few people interested in working on fixing that so that
  it remains unfixed. Even finding out why they are sent to python-dev
  is tricky.

 I think it's because I configured the dummy python-dev user that way:
 http://bugs.python.org/user13902

 I'm pretty sure it's because python-dev is the 'from' address
 used when the messages are sent...and the configuration of
 that user is what allows them to be accepted.

 I suggest changing the from address and the account configuration
 to tracker-disc...@python.org instead.

Above and on the meta tracker issue I filed, it sounds like Martin is
saying that the e-mails should never be going to python-dev (or
tracker-discuss) in the first place -- due to a roundup bug and
because the e-mail is already being sent to the submitter and the
roundup admins.  Thus, changing the from address would only mitigate
the problem and not be fixing the root cause.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] problems building python2.7

2012-11-09 Thread Chris Jerdonek
On Fri, Nov 9, 2012 at 2:57 AM, Chris Withers ch...@simplistix.co.uk wrote:
 On 09/11/2012 10:52, Michael Foord wrote:

 It should be python.exe (yes really).

 Hah! Should http://docs.python.org/devguide/ be updated to reflect this or
 does this only affect Mac OS? (or should we correct the build so it doesn't
 spit out a .exe?)

That link already mentions this information in the top Quick Start section:

3. Run the tests with ./python -m test -j3 (use ./python.exe on most
Mac OS X systems and PCbuild\python_d.exe on Windows; replace test
with test.regrtest for 2.7).

But it would probably be good to repeat this information in certain
locations, for example in the more detailed sections on building and
running tests.

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] peps: Revert PEP 430 to Final.

2012-11-03 Thread Chris Jerdonek
On Sun, Oct 28, 2012 at 5:07 AM, nick.coghlan
python-check...@python.org wrote:

 http://hg.python.org/peps/rev/1ccf682bdfc9
 changeset:   4575:1ccf682bdfc9
 user:Nick Coghlan ncogh...@gmail.com
 date:Sun Oct 28 22:06:46 2012 +1000
 summary:
   Revert PEP 430 to Final.
 ...
 -To help ensure future stability even of links to the in-development version,
 -the ``/dev/`` subpath will be redirected to the appropriate version specific
 -subpath (currently ``/3.4/``).
 ...
  * ``http://docs.python.org/x/*``
  * ``http://docs.python.org/x.y/*``
 +* ``http://docs.python.org/dev/*``
  * ``http://docs.python.org/release/x.y.z/*``
  * ``http://docs.python.org/devguide``
 ...
 +The ``/dev/`` URL means the documentation for the default branch in source
 +control.

I noticed on an older issue in the tracker that the following deep,
in-development link is broken:

http://docs.python.org/dev/2.6/library/multiprocessing.html

(in comment http://bugs.python.org/issue4711#msg78151 )

Can something be done to preserve those deep links, or is it not worth
worrying about?  I'm not sure how prevalent such links are or when
they were valid (and when they stopped working, assuming they once
did).

--Chris
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com

