Re: [Python-Dev] Allow annotations using basic types in the stdlib?

2017-11-06 Thread R. David Murray
I agree with Steve.  There is *cognitive* overhead to type annotations.
I find that they make Python code harder to read and understand.  So I
object to them in the documentation and docstrings as well.  (Note:
while I agree that the notation is compact for the simple types, the
fact that it would appear for some signatures and not for others is a
show stopper from my point of view...consistency is important to reducing
the cognitive overhead of reading the docs.)

I'm dealing with the spread of annotations on my current project,
having to ask programmers on the team to delete annotations that they've
"helpfully" added that to my mind serve no purpose on a project of the
size we're developing, where we aren't using static analysis for anything.

Maybe I'm being a curmudgeon standing in the way of progress, but I'm
pretty sure there are a number of people in my camp :)

On Mon, 06 Nov 2017 16:22:23 +, Steve Holden  wrote:
> While I appreciate the value of annotations I think that *any* addition of
> them to the stdlib would complicate an important learning resource
> unnecessarily. S
> 
> Steve Holden
> 
> On Mon, Nov 6, 2017 at 4:07 PM, Victor Stinner 
> wrote:
> 
> > Related to annotations, are you ok to annotate basic types in the
> > *documentation* and/or *docstrings* of the standard library?
> >
> > For example, I chose to document the return type of time.time() (float)
> > and time.time_ns() (int). It's short and I like how it's formatted.
> > See the current rendered documentation:
> >
> > https://docs.python.org/dev/library/time.html#time.time
> >
> > "Annotations" in the documentation and docstrings have no impact on
> > Python runtime performance. Annotations in docstrings make them a few
> > characters longer and so impact the memory footprint, but I consider
> > that the overhead is negligible, especially when using python3 -OO.
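
(For reference, the compact notation under discussion is just the ordinary
function-annotation syntax; a rough sketch of the style, not the actual doc
source:)

    def time() -> float:
        """Return the time in seconds since the epoch as a float."""

    def time_ns() -> int:
        """Similar to time(), but return the time as an integer number
        of nanoseconds."""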


Re: [Python-Dev] PEP 548: More Flexible Loop Control

2017-09-06 Thread R. David Murray
On Wed, 06 Sep 2017 09:43:53 -0700, Guido van Rossum  wrote:
> I'm actually not in favor of this. It's another way to do the same thing.
> Sorry to rain on your dream!

So it goes :)  I learned things by going through the process, so it
wasn't wasted time for me even if (or because) I made several mistakes.
Sorry for wasting anyone else's time :(

--David


Re: [Python-Dev] PEP 548: More Flexible Loop Control

2017-09-06 Thread R. David Murray
On Wed, 06 Sep 2017 15:05:51 +1000, Chris Angelico <ros...@gmail.com> wrote:
> On Wed, Sep 6, 2017 at 10:11 AM, R. David Murray <rdmur...@bitdance.com> 
> wrote:
> > I've written a PEP proposing a small enhancement to the Python loop
> > control statements.  Short version: here's what feels to me like a
> > Pythonic way to spell "repeat until":
> >
> > while:
> >     <body>
> >     break if <condition>
> >
> > The PEP goes into some detail on why this feels like a readability
> > improvement in the more general case, with examples taken from
> > the standard library:
> >
> >  https://www.python.org/dev/peps/pep-0548/
> 
> Is "break if" legal in loops that have their own conditions as well,
> or only in a bare "while:" loop? For instance, is this valid?
> 
> while not found_the_thing_we_want:
>     data = sock.read()
>     break if not data
>     process(data)

Yes.

> Or this, which uses the condition purely as a descriptor:
> 
> while "moar socket data":
>     data = sock.read()
>     break if not data
>     process(data)

Yes.

> Also - shouldn't this be being discussed first on python-ideas?

Yep, you are absolutely right.  Someone has told me I also missed
a related discussion on python-ideas in my searching for prior
discussions.  (I haven't looked for it yet...)

I'll blame jet lag :)

--David


[Python-Dev] PEP 548: More Flexible Loop Control

2017-09-05 Thread R. David Murray
I've written a PEP proposing a small enhancement to the Python loop
control statements.  Short version: here's what feels to me like a
Pythonic way to spell "repeat until":

while:
    <body>
    break if <condition>

The PEP goes into some detail on why this feels like a readability
improvement in the more general case, with examples taken from
the standard library:

 https://www.python.org/dev/peps/pep-0548/
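
(For comparison, a sketch of how such a loop has to be spelled today; the
"break if" form above is the proposed syntax and is not yet valid Python:)

    lines = iter(["setup", "more setup", "", "never reached"])
    while True:
        line = next(lines)
        if not line:        # proposed spelling: break if not line
            break
        print("working on", line)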

Unlike Larry, I don't have a prototype, and in fact if this idea
meets with approval I'll be looking for a volunteer to do the actual
implementation.

--David

PS: this came to me in a dream on Sunday night, and the more I explored
the idea the better I liked it.  I have no idea what I was dreaming about
that resulted in this being the thing left in my mind when I woke up :)


[Python-Dev] Version and Last-Modified headers are no longer required in PEPs.

2017-09-05 Thread R. David Murray
The Version and Last-Modified headers required by PEP1 used to be
maintained by the version control system, but this is not true now that
we've switched to git.  We are therefore deprecating these headers and
have removed them from PEP1.  The PEP generation script now considers
them to be optional.

--David


Re: [Python-Dev] Debugging Python scripts with GDB on OSX

2016-07-07 Thread R. David Murray
On Wed, 06 Jul 2016 16:14:34 -, Alexandru Croitor  
wrote:
> I'm interested to find out if debugging Python scripts with GDB is supported 
> on OSX at all?
> 
> I'm referring to the functionality described on 
> https://wiki.python.org/moin/DebuggingWithGdb and on 
> http://fedoraproject.org/wiki/Features/EasierPythonDebugging.
> 
> I've tried so far various combinations of pre-compiled GDB from the homebrew 
> package manager, locally-compiled GDB from homebrew, as well as locally 
> compiled GDB from MacPorts, together with a pre-compiled Python 2.7, 
> homebrew-compiled 2.7, and custom compiled Python 2.7 from the official 
> source tarball.
> 
> My results so far were not successful. The legacy GDB commands to show a 
> python stack trace or the local variables - do not work. And the new GDB 
> commands (referenced on the Fedora project page) are not present at all in 
> any of the GDB versions.
> 
> I've checked the python CI build bot tests, and it seems the new GDB commands 
> are only successfully tested on Linux machines, and are skipped on FreeBSD, 
> OS X, and Solaris machines.
> 
> Are the new python <-> GDB commands specific to Linux?
> Are there any considerations to take in regards to debug symbols for Python / 
> GDB on OSX?
> 
> Has anyone attempted what I'm trying to do?
> 
> I would be grateful for any advice.
> 
> And I apologize if my choice of the mailing lists is not the best.

I tried to do this a few weeks ago myself, with similar negative
results.  The only thing I tried that you don't mention (I didn't
try everything you did) is a compile from raw gdb source...and that
didn't support OSX format core dumps.  So I gave up.

--David


Re: [Python-Dev] Why does base64 return bytes?

2016-06-16 Thread R. David Murray
On Wed, 15 Jun 2016 11:51:05 +1200, Greg Ewing <greg.ew...@canterbury.ac.nz> 
wrote:
> R. David Murray wrote:
> > The fundamental purpose of the base64 encoding is to take a series
> > of arbitrary bytes and reversibly turn them into another series of
> > bytes in which the eighth bit is not significant.
> 
> No, it's not. If that were its only purpose, it would be
> called base128, and the RFC would describe it purely in
> terms of bit patterns and not mention characters or
> character sets at all.

Sorry, you are correct.  IMO its purpose is to encode the data into a
representation that consists of a limited subset of printable characters
(an imprecise term meaning characters that make marks on paper or screen);
i.e., data that will not be interpreted as carrying control information by
most programs processing the stream as either human-readable text or raw
bytes.

The rest of the argument still applies, specifically the part about
wire encoding to seven bit bytes being the currently-most-used[*] and
backward-compatible use case.  And I say this despite the fact that the
email package currently handles everything as surrogate-escaped text
and so does in fact decode the output of base64.encode to ASCII and
only later re-encodes it.  That's a design issue in the email package
deriving from the fact that bytes and string used to be the same thing
in python2.  It might some day get corrected, but probably won't be, and
it is a legacy of *not* making the distinction between bytes and string.

--David

[*] Yes this is changing, I already said that :)


Re: [Python-Dev] Why does base64 return bytes?

2016-06-14 Thread R. David Murray
On Tue, 14 Jun 2016 14:05:19 -0300, "Joao S. O. Bueno"  
wrote:
> On 14 June 2016 at 13:32, Toshio Kuratomi  wrote:
> >
> > On Jun 14, 2016 8:32 AM, "Joao S. O. Bueno"  wrote:
> >>
> >> On 14 June 2016 at 12:19, Steven D'Aprano  wrote:
> >> > Is there
> >> > a good reason for returning bytes?
> >>
> >> What about: it returns 0-255 numeric values for each position in  a
> >> stream, with
> >> no clue whatsoever to how those values map to text characters beyond
> >> the 32-128 range?
> >>
> >> Maybe base64.decode could take a "encoding" optional parameter - or
> >> there could  be
> >> a separate 'decote_to_text" method that would explicitly take a text codec
> >> name.
> >> Otherwise, no, you simply can't take a bunch of bytes and say they
> >> represent text.
> >>
> > Although it's not explicit, the question seems to be about the output of
> > encoding (and for symmetry, the input of decoding).  In both of those cases,
> > valid output will consist only of ascii characters.
> >
> > The input to encoding would have to remain bytes (that's the main purpose of
> > base64... to turn bytes into an ascii string).
> >
> 
> Sorry, it is 2016, and I don't think at this point anyone can consider
> an ASCII string
> as a representative pattern of textual data in any field of application.
> Bytes are not text. Bytes with an associated, meaningful, encoding are text.
>   I thought this had been through when Python 3 was out.
> 
> Unless you are working with COBOL generated data (and intending to keep
> the file format), it does not make sense in any real-world field.
> (supposing your
> Cobol data is ASCII and not EBCDIC).

The fundamental purpose of the base64 encoding is to take a series
of arbitrary bytes and reversibly turn them into another series of
bytes in which the eighth bit is not significant.  Its utility is for
transmitting eight bit bytes over a channel that is not eight bit clean.
Before unicode, that meant bytes.  Now that we have unicode in use in
lots of places, you can think of unicode as a communications channel
that is not eight bit clean.  So, we might want to use base64 encoding to
transmit arbitrary bytes over a unicode channel.  This gives a legitimate
reason to want unicode output from a base64 encoder.   However, it is
equally legitimate in the Python context to say you should be explicit
about your intentions by decoding the bytes output of the base64 encoder
using the ASCII codec.

This was indeed discussed at length.  For a while we didn't even allow
unicode input on either side, but we relaxed that.  My understanding of
Python's current stance on functions that handle both bytes and string
is that *either* the function accepts both types and outputs the *same*
type as the input, *or* it accepts both types but always outputs *one*
type or the other.

You can't have unicode output if you give unicode input to the base64
decoder in the general case.  So decode, at least, has to always give
bytes output.  Likewise, there is small to zero utility for using unicode
input to the base64 encoder, since the unicode would have to be ASCII
only and there'd be no point in doing the encoding.  So, the only thing
that makes sense is to follow the "one output type" rule here.

Now, you can argue whether or not it would make sense for the encoder
to always produce unicode.  However, you then immediately run into the
backward compatibility issue:  the primary use case of the base64 encoding
is to produce *wire ready* bytes.  This is what the email package uses
it for, for example.  So for backward compatibility reasons, which
are consonant with its primary use case, it makes more sense for the
encoder to produce bytes than string.  If you need to transmit bytes
over a unicode channel, you can decode it from ASCII.  That is,
unicode is the *exceptional* use case here, not the rule.  That might
in fact be changing, but for backward compatibility reasons, Python
won't change.
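
(A small sketch of that convention in practice:)

    import base64

    raw = b"\x00\xfe arbitrary bytes"

    wire = base64.b64encode(raw)   # always ASCII-only *bytes*, wire ready
    text = wire.decode("ascii")    # be explicit if you need a str

    # Decoding accepts bytes or ASCII-only str, but always returns bytes.
    assert base64.b64decode(wire) == raw
    assert base64.b64decode(text) == raw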

And that should answer Steve's original question :)

--David


Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-09 Thread R. David Murray
On Thu, 09 Jun 2016 13:12:22 +0100, Cory Benfield  wrote:
> The Linux kernel can’t change this stuff easily because they mustn’t
> break userspace. Python *is* userspace, we can do what we like, and we

I don't have specific input on the rest of this discussion, but I disagree
strongly with this statement.  The environment in which python programs
run, i.e. the python runtime and standard library, is *our* "userspace",
and the same constraints apply to our making changes there as apply
to the linux kernel and its userspace...even though we knowingly break
those constraints from time to time[*].

--David

[*] Which I think the twisted folks at least would argue we shouldn't
be doing :)


Re: [Python-Dev] Proper way to specify that a method is not defined for a type

2016-06-07 Thread R. David Murray
For those interested in this topic, if you are not already aware of it,
see also http://bugs.python.org/issue25958, which among other things
has a relevant proposed patch for datamodel.rst.

On Tue, 07 Jun 2016 10:56:37 -0700, Guido van Rossum  wrote:
> Setting it to None in the subclass is the intended pattern. But CPython
> must explicitly handle that somewhere so I don't know how general it is
> supported. Try defining a list subclass with __len__ set to None and see
> what happens. Then try the same with MutableSequence.
> 
> On Tue, Jun 7, 2016 at 10:37 AM, Ethan Furman  wrote:
> 
> > For binary methods, such as __add__, either do not implement or return
> > NotImplemented if the other operand/class is not supported.
> >
> > For non-binary methods, simply do not define.
> >
> > Except for subclasses when the super-class defines __hash__ and the
> > subclass is not hashable -- then set __hash__ to None.
> >
> > Question:
> >
> > Are there any other methods that should be set to None to tell the
> > run-time that the method is not supported?  Or is this a general mechanism
> > for subclasses to declare any method is unsupported?
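
(For reference, the __hash__-to-None pattern Ethan mentions looks like the
following; a minimal sketch:)

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __eq__(self, other):
            return (self.x, self.y) == (other.x, other.y)
        # Defining __eq__ without __hash__ already sets __hash__ to None
        # implicitly, but spelling it out makes the intent explicit:
        __hash__ = None

    # hash(Point(1, 2)) now raises TypeError: unhashable type: 'Point'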


Re: [Python-Dev] FIXED: I broke the 3.5 branch, apparently

2016-06-03 Thread R. David Murray
On Fri, 03 Jun 2016 23:21:25 +0100, MRAB <pyt...@mrabarnett.plus.com> wrote:
> On 2016-06-03 22:50, R. David Murray wrote:
> > I don't understand how it happened, but apparently I got a merge commit
> > backward and merged 3.6 into 3.5 and pushed it without realizing what
> > had happened.  If anyone has any clue how to reverse this cleanly,
> > please let me know.  (There are a couple people at the sprints looking
> > in to it, but the mercurial guys aren't here so we are short on experts).
> >
> > My apologies for the mess :(
> >
> There's a lot about undoing changes here:
> 
> http://hgbook.red-bean.com/read/finding-and-fixing-mistakes.html

Ned Deily has fixed the problem.

--David


[Python-Dev] I broke the 3.5 branch, apparently

2016-06-03 Thread R. David Murray
I don't understand how it happened, but apparently I got a merge commit
backward and merged 3.6 into 3.5 and pushed it without realizing what
had happened.  If anyone has any clue how to reverse this cleanly,
please let me know.  (There are a couple people at the sprints looking
in to it, but the mercurial guys aren't here so we are short on experts).

My apologies for the mess :(

--David


Re: [Python-Dev] Pathlib enhancments - method name only

2016-04-10 Thread R. David Murray
On Sun, 10 Apr 2016 18:51:23 +1200, Greg Ewing <greg.ew...@canterbury.ac.nz> 
wrote:
> > On 9 April 2016 at 23:02, R. David Murray <rdmur...@bitdance.com> wrote:
> > 
> >>That is, a 'filename' is the identifier we've assigned to this thing
> >>pointed to by an inode in linux, but an os path is a text representation
> >>of the path from the root filename to a specified filename.  That is,
> >>the path *is* the name, so to say "path name" sounds redundant and
> >>confusing to me.
> 
> The term "pathname" is what is conventionally used to refer
> to a textual string passed to the OS to identify an object
> in the file system.
> 
> It's often abbreviated to just "path", but that's ambiguous
> for our purposes, because "path" can also refer to one of
> our higher-level objects.

I find it interesting that in all my years of unix computing I've never
run into this (at least not so that I became conscious of it).  I see now
that in fact the Posix spec uses 'pathname'.

Objection, such as it was, completely withdrawn :)

(Nick's point about Path object vs path is also a good one.)

--David


Re: [Python-Dev] Pathlib enhancments - method name only

2016-04-09 Thread R. David Murray
On Sat, 09 Apr 2016 17:48:38 +1000, Nick Coghlan  wrote:
> On 9 April 2016 at 04:25, Brett Cannon  wrote:
> > On Fri, 8 Apr 2016 at 11:13 Ethan Furman  wrote:
> >> On 04/08/2016 10:46 AM, Koos Zevenhoven wrote:
> >>  > On Fri, Apr 8, 2016 at 7:42 PM, Chris Barker  wrote:
> >>  >> On Fri, Apr 8, 2016 at 9:02 AM, Koos Zevenhoven wrote:
> >>
> >>  >>> I'm still thinking a little bit about 'pathname', which to me sounds
> >>  >>> more like a string than fspath does.
> >>  >>
> >>  >>
> >>  >> I like that a lot - or even "__pathstr__" or "__pathstring__"
> >>  >> after all, we're making a big deal out of the fact that a path is
> >>  >> *not a string*, but rather a string is a *representation* (or
> >>  >> serialization) of a path.
> >>
> >> That's a decent point.
> >>
> >> So the plausible choices are, I think:
> >>
> >> - __fspath__  # File System Path -- possible confusion with Path
> >
> > +1
> 
> I like __fspath__, but I'm also sympathetic to Koos' point that we're
> really dealing with path *names* being produced via this protocol,
> rather than the paths themselves.
> 
> That would bring the completely explicit "__fspathname__" into the
> mix, which would be comparable in length to "__getattribute__" as a
> magic method name (both in terms of number of syllable and number of
> characters).

I'm not going to vote -1, but for the record I have no real intuition
as to what a "path name" would be.  An arbitrary identifier that we're
using to refer to an os path?

That is, a 'filename' is the identifier we've assigned to this thing
pointed to by an inode in linux, but an os path is a text representation
of the path from the root filename to a specified filename.  That is,
the path *is* the name, so to say "path name" sounds redundant and
confusing to me.

--David


Re: [Python-Dev] Pathlib enhancments - method name only

2016-04-08 Thread R. David Murray
On Fri, 08 Apr 2016 19:24:44 -, Brett Cannon  wrote:
> On Fri, 8 Apr 2016 at 12:10 Chris Angelico  wrote:
> 
> > On Sat, Apr 9, 2016 at 5:03 AM, Chris Barker 
> > wrote:
> > > On Fri, Apr 8, 2016 at 11:34 AM, Koos Zevenhoven 
> > wrote:
> > >>
> > >> >
> > >> > __pathstr__ # pathstring
> > >> >
> > >>
> > >> Or perhaps __pathstring__ in case it may be or return byte strings.

But there are paths other than OS file system paths.  I prefer
__fspath__ or __os_path__ myself.  I think the fact that it is a string
is implied by the fact that it is getting us the thing we can pass
to the os (since Python3 deals with os paths as strings unless you
specify otherwise, only converting them back to bytes, on unix, at the last
moment).

Heh, although I suppose one could make the argument that it should
return whatever the native OS wants, and save the low level code
from having to do that?  Pass the path object all the way down
to that "final step" in the C layer?  (Just ignore me, I'm sure
I'm only making trouble :)

--David


Re: [Python-Dev] bugs.python.org email blockage at gmail

2016-04-06 Thread R. David Murray
On Wed, 06 Apr 2016 12:03:36 +0900, "Stephen J. Turnbull" <step...@xemacs.org> 
wrote:
> R. David Murray writes:
> 
>  > again.  However, the IPV4 address has a poor reputation, and Verizon
>  > at least appears to be blocking it.  So more work is still needed.
> 
> Don't take Verizon's policy as meaningful.  Tell Verizon customers to
> get another address.  That is the only solution that works for Verizon
> subscribers for very long (based on 15 years of Mailman-Users posts),
> they have never been a high-quality email provider.  Further, Verizon
> (as an email provider) is in the process of dying anyway (they are
> very much alive as the new owner of AOL), so improvements in their
> email practices have a likelihood of zero to the resolution of a C
> float.

Yes, Mark reminded me that Verizon still isn't accepting mail from
mail.python.org, despite multiple contacts from the postmaster team.
So they are pretty much a lost cause and no one should use them for email,
I think.

However, the "poor reputation" comment came from the error message
returned by gmail when it bounced the spam-bounce reports that bugs was
trying to send to Ezio.

--David


Re: [Python-Dev] bugs.python.org email blockage at gmail

2016-04-06 Thread R. David Murray
On Wed, 06 Apr 2016 12:21:04 +1000, Nick Coghlan  wrote:
> On 6 April 2016 at 11:27, Terry Reedy  wrote:
> bugs.python.org is currently sending notification emails directly to
> recipients, rather than routing them via the outbound SMTP server on
> mail.python.org.

Correct.

> Reconfiguring it to relay notifications via the main outgoing server
> is the longer term fix, but an initial attempt at enabling that
> resulted in errors in the bugs.python.org mail logs, so David reverted
> to the direct email configuration for the time being.

Specifically, I think we should clean up the issues that are causing
reputation loss (which pretty much means dropping rietveld, although
in theory we could fix rietveld instead if someone wants to finish
Ezio's patch).  And then we need to understand the issue that caused
me to back out the change: something is sending null-Sender emails to
multiple recipients.  We may not need to fix it (mail.python.org rejected
them but they may be useless messages), but we probably should.  I suspect
they are actual bounces, but I don't have the time to investigate further
at this time.

--David


[Python-Dev] bugs.python.org email blockage at gmail

2016-04-05 Thread R. David Murray
We think we have a partial (and hopefully temporary) solution to the
bugs email blockage: ipv6 has been turned off on bugs, so it is sending
only from the ipv4 address.  Google appears to be accepting the emails
again.  However, the IPV4 address has a poor reputation, and Verizon
at least appears to be blocking it.  So more work is still needed.

--David


Re: [Python-Dev] Not receiving bug tracker emails

2016-03-30 Thread R. David Murray
On Wed, 30 Mar 2016 08:08:59 +0300, Serhiy Storchaka  
wrote:
> On 30.03.16 03:23, Victor Stinner wrote:
> > same for me, i'm using using gmail with a @gmail.com email.
> >
> > Victor
> >
> > 2016-03-30 1:30 GMT+02:00 Martin Panter :
> >> For the last ~36 hours I have stopped receiving emails for messages
> >> posted in the bug tracker. Is anyone else having this problem? Has
> >> anything changed recently?
> 
> Same for me.
> 
> This is very sad, because some comments can go unnoticed and some
> patches can go unreviewed.

Anyone know how to find out what changed from Google's POV?  As far as
we know nothing changed at the bugs end, but it is certainly possible
that something did change in the hosting infrastructure without our
knowledge.  Knowing what is setting google off would help track it down,
if so...or perhaps something changed at the google end, in which case we
*really* need to know what.

--David


Re: [Python-Dev] Bug in build system for cross-platform builds

2016-03-14 Thread R. David Murray
On Mon, 14 Mar 2016 03:04:08 -, "Gregory P. Smith"  wrote:
> On Sun, Mar 13, 2016 at 7:41 PM Martin Panter  wrote:
> 
> > On 13 March 2016 at 01:13, Russell Keith-Magee 
> > wrote:
> > > The patches that I've uploaded to Issue23670 [1] show a full
> > > cross-platform build process. After you apply that patch, the iOS
> > > directory contains a meta-Makefile that manages the build process.
> > >
> > > [1] http://bugs.python.org/issue23670
> >
> > Thanks very much for pointing that out. This has helped me understand
> > a lot more things. Only now do I realize that the four files generated
> > by pgen and _freeze_importlib are actually already committed into the
> > Mercurial repository:
> >
> > Include/graminit.h
> > Python/graminit.c
> > Python/importlib.h
> > Python/importlib_external.h
> >
> > A question for other Python developers: Why are these generated files
> > stored in the repository? The graminit ones seem to have been there
> > since forever (1990). It seems the importlib ones were there due to a
> > bootstrapping problem, but now that is solved. Antoine
> >  said he kept them in
> > the repository on purpose, but I want to know why.
> >
> > If we ignore the cross compiling use case, would there be any other
> > consequences of removing these generated files from the repository?
> > E.g. would it affect the Windows build process?
> >
> > I have two possible solutions in mind: either remove the generated
> > files from the repository and always build them, or keep them but do
> > not automatically regenerate them every build. Since they are
> > generated files, not source files, I would prefer to remove them, but
> > I want to know the consequences first.
> >
> 
> They should not be regenerated every build; if they are, that seems like a
> bug in the makefile to me (or else the timestamp checks that make does vs
> how your code checkout happened).  Having them checked in is convenient for
> cross builds as it is one less thing that needs a build-host-arch build.

The repo-timestamp problem is addressed by the 'make touch' target.

And yes, checking in these platform-independent artifacts is very
intentional: less to build, fewer external dependencies in the build
process...you don't need to *have* python to *build* python, which you
would have to if they were not checked in.

--David


Re: [Python-Dev] Python should be easily compilable on Windows with MinGW

2016-02-26 Thread R. David Murray
On Fri, 26 Feb 2016 10:05:19 -0800, Dan Stromberg  wrote:
> But what do you really think?
> 
> IMO, windows builds probably should do both visual studio and mingw.
> That is, there probably should be two builds on windows, since there's
> no clear consensus about which to use.
> 
> I certainly prefer mingw over visual studio - and I have adequate
> bandwidth for either.

I don't think there is much if any objection to the idea of making CPython
compilable with mingw, we just need the official supported release to
be the VS one for compatibility reasons.

But, there has historically been a lack of a clear target in the mingw
space for someone to actually produce a working generalized port (as
opposed to, say, cygwin), much less generate a set of reviewable patches
that could be incorporated in to the repository.  (Among other things
for the latter we would need a mingw buildbot, and no one has stepped
forward on that front at all, as far as I know.)

I think there has been some progress lately, but it is a hard problem
and needs more volunteer time.   Ideally we'd have someone who is all of
passionate enough about it, knowledgeable enough about it, and patient
enough to work with the others in the community who need to be involved.

--David


Re: [Python-Dev] More optimisation ideas

2016-02-01 Thread R. David Murray
On Mon, 01 Feb 2016 14:12:27 +1100, Steven D'Aprano  wrote:
> On Sun, Jan 31, 2016 at 08:23:00PM +, Brett Cannon wrote:
> > So freezing the stdlib helps on UNIX and not on OS X (if my old testing is
> > still accurate). I guess the next question is what it does on Windows and
> > if we would want to ever consider freezing the stdlib as part of the build
> > process (and if we would want to change the order of importers on
> > sys.meta_path so frozen modules came after file-based ones).
> 
> I find that being able to easily open stdlib .py files in a text editor 
> to read the source is extremely valuable. I've learned much more from 
> reading the source than from (e.g.) StackOverflow. Likewise, it's often 
> handy to do a grep over the stdlib. When you talk about freezing the 
> stdlib, what exactly does that mean?
> 
> - will the source files still be there?

Well, Brett said it would be optional, though perhaps the above
paragraph is asking about doing it in our Windows build.  But the linux
distros might make also use the option if it exists, so the question is
very meaningful.  However, you'd have to ask the distro if the source
would be shipped in the linux case, and I'd guess not in most cases.

I don't know about anyone else, but on my own development systems it is
not that unusual for me to *edit* the stdlib files (to add debug prints)
while debugging my own programs.  Freeze would definitely interfere with
that.  I could, of course, install a separate source build on my dev
system, but I thought it worth mentioning as a factor.

On the other hand, if the distros go the way Nick has (I think) been
advocating, and have a separate 'system python for system scripts' that
is independent of the one installed for user use, having the system-only
python be frozen and sourceless would actually make sense on a couple of
levels.

--David


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread R. David Murray
On Fri, 18 Dec 2015 18:29:35 +0200, Andrew Svetlov  
wrote:
> In my asyncio code typical initialization/finalization procedures are
> much more complicated.
> I doubt if common code can be extracted into asyncio.
> Personally I don't feel the need for `wait_forever()` or
> `loop.create_context_task()`.
> 
> But even if you need it you may create it from scratch easy, isn't it?

In my own asyncio code I wrote a generic context manager to hold
references to all the top level tasks my app needs, which automatically
handles the teardown when loop.stop() is called from my SIGTERM
signal handler.
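
(Roughly along these lines -- a simplified sketch, not my actual code, and
the class name is made up:)

    import asyncio, signal

    class TopLevelTasks:
        """Track top-level tasks; cancel and drain them on teardown."""

        def __init__(self, loop):
            self.loop = loop
            self.tasks = []

        def create_task(self, coro):
            task = self.loop.create_task(coro)
            self.tasks.append(task)
            return task

        def __enter__(self):
            return self

        def __exit__(self, *exc):
            # Runs after loop.stop() has broken us out of run_forever().
            for task in self.tasks:
                task.cancel()
            self.loop.run_until_complete(
                asyncio.gather(*self.tasks, return_exceptions=True))

    loop = asyncio.get_event_loop()
    loop.add_signal_handler(signal.SIGTERM, loop.stop)  # Unix-only
    with TopLevelTasks(loop) as tasks:
        tasks.create_task(asyncio.sleep(3600))  # stand-in for a real service
        loop.run_forever()   # returns once SIGTERM triggers loop.stop()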

However, (and here we get to the python-dev content of this post :), I
think we are too early in the uptake of asyncio to be ready to say what
additional high-level features are well defined enough and useful enough
to become part of the standard library.  In any case discussions like
this really belong on the asyncio-specific mailing list, which I gather
is the python-tulip Google Group (I suppose I really ought to sign up...)

--David


Re: [Python-Dev] "python.exe is not a valid Win32 app"

2015-12-15 Thread R. David Murray
On Tue, 15 Dec 2015 15:41:35 +0100, Laura Creighton  wrote:
> In a message of Tue, 15 Dec 2015 11:46:03 +0100, Armin Rigo writes:
> >Hi all,
> >
> >On Tue, Dec 1, 2015 at 8:13 PM, Laura Creighton  wrote:
> >> Python 3.5 is not supported on windows XP.  Upgrade your OS or
> >> stick with 3.4
> >
> >Maybe this information should be written down somewhere more official?
> > I can't find it in any of these pages:
> >
> >https://www.python.org/downloads/windows/
> >https://www.python.org/downloads/release/python-350/
> >https://www.python.org/downloads/release/python-351/
> >https://docs.python.org/3/using/windows.html
> >
> >It is found on the following page, to which googling "python 3.5
> >windows XP" does not point:
> >
> >https://docs.python.org/3.5/whatsnew/3.5.html#unsupported-operating-systems

That's too bad, since that's the official place such info appears.

> >Instead, the google query above returns various threads on
> >stackoverflow and elsewhere where users wonder about that very
> >question.
> 
> I already asked for that, on the bug tracker but maybe I picked the wrong
> issue tracker for that request.
> 
> So now I have made one here, too.
> https://github.com/python/pythondotorg/issues/867

IMO the second is the right one...although the release managers sometimes
adjust the web site, I think this is a web site issue and not a release
management issue.  I would think that we should have "supported versions"
in the 'product description' for both Windows and OSX, but IMO the
current way the releases are organized on the web site does not make
that easy to achieve in a way that will be useful to end users.

That said, I'm not sure whether or not there is a way we could add
"supported versions" to the main docs that would make sense and be
useful...your bugs.python.org issue would be useful for discussing that.

--David


Re: [Python-Dev] Python Language Reference has no mention of list comprehensions

2015-12-04 Thread R. David Murray
On Fri, 04 Dec 2015 18:38:03 +1000, Nick Coghlan  wrote:
> Summarising that idea:
> 
> * literals: any of the dedicated expressions that produce an instance
> of a builtin type
> * constant literal: literals that produce a constant object that can
> be cached in the bytecode
> * dynamic literal: literals containing dynamic subexpressions that
> can't be pre-calculated
> * display: legacy term for a dynamic literal (originally inherited from ABC)
> * comprehension: a dynamic literal that creates a new container from
> an existing iterable
> * lexical literal: constant literals and dynamic string literals [1]
> 
> The ast.literal_eval() docs would need a slight adjustment to refer to
> "literals (excluding container comprehensions and generator
> expressions)", rather than the current "literals and container
> displays".

Except that that isn't accurate either:

>>> import ast
>>> ast.literal_eval('[1, id(1)]')
Traceback (most recent call last):
  File "", line 1, in 
  File "/home/rdmurray/python/p36/Lib/ast.py", line 84, in literal_eval
return _convert(node_or_string)
  File "/home/rdmurray/python/p36/Lib/ast.py", line 57, in _convert
return list(map(_convert, node.elts))
  File "/home/rdmurray/python/p36/Lib/ast.py", line 83, in _convert
raise ValueError('malformed node or string: ' + repr(node))
ValueError: malformed node or string: <_ast.Call object at 0xb73633ec>

So it's really container displays consisting of literals, which we could
call a "literal container display".

I think the intuitive notion of "literal" is "the value is literally
what is written here".  Which is a redundant statement; 'as written' is,
after all, what literally means when used correctly :).  That makes it
a language-agnostic concept if I'm correct.

I think we will find that f strings are called f expressions, not f literals.

--David


Re: [Python-Dev] Python Language Reference has no mention of list comprehensions

2015-12-03 Thread R. David Murray
On Thu, 03 Dec 2015 16:15:30 +, MRAB  wrote:
> On 2015-12-03 15:09, Random832 wrote:
> > On 2015-12-03, Laura Creighton  wrote:
> >> Who came up with the word 'display' and what does it have going for
> >> it that I have missed?  Right now I think its chief virtue is that
> >> it is a meaningless noun.  (But not meaningless enough, as I
> >> associate displays with output, not construction).
> >
> > In a recent discussion it seemed like people mainly use it
> > because they don't like using "literal" for things other than
> > single token constants.  In most other languages' contexts the
> > equivalent thing would be called a literal.
> >
> "Literals" also tend to be constants, or be constructed out of
> constants.
> 
> A list comprehension can contain functions, etc.

Actually, it looks like Random832 is right.  The docs for
ast.literal_eval say "a Python literal or container display".

Which also means we are using the term 'display' inconsistently,
since literal_eval will not eval a comprehension.

--David


Re: [Python-Dev] Avoiding CPython performance regressions

2015-11-30 Thread R. David Murray
On Mon, 30 Nov 2015 09:02:12 -0200, Fabio Zadrozny  wrote:
> Note that uploading the data to SpeedTin should be pretty straightforward
> (by using https://github.com/fabioz/pyspeedtin, so, the main issue would be
> setting up a machine to run the benchmarks).

Thanks, but Zach almost has this working using codespeed (he's still
waiting on a review from infrastructure, I think).  The server was not in
fact running; a large part of what Zach did was to get that server set up.
I don't know what it would take to export the data to another consumer,
but if you want to work on that I'm guessing there would be no objection.
And I'm sure there would be no objection if you want to get involved
in maintaining the benchmark server!

There's also an Intel project posted about here recently that checks
individual benchmarks for performance regressions and posts the results
to python-checkins.

--David


Re: [Python-Dev] Request for pronouncement on PEP 493 (HTTPS verification backport guidance)

2015-11-25 Thread R. David Murray
On Thu, 26 Nov 2015 09:17:02 +1300, Robert Collins  
wrote:
> On 26 November 2015 at 08:57, Barry Warsaw  wrote:
> > There's a lot to process in this thread, but as I see it, the issue breaks
> > down to these questions:
> >
> > * How should PEP 493 be implemented?
> >
> > * What should the default be?
> >
> > * How should PEP 493 be worded to express the right tone to redistributors?
> >
> > Let me take on the implementation details here.
> >
> > On Nov 24, 2015, at 04:04 PM, M.-A. Lemburg wrote:
> >
> >>I would still find having built-in support for the recommendations
> >>in the Python stdlib a better approach
> >
> > As would I.
> 
> For what its worth: a PEP telling distributors to patch the standard
> library is really distasteful to me.
> 
> We've spent a long time trying to build close relations such that when
> something doesn't work distributors can share their needs with us and
> we can make Python out of the box be a good fit. This seems to fly in
> the exact opposite direction: we're explicitly making it so that
> Python builds on these vendor's platforms will not be the same as you
> get by checking out the Python source code.

I think we should include the environment variable support in CPython
and be done with it (nuke the PEP otherwise).  Which is what I've
thought from the beginning :)

--David


Re: [Python-Dev] Benchmark results across all major Python implementations

2015-11-18 Thread R. David Murray
On 17 Nov 2015, at 21:22, Stewart, David C <david.c.stew...@intel.com> wrote:
> On 11/17/15, 10:40 AM, "Python-Dev on behalf of R. David Murray" 
> <python-dev-bounces+david.c.stewart=intel@python.org on behalf of 
> rdmur...@bitdance.com> wrote:
>>
>> I suppose that for this to have maximum effect someone would have to
>> specifically be paying attention to performance and figuring out why
>> every (real) regression happened.  I don't suppose we have anyone in the
>> community currently who is taking on that role, though we certainly do
>> have people who are *interested* in Python performance :)
> 
> We're trying to fill that role as much as we can. When there is a
> significant (and unexplained) regression that we see, I usually ask
> our engineers to bisect it to identify the offending patch and
> root-cause it.

That's great news.

--David


Re: [Python-Dev] Benchmark results across all major Python implementations

2015-11-17 Thread R. David Murray
On Mon, 16 Nov 2015 23:37:06 +, "Stewart, David C" 
 wrote:
> Last June we started publishing a daily performance report of the latest 
> Python tip against the previous day's run and some established synch point. 
> We mail these to the community to act as a "canary in the coal mine." I wrote 
> about it at https://01.org/lp/blog/0-day-challenge-what-pulse-internet
> 
> You can see our manager-style dashboard of a couple of key workloads at 
> http://languagesperformance.intel.com/
> (I have this running constantly on a dedicated screen in my office).

Just took a look at this.  Pretty cool.  The web page is a bit confusing,
though.  It doesn't give any clue as to what is being measured by the
numbers presented...it isn't obvious whether those downward sloping
lines represent progress or regression.  Also, the intro talks about
historical data, but other than the older dates[*] in the graph there's
no access to it.  Do you have plans to provide access to the raw data?
It also doesn't show all of the tests shown in the example email in your
blog post or the emails to python-checkins...do you plan to make those
graphs available in the future as well?

Also, in the emails, what is the PGO column percentage relative to?

I suppose that for this to have maximum effect someone would have to
specifically be paying attention to performance and figuring out why
every (real) regression happened.  I don't suppose we have anyone in the
community currently who is taking on that role, though we certainly do
have people who are *interested* in Python performance :)

--David

[*] Personally I'd find it easier to read those dates in MM-DD form,
but I suppose that's a US quirk, since in the US when using slashes
the month comes first...


Re: [Python-Dev] Benchmark results across all major Python implementations

2015-11-16 Thread R. David Murray
On Mon, 16 Nov 2015 21:23:49 +0100, Maciej Fijalkowski  wrote:
> Any thoughts on improving the benchmark set (I think all of
> {cpython,pypy,pyston} introduced new benchmarks to the set).
> "speed.python.org" becoming a thing is generally stopped on "noone
> cares enough to set it up".

Actually, with some help from Intel, it is getting there.  You can see
the 'benchmarks' entry in the buildbot console:

http://buildbot.python.org/all/console

but it isn't quite working yet.  We are also waiting on a review of the
salt state:

https://github.com/python/psf-salt/pull/74

(All work done by Zach Ware.)

--David


Re: [Python-Dev] doc tests failing

2015-11-13 Thread R. David Murray
We don't have clean doctests for the docs.  Patches welcome.

At one point I had made the turtle doctests pass (it draws a bunch of
stuff on the screen) because otherwise we don't have very many turtle
tests, but I haven't checked it in a couple years.

Hmm.  We could list making the doc doctests pass as an activity for
beginners in the devguide.

On Fri, 13 Nov 2015 07:12:32 -0800, Ethan Furman  wrote:
> What am I doing wrong?
> 
> I have tried:
> 
> 
> hg update 3.5  # and hg update default
> make distclean && ./configure --with-pydebug && make -j2
> cd Doc
> make doctest
> 
> 
> and in both cases I get page after page of errors.  I have tried 
> installing python-sphinx and python3-sphinx; I have tried adding 
> PYTHON=../python and PYTHON=python3 to the `make doctest` line -- all to 
> no avail.
> 
> Here's a random sample of the errors:
> 
> **
> File "library/shlex.rst", line ?, in default
> Failed example:
>  remote_command
> Exception raised:
>  Traceback (most recent call last):
>File "/usr/lib/python2.7/doctest.py", line 1315, in __run
>  compileflags, 1) in test.globs
>File "", line 1, in 
>  remote_command
>  NameError: name 'remote_command' is not defined
> 
> **
> File "howto/sorting.rst", line ?, in default
> Failed example:
>  sorted([5, 2, 4, 1, 3], cmp=numeric_compare)
> Exception raised:
>  Traceback (most recent call last):
>File "/usr/lib/python2.7/doctest.py", line 1315, in __run
>  compileflags, 1) in test.globs
>File "", line 1, in 
>  sorted([5, 2, 4, 1, 3], cmp=numeric_compare)
>  NameError: name 'numeric_compare' is not defined
> 
> **
> File "library/ipaddress.rst", line ?, in default
> Failed example:
>  n2 = ip_network('192.0.2.1/32')
> Exception raised:
>  Traceback (most recent call last):
>File "/usr/lib/python2.7/doctest.py", line 1315, in __run
>  compileflags, 1) in test.globs
>File "", line 1, in 
>  n2 = ip_network('192.0.2.1/32')
>  NameError: name 'ip_network' is not defined
> 
> --
> ~Ethan~


Re: [Python-Dev] Rationale behind lazy map/filter

2015-11-05 Thread R. David Murray
On Thu, 05 Nov 2015 03:59:05 +, Michael Selik  wrote:
> > I'm not suggesting restarting at the top (I've elsewhere suggested that
> > many such methods would be better as an *iterable* that can be restarted
> > at the top by calling iter() multiple times, but that's not the same
> > thing). I'm suggesting raising an exception other than StopIteration, so
> > that this situation can be detected. If you are writing code that tries
> > to resume iterating after the iterator has been exhausted, I have to
> > ask: why?
> 
> The most obvious case for me would be tailing a file. Loop over the lines
> in the file, sleep, then do it again. There are many tasks analogous to
> that scenario -- anything querying a shared resource.

The 'file' iterator actually breaks the rules of iterators: it does
*not* continue to raise StopIteration once it has been exhausted, if
more input becomes available.  Given that it is one of the most commonly
used iterators (and I would not be surprised if other special-purpose
iterators copied its design), this pattern does seem like a blocker for
the proposal.
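
(A quick illustration of that behavior, as a throwaway-file sketch:)

    import os

    with open("tail-demo.txt", "w") as w, open("tail-demo.txt") as r:
        w.write("first\n")
        w.flush()
        print(list(r))   # ['first\n']  -- the iterator is now "exhausted"
        w.write("second\n")
        w.flush()
        print(list(r))   # ['second\n'] -- yet it yields new data anyway
    os.remove("tail-demo.txt")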

--David


Re: [Python-Dev] If you shadow a module in the standard library that IDLE depends on, bad things happen

2015-10-29 Thread R. David Murray
On Thu, 29 Oct 2015 16:56:38 -0700, Nathaniel Smith  wrote:
> On Thu, Oct 29, 2015 at 1:50 PM, Ryan Gonzalez  wrote:
> > Why not just check the path of the imported modules and compare it with the
> > Python library directory?
> 
> It works, but it requires that everyone who could run into this
> problem carefully add some extra guard code to every stdlib import
> statement, and in practice nobody will (or at least, not until after
> they've already gotten bitten by this at least once... at which point
> they no longer need it).
> 
> Given that AFAICT there's no reason this couldn't be part of the
> default import system's functionality and "just work" for everyone, if
> I were going to spend time on trying to fix this I'd probably target
> that :-).
> 
> (I guess the trickiest bit would be to find an efficient and
> maintainable way to check whether a given package name is present in
> the stdlib.)

For Idle, though, it sounds like a very viable strategy, and that's
what Laura is concerned about.
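
(Something along the lines Ryan describes, as a rough sketch -- the helper
name is made up:)

    import os.path
    import sysconfig

    _STDLIB = os.path.realpath(sysconfig.get_path("stdlib")) + os.sep

    def is_shadowed(module):
        """Rough check: was this module imported from outside the stdlib?"""
        origin = getattr(module, "__file__", None)
        if origin is None:   # builtin/frozen modules can't be shadowed this way
            return False
        return not os.path.realpath(origin).startswith(_STDLIB)

    # Hypothetical use at Idle startup, for the stdlib modules it depends on:
    import tkinter
    if is_shadowed(tkinter):
        print("warning: tkinter was imported from", tkinter.__file__)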

--David


Re: [Python-Dev] Generated Bytecode ...

2015-10-22 Thread R. David Murray
On Thu, 22 Oct 2015 17:02:48 -, Brett Cannon  wrote:
> On Thu, 22 Oct 2015 at 09:37 Stéphane Wirtel  wrote:
> 
> > Hi all,
> >
> > When we compile a python script
> >
> > # test.py
> > if 0:
> >     x = 1
> >
> > python -mdis test.py
> >
> > There is no byte code for the condition.
> >
> > So my question is, the byte code generator removes the unused functions,
> > variables etc…, is it right?
> >
> 
> Technically the peepholer removes the dead branch, but since the peepholer
> is run on all bytecode you can't avoid it.

There's an issue (http://bugs.python.org/issue2506) for being able to
disable all optimizations (that Ned Batchelder, among others, would really
like to see happen :).  Raymond rejected it as not being worthwhile.

I still agree with Ned and others that there should, just on principle,
be a way to disable all optimizations.  Most (all?) compilers have such
a feature, for debugging reasons if nothing else.  We even have a way
to spell it in the generated byte code files now (opt-0).  But, someone
would have to champion it and write a patch proposal.
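
(For reference, the dead-branch removal Brett mentions is easy to see; a
quick sketch:)

    import dis

    code = compile("if 0:\n    x = 1\n", "test.py", "exec")
    # The `if 0:` branch is gone; only the implicit `return None` remains.
    dis.dis(code)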

--David


Re: [Python-Dev] Rationale behind lazy map/filter

2015-10-13 Thread R. David Murray
On Tue, 13 Oct 2015 14:59:56 +0300, Stefan Mihaila  
wrote:
> Maybe it's just python2 habits, but I assume I'm not the only one
> carelessly thinking that "iterating over an input a second time will 
> result in the same thing as the first time (or raise an error)".

This is the way iterators have always worked.  The only new thing is that
in python3 some things that used to be iter*ables* (lists, usually) are
now iter*ators*.  Yes it is a change in mindset *with regards to those
functions* (and yes I sometimes find it annoying), but it is actually
more consistent than it was in python2, and thus easier to generalize
your knowledge about how python works instead of having to remember
which functions work which way.  That is, if you need to iterate it
twice, turn it into a list first.
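
(The two-line illustration, as a sketch:)

    squares = map(lambda n: n * n, range(3))
    print(list(squares))     # [0, 1, 4]
    print(list(squares))     # []  -- the iterator is already exhausted

    squares = list(map(lambda n: n * n, range(3)))
    print(squares, squares)  # [0, 1, 4] [0, 1, 4] -- reusable as often as needed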

--David


Re: [Python-Dev] Rationale behind lazy map/filter

2015-10-13 Thread R. David Murray
On Tue, 13 Oct 2015 11:26:09 -0400, Random832 <random...@fastmail.com> wrote:
> "R. David Murray" <rdmur...@bitdance.com> writes:
> 
> > On Tue, 13 Oct 2015 14:59:56 +0300, Stefan Mihaila
> > <stefanmihail...@gmail.com> wrote:
> >> Maybe it's just python2 habits, but I assume I'm not the only one
> >> carelessly thinking that "iterating over an input a second time will 
> >> result in the same thing as the first time (or raise an error)".
> >
> > This is the way iterators have always worked.
> 
> It does raise the question though of what working code it would actually
> break to have "exhausted" iterators raise an error if you try to iterate
> them again rather than silently yield no items.

They do raise an error: StopIteration.  It's just that the iteration
machinery uses that to stop iteration :).

And the answer to the question is: lots of code.  I've written some:
code that iterates an iterator, breaks that loop on a condition, then
resumes iterating, breaking that loop on a different condition, and so
on, until the iterator is exhausted.  If the iterator restarted at the
top once it was exhausted, that code would break.
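
(The pattern looks roughly like this -- a sketch with made-up data:)

    lines = iter(["garbage", "BEGIN", "payload 1", "payload 2"])

    for line in lines:        # phase one: skip ahead to the marker
        if line == "BEGIN":
            break
    for line in lines:        # phase two: resume with the *same* iterator
        if line == "END":
            break
        print("processing", line)
    for line in lines:        # phase three: silently runs zero times here,
        print("trailing", line)   # because phase two exhausted the iterator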

--David


Re: [Python-Dev] Rationale behind lazy map/filter

2015-10-13 Thread R. David Murray
On Tue, 13 Oct 2015 12:08:12 -0400, Random832 <random...@fastmail.com> wrote:
> "R. David Murray" <rdmur...@bitdance.com> writes:
> > On Tue, 13 Oct 2015 11:26:09 -0400, Random832 <random...@fastmail.com> 
> > wrote:
> >
> > And the answer to the question is: lots of code.  I've written some:
> > code that iterates an iterator, breaks that loop on a condition, then
> > resumes iterating, breaking that loop on a different condition, and so
> > on, until the iterator is exhausted.  If the iterator restarted at the
> > top once it was exhausted, that code would break
> 
> I'm not suggesting restarting at the top (I've elsewhere suggested that
> many such methods would be better as an *iterable* that can be restarted
> at the top by calling iter() multiple times, but that's not the same
> thing). I'm suggesting raising an exception other than StopIteration, so
> that this situation can be detected. If you are writing code that tries
> to resume iterating after the iterator has been exhausted, I have to
> ask: why?

Because those second loops don't run if the iterator is already
exhausted: the else clause is executed instead (or nothing happens,
depending on the code).
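
Concretely (another made-up sketch):

    it = iter([1, 2, 3])
    for x in it:
        pass                       # drain the iterator

    for x in it:                   # already exhausted...
        print("never reached")
    else:
        print("else clause runs")  # ...so we fall straight through to else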

Now, likely such code isn't common (so I shouldn't have said "lots"),
but the fact that I've done it at least once, maybe twice (but I can't
remember what context, it was a while ago), argues it isn't vanishingly
uncommon.

--David


Re: [Python-Dev] Committing a bug fix

2015-09-27 Thread R. David Murray
On Sun, 27 Sep 2015 21:08:02 -0400, Alexander Belopolsky 
 wrote:
> Can someone remind me which branches regular (non-security) bug fixes go to
> these days?  See  for context.

3.4, 3.5, and default.

(3.4 for only another few weeks.)

--David


Re: [Python-Dev] xunit unittest.TestSuite

2015-09-26 Thread R. David Murray
On Sat, 26 Sep 2015 10:26:39 -0700, vijayram  wrote:
> I am facing this same issue described here: 
> https://github.com/nose-devs/nose/issues/542 
> 
> 
> any alternative or solution to this issue that anyone is aware of... please 
> kindly suggest...

This is a forum for the development of the CPython interpreter and
standard library.  For issues involving nose, you want a nose-specific
forum.

--David


Re: [Python-Dev] My collection of Python 3.5.0 regressions

2015-09-18 Thread R. David Murray
Once Steve comes back from vacation he's going to have a lot of Windows
install issues to look at.  IMO, we should resolve those, and then issue
3.5.1.

It's really too bad more people didn't test the installation with the
release candidates, and I'm very glad that those people who did so did
so...I know there were a significant number of issues with the new
Windows installer infrastructure that were caught and fixed before final.

On Fri, 18 Sep 2015 15:18:32 +0200, Victor Stinner  
wrote:
> (Oh hey, I don't understand how I sent the previous email. Mistake
> with keyboard shortcut in Gmail?)
> 
> Hi,
> 
> Sadly, Python 3.5.0 comes with regressions. FYI I fixed the following
> regressions:
> 
> "OSError in os.waitpid() on Windows"
> http://bugs.python.org/issue25118
> 
> "Windows: datetime.datetime.now() raises an OverflowError for date
> after year 2038"
> http://bugs.python.org/issue25155
> 
> "3.5: Include/pyatomic.h is incompatible with OpenMP (compilation of
> the third-party yt module fails on Python 3.5)"
> http://bugs.python.org/issue25150
> 
> It may be good to not wait too long before releasing a first 3.5.1
> bugfix version :-)
> 
> I just pushed fixes. We may wait a little bit for buildbots ;-)
> 
> --
> 
> There are some more issues which may be Python 3.5 regressions:
> 
> "Regression: test_datetime fails on 3.5, Win 7, works on 3.4"
> http://bugs.python.org/issue25092
> 
> "asynico: add ssl_object extra info"
> http://bugs.python.org/issue25114
> 
> "test_httpservers hangs on 3.5.0, win 7"
> http://bugs.python.org/issue25095
> 
> Victor


Re: [Python-Dev] PEP: Collecting information about git

2015-09-16 Thread R. David Murray
On Wed, 16 Sep 2015 19:59:28 +1000, Chris Angelico  wrote:
> On Wed, Sep 16, 2015 at 7:46 PM, Oleg Broytman  wrote:
> > For example, I develop
> > SQLObject using two private clones (clean backup repo and dirty working
> > repo) and three public clones at Gitlab, GitHub and SourceForge. They
> > are all equal, none of them is the upstream. I don't even have
> > ``origin`` remote - the origin was in Subversion.
> 
> Right. And even when you do have an 'origin' remote, you can pull from
> anywhere else. (Internet connection's down? Pull from one computer
> straight to another over the LAN. Want to quickly compare two messy
> branches, without letting anyone else see them yet? Pull one of them
> onto the other computer and poke around with fred/master and master.
> Etcetera.) Deployment on Heroku can be done by setting up a remote and
> then "git push heroku master". Does that make those commits
> uneditable, or does "git push origin master" do that? I like the way
> git lets you shoot yourself in the foot if you want to, while warning
> you "your gun is currently pointing near your foot, please use --force
> or --force-with-lease to pull the trigger".
> 
> But this is a bit off-topic for python-dev.

Yes it is, and no it isn't, if we are even thinking of moving to git.

My experience with DVCS started with bazaar, moved on to hg for the
CPython project, and finally git.  Through all of that I did not
understand the DAG, and had trouble wrapping my mind around what was
going on, despite being able to get things done.  I read a bunch of
documentation about all three systems, but it wasn't until I watched
another instructor teach the git DAG at a Software Carpentry workshop
that it all clicked.  Partially, I had been continually confused by the
concept of a "branch", since git uses that term differently than CVS,
svn, hg, or bazaar.  But once I understood it, suddenly everything
became clear.

The DAG plus git branches-as-labels *fits in my head* in a way that the
DAG plus named-branches-and-other-things does not.

I think that's really the key thing.  Sure, you can do the equivalent in
hg, but hg is not *oriented* toward that simple model.  git has a simple
model that my mind can wrap around and I can *reason about* easily.
Figuring out how feature branches and throwaway branches and remote
branches and pushes and pulls and workflows and whatever is all just a
matter of reasoning about this simple model[*].

*That* I think is the key to git's success, *despite* its command line
API woes.  Not github, though github has magnified the effect.

The other key concept is what Chris talks about above, Mercurial's
"prevent me from shooting myself in the foot" stance versus git's "shoot
if you really want to" stance.

I think Mercurial is a great product, and has lots of really great
features, and I suspect that phases is a power tool that will enable
certain workflows that git won't support as well if at all.

But, I think Mercurial matches what we might call the "corporate"
mindset (prevent me from shooting myself in the foot) better than the
Python mindset.

Python strives to have a simple mental model you can reason about, and
it is a consenting adults language.  IMO, git matches the Python mindset
better than Mercurial does.

Now, does that mean that *the CPython project* should adopt git?  Not
necessarily.  CPython may be more like a big corporate project, where
centralized tracking of the main lines of development and not shooting
ourselves in the foot are the most important things :)

--David

[*] And then getting confused about how to *do* it in the CLI, but hey,
Google is my friend.


Re: [Python-Dev] PEP: Collecting information about git

2015-09-16 Thread R. David Murray
On Wed, 16 Sep 2015 09:17:38 -0700, Nikolaus Rath <nikol...@rath.org> wrote:
> On Sep 16 2015, "R. David Murray" <rdmur...@bitdance.com> wrote:
> > The DAG plus git branches-as-labels *fits in my head* in a way that the
> > DAG plus named-branches-and-other-things does not.
> 
> Hmm, that's odd. As far as I know, the difference between the hg and git
> DAG model can be summarized like this:
> 
>  * In git, leaves of the DAG must be assigned a name. If they don't have
>a name, they will be garbage collected. If they have a name, they are
>called a branch.
> 
>  * In hg, leaves of the DAG persist. If you want to remove them, you
>have to do so explicitly (hg strip), if you want them to have a name,
>you must do so explicitly (hg bookmark). A node of the DAG with a
>name is called a bookmark.
> 
>  * hg named branches have no equivalent in git. 
> 
> Does that help?

Well, that last bullet kind of complicates the model, doesn't it?  :)
Especially when you add the fact that (IIUC) which named branch a commit
is on is recorded in the commit or something, which means the DAG is
more complicated than just being a DAG of commits.  The fact that I have
to worry about (remember to delete) branches I no longer want is also an
additional mental load, especially since (again, IIUC) I'd have to
figure out which commit I wanted to strip *from* in order to get rid of
an abandoned branch.

This is what I mean by hg not being *oriented* toward the simple model:
if I end up with extra heads in my cpython repo I treat this as a bug
that needs to be fixed.  In git, it's just a branch I'm working with and
later do something with...or delete, and git takes care of cleaning up
the orphaned commits for me.  I'm leery (wrongly, I suspect) of creating
branches in hg because they don't fit into my mental model of how I'm
working with the cpython repository and its named branches.  Now, is
that just a consequence of having learned mercurial in the context of how
CPython uses it?  I don't know.

As another example of this orientation issue, rebase was a big no-no in
hg when we started with it, and so I would only deal with patch sets (hg
diff to save a work in progress, reapply it later...a pattern I still
follow with cpython/hg) so that I didn't screw up my history.  In git,
it is the most natural thing in the world to take a branch you've been
working on and rebase it on to the point in the tree where you want to
commit it.  Even now I have to read carefully and think hard every time
I use the hg rebase command...I'm not sure why it is, but again it
doesn't fit in my head the way the git rebase does[*].

None of these things that mercurial does is *wrong*, and in fact they
are very useful in the right context.

The point is that the git model is *simple*.  Like I said, it fits
in my head.  I guess I have a small head :)

But, now the thread is again drifting away from how mercurial and git
relate to cpython development into simply how mercurial and git differ.

--David

[*] Note also that the hg help lacks the DAG examples that the current
git help has, and that it talks about "repeated merging" when what I
want to do is *move* the commits, I don't want to merge.  I think it
means exactly the same thing, but again it doesn't fit into my simple
git mental model of moving a branch of the DAG from here to there.


Re: [Python-Dev] PEP: Collecting information about git

2015-09-15 Thread R. David Murray
On Tue, 15 Sep 2015 20:32:33 +0200, Georg Brandl  wrote:
> On 09/15/2015 08:22 PM, Guido van Rossum wrote:
> > For one, because *I* have been a (moderate) advocate for switching to git 
> > and
> > GitHub.
>
> Fair enough. Still strange to read this PEP with the explicit caveat of
> "The author of the PEP doesn't currently plan to write a Process PEP on
> migrating Python development from Mercurial to git."

I understood this to mean he's providing the info that would be needed
for writing a process PEP, but as an informational PEP because there's
no other place it fits better and Guido would like to have it on record,
but isn't himself planning to propose a switch *at the moment*, thus
cutting off panic from the community that there was an imminent proposal
to switch.

--David


Re: [Python-Dev] Can't post to bugs.python.org

2015-09-10 Thread R. David Murray
On Thu, 10 Sep 2015 13:27:42 +0300, Serhiy Storchaka  
wrote:
> I can neither post a message to an existing issue nor open a new issue. 
> The irker854 bot on IRC channel #python-dev cites my message and the 
> tracker updates activity time of existing issue, but doesn't show my 
> message and doesn't reflect changes of status. Posting via e-mail 
> doesn't work as well.
> 
> Web server replies:
> 
> An error has occurred
> A problem was encountered processing your request. The tracker 
> maintainers have been notified of the problem.
> 
> E-mail server replies:
> 
> Subject: Failed issue tracker submission
> 
> You are not a registered user. Please register at:
> 
> http://bugs.python.org/user?@template=register
> 
> ...before sending mail to the tracker.

I haven't tried the email gateway, but I can update issues.  I see the
irker message from when you tried to create one and indeed that issue
does not exist.  However, there are no messages among those sent to the
tracker administrators that show errors around that time or for a few
hours before it.

If this continues to plague you, we'll probably need to do some live
debugging.  You can ping me (bitdancer) on IRC, I should be on for the
next 8 hours or so.

--David


Re: [Python-Dev] Can't post to bugs.python.org

2015-09-10 Thread R. David Murray
On Thu, 10 Sep 2015 09:02:01 -0400, "R. David Murray" <rdmur...@bitdance.com> 
wrote:
> If this continues to plague you, we'll probably need to do some live
> debugging.  You can ping me (bitdancer) on IRC, I should be on for the
> next 8 hours or so.

This turns out to have been specific to Serhiy (or any issue on which he
was nosy) and was due to a keyboard error on my part.  It is now fixed.

--David


Re: [Python-Dev] what is wrong with hg.python.org

2015-09-09 Thread R. David Murray
On Wed, 09 Sep 2015 20:02:38 +0200, Ivan Levkivskyi  
wrote:
> https://hg.python.org/ returns 503 Service Unavailable for an hour or so.
> Is it a maintenance? When it is expected to end?

It was an attempt at maintenance (upgrade) that went bad.  No ETA yet,
I'm afraid.  The repo is still ssh accessible, but not via https or hgweb.

--David


Re: [Python-Dev] Yet another "A better story for multi-core Python" comment

2015-09-08 Thread R. David Murray
On Tue, 08 Sep 2015 10:12:37 -0400, Gary Robinson  wrote:
> 2) Have a mode where a particular data structure is not reference
> counted or garbage collected.

This sounds kind of like what Trent did in PyParallel (in a more generic
way).

--David


Re: [Python-Dev] Testing tkinter on Linux

2015-08-27 Thread R. David Murray
On Thu, 27 Aug 2015 14:24:36 -0400, Terry Reedy tjre...@udel.edu wrote:
 On 8/27/2015 12:35 AM, Chris Angelico wrote:
  On Thu, Aug 27, 2015 at 2:20 PM, Terry Reedy tjre...@udel.edu wrote:
  None of the linux buildbots run with X enabled.  Consequently none of the
  tkinter (or tkinter user) gui tests are run on Linux.  It was thus pointed
  out to me, during discussion of using ttk widgets in Idle, that we do not
  really know if ttk works on the variety of Linux systems (beyond the one
  Serhiy uses) and that I should look into this.
 
  If it helps, my buildbot has full GUI services, so if there's a simple
  way to tell it to run the GUI tests every time, they should pass.
 
 Somewhere your buildbot has a shell script to run that ends with a 
 command to start the tests. The commands are echoed to the buildbot 
 output.  Here are two that I found.

No, the master controls this.

 ./python  ./Tools/scripts/run_tests.py -j 1 -u all -W --timeout=3600
 ... PCbuild\..\lib\test\regrtest.py -uall -rwW -n --timeout 3600
 (and python -m test ... should work)
 
 If the command has -ugui (included in -uall) *and* a graphics system can 
 be initiated (X on Linux), then the gui resource is marked present and 
 gui tests will run.

I believe gui depends on the existence of the DISPLAY environment
variable on unix/linux (that is, TK will fail to start if DISPLAY is not
set, so _is_gui_available will return False).  You should be able to
confirm this by looking at the text of the skip message in the buildbot
output.
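
A quick way to see what the test machinery will find on a given box
(just a sketch; the real check in test.support is a bit more involved):

    import os
    import tkinter

    print("DISPLAY =", os.environ.get("DISPLAY"))
    try:
        root = tkinter.Tk()    # raises TclError if no X display is reachable
    except tkinter.TclError as exc:
        print("no gui:", exc)  # roughly what causes the gui tests to skip
    else:
        root.destroy()
        print("gui available")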

It is possible to create a virtual X on an otherwise headless linux
system, but I've never tried to do it myself.  If someone comes up
with a recipe we could add it to the devguide chapter on running
a buildbot.

--David


Re: [Python-Dev] django_v2 benchmark compatibility fix for Python 3.6

2015-08-25 Thread R. David Murray
On Tue, 25 Aug 2015 13:11:37 -, Papa, Florin florin.p...@intel.com 
wrote:
 My name is Florin Papa and I work in the Server Languages Optimizations Team 
 at Intel Corporation.
 
 I would like to submit a patch that solves compatibility issues of the 
 django_v2 benchmark in the Grand Unified Python Benchmark. The django_v2 
 benchmark uses inspect.getargspec(), which is deprecated and was removed in 
 Python 3.6. Therefore, it crashes with the message ImportError: cannot 
 import name 'getargspec' when using the latest version of Python on the 
 default branch.
 
 The patch modifies the benchmark to use inspect.signature() when Python 
 version is 3.6 or above and keep using inspect.getargspec() otherwise.

Note that Papa has submitted the patch to the tracker:

http://bugs.python.org/issue24934

I'm not myself sure how we are maintaining that repo
(https://hg.python.org/benchmarks), but it does seem like the bug
tracker is the right place for such a patch.
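
The shim is presumably something along these lines (a sketch, not the
actual patch from the issue; the helper name is invented):

    import inspect
    import sys

    if sys.version_info >= (3, 6):
        def get_arg_names(func):
            # signature() handles everything getargspec() did, and more
            return list(inspect.signature(func).parameters)
    else:
        def get_arg_names(func):
            return inspect.getargspec(func).args

    def example(a, b, c=None):
        pass

    print(get_arg_names(example))   # ['a', 'b', 'c']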

--David


Re: [Python-Dev] django_v2 benchmark compatibility fix for Python 3.6

2015-08-25 Thread R. David Murray
On Tue, 25 Aug 2015 11:18:54 -0400, Terry Reedy tjre...@udel.edu wrote:
 On 8/25/2015 10:51 AM, R. David Murray wrote:
  On Tue, 25 Aug 2015 13:11:37 -, Papa, Florin florin.p...@intel.com 
  wrote:
  My name is Florin Papa and I work in the Server Languages Optimizations 
  Team at Intel Corporation.
 
  I would like to submit a patch that solves compatibility issues of the 
  django_v2 benchmark in the Grand Unified Python Benchmark. The django_v2 
  benchmark uses inspect.getargspec(), which is deprecated and was removed 
  in Python 3.6. Therefore, it crashes with the message ImportError: cannot 
  import name 'getargspec' when using the latest version of Python on the 
  default branch.
 
  The patch modifies the benchmark to use inspect.signature() when Python 
  version is 3.6 or above and keep using inspect.getargspec() otherwise.
 
  Note that Papa has submitted the patch to the tracker:
 
   http://bugs.python.org/issue24934
 
  I'm not myself sure how we are maintaining that repo
  (https://hg.python.org/benchmarks), but it does seem like the bug
  tracker is the right place for such a patch.
 
 Is the django_v2 benchmark original to benchmarks, or a copy from django?

Yeah, that's one question that was in my mind when I said I don't know
how we maintain that repo.  I'm pretty sure it was originally a copy of the
django project, but how do we maintain it?

--David


Re: [Python-Dev] Profile Guided Optimization active by-default

2015-08-25 Thread R. David Murray
On Tue, 25 Aug 2015 15:59:23 -, Brett Cannon br...@python.org wrote:
 On Mon, 24 Aug 2015 at 23:19 Nick Coghlan ncogh...@gmail.com wrote:
 
  On 25 August 2015 at 05:52, Gregory P. Smith g...@krypto.org wrote:
   What we tested and decided to use on our own builds after benchmarking at
   work was to build with:
  
    make profile-opt PROFILE_TASK="-m test.regrtest -w -uall,-audio -x
    test_gdb test_multiprocessing"
  
   In general if a test is unreliable or takes an extremely long time,
  exclude
   it for your sanity.  (i'd also kick out test_subprocess on 2.7; we
  replaced
   subprocess with subprocess32 in our build so that wasn't an issue)
 
  Having the production ready make target be make profile-opt
  doesn't strike me as the most intuitive thing in the world.
 
  I agree we want the ./configure  make sequence to be oriented
  towards local development builds rather than highly optimised
  production ones, so perhaps we could provide a make production
  target that enables PGO with an appropriate training set from
  regrtest, and also complains if --with-pydebug is configured?
 
 
 That's an interesting idea for a make target. It might help get the
 visibility of PGO builds higher as well.

If we did want to make PGO the default, having a 'make develop' target
would also be an option.  We already have a precedent for that in the
'setup.py develop' command.

--David


Re: [Python-Dev] Burning down the backlog.

2015-08-18 Thread R. David Murray
On Tue, 18 Aug 2015 12:47:22 +1000, Ben Finney ben+pyt...@benfinney.id.au 
wrote:
 Robert Collins robe...@robertcollins.net writes:
 
  However - 9 isn't a bad number for 'patches that the triagers think
  are ready to commit' inventory.
 
  So yay!. Also - triagers, thank you for feeding patches through the
  process. Please keep it up :)
 
 If I were a cheerleader I would be able to lead a rousing “Yay, go team
 backlog burners!”

Which at this point in time I think pretty much means Robert, who I also
extend a hearty thanks to.  (I think I moved one issue out of commit
review because the test didn't fail, and that's been it for me since
Robert started his burn down...)

--David


Re: [Python-Dev] [python-committers] How are we merging forward from the Bitbucket 3.5 repo?

2015-08-16 Thread R. David Murray
On Sun, 16 Aug 2015 11:24:32 -0400, R. David Murray rdmur...@bitdance.com 
wrote:
 On Sun, 16 Aug 2015 00:13:10 -0700, Larry Hastings la...@hastings.org wrote:
   3. After your push request is merged, you pull from
  bitbucket.com/larry/cpython350 into hg.python.org/cpython and merge
  into 3.5.  In this version I don't have to do a final null merge!
  
  I'd prefer 3; that's what we normally do, and that's what I was 
  expecting.  So far people have done 1 and 2.

Thinking about this some more I realize why I was confused.  My
patch/pull request was something that got committed to 3.4.  In that
case, to follow your 3 I'd have to leave 3.4 open until you merged the
pull request, and that goes against our normal workflow.

Maybe my patch will be the only exception...

--David


Re: [Python-Dev] How are we merging forward from the Bitbucket 3.5 repo?

2015-08-16 Thread R. David Murray
On Sun, 16 Aug 2015 00:13:10 -0700, Larry Hastings la...@hastings.org wrote:
 
 
 So far I've accepted two pull requests into 
 bitbucket.com/larry/cpython350 in the 3.5 branch, what will become 
 3.5.0rc2.  As usual, it's the contributor's responsibility to merge 
 forward; if their checkin goes in to 3.5, it's their responsibility to 
 also merge it into the hg.python.org/cpython 3.5 branch (which will be 
 3.5.1) and default branch (which right now is 3.6).
 
 But... what's the plan here?  I didn't outline anything specific, I just 
 assumed we'd do the normal thing, pulling from 3.5.0 into 3.5.1 and 
 merging.  But of the two pull requests so far accepted, one was merged 
 this way, though it cherry-picked the revision (skipping the pull 
 request merge revision Bitbucket created), and one was checked in to 
 3.5.1 directly (no merging).
 
 I suppose we can do whatever we like.  But it'd be helpful if we were 
 consistent.  I can suggest three approaches:
 
  1. After your push request is merged, you cherry-pick your revision
 from bitbucket.com/larry/cpython350 into hg.python.org/cpython and
 merge.  After 3.5.0 final is released I do a big null merge from
 bitbucket.com/larry/cpython350 into hg.python.org/cpython.
  2. After your push request is merged, you manually check in a new
 equivalent revision into hg.python.org/cpython in the 3.5 branch. 
 No merging necessary because from Mercurial's perspective it's
 unrelated to the revision I accepted.  After 3.5.0 final is released
 I do a big null merge from bitbucket.com/larry/cpython350 into
 hg.python.org/cpython.
  3. After your push request is merged, you pull from
 bitbucket.com/larry/cpython350 into hg.python.org/cpython and merge
 into 3.5.  In this version I don't have to do a final null merge!
 
 I'd prefer 3; that's what we normally do, and that's what I was 
 expecting.  So far people have done 1 and 2.
 
 Can we pick one approach and stick with it?  Pretty-please?

Pick one Larry, you are the RM :)

The reason you got different things was that how to do this was
under-specified.  Which of course we didn't realize, this being a new
procedure and all.

That said, I'm still not sure how (3) works.  Can you give us a step by
step like you did for creating the pull request?  Including how it
relates to the workflow for the other branches?  (What I did was just do
the thing I normally do, and then follow your instructions for creating
a pull request using the same patch I had previously committed to 3.4.)

--David


Re: [Python-Dev] How are we managing 3.5 NEWS?

2015-08-16 Thread R. David Murray
The 3.5.0 patch flow question also brings up the question of how we
are managing NEWS for 3.5.0 vs 3.5.1.  We have some commits that
are going in to both 3.5.0a2 and 3.5.1, and some that are only going
in to 3.5.1.  Currently the 3.5.1 NEWS says things are going in to
3.5.0a2, but that's obviously wrong.

Do we relabel the section in 3.5.1 NEWS as 3.5.1a1?  That would leave us
with the 3.5.1 NEWS never having the last alpha sections from 3.5.0,
which is logical but might be confusing (or is that the way we've done
it in the past?)  Do we leave it to the RM to sort out each individual
patch when he merges 3.5.0 into the 3.5 branch?  That sounds like a lot
of work, although if there are few enough patches that go into the
alphas, it might not be too hard.

Either way, that final 3.5.0 merge is going to require an edit of the
NEWS file.

Larry, how do you plan to handle this?

--David

PS: We'll also need an answer to this question for the proposed new
NEWS workflow of putting the NEWS items in the tracker.  We'll probably
need to introduce x.y.z versions into the tracker.


Re: [Python-Dev] About closures creates in exec

2015-08-12 Thread R. David Murray
On Wed, 12 Aug 2015 21:05:50 +0200, Andrea Griffini agr...@tin.it wrote:
 Is it intended that closures created in exec statement/function cannot see
 locals if the exec was provided a locals dictionary?
 
 This code gives an error (foo is not found during lambda execution):
 
 exec("def foo(x): return x\n\n(lambda x:foo(x))(0)", globals(), {})
 
 while executes normally with
 
 exec("def foo(x): return x\n\n(lambda x:foo(x))(0)")
 
 Is this the expected behavior? If so where is it documented?

Yes.  In the 'exec' docs, indirectly.  They say:

Remember that at module level, globals and locals are the same
dictionary. If exec gets two separate objects as globals and locals, the
code will be executed as if it were embedded in a class definition.

Try the above in a class def and you'll see you get the same behavior.
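
To make the parallel explicit, the class-body version fails the same way
(a minimal sketch): the lambda's name lookup skips the class namespace,
just as it skips the exec locals dict, and goes straight to the globals.

    class C:
        def foo(x):
            return x
        # foo lives in the class namespace, which nested functions do not
        # see, so the lambda looks for a global foo and raises NameError:
        y = (lambda x: foo(x))(0)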

See also issue 24800.  I'm wondering if the exec docs need to talk about
this a little bit more, or maybe we need a faq entry and a link to it?

--David


Re: [Python-Dev] PEP needed for http://bugs.python.org/issue9232 ?

2015-08-11 Thread R. David Murray
On Tue, 11 Aug 2015 18:09:34 +1200, Robert Collins robe...@robertcollins.net 
wrote:
 So, there's  a patch on issue 9232 - allow trailing commas in function
 definitions - but there's been enough debate that I suspect we need a
 PEP.
 
 Would love it if someone could correct me, but I'd like to be able to
 either categorically say 'no' and close the ticket, or 'yes and this
 is what needs to happen next'.

I think we might just need another round of discussion here.

I'm +1 myself.  Granted there haven't been many times I've wanted it
(functions with enough arguments to want to make it easy to add and
remove elements are a bit of a code smell), but I have wanted it (and
even used the form that is accepted) several times.  On the other hand,
the number of times when the detection of a trailing comma has revealed
a missing argument to me (Raymond's objection) has been...well, I'm
pretty sure it is zero.  Especially since it only happens *sometimes*.
Since backward compatibility says we shouldn't disallow it where it is
currently allowed, the only logical thing to do, IMO, is consistently
allow it.

(If you wanted to fix an 'oops' trailing comma syntax issue, I'd vote for
disallowing trailing commas outside of ().  The number of times I've
ended up with an unintentional tuple after converting a dictionary to a
series of assignments outnumbers both of the above :)  Note, I am *not*
suggesting doing this!)

--David


Re: [Python-Dev] trailing commas on statements

2015-08-11 Thread R. David Murray
On Wed, 12 Aug 2015 01:03:38 +1000, Chris Angelico ros...@gmail.com wrote:
 On Wed, Aug 12, 2015 at 12:46 AM, R. David Murray rdmur...@bitdance.com 
 wrote:
  (If you wanted to fix an 'oops' trailing comma syntax issue, I'd vote for
  disallowing trailing commas outside of ().  The number of times I've
  ended up with an unintentional tuple after converting a dictionary to a
  series of assignments outnumbers both of the above :)  Note, I am *not*
  suggesting doing this!)
 
 Outside of any form of bracket, I hope you mean. The ability to leave
 a trailing comma on a list or dict is well worth keeping:
 
 func = {
     "+": operator.add,
     "-": operator.sub,
     "*": operator.mul,
     "/": operator.truediv,
 }

Sorry, trailing comma outside () was a shorthand for 'trailing comma
on a complete statement'.  That is, what trips me up is going from
something like:

dict(abc=1,
 foo=2,
 bar=3,
 )

to:

  abc = 1,
  foo = 2,
  bar = 3,

That is, I got rid of the dict(), but forgot to delete the commas.
(Real world examples are more complex, and often the transformation gets
done piecemeal and/or via cut and paste, so I only miss one or two of
the commas...)
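
For anyone who hasn't been bitten by this, the failure is silent:

    abc = 1,
    foo = 2,
    bar = 3,
    print(abc, foo, bar)   # (1,) (2,) (3,) -- three 1-tuples, not three ints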

But, for backward compatibility reasons, we wouldn't change it even if
everyone thought it was a good idea for some reason :)

--David


Re: [Python-Dev] PEP needed for http://bugs.python.org/issue9232 ?

2015-08-11 Thread R. David Murray
On Tue, 11 Aug 2015 08:31:57 -0700, Chris Barker - NOAA Federal 
chris.bar...@noaa.gov wrote:
 Looking back at the previous discussion, it looked like it's all been
 said, and there was almost unanimous approval (with some key mild
 disapproval) for the idea, so what we need now is a pronouncement.

And we got it, so done :)

--David


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-29 Thread R. David Murray
On Wed, 29 Jul 2015 06:26:44 +0200, Lennart Regebro rege...@gmail.com wrote:
 On Tue, Jul 28, 2015 at 10:26 PM, Tim Peters tim.pet...@gmail.com wrote:
  I have yet to see a use case for that.
 
  Of course you have.  When you address them, you usually dismiss them
  as calendar operations (IIRC).
 
 Those are not usecases for this broken behaviour.
 
 I agree there is a usecase for where you want to add one day to an 8am
 datetime, and get 8am the next day. Calling them date operations or
 calendar operations is not dismissing them. I got into this whole
 mess because I implemented calendars. That use case is the main
 usecase for those operations.
 
 But that usecase is easily handled in several ways. Already today in
 how datetime works, you have two solutions: The first is to use a time
 zone naive datetime. This is what most people who want to ignore time
 zones should use. The other is to separate the datetime into date and
 time objects and add a day to the date object.

I said I was done commenting, and this is supposed to move to the
datetime-sig, but this lack of use cases keeps coming up, so I'm going
to make one more comment, repeating something I said earlier.

What *I* want aware datetimes to do is give me the correct timezone
label when I format them, given the date and time information they hold.
The naive arithmetic is perfect, the conversion between timezones is
fine, the only thing missing from my point of view is a timezone
database such that if I tell a datetime it is in zone X, it will print
the correct offset and/or timezone label when I format it as a string.

That's my use case; and it is, I would venture to guess, *the* most
common use case that datetime does not currently support, and I was very
disappointed to find that pytz didn't support it either (except via
'normalize' calls, but why should I have to call normalize every
time I do datetime arithmetic?  It should just *do* it.)

Anything more would be gravy from my point of view.

Now, maybe tzinfo can't actually support this, but I haven't heard Tim
say that it can't.  Since the datetime is always passed to tzinfo,
I think it can.

--David

PS: annoying story: I scheduled an event six months in advance on my
tablet's calendar, and it scheduled it using what was my *current* GMT
offset (it calls it a time zone) even though it knew what date I was
scheduling it on.  Which meant the alarm in my calendar was off by an
hour.  I hope they have fixed this bug.  I relay this because it is
exactly the same problem I find to be present in pytz.  If the calendar
had been using aware datetimes and naive arithmetic as I describe above,
the alarm would not have been off by an hour.


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-27 Thread R. David Murray
On Mon, 27 Jul 2015 02:09:19 -0500, Tim Peters tim.pet...@gmail.com wrote:
 Seriously, try this exercise:  how would you code Paul's example if
 your kind of arithmetic were in use instead?  For a start, you have
 no idea in advance how many hours you may need to add to get to the
 same local time tomorrow.  24 won't always work.  Indeed, no _whole_
 number of hours may work (according to one source I found, Australia's
 Lord Howe Island uses a 30-minute DST adjustment).  So maybe you don't
 want to do it by addition.  What then?  Pick apart the year, month and
 day components, then simulate naive arithmetic by hand?
 
 The point is that there's no _obvious_ way to do it then.  I'd
 personally strip off the tzinfo member, leaving a wholly naive
 datetime where arithmetic works correctly ;-) , add the day, then
 attach the original tzinfo member again.

I *think* I'd be happy with that solution.

I'm not sure if the opinion of a relatively inexperienced timezone user
(whose head hurts when thinking about these things) is relevant, but in
case it is:

My brief experience with pytz is that it gets this all "wrong".  ("Wrong"
is in quotes because it isn't wrong, it just doesn't match my use
cases).  If I add a timedelta to a pytz datetime that crosses the DST
boundary, IIRC I get something that still claims it is in the previous
timezone (which it therefore seemed to me was really a UTC offset),
and I have to call 'normalize' to get it to be in the correct timezone
(UTC offset).  I don't remember what that does to the time, and I have
no intuition about it (I just want it to do the naive arithmetic!)

This makes no sense to me, since I thought a tzinfo object was supposed
to represent the timezone including the DST transitions.  I presume this
comes from the fact that datetime does naive arithmetic and pytz is
trying to paste non-naive arithmetic on top?
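
For the record, the behavior I mean looks like this (a sketch, assuming
pytz is installed; America/New_York sprang forward on 2015-03-08):

    from datetime import datetime, timedelta
    import pytz

    eastern = pytz.timezone("America/New_York")
    dt = eastern.localize(datetime(2015, 3, 7, 12, 0))
    print(dt)                        # 2015-03-07 12:00:00-05:00 -- EST

    later = dt + timedelta(days=1)   # naive arithmetic, tzinfo carried along
    print(later)                     # 2015-03-08 12:00:00-05:00 -- still claims EST

    print(eastern.normalize(later))  # 2015-03-08 13:00:00-04:00 -- EDT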

So, would it be possible to take the timezone database support from
pytz, and continue to implement naive-single-zone arithmetic the way Tim
proposes, and have it automatically produce the correct UTC offset and
UTC-offset-label afterward, without a normalize call?  I assumed that
was what the PEP was proposing, but I never read it so I can't complain
that I was wrong :)

I have a feeling that I'm completely misunderstanding things, since
tzinfo is still a bit of a mystery to me.

Based on this discussion it seems to me that (1) datetime has to/should
continue to do naive arithmetic and (2) if you need to do non-naive UTC
based calculations (or conversions between timezones) you should be
converting to UTC *anyway* (explicit is better than implicit).  The
addition of the timezone DB would then give us the information to *add*
tools that do conversions between time zones &c.

As Tim says, the issue of disambiguation is separate, and I seem to
recall a couple of proposals from the last time this thread went around
for dealing with that.

--David


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-27 Thread R. David Murray
On Mon, 27 Jul 2015 16:37:47 +0200, Lennart Regebro rege...@gmail.com wrote:
 On Mon, Jul 27, 2015 at 3:59 PM, R. David Murray rdmur...@bitdance.com 
 wrote:
  I don't remember what that does to the time, and I have
  no intuition about it (I just want it to do the naive arithmetic!)
 
 But what is it that you expect?

I just want it to do the naive arithmetic

  So, would it be possible to take the timezone database support from
  pytz, and continue to implement naive-single-zone arithmetic the way Tim
  proposes, and have it automatically produce the correct UTC offset and
  UTC-offset-label afterward, without a normalize call?
 
 That depends on your definition of correct.

If I have a time X on date Y in timezone Z, it is either this UTC offset
or that UTC offset, depending on what the politicians decided.  Couple
that with naive arithmetic, and I think you have something easily
understandable from the end user perspective, and useful for a wide
variety (but far from all) use cases.

I'll stop now :)

--David


Re: [Python-Dev] Burning down the backlog.

2015-07-26 Thread R. David Murray
On Sun, 26 Jul 2015 22:59:51 +0100, Paul Moore p.f.mo...@gmail.com wrote:
 On 26 July 2015 at 16:39, Berker Peksağ berker.pek...@gmail.com wrote:
  I'm not actually clear what Commit Review status means. I did do a
  quick check of the dev guide, and couldn't come up with anything,
 
  https://docs.python.org/devguide/triaging.html#stage
 
 Thanks, I missed that. The patches I checked seemed to have been
 committed and were still at commit review, though. Doesn't the roundup
 robot update the stage when there's a commit? (Presumably not, and
 people forget to do so too).

Yes, it is manual.  Making it automatic would be nice.  Patches accepted
:) Writing a Roundup detector for this shouldn't be all that hard once
you figure out how detectors work.  See:


http://www.roundup-tracker.org/docs/customizing.html#detectors-adding-behaviour-to-your-tracker

The steep part of the curve there is testing your work, which is
something some effort has been made to simplify, but unfortunately I'm
not up on the details of that currently.
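
For anyone tempted to pick this up, a detector is just a module dropped
into the tracker's detectors/ directory.  A very rough sketch of the
shape (the property names, the robot's username, and what to do when a
commit message shows up are all guesses, not checked against our
tracker's actual schema):

    # detectors/autostage.py -- hypothetical sketch only

    def note_commit(db, cl, nodeid, oldvalues):
        """Reactor: runs after an issue changes; looks for a new message
        posted by the commit-notification robot."""
        if 'messages' not in oldvalues:
            return                    # no new message on this change
        old = set(oldvalues.get('messages') or [])
        new = set(db.issue.get(nodeid, 'messages')) - old
        for msgid in new:
            author = db.msg.get(msgid, 'author')
            if db.user.get(author, 'username') == 'python-dev':  # assumed robot account
                # a real detector would adjust the stage here, e.g.
                # db.issue.set(nodeid, stage=...)
                pass

    def init(db):
        db.issue.react('set', note_commit)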

In the meantime, this is a service triagers could perform: look at the
commit review issues and make sure that really is the correct stage.

Now, as for the original question:

First, a little history so that the old timers and the newer committers
are on the same page.  When 'commit review' was originally introduced,
it was used for what was then a required second-committer review during
the RC phase.  I have recently (over the last two years?), with the
agreement of the workflow list and with no objection from committers,
shifted this to the model currently documented in the devguide.

However, I agree that what is currently in the devguide is not
sufficient.  Here is my actual intent for the workflow:

1) Issue is opened.  Triager/committer sets it to 'patch needed' if they
believe the bug should be fixed/feature implemented.  (A committer may
choose to override a triager decision and close the bug, explaining why
for the benefit of the triager and all onlookers.)

2) Once we have a patch, one or more triage or committer people work
with the submitter or whoever is working on the patch (who may have no
special role or be a triager or be a committer) in a patch
review-and-update cycle until a triager or a committer thinks it is
ready for commit.

3) If the patch was submitted and/or worked on by a committer, the patch
can be committed.

4) If the patch is not by a committer, the stage should be set to
'commit review' by a triager.  Committers with time available should, as
Robert suggests, look for patches in 'commit review' status *and review
them*.  The wording of "a quick once over" in the devguide isn't
*wrong*, but it does give the impression the patch is ready to commit,
whereas the goal here is to review the work of the *triager*.  If the
patch is not actually commit ready for whatever reason, it gets bounced
back to patch review with an explanation as to why.

5) Eventually (hopefully the first time or quickly thereafter most of
the time!) the patch really is ready to commit and gets committed.

An here, to my mind, is the most important bit:

6) When the patches moved to commit ready by a given triager are
consistently committable after the step 4 review, it is time to offer
that triager commit privileges.

My goal here is to *transfer* the knowledge of what makes a good review
process and a good patch from the existing committers to new committers,
and therefore acquire new committers.

Now, the problem that Paul cites about not feeling comfortable with the
*commit* process is real (although I will say that at this point I can
go months without doing a commit and I still remember quite clearly how
to do one; it isn't that complicated).  Improving the tooling is one way
to attack that.  I think there can be two types of tooling:  the large
scale problem the PEPs are working toward, and smaller scale helper
scripts such as Paul mentions that one or more committers could develop
and publish (patchcheck on steroids).

Before that, though, it is clear that the devguide needs a memory
jogger cheat sheet on how to do a multi-branch commit, linked from
the quicklinks section.

So, I'm hoping Carol will take what I've written above and turn it into
updates for the devguide (assuming no one disagrees with what I've said :)

--David


Re: [Python-Dev] Where are bugs with the web site reported?

2015-07-16 Thread R. David Murray
On Thu, 16 Jul 2015 12:24:45 -0700, Glenn Linderman v+pyt...@g.nevcal.com 
wrote:
 On 7/16/2015 12:11 PM, Ryan Gonzalez wrote:
  I have encountered this weird issue on Chrome for Android where 
  scrolling up just a little causes the page to dart to the top. I was 
  going to report it in the bug tracker, but I didn't see a label for 
  the web site itself.
 
  Worst part is, this is stopping me from reading the humor page!
 
 Sounds more like a bug in Chrome than on the site, unless it is 
 repeatable using other browsers, or unless the site has Chrome-specific 
 code.

python.org bugs are *not* reported on bugs.python.org.  I don't remember
where they are reported...it's on github somewhere I think.

The fact that it isn't obvious may be a good candidate for a bug
report :)

--David


Re: [Python-Dev] Cross compiling C-python 2.7.10 maintenance release for ARM on 64 bit x86_64 systems.

2015-07-14 Thread R. David Murray
On Tue, 14 Jul 2015 10:22:05 -, Andrew Robinson andr...@r3dsolutions.com 
wrote:
 I'm trying to cross compile C-python 2.7.10 for an embedded system. (Eg: 
 a Kobo reader).
 But there appears to be some bugs that do not allow the latest 
 maintenance release of Python to correctly cross compile on an x86-64 
 build system, for a 32 bit arm system.

To my understanding we don't yet fully support this (though we'd like
to), because we don't have a buildbot that regularly does cross compiles.
There are open issues in the tracker, perhaps you can vet and/or submit
some patches.[*]  Or contribute a buildbot?

--David

[*] See e.g. http://bugs.python.org/issue5404; I'm guessing there are others.


Re: [Python-Dev] Freeze exception for http://bugs.python.org/issue23661 ?

2015-07-13 Thread R. David Murray
On Tue, 14 Jul 2015 14:01:25 +1200, Robert Collins robe...@robertcollins.net 
wrote:
 So unittest.mock regressed during 3.5, and I found out when I released
 the mock backport.
 
 The regression is pretty shallow - I've applied the fix to 3.6, its a
 one-liner and comes with a patch.
 
 Whats the process for getting this into 3.5? Its likely to affect a
 lot of folk using mock (pretty much every OpenStack project got hit
 with it when I released mock 1.1).

3.5 hasn't been released yet.  The patch ideally would have gone into
3.5 first, then been merged to 3.6.  As it is, you'll apply it to
3.5, and then do a null merge to 3.6.  It will get released in the
next 3.5 beta.

--David


Re: [Python-Dev] What's New editing

2015-07-06 Thread R. David Murray
On Mon, 06 Jul 2015 21:45:01 +0300, Serhiy Storchaka storch...@gmail.com 
wrote:
 On 05.07.15 20:52, R. David Murray wrote:
  Just so people aren't caught unawares, it is very unlikely that I will have
  time to be the final editor on What's New for 3.5 the way I was for 3.3 
  and
  3.4.  I've tried to encourage people to keep What's New up to date, but
  *someone* should make a final editing pass.  Ideally they'd do at least the
  research Serhiy did last year on checking that there's a mention for all of 
  the
  versionadded and versionchanged 3.5's in the docs.  Even better would be to
  review the NEWS and/or commit history...but *that* is a really big job these
  days
 
 Many thanks you David for your invaluable work.
 
 Here is 3.5 NEWS file cleaned from duplicates in 3.4 NEWS file (i.e. 
 from entries about merged bug fixes). It is much less than unfiltered 
 NEWS file. Hope this will help volunteers.

That's great.  What I did was work from the html-rendered NEWS page, and
click through to the issue to figure out whether it was a bug fix or an
enhancement.  Not having to do that check should speed things up.  I
seem to recall I did find a couple of things that were screwed up and
still bore mentioning in whatsnew, but I doubt that is likely enough to
make enough difference to be worth it.  I also wound up fixing some
incorrect NEWS entries (wrong numbers, English, other errors), but that
is not central to the whatsnew project.  That activity was probably
included in the hours count, though.

For David (or whoever):  in addition to the obvious task of writing up
appropriate entries in What's New, part of what I did was to make sure
that all of the relevant documentation entries had the appropriate
versionchanged or versionadded tags, and that the new documentation made
sense.  As I recall, my working rhythm was to write the What's New entry
including links to the things that had changed, render the what's new
page to html, fix the links, then work through the links to make sure
the docs made sense and there were appropriate 'versionxxx' tags.  You,
of course, may find a different working style more beneficial :).

Oh, and work from newest change to oldest change.  I did it from oldest
to newest and only realized late in the game that was the wrong order,
because some changes got undone or modified by later changes :)

--David


[Python-Dev] What's New editing

2015-07-05 Thread R. David Murray
Just so people aren't caught unawares, it is very unlikely that I will have
time to be the final editor on What's New for 3.5 the way I was for 3.3 and
3.4.  I've tried to encourage people to keep What's New up to date, but
*someone* should make a final editing pass.  Ideally they'd do at least the
research Serhiy did last year on checking that there's a mention for all of the
versionadded and versionchanged 3.5's in the docs.  Even better would be to
review the NEWS and/or commit history...but *that* is a really big job these
days

--David


Re: [Python-Dev] What's New editing

2015-07-05 Thread R. David Murray
On Mon, 06 Jul 2015 11:06:41 +1000, Nick Coghlan ncogh...@gmail.com wrote:
 On 6 July 2015 at 03:52, R. David Murray rdmur...@bitdance.com wrote:
  Just so people aren't caught unawares, it is very unlikely that I will have
  time to be the final editor on What's New for 3.5 the way I was for 3.3 
  and
  3.4.
 
 And thank you again for your work on those!
 
  I've tried to encourage people to keep What's New up to date, but
  *someone* should make a final editing pass.  Ideally they'd do at least the
  research Serhiy did last year on checking that there's a mention for all of 
  the
  versionadded and versionchanged 3.5's in the docs.  Even better would be to
  review the NEWS and/or commit history...but *that* is a really big job these
  days
 
 What would your rough estimate of the scope of work be? As you note,
 the amount of effort involved in doing a thorough job of that has
 expanded beyond what can reasonably be expected of volunteer
 contributors, so I'm wondering if it might make sense for the PSF to
 start offering a contract technical writing gig to finalise the What's
 New documentation for each new release.
 
 After all, the What's New doc is an essential component of
 communicating changes in recommended development practices to Python
 educators, so ensuring we do a good job with that can have a big
 multiplier effect on all the other work that goes into creating each
 new release.

I can tell you that 3.4 took me approximately 67 hours according to my
time log.  That was going through the list prepared by Serhiy, and going
through pretty much all of the NEWS entries but not the commit log.  I'm
a precisionist, so I suspect someone less...ocd...about the details
could do it a bit faster, perhaps at the cost of some small amount of
accuracy :)

On the other hand, my knowledge of the code base and the development
that had been going on probably sped up my analysis and writeup of
the missing entries (and revision of existing entries, in many cases).

On gripping hand, I also did some small amount of documentation
rewriting and clarification along the way.

--David


Re: [Python-Dev] Should asyncio ignore KeyboardInterrupt?

2015-07-04 Thread R. David Murray
Once long ago in Internet time (issue 581232) time.sleep on windows was
not interruptible and this was fixed.  Is it possible the work on EINTR
has broken that fix?

(I don't currently have 3.5 installed on windows to test that theory...)

On Sat, 04 Jul 2015 17:46:34 +0200, Guido van Rossum gu...@python.org wrote:
 I think this may be more of a Windows issue than an asyncio issue. I agree
 that ideally ^C should take effect immediately (as it does on UNIX).
 
 On Sat, Jul 4, 2015 at 9:54 AM, Terry Reedy tjre...@udel.edu wrote:
 
  Should the loop.run... methods of asyncio respect KeyboardInterrupt (^C)?
 
  Developer and user convenience and this paragraph in PEP
 
  However, exceptions deriving only from BaseException are typically not
  caught, and will usually cause the program to terminate with a traceback.
  In some cases they are caught and re-raised. (Examples of this category
  include KeyboardInterrupt and SystemExit ; it is usually unwise to treat
  these the same as most other exceptions.) 
 
  and this examples in the doc (two places)
 
  TCP echo server
  # Serve requests until CTRL+c is pressed
  print('Serving on {}'.format(server.sockets[0].getsockname()))
  try:
      loop.run_forever()
  except KeyboardInterrupt:
      pass
 
  suggest yes.  On the other hand, the section on
  "Set signal handlers for SIGINT and SIGTERM"
  suggests not, unless an explicit handler is registered and then only on
  Unix.
 
  In any case, Adam Bartos, python-list, "An asyncio example", today asks.
  '''
  This is a minimal example:
 
  import asyncio
 
  async def wait():
      await asyncio.sleep(5)
 
  loop = asyncio.get_event_loop()
  loop.run_until_complete(wait())
 
  Ctrl-C doesn't interrupt the waiting, instead KeyboardInterrupt occurs
  after those five seconds. It's 3.5.0b2 on Windows. Is it a bug?
  '''
 
  Using run_forever instead, I found no way to stop other than killing the
  process (Idle or Command Prompt).
 
  --
  Terry Jan Reedy
 
  ___
  Python-Dev mailing list
  Python-Dev@python.org
  https://mail.python.org/mailman/listinfo/python-dev
  Unsubscribe:
  https://mail.python.org/mailman/options/python-dev/guido%40python.org
 
 
 
 
 -- 
 --Guido van Rossum (python.org/~guido)
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 https://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe: 
 https://mail.python.org/mailman/options/python-dev/rdmurray%40bitdance.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Importance of async keyword

2015-06-26 Thread R. David Murray
On Sat, 27 Jun 2015 01:10:33 +1000, Chris Angelico ros...@gmail.com wrote:
 The way I'm seeing it, coroutines are like cooperatively-switched
 threads; you don't have to worry about certain operations being
 interrupted (particularly low-level ones like refcount changes or list
 growth), but any time you hit an 'await', you have to assume a context
 switch. That's all very well, but I'm not sure it's that big a problem
 to accept that any call could context-switch; atomicity is already a
 concern in other cases, which is why we have principles like EAFP
 rather than LBYL.

Read Glyph's article, it explains why:

https://glyph.twistedmatrix.com/2014/02/unyielding.html

 There's clearly some benefit to being able to assume that certain
 operations are uninterruptible (because there's no 'await' anywhere in
 them), but are they truly so? Signals can already interrupt something
 _anywhere_:

Yes, and you could have an out of memory error anywhere in your program
as well.  (Don't do things in your signal handlers, set a flag.)  But
that doesn't change the point Glyph (and Guido) are making about
*reasoning* about your code.
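
A minimal sketch of the set-a-flag advice, as hypothetical code: the
handler only records that the signal arrived, and the main loop decides
when it is safe to act on it.

    import signal
    import time

    interrupted = False

    def handle_sigint(signum, frame):
        # Only record the event; do the real work in the main loop.
        global interrupted
        interrupted = True

    signal.signal(signal.SIGINT, handle_sigint)

    while not interrupted:
        time.sleep(0.1)  # stand-in for a unit of real work

    print("got SIGINT, shutting down cleanly")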

I did my best to avoid using threads, and never invested the time and
effort in Twisted.  But I love programming with asyncio for highly
concurrent applications.  It fits in my brain :)

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tracker reviews look like spam

2015-06-09 Thread R. David Murray

On Tue, 09 Jun 2015 21:41:23 -, Gregory P. Smith g...@krypto.org wrote:
 I *believe* you can get this to happen today in a review if you add the
 rep...@bugs.python.org email address to the code review as the issueX
 in the subject line will make the tracker turn it into a bug comment.  If
 so, having that be the default cc for all reviews created would be a great
 feature (and modify it not to send mail to anyone else).

I haven't double checked, but I think the issue number has to be in
square brackets to be recognized.  Presumably that's a change that
could be made.  What is lacking is someone willing to climb the
relatively steep learning curve needed to submit patches for that
part of the system, and some one of us with the keys to the tracker
with time to apply the patch.  Given the former I think we can
manage the latter.

I believe Ezio did try to make rietveld update the tracker, and ran
into a problem whose nature I don't know...but I don't think he spent
a whole lot of time trying to debug the problem, whatever it was.
I imagine he'll chime in :)

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tracker reviews look like spam

2015-06-07 Thread R. David Murray
On Fri, 22 May 2015 08:05:49 +0300, Antti Haapala an...@haapala.name wrote:
 There's an issue about this at
 http://psf.upfronthosting.co.za/roundup/meta/issue562
 
 I believe the problem is not that of the SPF, but the fact that mail gets
 sent using IPv6 from an address that has neither a name mapping to it nor a
 reverse pointer from IP address to name in DNS. See the second-first
 comment where R. David Murray states that Mail is consistently sent from

The ipv6 reverse dns issue is being worked on.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] speed.python.org (was: 2.7 is here until 2020, please don't call it a waste.)

2015-06-04 Thread R. David Murray
On Thu, 04 Jun 2015 12:55:55 +0200, M.-A. Lemburg m...@egenix.com wrote:
 On 04.06.2015 04:08, Tetsuya Morimoto wrote:
  If someone were to volunteer to set up and run speed.python.org, I think
  we could add some additional focus on performance regressions. Right now,
  we don't have any way of reliably and reproducibly testing Python
  performance.
  
  I'm very interested in speed.python.org and feel regret that the project is
  standing still. I have a mind to contribute something ...
 
 On 03.06.2015 18:59, Maciej Fijalkowski wrote: On Wed, Jun 3, 2015 at 3:49 
 PM, R. David Murray
  I think we should look into getting speed.python.org up and
  running for both Python 2 and 3 branches:
 
   https://speed.python.org/
 
  What would it take to make that happen ?
 
  I guess ideal would be some cooperation from some of the cpython devs,
  so say someone can setup cpython buildbot
 
   What does "set up cpython buildbot" mean in this context?
 
  The way it works is dual - there is a program running the benchmarks
  (the runner) which is in the pypy case run by the pypy buildbot and
  the web side that reports stuff. So someone who has access to cpython
  buildbot would be useful.

(I don't seem to have gotten a copy of Maciej's message, at least not
yet.)

OK, so what you are saying is that speed.python.org will run a buildbot
slave so that when a change is committed to cPython, a speed run will be
triggered?  Is the runner a normal buildbot slave, or something
custom?  In the normal case the master controls what the slave
runs...but regardless, you'll need to let us know how the slave
invocation needs to be configured on the master.

 Ok, so there's interest and we have at least a few people who are
 willing to help.
 
 Now we need someone to take the lead on this and form a small
 project group to get everything implemented. Who would be up
 to such a task ?
 
 The speed project already has a mailing list, so you could use
 that for organizing the details.

If it's a low volume list I'm willing to sign up, but regardless I'm
willing to help with the buildbot setup on the CPython side.  (As soon
as my credential-update request gets through infrastructure, at least :)

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] 2.7 is here until 2020, please don't call it a waste.

2015-06-03 Thread R. David Murray
On Wed, 03 Jun 2015 12:04:10 +0200, Maciej Fijalkowski fij...@gmail.com wrote:
 On Wed, Jun 3, 2015 at 11:38 AM, M.-A. Lemburg m...@egenix.com wrote:
  On 02.06.2015 21:07, Maciej Fijalkowski wrote:
  Hi
 
  There was a PSF-sponsored effort to improve the situation with the
  https://bitbucket.org/pypy/codespeed2/src being written (thank you
  PSF). It's not better enough than codespeed that I would like, but
  gives some opportunities.
 
  That said, we have a benchmark machine for benchmarking cpython and I
  never deployed nightly benchmarks of cpython for a variety of reasons.
 
  * would be cool to get a small VM to set up the web front
 
  * people told me that py3k is only interesting, so I did not set it up
  for py3k because benchmarks are mostly missing
 
  I'm willing to set up a nightly speed.python.org using nightly build
  on python 2 and possible python 3 if there is an interest. I need
  support from someone maintaining python buildbot to setup builds and a
  VM to set up stuff, otherwise I'm good to go
 
  DISCLAIMER: I did facilitate in codespeed rewrite that was not as
  successful as I would have hoped. I did not receive any money from the
  PSF on that though.
 
  I think we should look into getting speed.python.org up and
  running for both Python 2 and 3 branches:
 
   https://speed.python.org/
 
  What would it take to make that happen ?
 
 I guess ideal would be some cooperation from some of the cpython devs,
 so say someone can setup cpython buildbot

What does "set up cpython buildbot" mean in this context?

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] cpython (3.4): Issue #23840: tokenize.open() now closes the temporary binary file on error to

2015-05-26 Thread R. David Murray
On Tue, 26 May 2015 08:20:01 +0200, Victor Stinner victor.stin...@gmail.com 
wrote:
 What is wrong with except: in this specific case?

Nothing is wrong with it from a technical standpoint.  However, if we
use 'except BaseException' that makes it clear that someone has thought
about it and decided that all exceptions should be caught, as opposed to
it being legacy code or a programming mistake.
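
For illustration, a hypothetical variant of the code quoted below with
the catch-all spelled out explicitly (a sketch, not the committed patch):

    from io import TextIOWrapper
    from tokenize import detect_encoding

    def open_detected(filename):
        # Hypothetical stand-in for tokenize.open(), for illustration only.
        buffer = open(filename, 'rb')
        try:
            encoding, lines = detect_encoding(buffer.readline)
            buffer.seek(0)
            text = TextIOWrapper(buffer, encoding, line_buffering=True)
            text.mode = 'r'
            return text
        except BaseException:
            # Spelling out BaseException says: even KeyboardInterrupt and
            # SystemExit must not leak the open file.
            buffer.close()
            raise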

  On 2015-05-26 12:26 AM, Terry Reedy wrote:
 
  +    try:
  +        encoding, lines = detect_encoding(buffer.readline)
  +        buffer.seek(0)
  +        text = TextIOWrapper(buffer, encoding, line_buffering=True)
  +        text.mode = 'r'
  +        return text
  +    except:
  +        buffer.close()
  +        raise
 
 
  Please do not add bare 'except:'.  If you mean 'except BaseException:',
  say so.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-21 Thread R. David Murray
On Tue, 21 Apr 2015 10:10:06 -0700, Guido van Rossum gu...@python.org wrote:
 On Tue, Apr 21, 2015 at 9:17 AM, R. David Murray rdmur...@bitdance.com
 wrote:
 
  Please be respectful rather than inflammatory.  If you read what I
  wrote, I did not say that I was going to stop contributing, I
  specifically talked about that gut reaction being both emotional and
  illogical.  That doesn't make the reaction any less real, and the fact
  that such reactions exist is a data point you should consider in
  conducting your PR campaign for this issue.  (I don't mean that last as
  a negative:  this issue *requires* an honest PR campaign.)
 
 
 Well, my own reactions at this point in the flame war are also quite
 emotional. :-(
 
 I have done my best in being honest in my PR campaign. But I feel like the

I believe you have.  My inclusion of the word 'honest' was meant to
contrast the kind of PR we need with the kind of PR people typically
think about, which is often not particularly honest.

 opposition (not you, but definitely some others -- have you seen Twitter?)
 are spreading FUD based on an irrational conviction that this will destroy

No, I tend only to peek in there occasionally.

 Python. It will not. It may not prove the solution to all Python's problems
 -- there's always 3.6. (Oh wait, Python 2.7 is perfect. I've heard that
 before -- Paul Everitt famously said the same of Python 1.5.2. Aren't you
 glad I didn't take him literally? :-P )

Yes.  But somewhere there or not long before was my introduction to
Python, so I remember it fondly :)

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-21 Thread R. David Murray
On Tue, 21 Apr 2015 16:55:49 -, Gregory P. Smith g...@krypto.org wrote:
 We will not be putting type annotations anywhere in the stdlib or expecting
 anyone else to maintain them there. That would never happen until tools
 that are convincing enough in their utility for developers to _want_ to use
 are available and accepted.  That'll be a developer workflow thing we could
 address with a later PEP. IF it happens at all.

This is the most reassuring statement I've heard so far ;)

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-21 Thread R. David Murray
On Tue, 21 Apr 2015 21:31:49 +0300, Paul Sokolovsky pmis...@gmail.com wrote:
 On Tue, 21 Apr 2015 09:50:59 -0700 Ethan Furman et...@stoneleaf.us wrote:
 
  On 04/21, Paul Sokolovsky wrote:
   
   And for example yesterday's big theme was people blackmailing that
   they stop contributing to stdlib if annotations are in [...]
  
  A volunteer's honest reaction is not blackmail, and categorizing it
  as such is not helpful to the discussion.
 
 Sure, that was rather humoresque note. Still, one may wonder why
 honest reaction is like that, if from reading PEP484 it's clear that
 it doesn't change status quo: https://www.python.org/dev/peps/pep-3107
 added annotations long ago, and PEP484 just provides default

But what concerned me as a Python core developer was the perception that
as a core developer I would have to deal with type hints and type
checking in the stdlib, because there was talk of including type hints
for the stdlib (as stub files) in 3.6.  *That* was the source of my
concern, which is reduced by the statement that we'll only need to think
about type checking in the stdlib once the tools are *clearly* adding
value, in *our* (the stdlib module maintainers') opinion, so that
we *want* to use the type hints.

The discussion of stub files being maintained by interested parties in a
separate repository is also helpful.  Again, that means that wider
adoption should mostly only happen if the developers see real benefit.
I still dislike the idea of having to read type hints (I agree with
whoever it was said a well written docstring is more helpful than
abstract type hints when reading Python code), but perhaps I will get
used to it if/when it shows up in libraries I use.

I think it would serve your goal better if instead of dismissing
concerns you were to strive to understand them and then figure out how
to allay them.

Overall, I suspect that those who are doubtful are put off by the "rah
rah this is great you are fools for not wanting to use it" tone of
(some) of the proponents (which makes us think this is going to get
pushed on us willy-nilly), whereas hearing a consistent message that
this will *enable* interested parties to collaborate on type checking
without impacting the way the rest of you code *unless it proves its
value to you* would be better received.  Guido has done the latter,
IMO, as have some others.

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-21 Thread R. David Murray
On Wed, 22 Apr 2015 01:09:52 +1000, Chris Angelico ros...@gmail.com wrote:
 def incremental_parser(input: FileLike) -> List[Token]:
     tokens = []
     data = ""
     while True:
         if not data:
             data = input.read(64)
         token = Token(data[0]); data = data[1:]
         while token.wants_more():
             token.give_more(data[0]); data = data[1:]
         tokens.append(token)
         if token.is_end_of_stream(): break
     input.seek(-len(data), 1)
     return tokens
 
 If you were to exhaustively stipulate the requirements on the
 file-like object, you'd have to say:
 
 * Provides a read() method which takes an integer and returns up to
 that many bytes
 * Provides a seek() method which takes two integers
 * Is capable of relative seeking by at least 63 bytes backward
 * Is open for reading
 * Etcetera
 
 That's not the type system's job. Not in Python. Maybe in Haskell, but

Just a note that if I'm reading the high level description right,
this kind of analysis is exactly the kind of thing that Flow does
for JavaScript.

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-21 Thread R. David Murray
On Tue, 21 Apr 2015 18:27:50 +0300, Paul Sokolovsky pmis...@gmail.com wrote:
  I was replying to Steven's message. Did you read it?
 
 Yes. And I try to follow general course of discussion, as its hard to
 follow individual sub-threads. And for example yesterday's big theme
 was people blackmailing that they stop contributing to stdlib if
 annotations are in, and today's theme appear to be people telling that
 static type checking won't be useful. And just your reply to Steven
 was a final straw which prompted me to point out that static type
 checking is not a crux of it, but just one (not the biggest IMHO) usage.

Please be respectful rather than inflammatory.  If you read what I
wrote, I did not say that I was going to stop contributing, I
specifically talked about that gut reaction being both emotional and
illogical.  That doesn't make the reaction any less real, and the fact
that such reactions exist is a data point you should consider in
conducting your PR campaign for this issue.  (I don't mean that last as
a negative:  this issue *requires* an honest PR campaign.)

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-20 Thread R. David Murray
I wrote a longer response and then realized it didn't really add much to
the discussion.  So let me be short: type annotations do *not* appeal to
me, and I am not looking forward to the cognitive overhead of dealing
with them.  Perhaps I will eventually grow to like them if the tools
that use them really add value.  You'll have to sell me on it, though.

On Mon, 20 Apr 2015 12:35:33 -0700, luk...@langa.pl wrote:
 Stub files have many downsides, too, unfortunately:
 - we don’t *want* to have them, but we *need* to have them (C extensions, 
 third-party modules, Python 2, …)
 - they bring cognitive overhead of having to switch between two files
 - they require the author to repeat himself quite a lot
 - they might go out of date much easier than annotations in the function 
 signature

The whole point of type hints is for the linters/IDEs, so IMO it is
perfectly reasonable to put the burden of making them useful onto the
linters/IDEs.  The UI for it can unify the two files into a single
view...I know because way back in the dark ages I wrote a small
editor-based IDE that did something very analogous on an IBM Mainframe,
and it worked really well as a development environment.

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-20 Thread R. David Murray
+1 to this from me too. I'm afraid that means I'm -1 on the PEP.

I didn't write this in my earlier email because I wasn't sure about it,
but my gut reaction after reading Harry's email was if type annotations
are used in the stdlib, I'll probably stop contributing.  That doesn't
mean that's *true*, but that's the first time I've ever had that
thought, so it is probably worth sharing.

Basically, it makes Python less fun to program in.  That may be be an
emotional reaction and irrational, but I think it matters.  And yes, I
write production Python code for a living, though granted not at Google
or Facebook or Dropbox scale.

On Mon, 20 Apr 2015 19:00:53 -0500, Ryan Gonzalez rym...@gmail.com wrote:
 Although I like the concept of type annotations and the PEP, I have to
 agree with this. If I saw these type annotations when learning Python (I'm
 self-taught), there's a 99% chance I would've freaked.
 
 It's the same issue as with teaching C++: it's wrong to say, Hey, I taught
 you the basics, but there's other stuff that's going to confuse you to a
 ridiculous extent when you read it. People can't ignore it. It'll become a
 normal part of Python programs.
 
  At least now you can say, "I'm using the mypy type checker."
 
 Don't get me wrong; I like mypy. I helped with their documentation and am
 watching the GitHub repo. But this is dead-on.
 
 
 On Mon, Apr 20, 2015 at 6:41 PM, Jack Diederich jackd...@gmail.com wrote:
 
  Twelve years ago a wise man said to me "I suggest that you also propose a
  new name for the resulting language"
 
  I talked with many of you at PyCon about the costs of PEP 484. There are
  plenty of people who have done a fine job promoting the benefits.
 
  * It is not optional. Please stop saying that. The people promoting it
  would prefer that everyone use it. If it is approved it will be optional in
  the way that PEP8 is optional. If I'm reading your annotated code it is
  certainly /not/ optional that I understand the annotations.
 
  * Uploading stubs for other people's code is a terrible idea. Who do I
  contact when I update the interface to my library? The random Joe who
  helped by uploading annotations three months ago and then quit the
  internet? I don't even want to think about people maliciously adding stubs
  to PyPI.
 
  * The cognitive load is very high. The average function signature will
  double in length. This is not a small cost and telling me it is "optional"
  to pretend that every other word on the line doesn't exist is a farce.
 
  * Every company's style guide is about to get much longer. That in itself
  is an indicator that this is a MAJOR language change and not just some
  optional add-on.
 
  * People will screw it up. The same people who can't be trusted to program
  without type annotations are also going to be *writing* those type
  annotations.
 
  * Teaching python is about to get much less attractive. It will not be
  optional for teachers to say "just pretend all this stuff over here doesn't
  exist"
 
  * "No new syntax" is a lie. Or rather a red herring. There are lots of new
  things it will be required to know and just because the compiler doesn't
  have to change doesn't mean the language isn't undergoing a major change.
 
  If this wasn't in a PEP and it wasn't going to ship in the stdlib very few
  people would use it. If you told everyone they had to install a different
  python implementation they wouldn't. This is much worse than that - it is
  Python4 hidden away inside a PEP.
 
  There are many fine languages that have sophisticated type systems. And
  many bondage & discipline languages that make you type things three times
  to make really really sure you meant to type that. If you find those other
  languages appealing I invite you to go use them instead.
 
  -Jack
 
  https://mail.python.org/pipermail/python-dev/2003-February/033291.html
 
  ___
  Python-Dev mailing list
  Python-Dev@python.org
  https://mail.python.org/mailman/listinfo/python-dev
  Unsubscribe:
  https://mail.python.org/mailman/options/python-dev/rymg19%40gmail.com
 
 
 
 
 -- 
 Ryan
 [ERROR]: Your autotools build scripts are 200 lines longer than your
 program. Something’s wrong.
 http://kirbyfan64.github.io/
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 https://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe: 
 https://mail.python.org/mailman/options/python-dev/rdmurray%40bitdance.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Summary of Python tracker Issues

2015-04-17 Thread R. David Murray
On Fri, 17 Apr 2015 18:08:24 +0200, Python tracker sta...@bugs.python.org 
wrote:
 
 ACTIVITY SUMMARY (2015-04-10 - 2015-04-17)
 Python tracker at http://bugs.python.org/
 
 To view or respond to any of the issues listed below, click on the issue.
 Do NOT respond to this message.
 
 Issues counts and deltas:
   open    4792 (-31)
   closed 30957 (+113)
   total  35749 (+82)

That's a successful sprint week :)

Thanks everyone!

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Not being able to compile: make: *** [Programs/_freeze_importlib] Error 1

2015-04-16 Thread R. David Murray
On Thu, 16 Apr 2015 18:09:01 -0300, Facundo Batista facundobati...@gmail.com 
wrote:
 Full trace here:
 
   http://linkode.org/TgkzZw90JUaoodvYzU7zX6
 
 Before going into a deep debug, I thought about sending a mail to see
 if anybode else hit this issue, if it's a common problem, if there's a
 known workaround.

Most likely you just need to run 'make touch' so that it doesn't try
to rebuild stuff it doesn't need to (because we check in those
particular build artifacts, like the frozen importlib).

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] TypeError messages

2015-02-21 Thread R. David Murray
On Sun, 22 Feb 2015 00:26:23 +1000, Nick Coghlan ncogh...@gmail.com wrote:
 On 21 February 2015 at 00:05, Brett Cannon br...@python.org wrote:
   I agree that type names don't need to be quoted.
 
 As a user, I actually appreciate the quotes, because they make the
 error pattern easier for me to parse. Compare:
 
 int expected, but found str
 float expected, but found int
 
 'int' expected, but found 'str'
 'float' expected, but found 'int'

It's not a big deal to me either way, but I find the version with the
quotes makes me think it is looking for the literal string 'int', but
found the literal string 'str', whereas in the first case it seems
clear it is looking for objects of that type.  Perhaps this is just
because I am used to the existing messages, but I think it goes
beyond that.
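
For what it's worth, the two styles differ only in whether the type name
is interpolated with repr() (illustrative snippet):

    expected, got = int, str

    # Unquoted style: "int expected, but found str"
    plain = '{} expected, but found {}'.format(expected.__name__, got.__name__)

    # Quoted style: "'int' expected, but found 'str'"; !r adds the quotes.
    quoted = '{!r} expected, but found {!r}'.format(expected.__name__, got.__name__)

    print(plain)
    print(quoted)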

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Any grammar experts?

2015-01-26 Thread R. David Murray
On Mon, 26 Jan 2015 09:43:26 -0500, Barry Warsaw ba...@python.org wrote:
 On Jan 25, 2015, at 09:31 PM, R. David Murray wrote:
 
{*x for x in it}
   
which is a set comprehension, while the other is a dict comprehension 
:)
   
   
   That distinction doesn't bother me -- you might as well claim it's
   confusing that f(*x) passes positional args from x while f(**x) passes
   keyword args.
   
   And the varargs set comprehension is similar to the varargs list
   comprehension:
   
   [*x for x in it]
   
   If `it` were a list of three items, this would be the same as
   
   [*it[0], *it[1], *it[2]]
  
  I find all this unreadable and difficult to understand.
 
 I did too, before reading the PEP.
 
 After reading the PEP, it makes perfect sense to me.  Nor is the PEP
 complicated...it's just a matter of wrapping your head around the
 generalization[*] of what are currently special cases that is going on
 here.
 
 It does make sense after reading the PEP but it also reduces the readability
 and instant understanding of any such code.  This is head-scratcher code that
 I'm sure I'd get asked about from folks who aren't steeped in all the dark
 corners of Python.  I don't know if that's an argument not to adopt the PEP,
 but it I think it would be a good reason to recommend against using such
 obscure syntax, unless there's a good reason (and good comments!).

But it is only obscure because we are not used to it yet.  It is a
logical extension of Python's existing conventions once you think about
it.  It will become obvious quickly, IMO.

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Any grammar experts?

2015-01-26 Thread R. David Murray
On Mon, 26 Jan 2015 22:05:44 +0100, Antoine Pitrou solip...@pitrou.net wrote:
 On Mon, 26 Jan 2015 12:22:20 -0800
 Ethan Furman et...@stoneleaf.us wrote:
  On 01/26/2015 12:09 PM, Antoine Pitrou wrote:
   On Mon, 26 Jan 2015 12:06:26 -0800
   Ethan Furman et...@stoneleaf.us wrote:
   It destroy's the chaining value and pretty much makes the improvement 
   not an improvement.  If there's a possibility that
   the same key could be in more than one of the dictionaries then you 
   still have to do the
  
 dict.update(another_dict)
   
   So what? Is the situation where chaining is desirable common enough?
  
  Common enough to not break it, yes.
 
 Really? What are the use cases?

My use case is a configuration method that takes keyword parameters.
In tests I want to specify a bunch of default values for the
configuration, but I want individual test methods to be able
to override those values.  So I have a bunch of code that does
the equivalent of:

from test.support import default_config
[...]
def _prep(self, config_overrides):
    config = default_config.copy()
    config.update(config_overrides)
    my_config_object.load(**config)


With the current proposal I could instead do:

def _prep(self, config_overrides):
    my_config_object.load(**default_config, **config_overrides)

I suspect I have run into situations like this elsewhere as well, but
this is the one from one of my current projects.

That said, I must admit to being a bit ambivalent about this, since
we otherwise are careful to disallow duplicate argument names.

So, instead we could write:

my_config_object.load(**{**default_config, **config_overrides})

since dict literals *do* allow duplicate keys.
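
A small illustration of the semantics this relies on, using hypothetical
values for the names above (the {**a, **b} form is the one PEP 448
proposes): when keys collide, the last occurrence wins, so the overrides
replace the defaults.

    default_config = {'host': 'localhost', 'port': 8025, 'debug': False}
    config_overrides = {'port': 9025}

    # Duplicate keys are allowed in a dict display; the last one wins.
    config = {**default_config, **config_overrides}
    assert config == {'host': 'localhost', 'port': 9025, 'debug': False}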

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Any grammar experts?

2015-01-26 Thread R. David Murray
On Tue, 27 Jan 2015 00:07:08 +0100, Antoine Pitrou solip...@pitrou.net wrote:
 On Mon, 26 Jan 2015 16:28:24 -0500
 R. David Murray rdmur...@bitdance.com wrote:
  
  My use case is a configuration method that takes keyword parameters.
  In tests I want to specify a bunch of default values for the
  configuration, but I want individual test methods to be able
  to override those values.  So I have a bunch of code that does
  the equivalent of:
  
  from test.support import default_config
  [...]
  def _prep(self, config_overrides):
      config = default_config.copy()
      config.update(config_overrides)
      my_config_object.load(**config)
  
  
  With the current proposal I could instead do:
  
  def _prep(self, config_overrides):
      my_config_object.load(**default_config, **config_overrides)
 
 It sounds like the _prep() method exists once in your code base, this
 isn't an idiom you are duplicating everywhere. The incentive for a
 syntactic shortcut looks pretty thin.

Something similar exists between five and ten times (I didn't go in and
count) in my current code base, in various specific forms for different
test classes.

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Why are generated files in the repository?

2015-01-25 Thread R. David Murray
On Sun, 25 Jan 2015 14:00:57 +1000, Nick Coghlan ncogh...@gmail.com wrote:
 It's far more developer friendly to aim to have builds from a source
 check-out just work if we can. That's pretty much where we are today
 (getting external dependencies for the optional parts on *nix can still be
 a bit fiddly - it may be worth maintaining instructions for at least apt
 and yum in the developer guide that cover that)

https://docs.python.org/devguide/setup.html#build-dependencies
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Any grammar experts?

2015-01-25 Thread R. David Murray
On Mon, 26 Jan 2015 01:21:24 +0100, Antoine Pitrou solip...@pitrou.net wrote:
 On Sun, 25 Jan 2015 14:59:42 -0800
 Guido van Rossum gu...@python.org wrote:
  On Sun, Jan 25, 2015 at 7:32 AM, Georg Brandl g.bra...@gmx.net wrote:
  
   On 01/25/2015 04:08 PM, Antoine Pitrou wrote:
On Sat, 24 Jan 2015 21:10:51 -0500
Neil Girdhar mistersh...@gmail.com wrote:
To finish PEP 448, I need to update the grammar for syntax such as
   
{**x for x in it}
   
Is this seriously allowed by the PEP? What does it mean exactly?
  
   It appears to go a bit far.  Especially since you also would have to allow
  
   {*x for x in it}
  
   which is a set comprehension, while the other is a dict comprehension :)
  
  
  That distinction doesn't bother me -- you might as well claim it's
  confusing that f(*x) passes positional args from x while f(**x) passes
  keyword args.
  
  And the varargs set comprehension is similar to the varargs list
  comprehension:
  
  [*x for x in it]
  
  If `it` were a list of three items, this would be the same as
  
  [*it[0], *it[1], *it[2]]
 
 I find all this unreadable and difficult to understand.

I did too, before reading the PEP.

After reading the PEP, it makes perfect sense to me.  Nor is the PEP
complicated...it's just a matter of wrapping your head around the
generalization[*] of what are currently special cases that is going on
here.

--David

[*] The *further* generalization...we've already had, for example,
the generalization that allows:

a, *b = (1, 3, 4)

to work, and that seems very clear to us now.
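
For concreteness, an illustrative check of the existing generalizations
being referred to here:

    # Extended unpacking in assignment (Python 3.0+):
    a, *b = (1, 3, 4)
    assert a == 1 and b == [3, 4]

    # The call-site analogy: *x passes positionals, **x passes keywords.
    def f(*args, **kwargs):
        return args, kwargs

    assert f(*(1, 2), **{'k': 3}) == ((1, 2), {'k': 3})
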
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] rst files

2015-01-23 Thread R. David Murray
On Fri, 23 Jan 2015 15:55:29 -0800, Guido van Rossum gu...@python.org wrote:
 This adds entries to the index of the document -- similar to the index at
 the end of a book. I think single vs. double refers to different types of
 entries. Check out this page: https://docs.python.org/3/genindex.html
 
 On Fri, Jan 23, 2015 at 3:43 PM, Ethan Furman et...@stoneleaf.us wrote:
 
  Can somebody please explain this?
 
  .. index::
 single: formatting, string (%)
 single: interpolation, string (%)
 single: string; formatting
 single: string; interpolation
 single: printf-style formatting
 single: sprintf-style formatting
 single: % formatting
 single: % interpolation
 
  Specifically, what does index mean?  What does single vs double vs triple
  mean?  Is there a reference somewhere I can read?

It is explained in the Sphinx documentation:
http://sphinx-doc.org/contents.html

Specifically:

http://sphinx-doc.org/markup/misc.html#directive-index

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] incorrect docstring for sys.int_info.sizeof_digit?

2015-01-21 Thread R. David Murray
On Wed, 21 Jan 2015 14:53:19 +, Tim Golden m...@timgolden.me.uk wrote:
 On 21/01/2015 11:07, Pfeiffer, Phillip E., IV wrote:
  Apologies if this has already been reported; I couldn't find a
  readily searchable archive for this mailing list (and apologies if
  I've just missed it).
 
 Depending on "readily searchable", this isn't too bad:
 
 http://markmail.org/search/?q=list%3Aorg.python.python-dev+integer+docstring

But, if you are searching to see if a bug has been reported, you want to
search the tracker at bugs.python.org, not the python-dev mailing list.

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New Windows installer for Python 3.5

2015-01-12 Thread R. David Murray
On Mon, 12 Jan 2015 17:26:43 +, Steve Dower steve.do...@microsoft.com 
wrote:
 David Anthoff wrote:
  Yes, those are good examples. Right now doing this in the way these guys do 
  is
  too much work for our small project... Anything that makes this easier 
  would be
  appreciated.
 
 I don't see how. All they've done is literally copy a Python
 installation into their install directory. Yes, they have their own
 launcher executables (py2exe generated, it looks like) and have
 precompiled the standard library and put it in a ZIP file, but you
 don't even need to go that far. Without knowing anything about your
 project I can't give specific suggestions, but simply dropping a
 Python installation in is not that difficult (and until the issues
 that Nick referred to are fixed, will have the same problems as
 TortoiseHg presumably has).

That's what py2exe *does*.  It does all the python-integration and
launcher-building work for you.  I use it for a small project myself
(proprietary), which I build an installer for using Inno Setup.  Works
very well, supports python3.

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] contributing to multiprocessing

2015-01-08 Thread R. David Murray
On Thu, 08 Jan 2015 17:08:07 -0800, Ethan Furman et...@stoneleaf.us wrote:
 On 01/08/2015 03:21 PM, Davin Potts wrote:
  
  I am interested in making some serious ongoing contributions around 
  multiprocessing.
 
 Great!
 
  Rather than me simply walking through that backlog, offering comments or 
  encouragement here and there on issues, it
  makes more sense for me to ask:  what is the right way for me to proceed?  
  What is the next step towards me helping
  triage issues?  Is there a bridge-keeper with at least three, no more than 
  five questions for me?
 
 I would suggest having at least one, if not two or three, current core-devs 
 ready and willing to quickly review your
 work (I believe Raymond Hettinger may be one); then, go ahead and triage, 
 improve and/or submit patches, and make
 comments.  Once you've got a few of these under your belt, with favorable 
 reviews and your patches committed, you may
 get stuck with commit privileges of your own.  ;)

Indeed, the best way to proceed, regardless of any other issues, is in
fact to review, triage, comment on, and improve the issues you are
interested in.  Get the patches commit ready, and then get a current
core dev to do a commit review.

Oddly, the doc issues may be more problematic than the code issues.
Fixing bugs in docs isn't difficult to get done, but restructuring
documentation sometimes gets bogged down in differing opinions.  (I
haven't myself looked at your proposals since I don't use
multiprocessing, so I don't know how radical the proposed changes are).

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition

2014-12-16 Thread R. David Murray
On Tue, 16 Dec 2014 10:48:07 -0800, Mark Roberts wiz...@gmail.com wrote:
 On Tue, Dec 16, 2014 at 2:45 AM, Antoine Pitrou solip...@pitrou.net wrote:
 
  Iterating accross a dictionary doesn't need compatibility shims. It's
  dead simple in all Python versions:
 
  $ python2
  Python 2.7.8 (default, Oct 20 2014, 15:05:19)
  [GCC 4.9.1] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> d = {'a': 1}
  >>> for k in d: print(k)
  ...
  a
 
  $ python3
  Python 3.4.2 (default, Oct  8 2014, 13:08:17)
  [GCC 4.9.1] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  >>> d = {'a': 1}
  >>> for k in d: print(k)
  ...
  a
 
  Besides, using iteritems() and friends is generally a premature
  optimization, unless you know you'll have very large containers.
  Creating a list is cheap.
 
 
 It seems to me that every time I hear this, the author is basically
 admitting that Python is a toy language not meant for serious computing
 (where "serious" is defined in extremely modest terms). The advice is also
 very contradictory to literally every talk on performant Python that I've
 seen at PyCon or PyData or ... well, anywhere. And really, doesn't it
 strike you as incredibly presumptuous to call the *DEFAULT BEHAVIOR* of
 Python 3 a premature optimization?

No.  A premature optimization is one that is made before doing any
performance analysis, so language features are irrelevant to that
labeling.  This doesn't mean you shouldn't use better idioms when they
are clear.  But if you are complicating your code because of performance
concerns *without measuring it* you are doing premature optimization, by
definition[*].

 Isn't the whole reason that the
 default behavior switch was made is because creating lists willy nilly all
 over the place really *ISN'T* cheap? This isn't the first time someone has

No.  In Python3 we made the iterator protocol more central to the
language.  Any performance benefit is actually a side effect of that
change.  One that was considered, yes, but in the context of the
*language* as a whole and not any individual program's performance
profile.  And "this doesn't make things worse for real world programs as
far as we can measure" is a more important criterion for this kind of
language change than "let's do this because we've measured and it makes
things better."
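
For context, an illustrative snippet of what that means for dict
iteration: in Python 3, items() returns a lazy view, so the optimization
that iteritems() provided in Python 2 is simply the default behavior.

    d = {'a': 1, 'b': 2}

    items = d.items()          # a dynamic view, not a new list
    d['c'] = 3
    assert ('c', 3) in items   # the view reflects later changes

    # Materialize a snapshot only when one is actually needed.
    snapshot = list(d.items())
    assert sorted(snapshot) == [('a', 1), ('b', 2), ('c', 3)]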

--David

[*] And yes, *we all do this*.  Sometimes doing it doesn't cost much.
Sometimes it does.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition

2014-12-13 Thread R. David Murray
On Sat, 13 Dec 2014 10:17:59 -0500, Barry Warsaw ba...@python.org wrote:
 On Dec 13, 2014, at 12:29 AM, Donald Stufft wrote:
 
 For what it’s worth, I almost exclusively write 2/3 compatible code (and
 that’s with the “easy” subset of 2.6+ and either 3.2+ or 3.3+) and 
 doing so
 does make the language far less fun for me than when I was writing 2.x only
 code.
 
 For myself, the way I'd put it is:
 
 With the libraries I maintain, I generally write Python 2/3 compatible code,
 targeting Python 2.7 and 3.4, with 2.6, 3.3, and 3.2 support as bonuses,
 although I will not contort too much to support those older versions.  Doing
 so does make the language far less fun for me than when I am writing 3.x only
 code.  All applications I write in pure Python 3, targeting Python 3.4, unless
 my dependencies are not all available in Python 3, or I haven't yet had the
 cycles/resources to port to Python 3.  Writing and maintaining applications in
 Python 2 is far less fun than doing so in Python 3.

I think this is an important distinction.  The considerations are very
different for library maintainers than they are for application
maintainers.  Most of my work is in (customer) applications, and except
for one customer who insists on using an old version of RedHat, I've
been on latest python3 for those for quite a while now.  I suspect we
hear less here from people in that situation than would be proportional
to their absolute numbers.

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] My thinking about the development process

2014-12-08 Thread R. David Murray
On Mon, 08 Dec 2014 12:27:23 -0800, Jim J. Jewett jimjjew...@gmail.com 
wrote:
 Brett Cannon wrote:
  4. Contributor creates account on bugs.python.org and signs the
[contributor agreement](https://www.python.org/psf/contrib/contrib-form/)
 
 Is there an expiration on such forms?  If there doesn't need to be
 (and one form is good for multiple tickets), is there an objection
 (besides "not done yet") to making signed the form part of the bug
 reporter account, and required to submit to the CI process?  (An "I
 can't sign yet, bug me later" option would allow the current workflow
 without the "this isn't technically a patch" workaround for small enough
 patches from those with slow-moving employers.)

No expiration.  Whether or not we have a CLA from a given tracker id
is recorded in the tracker.  People also get reminded to submit a CLA
if they haven't yet but have submitted a patch.

  At best core developers tell a contributor please send your PR
  against 3.4, push-button merge it, update a local clone, merge from
  3.4 to default, do the usual stuff, commit, and then push;
 
 Is it common for a patch that should apply to multiple branches to fail
 on some but not all of them?

Currently?  Yes when 2.7 is involved.  If we fix NEWS, then it won't
be *common* for maint->default, but it will happen.

 In other words, is there any reason beyond "not done yet" that submitting
 a patch (or pull request) shouldn't automatically create a patch per
 branch, with pushbuttons to test/reject/commit?

"Not Done Yet" (by any of the tools we know about) is the only reason I'm
aware of.

  Our code review tool is a fork that probably should be
  replaced as only Martin von Loewis can maintain it.
 
 Only he knows the innards, or only he is authorized, or only he knows
 where the code currently is/how to deploy an update?

Only he knows the innards.  (Although Ezio has made at least one patch
to it.)  I think Guido's point was that we (the community) shouldn't be
maintaining this private fork of a project that has moved on well beyond
us; instead we should be using an active project and leveraging
its community with our own contributions (like we do with Roundup).

 I know that there were times in the (not-so-recent) past when I had
 time and willingness to help with some part of the infrastructure, but
 didn't know where the code was, and didn't feel right making a blind
 offer.

Yeah, that's something that's been getting better lately (thanks,
infrastructure team), but where to get the info is still not as clear as
would be optimal.

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tracker test instances (was: My thinking about the development process)

2014-12-06 Thread R. David Murray
On Sat, 06 Dec 2014 15:21:46 +, Brett Cannon br...@python.org wrote:
 On Sat Dec 06 2014 at 10:07:50 AM Donald Stufft don...@stufft.io wrote:
  On Dec 6, 2014, at 9:11 AM, Brett Cannon br...@python.org wrote:
 
  On Fri Dec 05 2014 at 8:31:27 PM R. David Murray rdmur...@bitdance.com
  wrote:
  That's probably the biggest issue with *anyone* contributing to tracker
  maintenance, and if we could solve that, I think we could get more
  people interested in helping maintain it.  We need the equivalent of
  dev-in-a-box for setting up for testing proposed changes to
  bugs.python.org, but including some standard way to get it deployed so
  others can look at a live system running the change in order to review
  the patch.
 
  Maybe it's just me and all the Docker/Rocket hoopla that's occurred over
  the past week, but this just screams container to me which would make
  getting a test instance set up dead simple.
 
  Heh, one of my thoughts on deploying the bug tracker into production was
  via a container, especially since we have multiple instances of it. I got
  side tracked on getting the rest of the infrastructure readier for a web
  application and some improvements there as well as getting a big postgresql
  database cluster set up (2x 15GB RAM servers running in Primary/Replica
  mode). The downside of course to this is that afaik Docker is a lot harder
  to use on Windows and to some degree OS X than linux. However if the
  tracker could be deployed as a docker image that would make the
  infrastructure side a ton easier. I also have control over the python/
  organization on Docker Hub too for whatever uses we have for it.
 
 
 I think it's something worth thinking about, but like you I don't know if
 the containers work on OS X or Windows (I don't work with containers
 personally).

(Had to fix the quoting there, somebody's email program got it wrong.)

For the tracker, being unable to run a test instance on Windows would
likely not be a severe limitation.  Given how few Windows people we get
making contributions to CPython, I'd really rather encourage them to
work there, rather than on the tracker.  OS X is a bit more problematic,
but it sounds like it is also a bit more doable.

On the other hand, what's the overhead on setting up to use Docker?  If
that task is non-trivial, we're back to having a higher barrier to
entry than running a dev-in-a-box script...

Note also in thinking about setting up a test tracker instance we have
an additional concern: it requires postgres, and needs either a copy of
the full data set (which includes account data/passwords which would
need to be creatively sanitized) or a fairly large test data set.  I'd
prefer a sanitized copy of the real data.

--David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com

