[Python-Dev] Reminder: Python 3.4 feature freeze in a week

2013-11-16 Thread Larry Hastings



We're on schedule to tag Python 3.4 Beta 1 next Saturday.  And when that 
happens we go into "feature freeze" on Python trunk; no new features 
will be accepted in trunk until we branch the 3.4 release branch next 
February.  Time to get those checkins in folks.


Last call, everyone,


//arry/


Re: [Python-Dev] Add transform() and untranform() methods

2013-11-16 Thread Nick Coghlan
On 16 Nov 2013 10:47, "Victor Stinner"  wrote:
>
> 2013/11/16 Nick Coghlan :
> > To address Serhiy's security concerns with the compression codecs (which are
> > technically independent of the question of restoring the aliases), I also
> > plan to document how to systematically blacklist particular codecs in an
> > application by setting attributes on the encodings module and/or appropriate
> > entries in sys.modules.
>
> It would be simpler and safer to blacklist bytes=>bytes and str=>str
> codecs from bytes.decode() and str.encode() directly. Marc-Andre
> Lemburg proposed adding new attributes to CodecInfo to specify input
> and output types.

Yes, but that type compatibility introspection is a change for 3.5 at
the earliest (although I commented on
http://bugs.python.org/issue19619 with two alternate suggestions that
I think would be reasonable to implement for 3.4).

Everything codec related that I am doing at the moment is about
improving the state of 3.4 and source compatible 2/3 code. Proposals
for further 3.5+ only improvements are relevant only in the sense that
I don't want to lock us out from future improvements (which is why my
main aim is to clarify the status quo, with the only functional
changes related to restoring feature parity with Python 2 for
non-Unicode codecs).

> > The only functional *change* I'd still like to make for 3.4 is to restore
> > the shorthand aliases for the non-Unicode codecs (to ease the migration for
> > folks coming from Python 2), but this thread has convinced me I likely need
> > to write the PEP *before* doing that, and I still have to integrate
> > ensurepip into pyvenv before the beta 1 deadline.
> >
> > So unless you and Victor are prepared to +1 the restoration of the codec
> > aliases (closing issue 7475) in anticipation of that codecs infrastructure
> > documentation PEP, the change to restore the aliases probably won't be in
> > 3.4. (I *might* get the PEP written in time regardless, but I'm not betting
> > on it at this point).
>
> Using the StackOverflow search engine, I found some posts where people
> ask for the "hex" codec on Python 3. There are two answers: use the
> binascii module or use codecs.encode(). So even though codecs.encode() was
> never documented, it looks like it is used. So I now agree that documenting
> it would not make the situation worse.

Aye, that was my conclusion (hence my proposal on issue 7475 back in April).
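
For reference, the two routes those StackOverflow answers point at, as they
work on current Python 3 (shown with the full "hex_codec" name, since the
short "hex" alias is exactly what's under discussion here):

>>> import binascii, codecs
>>> binascii.hexlify(b"hello")                  # the documented binascii route
b'68656c6c6f'
>>> binascii.unhexlify(b"68656c6c6f")
b'hello'
>>> codecs.encode(b"hello", "hex_codec")        # the undocumented codecs route
b'68656c6c6f'
>>> codecs.decode(b"68656c6c6f", "hex_codec")
b'hello'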

Can I take that observation as a +1 for restoring the aliases as well?
(That, and more efficiently rejecting the non-Unicode codecs from
str.encode, bytes.decode and bytearray.decode, are the only aspects of this
that are subject to the beta 1 deadline - we can be a bit more leisurely
when it comes to working out the details of the docs updates.)

> Adding transform()/untransform() methods to bytes and str is a
> non-trivial change, and not everybody likes them. Anyway, it's too late
> for Python 3.4.
>
> In my opinion, the best option is to add new input_type/output_type
> attributes to CodecInfo right now, and modify the codecs so
> "abc".encode("hex") raises a LookupError (instead of tricky error
> message with some evil low-level hacks on the traceback and the
> exception, which is my initial concern in this mail thread). It fixes
> also the security vulnerability.

The C level code for catching the input type errors only looks evil because:

- the C level equivalent of "except Exception as Y: raise X from Y"
is just plain ugly in the first place
- the chaining includes a *lot* of checks of the original exception to
ensure that no data is lost by raising a new instance of the same
exception type and chaining
- it chains ValueError, AttributeError and any other currently
stateless (aside from a str description) error the codec might throw,
not just input type validation errors (it deliberately doesn't chain
stateful errors as doing so might be backwards incompatible with
existing error handling).
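
A rough Python level sketch of what that C helper effectively does (an
illustrative sketch only, not the actual CPython code; "encoder", "data" and
"codec_name" are stand-ins):

try:
    result = encoder(data)
except Exception as exc:
    stateless = len(exc.args) == 1 and isinstance(exc.args[0], str)
    if stateless:
        # safe to re-raise the same type with extra context, chained to the original
        wrapped = type(exc)("%s (while encoding with the %r codec)"
                            % (exc.args[0], codec_name))
        raise wrapped from exc
    raise  # stateful errors are passed through untouched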

However, the ugliness of that code is the reason I'm intrigued by the
possibility of traceback annotations as a potentially cleaner solution
than trying to seamlessly wrap exceptions with a new one that adds
more context information. While I think the gain in codec
debuggability is worth it in this case, my concern over the complexity
and the current limitations are the reason I didn't make it a public C
API.

> To keep backward compatibility (even with custom codecs registered
> manually), if input_type/output_type is not defined, we should
> consider that the codec is a classical text encoding (encode
> str=>bytes, decode bytes=>str).

Without an already existing ByteSequence ABC, it isn't feasible to
propose and implement this completely in the 3.4 time frame (since you
would need such an ABC to express the input type accepted by our
Unicode and binary codecs - the only one that wouldn't need it is
rot_13, since that's str->str).

However, the output types could be expressed solely as concrete types,
and that's all we need for the blacklist (sinc

Re: [Python-Dev] The pysandbox project is broken

2013-11-16 Thread Nick Coghlan
On 16 Nov 2013 11:35, "Christian Tismer"  wrote:
> IOW: Do we really need a full abstraction, embedded in a virtual OS, or
> is there already a compromise that suits 98 percent of the common needs?
>
> I think as a starter, categorizing the expectations of some measure of 
> 'secure python'
> would make sense. And I'm asking the people with better knowledge of these 
> matters
> than I have. (and not asking those who don't... ;-) )

The litany of vulnerability reports against the Java sandbox has long
confirmed my impression that secure sandboxing is a hard, not
completely solved problem, best left to better resourced platform
developers (or at least taking the appropriate steps to benefit from
their work).

A self-hosted language runtime level sandbox is, at best, a first line
of defence that protects against basic, naive attacks. One of the
assumptions I see from the folks working on operating system, virtual
machine and container security is that the sandboxes *will* be
compromised at some point, so you have to make sure you understand what
the consequences of those breaches will be, and the best answer is
"they run into the next line of defence, so the only thing they have
gained is the ability to attack that".

In terms of in-process sandboxing of CPython (*at all*, let alone
self-hosted), we're currently missing some key foundational
components:

- the ability for a host process to cleanly configure the capabilities
of an embedded CPython interpreter (that's what PEP 432 is all about)
- elimination of all of the mechanisms by which hostile untrusted code
can trigger a segfault in the runtime (any segfault bug can reasonably
be assumed to be a security vulnerability waiting to be exploited, the
only question is whether the CPython runtime is part of the exposed
attack surface, and what the consequences are of compromising the
runtime). While Victor Stinner's recent work with failmalloc has been
a big step forward here, as have been various other changes in the
CPython code base (like adding recursion depth constraints to the
compiler toolchain), we're still a long way from being able to say
"CPython cannot be segfaulted by legal Python code that doesn't use
ctypes or an equivalent FFI library".

This is why I share Guido's (and the PyPy team's) view that secure,
cross-platform sandboxing of (C)Python is currently not possible.
Secure in-process sandboxing is hard even for languages like Lua,
JavaScript and Java that were designed from the ground up with
sandboxing in mind - sure, you can lock things down to the point where
untrusted code assuredly can't do any damage, but it often can't do
anything *useful* in that state, either.

By contrast, the PyPy sandbox model which uses a deliberately
constrained runtime to execute untrusted code in an OS level process
that is designed to only permit communication with the parent process
is *exactly* the kind of paranoid defence-in-depth approach that
should be employed when running untrusted code. Ideally, all of the
platform level "this child process is not allowed to do anything
except talk to me over stdin and stdout" would also be brought to bear
on the sandboxed runtime, so that as yet undiscovered vulnerabilities
in the PyPy sandbox don't result in a system compromise.

Anyone interested in sandboxing of Python code would be well-advised
to direct their efforts towards the parent process bindings for
http://doc.pypy.org/en/latest/sandbox.html, as well as identifying the
associated platform specific settings to lock out the child process
from all system access except communication with the parent process
over the standard streams.
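
As a very rough illustration of that parent/child shape (a sketch only, with
a hypothetical sandboxed interpreter command, not the actual PyPy bindings;
a real deployment would add rlimits, seccomp, jails and similar OS level
restrictions on top):

import subprocess

child = subprocess.Popen(
    ["pypy-sandbox", "untrusted_script.py"],  # hypothetical sandboxed interpreter
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    close_fds=True,   # child inherits nothing beyond the standard streams
)
child.stdin.write(b"data for the untrusted code\n")
child.stdin.flush()
reply = child.stdout.readline()   # the parent mediates every exchange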

Cheers,
Nick.


Re: [Python-Dev] Add transform() and untranform() methods

2013-11-16 Thread Victor Stinner
Why not use the str type for str and str subtypes, and the bytes type for
bytes and bytes-like objects (bytearray, memoryview)? I don't think that we
need an ABC here.

Victor
Le 16 nov. 2013 10:44, "Nick Coghlan"  a écrit :

> On 16 Nov 2013 10:47, "Victor Stinner"  wrote:
> >
> > 2013/11/16 Nick Coghlan :
> > > To address Serhiy's security concerns with the compression codecs
> (which are
> > > technically independent of the question of restoring the aliases), I
> also
> > > plan to document how to systematically blacklist particular codecs in
> an
> > > application by setting attributes on the encodings module and/or
> appropriate
> > > entries in sys.modules.
> >
> > It would be simpler and safer to blacklist bytes=>bytes and str=>str
> > codecs from bytes.decode() and str.encode() directly. Marc-Andre
> > Lemburg proposed adding new attributes to CodecInfo to specify input
> > and output types.
>
> Yes, but that type compatibility introspection is a change for 3.5 at
> the earliest (although I commented on
> http://bugs.python.org/issue19619 with two alternate suggestions that
> I think would be reasonable to implement for 3.4).
>
> Everything codec related that I am doing at the moment is about
> improving the state of 3.4 and source compatible 2/3 code. Proposals
> for further 3.5+ only improvements are relevant only in the sense that
> I don't want to lock us out from future improvements (which is why my
> main aim is to clarify the status quo, with the only functional
> changes related to restoring feature parity with Python 2 for
> non-Unicode codecs).
>
> > > The only functional *change* I'd still like to make for 3.4 is to
> restore
> > > the shorthand aliases for the non-Unicode codecs (to ease the
> migration for
> > > folks coming from Python 2), but this thread has convinced me I likely
> need
> > > to write the PEP *before* doing that, and I still have to integrate
> > > ensurepip into pyvenv before the beta 1 deadline.
> > >
> > > So unless you and Victor are prepared to +1 the restoration of the
> codec
> > > aliases (closing issue 7475) in anticipation of that codecs
> infrastructure
> > > documentation PEP, the change to restore the aliases probably won't be
> in
> > > 3.4. (I *might* get the PEP written in time regardless, but I'm not
> betting
> > > on it at this point).
> >
> > Using the StackOverflow search engine, I found some posts where people
> > ask for the "hex" codec on Python 3. There are two answers: use the
> > binascii module or use codecs.encode(). So even though codecs.encode() was
> > never documented, it looks like it is used. So I now agree that documenting
> > it would not make the situation worse.
>
> Aye, that was my conclusion (hence my proposal on issue 7475 back in
> April).
>
> Can I take that observation as a +1 for restoring the aliases as well?
> (That, and more efficiently rejecting the non-Unicode codecs from
> str.encode, bytes.decode and bytearray.decode, are the only aspects of this
> that are subject to the beta 1 deadline - we can be a bit more leisurely
> when it comes to working out the details of the docs updates.)
>
> > Adding transform()/untransform() methods to bytes and str is a
> > non-trivial change, and not everybody likes them. Anyway, it's too late
> > for Python 3.4.
> >
> > In my opinion, the best option is to add new input_type/output_type
> > attributes to CodecInfo right now, and modify the codecs so
> > "abc".encode("hex") raises a LookupError (instead of tricky error
> > message with some evil low-level hacks on the traceback and the
> > exception, which is my initial concern in this mail thread). It fixes
> > also the security vulnerability.
>
> The C level code for catching the input type errors only looks evil
> because:
>
> - the C level equivalent of "except Exception as Y: raise X from Y"
> is just plain ugly in the first place
> - the chaining includes a *lot* of checks of the original exception to
> ensure that no data is lost by raising a new instance of the same
> exception type and chaining
> - it chains ValueError, AttributeError and any other currently
> stateless (aside from a str description) error the codec might throw,
> not just input type validation errors (it deliberately doesn't chain
> stateful errors as doing so might be backwards incompatible with
> existing error handling).
>
> However, the ugliness of that code is the reason I'm intrigued by the
> possibility of traceback annotations as a potentially cleaner solution
> than trying to seamlessly wrap exceptions with a new one that adds
> more context information. While I think the gain in codec
> debuggability is worth it in this case, my concern over the complexity
> and the current limitations are the reason I didn't make it a public C
> API.
>
> > To keep backward compatibility (even with custom codecs registered
> > manually), if input_type/output_type is not defined, we should
> > consider that the codec is a classical text encoding (encode
> > str=>bytes, decode bytes=>str).
>
> With

Re: [Python-Dev] The pysandbox project is broken

2013-11-16 Thread Maciej Fijalkowski
On Sat, Nov 16, 2013 at 12:12 PM, Nick Coghlan  wrote:
> On 16 Nov 2013 11:35, "Christian Tismer"  wrote:
>> IOW: Do we really need a full abstraction, embedded in a virtual OS, or
>> is there already a compromise that suits 98 percent of the common needs?
>>
>> I think as a starter, categorizing the expectations of some measure of 
>> 'secure python'
>> would make sense. And I'm asking the people with better knowledge of these 
>> matters
>> than I have. (and not asking those who don't... ;-) )
>
> The litany of vulnerability reports against the Java sandbox has long
> confirmed my impression that secure sandboxing is a hard, not
> completely solved problem, best left to better resourced platform
> developers (or at least taking the appropriate steps to benefit from
> their work).
>
> A self-hosted language runtime level sandbox is, at best, a first line
> of defence that protects against basic, naive attacks. One of the
> assumptions I see from the folks working on operating system, virtual
> machine and container security is that the sandboxes *will* be
> compromised at some point, so you have to make sure you understand what
> the consequences of those breaches will be, and the best answer is
> "they run into the next line of defence, so the only thing they have
> gained is the ability to attack that".
>
> In terms of in-process sandboxing of CPython (*at all*, let alone
> self-hosted), we're currently missing some key foundational
> components:
>
> - the ability for a host process to cleanly configure the capabilities
> of an embedded CPython interpreter (that's what PEP 432 is all about)
> - elimination of all of the mechanisms by which hostile untrusted code
> can trigger a segfault in the runtime (any segfault bug can reasonably
> be assumed to be a security vulnerability waiting to be exploited, the
> only question is whether the CPython runtime is part of the exposed
> attack surface, and what the consequences are of compromising the
> runtime). While Victor Stinner's recent work with failmalloc has been
> a big step forward here, as have been various other changes in the
> CPython code base (like adding recursion depth constraints to the
> compiler toolchain), we're still a long way from being able to say
> "CPython cannot be segfaulted by legal Python code that doesn't use
> ctypes or an equivalent FFI library".
>
> This is why I share Guido's (and the PyPy team's) view that secure,
> cross-platform sandboxing of (C)Python is currently not possible.
> Secure in-process sandboxing is hard even for languages like Lua,
> JavaScript and Java that were designed from the ground up with
> sandboxing in mind - sure, you can lock things down to the point where
> untrusted code assuredly can't do any damage, but it often can't do
> anything *useful* in that state, either.
>
> By contrast, the PyPy sandbox model which uses a deliberately
> constrained runtime to execute untrusted code in an OS level process
> that is designed to only permit communication with the parent process
> is *exactly* the kind of paranoid defence-in-depth approach that
> should be employed when running untrusted code. Ideally, all of the
> platform level "this child process is not allowed to do anything
> except talk to me over stdin and stdout" would also be brought to bear
> on the sandboxed runtime, so that as yet undiscovered vulnerabilities
> in the PyPy sandbox don't result in a system compromise.
>
> Anyone interested in sandboxing of Python code would be well-advised
> to direct their efforts towards the parent process bindings for
> http://doc.pypy.org/en/latest/sandbox.html, as well as identifying the
> associated platform specific settings to lock out the child process
> from all system access except communication with the parent process
> over the standard streams.

Note, Nick, that running the untrusted code in a child process (as opposed
to having two different Pythons running in the same process) is really
not a limitation of the approach. It's just that it's a proof of
concept and various other options are also possible; no one just seems
interested in pursuing them. Additional OS level blocking really only
guards against potential segfaults, since we know that no I/O is possible
from the inner process. A JIT-less PyPy sandbox can be made very secure by
marking the executable pages as non-writable (we know the code does not do
any I/O).

Cheers,
fijal


Re: [Python-Dev] The pysandbox project is broken

2013-11-16 Thread Maciej Fijalkowski
On Fri, Nov 15, 2013 at 6:56 PM, Trent Nelson  wrote:
> On Tue, Nov 12, 2013 at 01:16:55PM -0800, Victor Stinner wrote:
>> pysandbox cannot be used in practice
>> 
>>
>> To protect the untrusted namespace, pysandbox installs a lot of
>> different protections. Because of all these protections, it becomes
>> hard to write Python code. Basic features like "del dict[key]" are
>> denied. Passing an object to the sandbox is not possible: pysandbox
>> is unable to proxify arbitrary objects.
>>
>> For something more complex than evaluating "1+(2*3)", pysandbox cannot
>> be used in practice, because of all these protections. Individual
>> protections cannot be disabled, all protections are required to get a
>> secure sandbox.
>
> This sounds a lot like the work I initially did with PyParallel to
> try and intercept/prevent parallel threads mutating main-thread
> objects.
>
> I ended up arriving at a much better solution by just relying on
> memory protection; main thread pages are set read-only prior to
> parallel threads being able to run.  If a parallel thread attempts
> to mutate a main thread object, an SEH is raised (SIGSEGV on POSIX),
> which I catch in the ceval loop and convert into an exception.
>
> See slide 138 of this: 
> https://speakerdeck.com/trent/pyparallel-how-we-removed-the-gil-and-exploited-all-cores-1
>
> I'm wondering if this sort of an approach (which worked surprisingly
> well) could be leveraged to also provide a sandbox environment?  The
> goals are the same: robust protection against mutation of memory
> allocated outside of the sandbox.
>
> (I'm purely talking about memory mutation; haven't thought about how
>  that could be extended to prevent file system interaction as well.)
>
>
> Trent.
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/fijall%40gmail.com

Trent, you should read the mail more carefully. Notably, the same
issues that make it impossible to create a sandbox make it impossible
to make PyParallel really work. Being read-only is absolutely not
enough - you can read some internal structures in an inconsistent state,
which can lead to crashes and/or very unexpected behavior even without
modifying anything.

PS. We really did a lot of work analyzing how STM-pypy can lead to
conflicts and/or inconsistent behavior.

Cheers,
fijal


Re: [Python-Dev] Add transform() and untranform() methods

2013-11-16 Thread Nick Coghlan
On 16 November 2013 20:45, Victor Stinner  wrote:
> Why not use the str type for str and str subtypes, and the bytes type for
> bytes and bytes-like objects (bytearray, memoryview)? I don't think that we
> need an ABC here.

We'd only need an ABC if info was added for supported input types.
However, that's not necessary since "encodes_to" and "decodes_to"  are
enough to identify Unicode encodings: "encodes_to in (None, bytes) and
decodes_to in (None, str)", so we don't need to track input type
support at all if the main question we want to answer is "is this a
Unicode codec or not?".

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Add transform() and untranform() methods

2013-11-16 Thread M.-A. Lemburg
On 16.11.2013 01:47, Victor Stinner wrote:
> Adding transform()/untransform() methods to bytes and str is a
> non-trivial change, and not everybody likes them. Anyway, it's too late
> for Python 3.4.

Just to clarify: I still like the idea of adding those methods.

I just don't see what this addition has to do with the codecs.encode()/
.decode() functions.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Nov 16 2013)
>>> Python Projects, Consulting and Support ...   http://www.egenix.com/
>>> mxODBC.Zope/Plone.Database.Adapter ...   http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/

2013-11-19: Python Meeting Duesseldorf ...  3 days to go

: Try our mxODBC.Connect Python Database Interface for free ! ::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/


Re: [Python-Dev] Add transform() and untranform() methods

2013-11-16 Thread Antoine Pitrou
On Sat, 16 Nov 2013 19:44:51 +1000
Nick Coghlan  wrote:
> 
> Aye, that was my conclusion (hence my proposal on issue 7475 back in April).
> 
> Can I take that observation as a +1 for restoring the aliases as well?

I see no harm in restoring the aliases personally, so +1 from me.

Regards

Antoine.


Re: [Python-Dev] Add transform() and untranform() methods

2013-11-16 Thread Nick Coghlan
On 16 November 2013 21:38, Nick Coghlan  wrote:
> On 16 November 2013 20:45, Victor Stinner  wrote:
>> Why not use the str type for str and str subtypes, and the bytes type for
>> bytes and bytes-like objects (bytearray, memoryview)? I don't think that we
>> need an ABC here.
>
> We'd only need an ABC if info was added for supported input types.
> However, that's not necessary since "encodes_to" and "decodes_to"  are
> enough to identify Unicode encodings: "encodes_to in (None, bytes) and
> decodes_to in (None, str)", so we don't need to track input type
> support at all if the main question we want to answer is "is this a
> Unicode codec or not?".

I realised I misunderstood your proposal because of the field names
you initially suggested. I've now proposed a variation with different
field names (encodes_to instead of output_type and decodes_to instead
of input_type) and a "codecs.is_text_encoding" query function
(http://bugs.python.org/issue19619?#msg203037)
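
As a rough sketch, the query function could look something like this (the
encodes_to/decodes_to fields are the ones proposed on the issue - they don't
exist yet, so this is illustrative only):

import codecs

def is_text_encoding(name):
    info = codecs.lookup(name)
    # missing fields are treated as "classic text codec" for backwards compatibility
    encodes_to = getattr(info, "encodes_to", None)
    decodes_to = getattr(info, "decodes_to", None)
    return encodes_to in (None, bytes) and decodes_to in (None, str)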

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Add transform() and untranform() methods

2013-11-16 Thread Nick Coghlan
On 16 November 2013 21:49, M.-A. Lemburg  wrote:
> On 16.11.2013 01:47, Victor Stinner wrote:
>> Adding transform()/untransform() methods to bytes and str is a
>> non-trivial change, and not everybody likes them. Anyway, it's too late
>> for Python 3.4.
>
> Just to clarify: I still like the idea of adding those methods.
>
> I just don't see what this addition has to do with the codecs.encode()/
> .decode() functions.

Part of the interest here is in making Python 3 better compete with
the ease of the following in Python 2:

>>> "68656c6c6f".decode("hex")
'hello'
>>> "hello".encode("hex")
'68656c6c6f'

Until recently, I (and others) thought the best Python 3 had to offer was:

>>> import codecs
>>> codecs.getencoder("hex")(b"hello")[0]
b'68656c6c6f'
>>> codecs.getdecoder("hex")(b"68656c6c6f")[0]
b'hello'

In reality, though, Python 3 has always supported the following, it
just wasn't documented so I (and others) didn't know it had actually
been available as an alternative interface to the codecs machinery
since Python 2.4:

>>> from codecs import encode, decode
>>> encode(b"hello", "hex")
b'68656c6c6f'
>>> decode(b"68656c6c6f", "hex")
b'hello'

That's almost as clean as the Python 2 version; it just requires the
initial import of the convenience functions from the codecs module.
The fact that it is supported in Python 2 means that 2/3 compatible codecs
can also use it.

Accordingly, I now see ensuring that everyone has a common
understanding of *what is already available* as an essential next
step, and will only then consider significant changes in the codecs
mechanisms*. I know I learned a hell of a lot about the distinction
between the type agnostic codec infrastructure and the Unicode text
model over the past several months, and I think this thread shows
clearly that there's still a lot of confusion over the matter, even
amongst core developers. That's a problem, and something we need to
fix before giving further consideration to the transform/untransform
idea.

*(Victor's proposal in issue 19619 is actually relatively modest, now
that I understand it properly, and entails taking the existing output
type checks and making it possible to do them in advance, without
touching input type checks)

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Maciej Fijalkowski
On Sat, Nov 16, 2013 at 3:51 AM, Terry Reedy  wrote:
> http://bugs.python.org/issue19562
> propose to change the first assert in Lib/datetime.py
>   assert 1 <= month <= 12, month
> to
>   assert 1 <= month <= 12,'month must be in 1..12'
> to match the next two asserts out of the *53* in the file. I think that is
> the wrong direction of change, but that is not my question here.
>
> Should stdlib code use assert at all?
>
> If user input can trigger an assert, then the code should raise a normal
> exception that will not disappear with -OO.
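
The contrast Terry is drawing, sketched with hypothetical validation helpers
(not the actual Lib/datetime.py code):

def _check_month_assert(month):
    # disappears entirely under -O and -OO
    assert 1 <= month <= 12, 'month must be in 1..12'

def _check_month_raise(month):
    # survives -O and -OO, so user input can never bypass it
    if not 1 <= month <= 12:
        raise ValueError('month must be in 1..12')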

May I assert that -OO should instead be killed?


Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Nick Coghlan
On 16 November 2013 23:17, Maciej Fijalkowski  wrote:
> On Sat, Nov 16, 2013 at 3:51 AM, Terry Reedy  wrote:
>> If user input can trigger an assert, then the code should raise a normal
>> exception that will not disappear with -OO.
>
> May I assert that -OO should instead be killed?

"I don't care about embedded devices" is not a good rationale for
killing features that really only benefit people running Python on
such systems.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Maciej Fijalkowski
On Sat, Nov 16, 2013 at 5:09 PM, Nick Coghlan  wrote:
> On 16 November 2013 23:17, Maciej Fijalkowski  wrote:
>> On Sat, Nov 16, 2013 at 3:51 AM, Terry Reedy  wrote:
>>> If user input can trigger an assert, then the code should raise a normal
>>> exception that will not disappear with -OO.
>>
>> May I assert that -OO should instead be killed?
>
> "I don't care about embedded devices" is not a good rationale for
> killing features that really only benefit people running Python on
> such systems.
>
> Cheers,
> Nick.

Can I see some writeup how -OO benefit embedded devices?


Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Maciej Fijalkowski
On Sat, Nov 16, 2013 at 5:33 PM, Maciej Fijalkowski  wrote:
> On Sat, Nov 16, 2013 at 5:09 PM, Nick Coghlan  wrote:
>> On 16 November 2013 23:17, Maciej Fijalkowski  wrote:
>>> On Sat, Nov 16, 2013 at 3:51 AM, Terry Reedy  wrote:
 If user input can trigger an assert, then the code should raise a normal
 exception that will not disappear with -OO.
>>>
>>> May I assert that -OO should instead be killed?
>>
>> "I don't care about embedded devices" is not a good rationale for
>> killing features that really only benefit people running Python on
>> such systems.
>>
>> Cheers,
>> Nick.
>
> Can I see some writeup how -OO benefit embedded devices?

Or, more importantly, how removing asserts does. And why it shouldn't
be named --remove-asserts instead (people really believe -O optimizes
their code).


Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Antoine Pitrou
On Sat, 16 Nov 2013 17:34:15 +0200
Maciej Fijalkowski  wrote:
> On Sat, Nov 16, 2013 at 5:33 PM, Maciej Fijalkowski  wrote:
> > On Sat, Nov 16, 2013 at 5:09 PM, Nick Coghlan  wrote:
> >> On 16 November 2013 23:17, Maciej Fijalkowski  wrote:
> >>> On Sat, Nov 16, 2013 at 3:51 AM, Terry Reedy  wrote:
>  If user input can trigger an assert, then the code should raise a normal
>  exception that will not disappear with -OO.
> >>>
> >>> May I assert that -OO should instead be killed?
> >>
> >> "I don't care about embedded devices" is not a good rationale for
> >> killing features that really only benefit people running Python on
> >> such systems.
> >>
> >> Cheers,
> >> Nick.
> >
> > Can I see some writeup how -OO benefit embedded devices?
> 
> Or, more importantly, how removing asserts does. And why it shouldn't
> be named --remove-asserts instead (people really believe -O optimizes
> their code)

I agree that conflating the two doesn't help the discussion.
While removing docstrings may be beneficial on memory-constrained
devices, I can't remember a single situation where I've wanted to
remove asserts on a production system.

(I also tend to write fewer and fewer asserts in production code, since
all of them tend to go in unit tests instead, with the help of e.g.
mock objects)

Regards

Antoine.




Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Nick Coghlan
On 17 November 2013 01:34, Maciej Fijalkowski  wrote:
> On Sat, Nov 16, 2013 at 5:33 PM, Maciej Fijalkowski  wrote:
>> On Sat, Nov 16, 2013 at 5:09 PM, Nick Coghlan  wrote:
>>> On 16 November 2013 23:17, Maciej Fijalkowski  wrote:
 On Sat, Nov 16, 2013 at 3:51 AM, Terry Reedy  wrote:
> If user input can trigger an assert, then the code should raise a normal
> exception that will not disappear with -OO.

 May I assert that -OO should instead be killed?
>>>
>>> "I don't care about embedded devices" is not a good rationale for
>>> killing features that really only benefit people running Python on
>>> such systems.
>>>
>>> Cheers,
>>> Nick.
>>
>> Can I see some writeup how -OO benefit embedded devices?
>
> Or, more importantly, how removing asserts does. And why it shouldn't
> be named --remove-asserts instead (people really believe -O optimizes
> their code)

No, that's the wrong question to ask. The onus is on *you* to ask "Who
is this feature for? Do they still need it? Can we meet their needs in
a different way?". You're the one proposing to break things, so it's
up to you to make the case for why that's an OK thing to do.

And until you ask those questions, and openly and honestly do the
research to answer them (rather than assuming the answer you want),
and can provide evidence of having done so, then it's entirely
reasonable for me to dismiss the suggestion as you saying "this
doesn't benefit me, so it doesn't benefit anyone, so it's OK to get
rid of it".

That's not the way this works - backwards compatibility is sacrosanct,
and it requires some seriously compelling evidence to justify a
breach. (This even applies to the Python 3 transition: the really
annoying discrepancies between Python 2 and 3 are the ones where we
allowed a backwards compatibility break without adequate
justification, but now we're locked in to the decision due to internal
backwards compatibility constraints within the Python 3 series).

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Nick Coghlan
On 17 November 2013 01:46, Antoine Pitrou  wrote:
> I agree that conflating the two doesn't help the discussion.
> While removing docstrings may be beneficial on memory-constrained
> devices, I can't remember a single situation where I've wanted to
> remove asserts on a production system.

While I actually agree that having separate flags for --omit-debug,
--omit-asserts and --omit-docstrings would make more sense than the
current optimization levels, Maciej first proposed killing off -OO
(where the most significant effect is removing docstrings which can
result in substantial program footprint reductions for embedded
systems), and only later switched to asking about removing asserts
(part of -O, which also removes blocks guarded by "if __debug__", both
of which help embedded systems preserve precious ROM space, although
to a lesser degree than removing docstrings can save RAM).
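
Concretely, what those existing levels strip (a small illustrative sketch):

def frobnicate(x):
    """Docstring: stripped under -OO, which is the footprint saving mentioned above."""
    assert x >= 0, "stripped under -O and -OO"
    if __debug__:
        # this whole block is compiled away under -O and -OO
        print("running extra debug checks")
    return x * 2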

One of the most important questions to ask when proposing the removal
of something is "What replacement are we offering for those users that
actually need (or even just think they need) this feature?". Sometimes
the answer is "Nothing", sometimes it's something that only covers a
subset of previous use cases, and sometimes it's a complete functional
equivalent with an improved spelling. But not asking the question at
all (or, worse, dismissing the concerns of affected users as
irrelevant and uninteresting) is a guaranteed way to annoy the very
people that actually rely on the feature that is up for removal or
replacement, when you *really* want them engaged and clearly
explaining their use cases.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Donald Stufft
Personally I think that none of the -O* should be removing asserts. It feels
like a foot gun to me. I’ve seen more than one codebase that would be
completely broken under -O* because they used asserts without even knowing
-O* existed.

Removing __debug__ blocks and docstrings I don't think is as big of a deal,
although removing docstrings can break code as well.

On Nov 16, 2013, at 11:08 AM, Nick Coghlan  wrote:

> On 17 November 2013 01:46, Antoine Pitrou  wrote:
>> I agree that conflating the two doesn't help the discussion.
>> While removing docstrings may be beneficial on memory-constrained
>> devices, I can't remember a single situation where I've wanted to
>> remove asserts on a production system.
> 
> While I actually agree that having separate flags for --omit-debug,
> --omit-asserts and --omit-docstrings would make more sense than the
> current optimization levels, Maciej first proposed killing off -OO
> (where the most significant effect is removing docstrings which can
> result in substantial program footprint reductions for embedded
> systems), and only later switched to asking about removing asserts
> (part of -O, which also removes blocks guarded by "if __debug__", both
> of which help embedded systems preserve precious ROM space, although
> to a lesser degree than removing docstrings can save RAM).
> 
> One of the most important questions to ask when proposing the removal
> of something is "What replacement are we offering for those users that
> actually need (or even just think they need) this feature?". Sometimes
> the answer is "Nothing", sometimes it's something that only covers a
> subset of previous use cases, and sometimes it's a complete functional
> equivalent with an improved spelling. But not asking the question at
> all (or, worse, dismissing the concerns of affected users as
> irrelevant and uninteresting) is a guaranteed way to annoy the very
> people that actually rely on the feature that is up for removal or
> replacement, when you *really* want them engaged and clearly
> explaining their use cases.
> 
> Cheers,
> Nick.
> 
> -- 
> Nick Coghlan   |   [email protected]   |   Brisbane, Australia
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/donald%40stufft.io


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Antoine Pitrou
On Sat, 16 Nov 2013 11:16:48 -0500
Donald Stufft  wrote:
> Personally I think that none of the -O* should be removing asserts. It feels
> like a foot gun to me. I’ve seen more than one codebase that would be
> completely broken under -O* because they used asserts without even knowing
> -O* existed.

Originally it was probably done so as to mimick C, where compiling in
optimized mode will indeed disable any assert()s in the code.

Regards

Antoine.


Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Gregory Salvan
Hi,
Some languages (C#, Java) do the reverse: they remove assertions unless we
tell the compiler to keep them.
Personally, I find this approach reasonable, as I expect assertions not to
be run in production.

It would be painful to introduce this behaviour in Python now, but I hope
we'll keep a way to remove assertions, and I find the idea of specific
flags (--omit-debug, --omit-asserts and --omit-docstrings) interesting.


cheers,
Grégory


2013/11/16 Donald Stufft 

> Personally I think that none of the -O* should be removing asserts. It
> feels
> like a foot gun to me. I’ve seen more than one codebase that would be
> completely broken under -O* because they used asserts without even knowing
> -O* existed.
>
> Removing __debug__ blocks and docstrings I don't think is as big of a deal,
> although removing docstrings can break code as well.
>
> On Nov 16, 2013, at 11:08 AM, Nick Coghlan  wrote:
>
> > On 17 November 2013 01:46, Antoine Pitrou  wrote:
> >> I agree that conflating the two doesn't help the discussion.
> >> While removing docstrings may be beneficial on memory-constrained
> >> devices, I can't remember a single situation where I've wanted to
> >> remove asserts on a production system.
> >
> > While I actually agree that having separate flags for --omit-debug,
> > --omit-asserts and --omit-docstrings would make more sense than the
> > current optimization levels, Maciej first proposed killing off -OO
> > (where the most significant effect is removing docstrings which can
> > result in substantial program footprint reductions for embedded
> > systems), and only later switched to asking about removing asserts
> > (part of -O, which also removes blocks guarded by "if __debug__", both
> > of which help embedded systems preserve precious ROM space, although
> > to a lesser degree than removing docstrings can save RAM).
> >
> > One of the most important questions to ask when proposing the removal
> > of something is "What replacement are we offering for those users that
> > actually need (or even just think they need) this feature?". Sometimes
> > the answer is "Nothing", sometimes it's something that only covers a
> > subset of previous use cases, and sometimes it's a complete functional
> > equivalent with an improved spelling. But not asking the question at
> > all (or, worse, dismissing the concerns of affected users as
> > irrelevant and uninteresting) is a guaranteed way to annoy the very
> > people that actually rely on the feature that is up for removal or
> > replacement, when you *really* want them engaged and clearly
> > explaining their use cases.
> >
> > Cheers,
> > Nick.
> >
> > --
> > Nick Coghlan   |   [email protected]   |   Brisbane, Australia
> > ___
> > Python-Dev mailing list
> > [email protected]
> > https://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/donald%40stufft.io
>
>
> -
> Donald Stufft
> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372
> DCFA
>
>
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/apieum%40gmail.com
>
>


Re: [Python-Dev] The pysandbox project is broken

2013-11-16 Thread Trent Nelson
On Sat, Nov 16, 2013 at 02:53:22AM -0800, Maciej Fijalkowski wrote:
> On Fri, Nov 15, 2013 at 6:56 PM, Trent Nelson  wrote:
> > On Tue, Nov 12, 2013 at 01:16:55PM -0800, Victor Stinner wrote:
> >> pysandbox cannot be used in practice
> >> 
> >>
> >> To protect the untrusted namespace, pysandbox installs a lot of
> >> different protections. Because of all these protections, it becomes
> >> hard to write Python code. Basic features like "del dict[key]" are
> >> denied. Passing an object to the sandbox is not possible: pysandbox
> >> is unable to proxify arbitrary objects.
> >>
> >> For something more complex than evaluating "1+(2*3)", pysandbox cannot
> >> be used in practice, because of all these protections. Individual
> >> protections cannot be disabled, all protections are required to get a
> >> secure sandbox.
> >
> > This sounds a lot like the work I initially did with PyParallel to
> > try and intercept/prevent parallel threads mutating main-thread
> > objects.
> >
> > I ended up arriving at a much better solution by just relying on
> > memory protection; main thread pages are set read-only prior to
> > parallel threads being able to run.  If a parallel thread attempts
> > to mutate a main thread object, an SEH is raised (SIGSEGV on POSIX),
> > which I catch in the ceval loop and convert into an exception.
> >
> > See slide 138 of this: 
> > https://speakerdeck.com/trent/pyparallel-how-we-removed-the-gil-and-exploited-all-cores-1
> >
> > I'm wondering if this sort of an approach (which worked surprisingly
> > well) could be leveraged to also provide a sandbox environment?  The
> > goals are the same: robust protection against mutation of memory
> > allocated outside of the sandbox.
> >
> > (I'm purely talking about memory mutation; haven't thought about how
> >  that could be extended to prevent file system interaction as well.)
> >
> >
> > Trent.
> > ___
> > Python-Dev mailing list
> > [email protected]
> > https://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe: 
> > https://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
> 
> Trent, you should read the mail more carefully. Notably, the same
> issues that make it impossible to create a sandbox make it impossible
> to make PyParallel really work. Being read-only is absolutely not
> enough - you can read some internal structures in an inconsistent state,
> which can lead to crashes and/or very unexpected behavior even without
> modifying anything.

What do you mean by inconsistent state?  Like a dict half way
through `a['foo'] = 'bar'`?  That can't happen with PyParallel;
parallel threads don't run when the main thread runs and vice
versa.  The main thread's memory (and internal object structure)
will always be consistent by the time the parallel threads run.

> PS. We really did a lot of work analyzing how STM-pypy can lead to
> conflicts and/or inconsistent behavior.

But you support free-threading though, right?  As in, code that
subclasses threading.Thread should be able to benefit from your
STM work?

I explicitly don't support free-threading.  Your threading.Thread
code will not magically run faster with PyParallel.  You'll need
to re-write your code using the parallel and async façade APIs I
expose.

On the plus side, I can completely control everything about the
main thread and parallel thread execution environments; obviating
the need to protect against internal inconsistencies by virtue of
the fact that the main thread will always be in a consistent state
when the parallel threads are running.

(And it works really well in practice; I ported SimpleHTTPServer to
use my new async stuff and it flies -- it'll automatically exploit
all your cores if there is sufficient incoming load.  An unexpected
side-effect of my implementation is that code executing in parallel
callbacks actually runs faster than normal single-threaded Python
code; no need to do reference counting, GC, and the memory model is
ridiculously cache and TLB friendly.)

This is getting off-topic though and I don't want to hijack the
sandbox thread.  I was planning on sending an e-mail in a few days
when the PyData video of my talk is live -- we can debate the merits
of my parallel/async approach then :-)

> Cheers,
> fijal

Trent.



[Python-Dev] What's the story on Py_FrozenMain?

2013-11-16 Thread Eric Snow
While looking at something unrelated, I happened to peek at
Python/frozenmain.c and found Py_FrozenMain().  I kind of get the idea
of it, but am curious what motivated the addition and who might be
using it.  The function is not documented and doesn't have much
explanation.  I'm guessing that not many are familiar with it (e.g.
http://bugs.python.org/issue15893).

FWIW the function was added quite a while ago (and hasn't been touched
a whole lot since):

changeset:   1270:14369a5e61679364deeae9a9a0deedbd593a72e0
branch:  legacy-trunk
user:Guido van Rossum 
date:Thu Apr 01 20:59:32 1993 +
summary: Support for frozen scripts; added -i option.

-eric


Re: [Python-Dev] What's the story on Py_FrozenMain?

2013-11-16 Thread M.-A. Lemburg
On 16.11.2013 18:48, Eric Snow wrote:
> While looking at something unrelated, I happened to peek at
> Python/frozenmain.c and found Py_FrozenMain().  I kind of get the idea
> of it, but am curious what motivated the addition and who might be
> using it.  The function is not documented and doesn't have much
> explanation.  I'm guessing that not many are familiar with it (e.g.
> http://bugs.python.org/issue15893).
> 
> FWIW the function was added quite a while ago (and hasn't been touched
> a whole lot since):
> 
> changeset:   1270:14369a5e61679364deeae9a9a0deedbd593a72e0
> branch:  legacy-trunk
> user:Guido van Rossum 
> date:Thu Apr 01 20:59:32 1993 +
> summary: Support for frozen scripts; added -i option.

It's used as main()-function for frozen Python interpreters.

See eGenix PyRun as an example and the freeze tool in Tools/freeze/
for the implementation that uses this API:

http://www.egenix.com/products/python/PyRun/

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Nov 16 2013)
>>> Python Projects, Consulting and Support ...   http://www.egenix.com/
>>> mxODBC.Zope/Plone.Database.Adapter ...   http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/

2013-11-19: Python Meeting Duesseldorf ...  3 days to go

: Try our mxODBC.Connect Python Database Interface for free ! ::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/


Re: [Python-Dev] What's the story on Py_FrozenMain?

2013-11-16 Thread Guido van Rossum
This is very old DNA. The persistent user request was a way to bundle up a
Python program as a single executable file that could be sent to a friend
or colleague and run without first having to install Python. If you Google
for python freeze you'll still see old references to it.

IIRC I did the original version -- it would scan your main program and try
to follow all your imports to get a list of modules (yours and stdlib) that
would be needed, and it would then byte-compile all of these and produce a
huge C file. You would then compile and link that C file with the rest of
the Python executable. All extensions would have to be statically linked.
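
The import-following step is essentially what the stdlib's modulefinder
module still does today - a minimal sketch, with a hypothetical script name:

from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script("myprogram.py")   # hypothetical script to be frozen
for name, module in sorted(finder.modules.items()):
    print(name, module.__file__)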

I think this was also used as the basis of a similar tool that worked for
Windows.

Nowadays installers are much more accessible and easier to use, and Python
isn't so new and unknown any more, so there's not much demand left.

--Guido


On Sat, Nov 16, 2013 at 9:48 AM, Eric Snow wrote:

> While looking at something unrelated, I happened to peek at
> Python/frozenmain.c and found Py_FrozenMain().  I kind of get the idea
> of it, but am curious what motivated the addition and who might be
> using it.  The function is not documented and doesn't have much
> explanation.  I'm guessing that not many are familiar with it (e.g.
> http://bugs.python.org/issue15893).
>
> FWIW the function was added quite a while ago (and hasn't been touched
> a whole lot since):
>
> changeset:   1270:14369a5e61679364deeae9a9a0deedbd593a72e0
> branch:  legacy-trunk
> user:Guido van Rossum 
> date:Thu Apr 01 20:59:32 1993 +
> summary: Support for frozen scripts; added -i option.
>
> -eric
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] What's the story on Py_FrozenMain?

2013-11-16 Thread Eric Snow
On Sat, Nov 16, 2013 at 10:55 AM, M.-A. Lemburg  wrote:
> It's used as main()-function for frozen Python interpreters.
>
> See eGenix PyRun as an example and the freeze tool in Tools/freeze/
> for the implementation that uses this API:
>
> http://www.egenix.com/products/python/PyRun/

On Sat, Nov 16, 2013 at 10:57 AM, Guido van Rossum  wrote:
> This is very old DNA. The persistent user request was a way to bundle up a
> Python program as a single executable file that could be sent to a friend or
> colleague and run without first having to install Python. If you Google for
> python freeze you'll still see old references to it.
>
> IIRC I did the original version -- it would scan your main program and try
> to follow all your imports to get a list of modules (yours and stdlib) that
> would be needed, and it would then byte-compile all of these and produce a
> huge C file. You would then compile and link that C file with the rest of
> the Python executable. All extensions would have to be statically linked.
>
> I think this was also used as the basis of a similar tool that worked for
> Windows.
>
> Nowadays installers are much more accessible and easier to use, and Python
> isn't so new and unknown any more, so there's not much demand left.

Thanks for the explanations.  It's interesting stuff. :)

-eric


[Python-Dev] Accepting PEP 3154 for 3.4?

2013-11-16 Thread Antoine Pitrou

Hello,

Alexandre Vassalotti (thanks a lot!) has recently finalized his work on
the PEP 3154 implementation - pickle protocol 4.

I think it would be good to get the PEP and the implementation accepted
for 3.4. As far as I can tell, this has been a low-controversy proposal,
and it brings fairly obvious improvements to the table (which table?).
I still need some kind of BDFL or BDFL delegate to do that, though --
unless I am allowed to mark my own PEP accepted :-)

(I've asked Tim, specifically, for comments, since he contributed a lot
to previous versions of the pickle protocol.)

The PEP is at http://www.python.org/dev/peps/pep-3154/ (should be
rebuilt soon by the server, I'd say)

Alexandre's implementation is tracked at
http://bugs.python.org/issue17810
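
From user code the change is just a new protocol number to opt in to; a
minimal sketch of what that should look like once the implementation
lands (assuming the number stays at 4, as in the PEP, with protocol 3
remaining the default):

    # Opting in to the new pickle protocol; older readers won't understand
    # these pickles, so it has to be requested explicitly.
    import pickle

    data = {'answer': 42, 'names': ['spam', 'eggs']}
    blob = pickle.dumps(data, protocol=4)
    assert pickle.loads(blob) == data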

Regards

Antoine.




[Python-Dev] PyParallel: alternate async I/O and GIL removal

2013-11-16 Thread Trent Nelson
Hi folks,

Video of the presentation I gave last weekend at PyData NYC
regarding PyParallel just went live: https://vimeo.com/79539317

Slides are here: 
https://speakerdeck.com/trent/pyparallel-how-we-removed-the-gil-and-exploited-all-cores-1

The work was driven by the async I/O discussions around this time
last year on python-ideas.  That resulted in me sending this:


http://markmail.org/thread/kh3qgjbydvxt3exw#query:+page:1+mid:arua62vllzugjy2v+state:results

where I attempted to argue that there was a better way of
doing async I/O on Windows than the status quo of single-threaded,
non-blocking I/O with an event multiplex syscall.
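
For reference, that status quo has roughly the following shape on the
UNIX side -- a minimal sketch using the stdlib selectors module (new in
3.4, wrapping epoll/kqueue/select); the port and buffer sizes are
illustrative only:

    # One thread, one readiness loop: register sockets with an event
    # multiplexer and dispatch callbacks as they become readable.
    import selectors
    import socket

    sel = selectors.DefaultSelector()       # epoll/kqueue/select, as available

    def accept(server):
        conn, _ = server.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, echo)

    def echo(conn):
        data = conn.recv(4096)
        if data:
            conn.sendall(data)              # a real server would buffer writes
        else:
            sel.unregister(conn)
            conn.close()

    server = socket.socket()
    server.bind(('127.0.0.1', 8888))
    server.listen(100)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)           # call the registered callback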

I wasn't successful in convincing anyone at the time; I had no code
to back it up, and I didn't articulate my plans for GIL removal
either (figuring the initial suggestion would be met with enough
scepticism as it was).

So, in the video above, I spend a lot of time detailing how IOCP
works on Windows, how it presents us with a better environment than
UNIX for doing asynchronous I/O, and how it paired nicely with the
other work I did on coming up with a way for multiple threads to
execute simultaneously across all cores without introducing any
speed penalties.

I'm particularly interested to hear if the video/slides helped
UNIX-centric people gain a better understanding of how Windows does
IOCP and why it would be preferable when doing async I/O.

The reverse is also true: if you still think single-threaded, non-
blocking synchronous I/O via kqueue/epoll is better than the
approach afforded by IOCP, I'm interested in hearing why.

As crazy as it sounds, my long term goal would be to try and
influence Linux and BSD kernels to implement thread-agnostic I/O
support such that an IOCP-like mechanism could be exposed; Solaris
and AIX already do this via event ports and AIX's verbatim copy of
Windows' IOCP API.

(There is some promising work already being done on Linux; see the
recent MegaPipe paper for an example.)

Regards,

Trent.


[Python-Dev] Mixed up core/module source file locations in CPython

2013-11-16 Thread Eric Snow
If you look at the Python and Modules directories in the cpython repo,
you'll find modules in Python/ and core files (like python.c and
main.c) in Modules/.  (It's like parking on a driveway and driving on
a parkway.)  It's not that big a deal and not that hard to
figure out (so I'm fine with the status quo), but it is a bit
surprising.  When I was first getting familiar with the code base a
few years ago (as a C non-expert), it was a notable, though not major,
stumbling block.

The situation is mostly a consequence of history, if I understand
correctly.  The subject has come up before and I don't recall any
objections to doing something about it.  I haven't had the time to
track down those earlier discussions, though I remember Benjamin
having some comment about it.

Would it be too disruptive (churn, etc.) to clean this up in 3.5?  I
see it similarly to when I moved a light switch from outside my
bathroom to inside.  For a while, but not that long, I kept
unconsciously reaching for the switch that was no longer there on the
outside.  Regardless, I'm glad I did it.  Likewise, moving the handful
of files around is a relatively inconsequential change that would make
the project just a little less surprising, particularly for new
contributors.

-eric

p.s. Either way I'll probably take some time (it shouldn't take long)
after the PEP 451 implementation is done to put together a patch that
moves the files around, just to see what difference it makes.


Re: [Python-Dev] Accepting PEP 3154 for 3.4?

2013-11-16 Thread Eric Snow
On Sat, Nov 16, 2013 at 11:15 AM, Antoine Pitrou  wrote:
>
> Hello,
>
> Alexandre Vassalotti (thanks a lot!) has recently finalized his work on
> the PEP 3154 implementation - pickle protocol 4.
>
> I think it would be good to get the PEP and the implementation accepted
> for 3.4.

+1

Once the core portion of the PEP 451 implementation is done, I plan on
tweaking pickle to take advantage of module.__spec__ (where
applicable).  It would be nice to have the PEP 3154 implementation in
place before I do anything in that space.

-eric


[Python-Dev] NTPath or WindowsPath?

2013-11-16 Thread Antoine Pitrou

Hello,

In a (private) discussion about PEP 428 and pathlib, Guido proposed
that maybe NTPath should be renamed to WindowsPath, since the name is
more likely to stay relevant in the middle term. What do you think?
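
For concreteness, if the rename goes ahead the flavour classes would
read like this in user code (a sketch based on the pathlib API proposed
in PEP 428, with only the class names changing):

    # Pure paths do the path arithmetic without touching the filesystem,
    # so Windows-style paths can be manipulated even on a POSIX box.
    from pathlib import PureWindowsPath

    p = PureWindowsPath('C:/Program Files/Python34')
    print(p.drive)            # 'C:'
    print(p.parts)            # ('C:\\', 'Program Files', 'Python34')
    print(p / 'python.exe')   # C:\Program Files\Python34\python.exe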

Regards

Antoine.




Re: [Python-Dev] NTPath or WindowsPath?

2013-11-16 Thread Benjamin Peterson
2013/11/16 Antoine Pitrou :
>
> Hello,
>
> In a (private) discussion about PEP 428 and pathlib, Guido proposed
> that maybe NTPath should be renamed to WindowsPath, since the name is
> more likely to stay relevant in the middle term. What do you think?

I agree with Guido.


-- 
Regards,
Benjamin


Re: [Python-Dev] NTPath or WindowsPath?

2013-11-16 Thread Steve Dower
Sounds good to me. NT is already an obsolete term - Win32 would be more 
accurate - but WinRT hasn't changed the path format, so WindowsPath will be 
accurate for the foreseeable future.

Cheers,
Steve

Top posted from my Windows Phone

From: Benjamin Peterson
Sent: 11/16/2013 11:22
To: Antoine Pitrou
Cc: Python Dev
Subject: Re: [Python-Dev] NTPath or WindowsPath?

2013/11/16 Antoine Pitrou :
>
> Hello,
>
> In a (private) discussion about PEP 428 and pathlib, Guido proposed
> that maybe NTPath should be renamed to WindowsPath, since the name is
> more likely to stay relevant in the middle term. What do you think?

I agree with Guido.


--
Regards,
Benjamin


Re: [Python-Dev] NTPath or WindowsPath?

2013-11-16 Thread Serhiy Storchaka

On 16.11.13 21:15, Antoine Pitrou wrote:

In a (private) discussion about PEP 428 and pathlib, Guido proposed
that maybe NTPath should be renamed to WindowsPath, since the name is
more likely to stay relevant in the middle term. What do you think?


What about nturl2path, os.name, sysconfig.get_scheme_names()?
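
All of those still expose the 'nt' spelling; a quick illustration (the
exact tuple returned by sysconfig may vary by version):

    # The 'nt' name is baked into several other public-facing places.
    import os
    import sysconfig

    print(os.name)                        # 'nt' on Windows, 'posix' elsewhere
    print(sysconfig.get_scheme_names())   # includes 'nt' and 'nt_user'
    import nturl2path                     # the module name itself says 'nt'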




Re: [Python-Dev] Mixed up core/module source file locations in CPython

2013-11-16 Thread Brett Cannon
On Sat, Nov 16, 2013 at 1:40 PM, Eric Snow wrote:

> If you look at the Python and Modules directories in the cpython repo,
> you'll find modules in Python/ and core files (like python.c and
> main.c) in Modules/.  (It's like parking on a driveway and driving on
> a parkway.)  It's not that big a deal and not that hard to
> figure out (so I'm fine with the status quo), but it is a bit
> surprising.  When I was first getting familiar with the code base a
> few years ago (as a C non-expert), it was a notable, though not major,
> stumbling block.
>
> The situation is mostly a consequence of history, if I understand
> correctly.  The subject has come up before and I don't recall any
> objections to doing something about it.  I haven't had the time to
> track down those earlier discussions, though I remember Benjamin
> having some comment about it.
>
> Would it be too disruptive (churn, etc.) to clean this up in 3.5?  I
> see it similarly to when I moved a light switch from outside my
> bathroom to inside.  For a while, but not that long, I kept
> unconsciously reaching for the switch that was no longer there on the
> outside.  Regardless I'm glad I did it.  Likewise, moving the handful
> of files around is a relatively inconsequential change that would make
> the project just a little less surprising, particularly for new
> contributors.
>
> -eric
>
> p.s. Either way I'll probably take some time (it shouldn't take long)
> after the PEP 451 implementation is done to put together a patch that
> moves the files around, just to see what difference it makes.
>

I personally think it would be a good idea to re-arrange the files to make
things more beginner-friendly. I believe Nick was also talking about
renaming directories, etc. at some point.


Re: [Python-Dev] Accepting PEP 3154 for 3.4?

2013-11-16 Thread Nick Coghlan
On 17 Nov 2013 04:45, "Eric Snow"  wrote:
>
> On Sat, Nov 16, 2013 at 11:15 AM, Antoine Pitrou 
wrote:
> >
> > Hello,
> >
> > Alexandre Vassalotti (thanks a lot!) has recently finalized his work on
> > the PEP 3154 implementation - pickle protocol 4.
> >
> > I think it would be good to get the PEP and the implementation accepted
> > for 3.4.
>
> +1
>
> Once the core portion of the PEP 451 implementation is done, I plan on
> tweaking pickle to take advantage of module.__spec__ (where
> applicable).  It would be nice to have the PEP 3154 implementation in
> place before I do anything in that space.

+1 from me, too.

Cheers,
Nick.

>
> -eric


Re: [Python-Dev] Mixed up core/module source file locations in CPython

2013-11-16 Thread Nick Coghlan
On 17 Nov 2013 09:48, "Brett Cannon"  wrote:
>
>
> I personally think it would be a good idea to re-arrange the files to
make things more beginner-friendly. I believe Nick was also talking about
renaming directories, etc. at some point.

Yeah, the main ones I am interested in are a separate "Programs" dir for
the C files that define C main functions and breaking up the monster that
is pythonrun.c.

I was looking into that as part of the PEP 432 implementation, so it fell
by the wayside when I deferred that to help out with packaging issues
instead.

Cheers,
Nick.



Re: [Python-Dev] PyParallel: alternate async I/O and GIL removal

2013-11-16 Thread Guido van Rossum
Trent, I watched your video and read your slides. (Does the word
"motormouth" mean anything to you? :-)

Clearly your work isn't ready for python-dev -- it is just too speculative.
I've moved python-dev to BCC and added python-ideas.

It possibly doesn't even belong on python-ideas -- if you are serious about
wanting to change Linux or other *NIX variants, you'll have to go find a
venue where people who do forward-looking kernel work hang out.

Finally, I'm not sure why you are so confrontational about the way Twisted
and Tulip do things. We are doing things the only way they *can* be done
without overhauling the entire CPython implementation (which you have
proven will take several major release cycles, probably until 4.0). It's
fine that you are looking further forward than most of us. I don't think it
makes sense that you are blaming the rest of us for writing libraries that
can be used today.


On Sat, Nov 16, 2013 at 10:13 AM, Trent Nelson  wrote:

> Hi folks,
>
> Video of the presentation I gave last weekend at PyData NYC
> regarding PyParallel just went live: https://vimeo.com/79539317
>
> Slides are here:
> https://speakerdeck.com/trent/pyparallel-how-we-removed-the-gil-and-exploited-all-cores-1
>
> The work was driven by the async I/O discussions around this time
> last year on python-ideas.  That resulted in me sending this:
>
>
> http://markmail.org/thread/kh3qgjbydvxt3exw#query:+page:1+mid:arua62vllzugjy2v+state:results
>
> where I attempted to argue that there was a better way of
> doing async I/O on Windows than the status quo of single-threaded,
> non-blocking I/O with an event multiplex syscall.
>
> I wasn't successful in convincing anyone at the time; I had no code
> to back it up and I didn't articulate my plans for GIL removal at
> the time either (figuring the initial suggestion would be met with
> enough scepticism as is).
>
> So, in the video above, I spend a lot of time detailing how IOCP
> works on Windows, how it presents us with a better environment than
> UNIX for doing asynchronous I/O, and how it paired nicely with the
> other work I did on coming up with a way for multiple threads to
> execute simultaneously across all cores without introducing any
> speed penalties.
>
> I'm particularly interested to hear if the video/slides helped
> UNIX-centric people gain a better understanding of how Windows does
> IOCP and why it would be preferable when doing async I/O.
>
> The reverse is also true: if you still think single-threaded, non-
> blocking synchronous I/O via kqueue/epoll is better than the
> approach afforded by IOCP, I'm interested in hearing why.
>
> As crazy as it sounds, my long term goal would be to try and
> influence Linux and BSD kernels to implement thread-agnostic I/O
> support such that an IOCP-like mechanism could be exposed; Solaris
> and AIX already do this via event ports and AIX's verbatim copy of
> Windows' IOCP API.
>
> (There is some promising work already being done on Linux; see
>  recent MegaPipe paper for an example.)
>
> Regards,
>
> Trent.



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Accepting PEP 3154 for 3.4?

2013-11-16 Thread Guido van Rossum
On Sat, Nov 16, 2013 at 10:15 AM, Antoine Pitrou wrote:

> Alexandre Vassalotti (thanks a lot!) has recently finalized his work on
> the PEP 3154 implementation - pickle protocol 4.
>
> I think it would be good to get the PEP and the implementation accepted
> for 3.4. As far as I can tell, this has been a low-controversy proposal,
> and it brings fairly obvious improvements to the table (which table?).
> I still need some kind of BDFL or BDFL delegate to do that, though --
> unless I am allowed to mark my own PEP accepted :-)
>
> (I've asked Tim, specifically, for comments, since he contributed a lot
> to previous versions of the pickle protocol.)
>
> The PEP is at http://www.python.org/dev/peps/pep-3154/ (should be
> rebuilt soon by the server, I'd say)
>
> Alexandre's implementation is tracked at
> http://bugs.python.org/issue17810
>

Assuming Tim doesn't object (hi Tim!) I think this PEP is fine to accept --
all the ideas sound good, and I agree with moving to support 8-byte sizes
and framing. I haven't looked at the implementation but I trust you as a
reviewer and the beta release process.
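
For the curious, once the implementation is in, the framing should be
visible with pickletools (a sketch; the opcode names follow the PEP):

    # Protocol 4 streams start with a FRAME opcode carrying an 8-byte
    # length for the frame that follows; pickletools makes this visible.
    import pickle
    import pickletools

    blob = pickle.dumps(list(range(5)), protocol=4)
    pickletools.dis(blob)   # look for PROTO 4 and FRAME near the start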

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PyParallel: alternate async I/O and GIL removal

2013-11-16 Thread Terry Reedy

On 11/16/2013 8:39 PM, Guido van Rossum wrote:

Trent, I watched your video and read your slides.


I only read the slides.


(Does the word "motormouth" mean anything to you? :-)


The extra background (and repetition) was helpful to me in filling in 
things, especially about Windows, that I could not follow in the 
discussion a year ago, though I can see that it might be tedious for
people already familiar with the subject.



Clearly your work isn't ready for python-dev -- it is just too
speculative. I've moved python-dev to BCC and added python-ideas.

It possibly doesn't even belong on python-ideas -- if you are serious
about wanting to change Linux or other *NIX variants, you'll have to go
find a venue where people who do forward-looking kernel work hang out.


A working and useful Windows-only PyParallel (integrated with CPython or 
not) might help stimulate open-source *nix improvements to match the new 
Windows stuff.



Finally, I'm not sure why you are so confrontational about the way
Twisted and Tulip do things. We are doing things the only way they *can*
be done without overhauling the entire CPython implementation (which you
have proven will take several major release cycles, probably until 4.0).
It's fine that you are looking further forward than most of us. I don't
think it makes sense that you are blaming the rest of us for writing
libraries that can be used today.


Just reading the slides, I did not pick up as much confrontation and 
blame as you did. I certainly appreciate that asyncio is working *now*, 
and on all three major target systems. I look forward to reading the 
doc when it is added. I will understand a bit better for having read 
Trent's slides.


--
Terry Jan Reedy



Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Steven D'Aprano
On Sat, Nov 16, 2013 at 04:46:00PM +0100, Antoine Pitrou wrote:

> I agree that conflating the two doesn't help the discussion.
> While removing docstrings may be beneficial on memory-constrained
> devices, I can't remember a single situation where I've wanted to
> remove asserts on a production system.
> 
> (I also tend to write less and less asserts in production code, since
> all of them tend to go in unit tests instead, with the help of e.g.
> mock objects)

I'm the opposite. I like using asserts in my code, and while I don't 
*typically* run it with -O, I do think it is valuable to have the 
opportunity to remove asserts. I've certainly written code where -O has 
given a major speedup to individual functions (anything up to 50%). 
I've never measured whether that led to a significant 
whole-application speedup, but I assume there was some benefit. (I know, 
premature optimization and all that...)
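
A small sketch of the effect, for anyone who hasn't tried it (the file
name is made up):

    # demo_assert.py -- what -O actually strips out.
    def set_age(age):
        # Under "python -O" this assert is removed entirely, so the
        # function no longer performs (or pays for) the check.
        assert age >= 0, "age must be non-negative"
        return age

    if __debug__:
        print("asserts are active")
    else:
        print("running under -O: asserts have been stripped")

    set_age(-1)   # AssertionError normally; silently succeeds under -O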


-- 
Steven


Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Steven D'Aprano
On Sun, Nov 17, 2013 at 01:50:31AM +1000, Nick Coghlan wrote:

> No, that's the wrong question to ask. The onus is on *you* to ask "Who
> is this feature for? Do they still need it? Can we meet their needs in
> a different way?". You're the one proposing to break things, so it's
> up to you to make the case for why that's an OK thing to do.
[...]
> That's not the way this works - backwards compatibility is sacrosanct,
> and it requires some seriously compelling evidence to justify a
> breach.

Thanks for saying this.

I get frustrated by the number of times people propose removing -O 
(apparently) just because they personally don't use it. I personally 
have never got any benefit from the tarfile or multiprocessing modules, 
but I don't ask for them to be removed :-)


-- 
Steven


Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)

2013-11-16 Thread Steven D'Aprano
On Sat, Nov 16, 2013 at 11:16:48AM -0500, Donald Stufft wrote:
> Personally I think that none of the -O* should be removing asserts. It feels
> like a foot gun to me. I’ve seen more than one codebase that would be
> completely broken under -O* because they used asserts without even knowing
> -O* existed.

I agree that many people misuse asserts. I agree that this is a problem 
for them. But I disagree that the solution is to remove what I consider 
to be a useful feature, and a common one in other languages, just to 
protect people from their broken code.

I prefer to see better efforts to educate them. To that end, I've taken 
the liberty of sending a post to [email protected] describing when 
to use asserts, and when not to use them:

https://mail.python.org/pipermail/python-list/2013-November/660401.html
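
The distinction in a nutshell (a generic sketch, not an excerpt from
that post): asserts are for internal invariants that should be
impossible to violate, not for validating data you don't control.

    # Wrong: validating external input with assert -- under -O the check
    # vanishes and bad data sails straight through.
    def withdraw_bad(balance, amount):
        assert amount <= balance, "insufficient funds"
        return balance - amount

    # Right: raise a real exception for bad input, and keep assert for
    # "this can't happen" internal sanity checks.
    def withdraw(balance, amount):
        if amount > balance:
            raise ValueError("insufficient funds")
        new_balance = balance - amount
        assert new_balance >= 0, "internal arithmetic error"
        return new_balance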

I will continue to use my best effort to encourage good use of 
assertions in Python.



-- 
Steven