Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Paul Moore
On 16 May 2018 at 05:35, Alex Walters  wrote:
> In the spirit of learning why there is a fence across the road before I tear
> it down out of ignorance [1], I'd like to know the rationale behind source
> only releases of cpython.  I have an opinion on their utility and perhaps an
> idea about changing them, but I'd like to know why they are done (as opposed
> to source+binary releases or no release at all) before I head over to
> python-ideas.  Is this documented somewhere where my google-fu can't find
> it?
>
>
> [1]: https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence

Assuming you're referring to the practice of no longer distributing
binaries for patch releases of older versions of Python, the reason is
basically as follows:

1. Producing binaries (to the quality we normally deliver - I'm not
talking about auto-built binaries produced from a CI system) is a
chunk of extra work for the release managers.
2. The releases in question are essentially end of life, and we're
only accepting security fixes.
3. Not even releasing sources means that people still using those
releases will no longer have access to security fixes, so we'd be
reducing the length of time we offer that level of support.

So extra binaries = more work for the release managers, no source
release = less support for our users.

There's no reason we couldn't have a discussion on changing the
policy, but any such discussion would probably need active support
from the release managers if it were to stand any chance of going
anywhere (as they are the people directly impacted by any such
change).

Paul
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Serhiy Storchaka

On 16.05.18 07:35, Alex Walters wrote:

In the spirit of learning why there is a fence across the road before I tear
it down out of ignorance [1], I'd like to know the rationale behind source
only releases of cpython.  I have an opinion on their utility and perhaps an
idea about changing them, but I'd like to know why they are done (as opposed
to source+binary releases or no release at all) before I head over to
python-ideas.  Is this documented somewhere where my google-fu can't find
it?


Taking a snapshot of the sources at a random point in time is dangerous: 
you can get broken sources. Making a source-only release means that the 
sources are in a consistent state, most buildbots are green, and core 
developers made the necessary changes and stopped merging risky changes 
for some period before the release.


The difference with source+binary releases is that the latter adds an 
additional burden on release managers: building binaries and installers 
on different platforms and publishing the results on the site.




Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Serhiy Storchaka

On 16.05.18 07:35, Alex Walters wrote:

[1]: https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence


And I wish that every author who suggested an idea for Python was 
familiar with Chesterton's fence principle.




Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Paul Moore
On 16 May 2018 at 09:34, Serhiy Storchaka  wrote:
> On 16.05.18 07:35, Alex Walters wrote:
>>
>> [1]: https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence
>
>
> And I wish that every author who suggested the idea for Python was familiar
> with the Chesterton's fence principle.

Agreed - thanks Alex for taking the time to research the issue!

Paul


Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Eric V. Smith

On 5/16/18 4:34 AM, Serhiy Storchaka wrote:

On 16.05.18 07:35, Alex Walters wrote:

[1]: https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence


And I wish that every author who suggested the idea for Python was
familiar with the Chesterton's fence principle.


Indeed! It's refreshing. Thanks, Alex.

Eric



Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Ned Deily
On May 16, 2018, at 00:35, Alex Walters  wrote:
> In the spirit of learning why there is a fence across the road before I tear
> it down out of ignorance [1], I'd like to know the rationale behind source
> only releases of cpython.  I have an opinion on their utility and perhaps an
> idea about changing them, but I'd like to know why they are done (as opposed
> to source+binary releases or no release at all) before I head over to
> python-ideas.  Is this documented somewhere where my google-fu can't find
> it?

The Python Developer's Guide has a discussion of the lifecycle of CPython 
releases here:

https://devguide.python.org/#status-of-python-branches

The ~short answer is that we produce source+binary (Windows and macOS binary 
installers) artifacts for release branches in "bugfix" (AKA "maintenance") mode 
(currently 3.6 and 2.7) as well as during the later stages of the 
in-development phase for future feature releases ("prerelease" mode) (currently 
3.7); we produce only source releases for release branches in "security" mode.

After the initial release of a new feature branch (for example, the upcoming 
3.7.0 release), we will continue to support the previous release branch in 
bugfix mode for some overlapping period of time.  So, for example, the current 
plan is to support both 3.7.x and 3.6.x (along with 2.7.x) in bugfix mode, 
releasing both source and binary artifacts for about six months after the 3.7.0 
release.  At that point, 3.6.x will transition to security-fix-only mode, where 
we will only produce releases on an as-needed basis and only in source form.  
Currently, 3.5 and 3.4 are also in security-fix-only mode.  Eventually, usually 
five years after its initial release, a release branch will reach end-of-life: 
the branch will be frozen and no further issues for that release branch will be 
accepted nor will fixes be produced by Python Dev.  2.7 is a special case, with 
a greatly extended bugfix phase; it will proceed directly to end-of-life status 
as of 2020-01-01.

There is more information elsewhere in the devguide:

https://devguide.python.org/devcycle/

and in the release PEPs linked in the Status of Python Branches section.

Hope that helps!

--
  Ned Deily
  n...@python.org -- []



Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Barry Warsaw
On May 16, 2018, at 00:35, Alex Walters  wrote:
> 
> In the spirit of learning why there is a fence across the road before I tear
> it down out of ignorance [1], I'd like to know the rationale behind source
> only releases of cpython.

Historically, it was a matter of resources.  Making binary releases incurs 
costs and delays on the release process and release managers, including the 
folks who actually have to produce the binaries.  As a version winds down, we 
wanted to impose less work on those folks and less friction and delay in 
cutting a release.  There is still value in spinning a tarball though, for 
downstream consumers who need a tagged and blessed release.

Cheers,
-Barry





Re: [Python-Dev] PEP 575 (Unifying function/method classes) update

2018-05-16 Thread Petr Viktorin

On 05/15/18 17:55, Jeroen Demeyer wrote:

On 2018-05-15 18:36, Petr Viktorin wrote:

Naturally, large-scale
changes have less of a chance there.


Does it really matter that much how large the change is? I think you are 
focusing too much on the change instead of the end result.


As I said in my previous post, I could certainly make less disruptive 
changes. But would that really be better? (If you think that the answer 
is "yes" here, I honestly want to know).


Yes, I believe it is better.
The larger a change is, the harder it is to understand, meaning that 
fewer people can meaningfully join the conversation, think about how it 
interacts with their own use cases, and notice (and think through) any 
unpleasant details.

Less disruptive changes tend to have a better backwards compatibility story.
A less intertwined change makes it easier to revert just a single part, 
in case that becomes necessary.


I could make the code less different than today but at the cost of added 
complexity. Building on top of the existing code is like building on a 
bad foundation: the higher you build, the messier it gets. Instead, I 
propose a solid new foundation. Of course, that requires more work to 
build but once it is built, the finished building looks a lot better.


To continue the analogy: the tenants have been customizing their 
apartments inside that building, possibly depending on structural 
details that we might think should be hidden from them. And they expect 
to continue living there while the foundation is being swapped under them :)



With such a "finished product" PEP, it's hard to see if some of the
various problems could be solved in a better way -- faster, more
maintainable, or less disruptive.


By "faster", do you mean runtime speed? I'm pretty confident that we 
won't lose anything there.


As I argued above, my PEP might very well make things "more 
maintainable", but this is of course very subjective. And "less 
disruptive" was never a goal for this PEP.



It's also harder from a psychological point of view: you obviously
already put in a lot of good work, and it's harder to waste that work if
an even better solution is found.


I hope that this won't be my psychology. As a developer, I prefer to 
focus on problems rather than on solutions: I don't want to push a 
particular solution, I want to fix a particular problem. If an even 
better solution is accepted, I will be a very happy man.


What I would hate is that this PEP gets rejected because some people 
claim that the problem can be solved in a better way, but without 
actually suggesting such a better way.


Mark Shannon has an upcoming PEP with an alternative to some of the 
issues. (Not all of them – but less intertwined is better, all else 
being equal.)



Is a branching class hierarchy, with quite a few new flags for
feature selection, the kind of simplicity we want?


Maybe yes, because it *concentrates* all complexity in one small place. 
Currently, we have several independent classes 
(builtin_function_or_method, method_descriptor, function, method) which 
all require various forms of special-casing in the interpreter, with some 
code duplication. With my PEP, this all goes away and instead we need to 
understand just one class, namely base_function.
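For reference, the four separate callable classes mentioned above are easy to observe from current CPython (a quick illustration of today's state, not part of the PEP):

```python
def f():
    pass

class C:
    def m(self):
        pass

# Four distinct callable classes today, each special-cased in CPython:
print(type(len).__name__)          # builtin_function_or_method
print(type(list.append).__name__)  # method_descriptor
print(type(f).__name__)            # function
print(type(C().m).__name__)        # method
```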



Would it be possible to first decouple things, reducing the complexity,
and then tackle the individual problems?


What do you mean with "decouple things"? Can you be more concrete?


Currently, the "outside" of a function (how it looks when introspected) 
is tied to the "inside" (what happens internally when it's called). 
That's what I'd like to see decoupled.
Can we better enable pydoc/IPython developers to tackle introspection 
problems without wading deep in the internals and call optimizations?



The class hierarchy still makes it hard to decouple the introspection
side (how functions look on the outside) from the calling mechanism (how
the calling works internally).


Any class that wants to profit from fast function calls can inherit from 
base_function. It can add whatever attributes it wants, and it can choose 
to implement documentation and/or introspection in whatever way it 
wants. It can choose not to care about that at all. That looks very 
decoupled to me.


But, it still has to inherit from base_function to "look like a 
function". Can we remove that limitation in favor of duck typing?



Starting from an idea and ironing out the details lets you (and, since
you published the results, everyone else) figure out the tricky
details. But ultimately it's exploring one path of doing things – it
doesn't necessarily lead to the best way of doing something.


So far I haven't seen any other proposals...


That's a good question. Maybe inspect.isfunction() serves too many use
cases to be useful. Cython functions should behave like "def" functions
in some cases, and like built-in functions in others.
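As a quick illustration of the split being discussed, standard `inspect` predicates already distinguish the two kinds of callables (current stdlib behavior, nothing the PEP adds):

```python
import inspect

def pyfunc():
    pass

# A "def" function and a builtin pass different inspect checks:
print(inspect.isfunction(pyfunc))  # True
print(inspect.isfunction(len))     # False: builtins fail this check
print(inspect.isbuiltin(len))      # True
# isroutine() accepts both, which is closer to a duck-typed view:
print(inspect.isroutine(pyfunc), inspect.isroutine(len))  # True True
```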


 From the outside, i.e. user's point of view, I want them to behave like 

Re: [Python-Dev] PEP 575 (Unifying function/method classes) update

2018-05-16 Thread Stefan Behnel
Petr Viktorin schrieb am 15.05.2018 um 18:36:
> On 05/15/18 05:15, Jeroen Demeyer wrote:
>> An important note is that it was never my goal to create a minimal PEP. I
>> did not aim for changing as little as possible. I was thinking: we are
>> changing functions, what would be the best way to implement them?
> 
> That might be a problem. For the change to be accepted, a core developer
> will need to commit to maintaining the code, understand it, and accept
> responsibility for anything that's broken. Naturally, large-scale changes
> have less of a chance there.

Honestly, the current implementation involves such a clutter of special
cases that the internal code simplification this PEP allows should
make every core developer who needs to get their hands dirty with the
current code bow to Jeroen for coming up with this PEP and even
implementing it.

Just my personal opinion.

Stefan



Re: [Python-Dev] PEP 575 (Unifying function/method classes) update

2018-05-16 Thread Jeroen Demeyer

On 2018-05-16 17:31, Petr Viktorin wrote:

The larger a change is, the harder it is to understand


I already disagree here...

I'm afraid that you are still confusing the largeness of the *change* 
with the complexity of the *result* after the change was implemented.
A change that *removes* complexity should be considered a good thing, 
even if it's a large change.


That being said, if you want me to make smaller changes, I could do it. 
But I would do it for *you* personally because I'm afraid that other 
people might rightly complain that I'm making things too complicated.


So I would certainly like some feedback from others on this point.


Less disruptive changes tend to have a better backwards compatibility story.


Maybe in very general terms, yes. But I believe that the "disruptive" 
changes that I'm making will not contribute to backwards 
incompatibility. Adding new ml_flags flags shouldn't break anything and 
adding a base class shouldn't either (I doubt that there is code relying 
on the fact that type(len).__base__ is object).
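That claim about the current inheritance chain is easy to check in present-day CPython (a quick sanity check of the status quo, not new behavior):

```python
# In current CPython, builtin_function_or_method inherits directly from
# object; PEP 575 would insert base_function in between.
assert type(len).__name__ == "builtin_function_or_method"
print(type(len).__base__ is object)  # True
```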


In my opinion, the one change that is most likely to cause backwards 
compatibility problems is changing the type of bound methods of 
extension types. And that change is even in the less disruptive PEP 576.



Mark Shannon has an upcoming PEP with an alternative to some of the
issues.


I'm looking forward to a serious discussion about that. However, from a 
first reading, I'm not very optimistic about its performance implications.



Currently, the "outside" of a function (how it looks when introspected)
is tied to the "inside" (what happens internally when it's called).
Can we better enable pydoc/IPython developers to tackle introspection
problems without wading deep in the internals and call optimizations?


I proposed complete decoupling in https://bugs.python.org/issue30071 and 
that was rejected. Anyway, decoupling of introspection is not the 
essence of this PEP. This PEP is really about allowing custom built-in 
function subclasses. That's the hard part where CPython internals come 
in. So I suggest that we leave the discussion about introspection and 
focus on the function classes.



But, it still has to inherit from base_function to "look like a
function". Can we remove that limitation in favor of duck typing?


Duck typing is a Python thing; I don't know what "duck typing" would 
mean at the C level. We could replace the existing isinstance(..., 
base_function) check with a different fast check. For example, we 
(together with the Cython devs) have been pondering a new type 
field, say tp_cfunctionoffset, pointing to a certain C field in the 
object structure. That would work, but it would not be so fundamentally 
different from the current PEP.



*PS*: On friday, I'm leaving for 2 weeks on holidays. So if I don't 
reply to comments on PEP 575 or alternative proposals, don't take it as 
a lack of interest.



Jeroen.


[Python-Dev] Hashes in Python3.5 for tuples and frozensets

2018-05-16 Thread Anthony Flury via Python-Dev

This may be known but I wanted to ask this esteemed body first.

I understand that from Python 3.3 there was a security fix to ensure that 
different Python processes would generate different hash values for the 
same input - to prevent denial of service based on crafted hash conflicts.


I opened two python REPLs on my Linux 64bit PC and did the following

Terminal 1:

>>> hash('Hello World')
   -1010252950208276719

>>> hash( frozenset({1,9}) )
 -7625378979602737914
>>> hash(frozenset({300,301}))
   -8571255922896611313

>>> hash((1,9))
   3713081631926832981
>>> hash((875,932))
   3712694086932196356



Terminal 2:

>>> hash('Hello World')
   -8267767374510285039

>>> hash( frozenset({1,9}) )
 -7625378979602737914
>>> hash(frozenset({300,301}))
   -8571255922896611313

>>> hash((1,9))
   3713081631926832981
>>> hash((875,932))
   3712694086932196356

As can be seen - taking a hash of a string does indeed create a 
different value between the two processes (as expected).


However the frozenset hash is the same in both cases, as is the hash of 
the tuples - suggesting that the vulnerability resolved in Python 3.3 
wasn't resolved across all potentially hashable values. I even used 
different large numbers to ensure that the integers weren't being interned.


I can imagine that frozensets aren't used frequently as hash keys - but 
I would think that tuples are regularly used. Since their hashes are not 
salted, does the vulnerability still exist in some form?


--
--
Anthony Flury
email : *anthony.fl...@btinternet.com*
Twitter : *@TonyFlury *



Re: [Python-Dev] Hashes in Python3.5 for tuples and frozensets

2018-05-16 Thread Raymond Hettinger


> On May 16, 2018, at 5:48 PM, Anthony Flury via Python-Dev 
>  wrote:
> 
> However the frozen set hash, the same in both cases, as is the hash of the 
> tuples - suggesting that the vulnerability resolved in Python 3.3 wasn't 
> resolved across all potentially hashable values.

You are correct.  The hash randomization only applies to strings.  None of the 
other object hashes were altered.  Whether this is a vulnerability or not 
depends greatly on what is exposed to users (generally strings) and how it is 
used.

For the most part, it is considered a feature that integers hash to themselves. 
 That is very fast to compute :-) Also, it tends to prevent hash collisions for 
consecutive integers.
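The behavior Raymond describes can be verified by comparing fresh interpreter runs under different fixed PYTHONHASHSEED values (a sketch; the helper name `hash_of` is just for illustration):

```python
import os
import subprocess
import sys

def hash_of(expr, seed):
    """Evaluate hash(expr) in a fresh interpreter with a fixed PYTHONHASHSEED."""
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.check_output(
        [sys.executable, "-c", f"print(hash({expr}))"], env=env, text=True)
    return int(out)

# String hashes are salted, so they change with the seed...
assert hash_of("'Hello World'", 1) != hash_of("'Hello World'", 2)
# ...but small integers hash to themselves regardless of the seed,
assert hash_of("12345", 1) == hash_of("12345", 2) == 12345
# and tuple hashes are combined from unsalted element hashes.
assert hash_of("(1, 9)", 1) == hash_of("(1, 9)", 2)
```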



Raymond


Re: [Python-Dev] Hashes in Python3.5 for tuples and frozensets

2018-05-16 Thread Christian Heimes
On 2018-05-16 18:10, Raymond Hettinger wrote:
> 
> 
>> On May 16, 2018, at 5:48 PM, Anthony Flury via Python-Dev 
>>  wrote:
>>
>> However the frozen set hash, the same in both cases, as is the hash of the 
>> tuples - suggesting that the vulnerability resolved in Python 3.3 wasn't 
>> resolved across all potentially hashable values.
> 
> You are correct.  The hash randomization only applies to strings.  None of 
> the other object hashes were altered.  Whether this is a vulnerability or not 
> depends greatly on what is exposed to users (generally strings) and how it is 
> used.
> 
> For the most part, it is considered a feature that integers hash to 
> themselves.  That is very fast to compute :-) Also, it tends to prevent hash 
> collisions for consecutive integers.

Raymond is 100% correct. Just one small nitpick: randomization applies
to both str and bytes.

Christian


Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Alex Walters
Thank you, that's exactly what I needed to read.

> -----Original Message-----
> From: Ned Deily 
> Sent: Wednesday, May 16, 2018 7:07 AM
> To: Alex Walters 
> Cc: Python-Dev 
> Subject: Re: [Python-Dev] What is the rationale behind source only releases?
> 
> On May 16, 2018, at 00:35, Alex Walters  wrote:
> > In the spirit of learning why there is a fence across the road before I tear
> > it down out of ignorance [1], I'd like to know the rationale behind source
> > only releases of cpython.  I have an opinion on their utility and perhaps an
> > idea about changing them, but I'd like to know why they are done (as opposed
> > to source+binary releases or no release at all) before I head over to
> > python-ideas.  Is this documented somewhere where my google-fu can't find
> > it?
> 
> The Python Developer's Guide has a discussion of the lifecycle of CPython
> releases here:
> 
> https://devguide.python.org/#status-of-python-branches
> 
> The ~short answer is that we produce source+binary (Windows and macOS
> binary installers) artifacts for release branches in "bugfix" (AKA
> "maintenance") mode (currently 3.6 and 2.7) as well as during the later
> stages of the in-development phase for future feature releases
> ("prerelease" mode) (currently 3.7); we produce only source releases for
> release branches in "security" mode.
> 
> After the initial release of a new feature branch (for example, the upcoming
> 3.7.0 release), we will continue to support the previous release branch in
> bugfix mode for some overlapping period of time.  So, for example, the
> current plan is to support both 3.7.x and 3.6.x (along with 2.7.x) in bugfix
> mode, releasing both source and binary artifacts for about six months after
> the 3.7.0 release.  At that point, 3.6.x will transition to
> security-fix-only mode, where we will only produce releases on an as-needed
> basis and only in source form.  Currently, 3.5 and 3.4 are also in
> security-fix-only mode.  Eventually, usually five years after its initial
> release, a release branch will reach end-of-life: the branch will be frozen
> and no further issues for that release branch will be accepted nor will
> fixes be produced by Python Dev.  2.7 is a special case, with a greatly
> extended bugfix phase; it will proceed directly to end-of-life status as of
> 2020-01-01.
> 
> There is more information elsewhere in the devguide:
> 
> https://devguide.python.org/devcycle/
> 
> and in the release PEPs linked in the Status of Python Branches section.
> 
> Hope that helps!
> 
> --
>   Ned Deily
>   n...@python.org -- []




Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Alex Walters
This is precisely what I meant.  Before asking this question, I didn't fully 
understand why, for example, 3.5.4 got a binary installer for Windows and Mac, 
but 3.5.5 did not.  This thread has cleared that up for me.

 

From: Python-Dev  On 
Behalf Of Donald Stufft
Sent: Wednesday, May 16, 2018 1:23 AM
To: Ben Finney 
Cc: python-dev@python.org
Subject: Re: [Python-Dev] What is the rationale behind source only releases?

 

 

On May 16, 2018, at 1:06 AM, Ben Finney  wrote:

 


I'd like to know the rationale behind source only releases of cpython.


Software freedom entails the freedom to modify and build the software.
For that, one needs the source form of the software.

Portable software should be feasible to build from source, on a platform
where no builds (of that particular release) were done before. For that,
one needs the source form of the software.

 

I’m guessing the question isn’t why is it useful to have a source release of 
CPython, but why does CPython transition from having both source releases and 
binary releases to only source releases. My assumption is the rationale is to 
reduce the maintenance burden as time goes on for older release channels.



Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Alex Walters


> -----Original Message-----
> From: Paul Moore 
> Sent: Wednesday, May 16, 2018 4:07 AM
> To: Alex Walters 
> Cc: Python Dev 
> Subject: Re: [Python-Dev] What is the rationale behind source only releases?
> 
> On 16 May 2018 at 05:35, Alex Walters  wrote:
> > In the spirit of learning why there is a fence across the road before I tear
> > it down out of ignorance [1], I'd like to know the rationale behind source
> > only releases of cpython.  I have an opinion on their utility and perhaps an
> > idea about changing them, but I'd like to know why they are done (as opposed
> > to source+binary releases or no release at all) before I head over to
> > python-ideas.  Is this documented somewhere where my google-fu can't find
> > it?
> >
> >
> > [1]: https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence
> 
> Assuming you're referring to the practice of no longer distributing
> binaries for patch releases of older versions of Python, the reason is
> basically as follows:
> 
> 1. Producing binaries (to the quality we normally deliver - I'm not
> talking about auto-built binaries produced from a CI system) is a
> chunk of extra work for the release managers.

This is actually the heart of the reason I asked the question.  CI tools are 
fairly good now.  If the CI tools could be used in such a way as to make 
building binary artifacts less of a burden on the release managers, would 
there be interest in doing that and, in the process, releasing binary 
installers for all security-fix releases?

My rationale for asking whether it's possible is... well... security releases 
are important, and it's hard to ask Windows users to install Visual Studio and 
build Python just to use the most secure version of Python that will run their 
programs.  Yes, there are better ideal solutions (porting your code to the 
latest and greatest feature release version), but that's not a zero-burden 
option either.

If CI tools just aren't up to the task, then so be it, and this isn't something 
I would darken -ideas' door with.

> 2. The releases in question are essentially end of life, and we're
> only accepting security fixes.
> 3. Not even releasing sources means that people still using those
> releases will no longer have access to security fixes, so we'd be
> reducing the length of time we offer that level of support.
> 
> So extra binaries = more work for the release managers, no source
> release = less support for our users.
> 
> There's no reason we couldn't have a discussion on changing the
> policy, but any such discussion would probably need active support
> from the release managers if it were to stand any chance of going
> anywhere (as they are the people directly impacted by any such
> change).
> 
> Paul



Re: [Python-Dev] What is the rationale behind source only releases?

2018-05-16 Thread Terry Reedy

On 5/16/2018 11:46 PM, Alex Walters wrote:


This is actually the heart of the reason I asked the question.  CI tools are 
fairly good now.  If the CI tools could be used in such a way to make the 
building of binary artifacts less of a burden on the release managers, would 
there be interest in doing that, and in the process, releasing binary artifact 
installers for all security update releases.


The CI tools are used to test whether the repository is ready for a 
release.  The release manager and the two binary builders manually 
follow written scripts that include running various programs and 
scripts.  I don't know whether the master scripts are stable enough to 
automate yet.  The Windows binary production process was redone for 3.5. 
 The macOS process was redone for 3.7 (.0b1).



My rationale for asking if its possible is... well.. security releases are 
important, and it's hard to ask Windows users to install Visual Studio and 
build python to use the most secure version of python that will run your python 
program.


I believe one rationale for not offering binaries is that the security 
patches are mostly of interest to server people, who *do* build Python 
themselves.


If you think otherwise, you could offer to build an installer and see if 
a release manager would include it on python.org as an experiment.


--
Terry Jan Reedy



Re: [Python-Dev] Hashes in Python3.5 for tuples and frozensets

2018-05-16 Thread Victor Stinner
Hi,

String hash is randomized, but not the integer hash:

$ python3.5 -c 'print(hash("abc"))'
-8844814677999896014
$ python3.5 -c 'print(hash("abc"))'
-7757160699952389646

$ python3.5 -c 'print(hash(1))'
1
$ python3.5 -c 'print(hash(1))'
1

The frozenset hash is combined from the hashes of the set's values, so it's
only randomized if the values' hashes are randomized.

The denial of service is more likely to occur with strings as keys,
than with integers.

See the following link for more information:
http://python-security.readthedocs.io/vuln/cve-2012-1150_hash_dos.html

Victor

2018-05-16 17:48 GMT-04:00 Anthony Flury via Python-Dev :
> This may be known but I wanted to ask this esteemed body first.
>
> I understand that from Python3.3 there was a security fix to ensure that
> different python processes would generate different hash value for the same
> input - to prevent denial of service based on crafted hash conflicts.
>
> I opened two python REPLs on my Linux 64bit PC and did the following
>
> Terminal 1:
>
> >>> hash('Hello World')
>-1010252950208276719
>
> >>> hash( frozenset({1,9}) )
>  -7625378979602737914
> >>> hash(frozenset({300,301}))
>-8571255922896611313
>
> >>> hash((1,9))
>3713081631926832981
> >>> hash((875,932))
>3712694086932196356
>
>
>
> Terminal 2:
>
> >>> hash('Hello World')
>-8267767374510285039
>
> >>> hash( frozenset({1,9}) )
>  -7625378979602737914
> >>> hash(frozenset({300,301}))
>-8571255922896611313
>
> >>> hash((1,9))
>3713081631926832981
> >>> hash((875,932))
>3712694086932196356
>
> As can be seen - taking a hash of a string does indeed create a different
> value between the two processes (as expected).
>
> However the frozen set hash, the same in both cases, as is the hash of the
> tuples - suggesting that the vulnerability resolved in Python 3.3 wasn't
> resolved across all potentially hashable values. lI even used different
> large numbers to ensure that the integers weren't being interned.
>
> I can imagine that frozensets aren't used frequently as hash keys - but I
> would think that tuples are regularly used. Since that their hashes are not
> salted does the vulnerability still exist in some form ?.
>
> --
> --
> Anthony Flury
> email : *anthony.fl...@btinternet.com*
> Twitter : *@TonyFlury *
>