Re: [Python-Dev] Draft PEP for time zone support.

2012-12-13 Thread Terry Reedy

On 12/13/2012 1:06 AM, Lennart Regebro wrote:

On Thu, Dec 13, 2012 at 2:24 AM, Terry Reedy tjre...@udel.edu wrote:

Or ask the user where to put it.


If we ask where it should be installed, then we need a registry
setting for that


Right.


So I think that asking is not an option at all. It either goes in
%PROGRAMDATA%\Python\zoneinfo or it's not shared at all.


If that works for all xp+ versions, fine.




I know where I would choose, and it would
not be on my C drive. Un-installers would not delete (unless a reference
count were kept and were decremented to 0).


True, and that's annoying when those counters go wrong.


It seems to me that Windows has a mechanism for this, at least in some 
versions. But maybe it only works for dlls.



All in all I would say I would prefer to install this per Python.


Then explicit update requires multiple downloads or copying. This is a 
violation of DRY. If it is not too large, it would not hurt to never 
delete it.


--
Terry Jan Reedy

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Draft PEP for time zone support.

2012-12-13 Thread Glenn Linderman

On 12/12/2012 11:32 PM, Janzert wrote:

On 12/13/2012 1:39 AM, Glenn Linderman wrote:

On 12/12/2012 6:10 PM, Janzert wrote:

On 12/12/2012 8:43 PM, Glenn Linderman wrote:

On 12/12/2012 5:36 PM, Brian Curtin wrote:


 C:\ProgramData\Python



   ^ That.  Is not the path that the link below is talking
about, though.



It actually does; it is rather confusing though. :/


I agree with the below. But I have never seen a version of Windows on
which c:\ProgramData was the actual path for FOLDERID_ProgramData. Can
you reference documentation that states that it was there, for some
version?  This documentation speaks of:

c:\Documents and Settings\AllUsers\Application Data (which I knew from
XP, and I think 2000, not sure I remember NT)

In Vista.0, Vista.1, and Vista.2, I guess it is moved to
C:\users\AllUsers\AppData\Roaming (typically).

Neither of those would result in C:\ProgramData\Python.



The SO answer links to the KNOWNFOLDERID docs; the relevant entry 
specifically is at


http://msdn.microsoft.com/en-us/library/windows/desktop/dd378457.aspx#FOLDERID_ProgramData 



which gives the default path as,

%ALLUSERSPROFILE% (%ProgramData%, %SystemDrive%\ProgramData)

checking on my local windows 7 install gives:

C:\>echo %ALLUSERSPROFILE%
C:\ProgramData

C:\>echo %ProgramData%
C:\ProgramData


Interesting.  It _did_ say something about data that is not specific to 
a user... and yet I overlooked that.


Those environment variable settings are, indeed, on my Win 7 machine, so 
I have erred and apologize.


That said, the directory C:\ProgramData does NOT exist on my Win 7 
machine, so it appears that VERY LITTLE software actually uses that 
setting. (I have nearly a hundred free and commercial packages installed 
on this machine. Not that 100 is a large percentage of the available 
software for Windows, but if its use were common, 100 packages would be 
likely to contain one that used it, eh?).


Thanks for the education, especially because you had to beat it into my 
skull!


Re: [Python-Dev] Draft PEP for time zone support.

2012-12-13 Thread Lennart Regebro
On Thu, Dec 13, 2012 at 9:22 AM, Terry Reedy tjre...@udel.edu wrote:
 On 12/13/2012 1:06 AM, Lennart Regebro wrote:
 All in all I would say I would prefer to install this per Python.

 Then explicit update requires multiple downloads or copying. This is a
 violation of DRY. If if is not too large, it would not hurt to never delete
 it.

Yes, but this is no different than if you want to keep any library
updated over multiple Python versions. And I don't want to invent
another installation procedure that works for just this, or have a
little script that checks periodically for updates only for this,
adding to the plethora of update checkers on Windows already. You
either keep your Python and its libraries updated or you do not; I
don't think this is any different, and I think it should have the
exact same mechanisms and functions as all other third-party PyPI
packages.

//Lennart


Re: [Python-Dev] Draft PEP for time zone support.

2012-12-13 Thread Antoine Pitrou
Le Thu, 13 Dec 2012 10:07:34 +0100,
Lennart Regebro rege...@gmail.com a écrit :
 On Thu, Dec 13, 2012 at 9:22 AM, Terry Reedy tjre...@udel.edu wrote:
  On 12/13/2012 1:06 AM, Lennart Regebro wrote:
  All in all I would say I would prefer to install this per Python.
 
  Then explicit update requires multiple downloads or copying. This
  is a violation of DRY. If it is not too large, it would not hurt to
  never delete it.
 
 Yes, but this is no different than if you want to keep any library
 updated over multiple Python versions. And I don't want to invent
 another installation procedure that works for just this, or have a
 little script that checks periodically for updates only for this,
 adding to the plethora of update checkers on Windows already. You
 either keep your Python and its libraries updated or you do not; I
 don't think this is any different, and I think it should have the
 exact same mechanisms and functions as all other third-party PyPI
 packages.

Agreed. This doesn't warrant special-casing.

Regards

Antoine.




Re: [Python-Dev] Draft PEP for time zone support.

2012-12-13 Thread Christian Heimes
Am 13.12.2012 10:07, schrieb Lennart Regebro:
 Yes, but this is no different than if you want to keep any library
 updated over multiple Python versions. And I don't want to invent
 another installation procedure that works for just this, or have a
 little script that checks periodically for updates only for this,
 adding to the plethora of update checkers on Windows already. You
 either keep your Python and its libraries updated or you do not; I
 don't think this is any different, and I think it should have the
 exact same mechanisms and functions as all other third-party PyPI
 packages.

+1

This PEP does fine without any auto-update feature. Please let Lennart
concentrate on the task at hand. If an auto-update system is still
wanted, it can and should be designed by somebody else as a separate
PEP. IMHO it's not Lennart's obligation to do so.

Christian



Re: [Python-Dev] cpython: expose TCP_FASTOPEN and MSG_FASTOPEN

2012-12-13 Thread Benjamin Peterson
2012/12/13 Antoine Pitrou solip...@pitrou.net:
 On Thu, 13 Dec 2012 04:24:54 +0100 (CET)
 benjamin.peterson python-check...@python.org wrote:
 http://hg.python.org/cpython/rev/5435a9278028
 changeset:   80834:5435a9278028
 user:Benjamin Peterson benja...@python.org
 date:Wed Dec 12 22:24:47 2012 -0500
 summary:
   expose TCP_FASTOPEN and MSG_FASTOPEN

 files:
   Misc/NEWS  |  3 +++
   Modules/socketmodule.c |  7 ++-
   2 files changed, 9 insertions(+), 1 deletions(-)


 diff --git a/Misc/NEWS b/Misc/NEWS
 --- a/Misc/NEWS
 +++ b/Misc/NEWS
 @@ -163,6 +163,9 @@
  Library
  ---

 +- Expose the TCP_FASTOPEN and MSG_FASTOPEN flags in socket when they're
 +  available.

 This should be documented, no?

Other similar constants are documented only by TCP_* and a suggestion
to look at the manpage.
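
[Editorial aside, not from the original thread: a minimal sketch of how
such conditionally exposed constants are typically used. The queue
length value 5 is arbitrary, and availability depends on the platform.]

```python
# TCP_FASTOPEN is only defined in the socket module when the underlying
# platform supports it, so guard with hasattr(), as with other TCP_* flags.
import socket

if hasattr(socket, "TCP_FASTOPEN"):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Maximum length of the queue of pending TFO connections.
    srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 5)
    srv.close()
else:
    print("TCP_FASTOPEN not available on this platform")
```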


-- 
Regards,
Benjamin


Re: [Python-Dev] Draft PEP for time zone support.

2012-12-13 Thread Terry Reedy

On 12/13/2012 4:07 AM, Lennart Regebro wrote:

On Thu, Dec 13, 2012 at 9:22 AM, Terry Reedy tjre...@udel.edu wrote:

On 12/13/2012 1:06 AM, Lennart Regebro wrote:

All in all I would say I would prefer to install this per Python.


Then explicit update requires multiple downloads or copying. This is a
violation of DRY. If it is not too large, it would not hurt to never delete
it.


Yes, but this is no different than if you want to keep any library
updated over multiple Python versions.


How I do that for my multi-version packages is to put them in a separate 
'python' directory and put python.pth with the path to that directory in 
the various site-packages directories. Any change to the *one* copy is 
available to all versions and all will operate the same if the code is 
truly multi-version. When I installed 3.3, I copied python.pth into its 
site-packages and was ready to go.
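
[Editorial aside: a minimal runnable sketch of the scheme described
above, using temporary directories in place of the real site-packages
directories; the module name mylib and all paths are illustrative.]

```python
# One shared directory holds the multi-version code; each Python
# version's site-packages gets only a small python.pth pointing at it.
import os
import site
import sys
import tempfile

shared = tempfile.mkdtemp(prefix="shared-python-")     # the single shared copy
site_pkgs = tempfile.mkdtemp(prefix="site-packages-")  # one version's site-packages

# A multi-version module lives only in the shared directory.
with open(os.path.join(shared, "mylib.py"), "w") as f:
    f.write("VERSION = '1.0'\n")

# python.pth holds the path to the shared directory; copying this one
# small file into each version's site-packages is all that is needed.
with open(os.path.join(site_pkgs, "python.pth"), "w") as f:
    f.write(shared + "\n")

# site.addsitedir processes .pth files the same way interpreter startup does.
site.addsitedir(site_pkgs)
import mylib
print(mylib.VERSION)  # 1.0
```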


 And I don't want to invent another installation procedure
 that works for just this,

An email or so ago, you said that the tz database should go in 
C:\programdata (which currently does not exist on my machine either). 
That would be a new, invented installation procedure.


 or have a  little script that checks periodically
 for updates only for this,

adding to the plethora of update checkers on windows already.


I *never* suggested this. In fact, I said that installing an updated 
database (available to all Python versions) with each release would be 
sufficient for nearly everyone on Windows.



either keep your Python and its libraries updated or you do not, I
don't think this is any different, and I think it should have the
exact same mechanisms and functions as all other third-party PyPI
packages.


When I suggested that users be able to put the database where they want, 
*just like with any other third-party PyPI package*, you are the one who 
said no, this should be special-cased.


The situation is this: most *nixes have or can have one system tz 
database. Python code that uses it will give the same answer regardless 
of the Python version. Windows apparently does not have such a thing. So 
we can:
a) not use the tz database in the stdlib because it would not work on 
Windows (the de facto current situation);
b) use it but let the functions fail on Windows;
c) install a different version of the database with each Python 
installation, that can only be used by that installation, so that 
results may depend on the Python version (this seems to be what you are 
now proposing, and if bugfix releases update the data only for that 
version, could result in earlier versions giving more accurate answers); or
d) install one database at a time so all Python versions give the same 
answer, just as on *nix.
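
[Editorial aside: a hedged sketch of the lookup order being weighed
here, preferring one shared per-machine database and falling back to a
per-installation copy. All paths are hypothetical, not from the draft
PEP.]

```python
# Try the shared per-machine location first (option d), then a copy
# private to this installation (option c); return None if neither exists.
import os
import sys

def find_zoneinfo():
    candidates = []
    if sys.platform == "win32":
        program_data = os.environ.get("ProgramData")
        if program_data:
            # One shared database for all Python versions.
            candidates.append(os.path.join(program_data, "Python", "zoneinfo"))
    # A copy private to this installation.
    candidates.append(os.path.join(sys.prefix, "zoneinfo"))
    for path in candidates:
        if os.path.isdir(path):
            return path
    return None  # no database found

print(find_zoneinfo())
```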


--
Terry Jan Reedy



[Python-Dev] Downloads page: Which version of Python should be listed first?

2012-12-13 Thread Chris Angelico
The default version shown on http://docs.python.org/ is now 3.3.0,
which I think is a Good Thing. However, http://python.org/download/
puts 2.7 first, and says:

"If you don't know which version to use, start with Python 2.7; more
existing third party software is compatible with Python 2 than Python
3 right now."

Firstly, is this still true? (I wouldn't have a clue.) And secondly,
would this be better worded as "one's better but the other's a good
fall-back"? Something like:

"Don't know which version to use? Python 3.3 is the recommended
version for new projects, but much existing software is compatible
with Python 2."

I only ever send people there to learn about programming, not to get a
dependency for an existing codebase, so I don't know what is actually
used.

ChrisA


Re: [Python-Dev] Downloads page: Which version of Python should be listed first?

2012-12-13 Thread Ross Lagerwall
On Fri, Dec 14, 2012 at 07:57:52AM +1100, Chris Angelico wrote:
 The default version shown on http://docs.python.org/ is now 3.3.0,
 which I think is a Good Thing. However, http://python.org/download/
 puts 2.7 first, and says:
 
 "If you don't know which version to use, start with Python 2.7; more
 existing third party software is compatible with Python 2 than Python
 3 right now."
 
 Firstly, is this still true? (I wouldn't have a clue.) And secondly,
 would this be better worded as "one's better but the other's a good
 fall-back"? Something like:
 
 "Don't know which version to use? Python 3.3 is the recommended
 version for new projects, but much existing software is compatible
 with Python 2."
 

I would say listing 3.3 as the recommended version to use is a good
thing, especially as distros like Ubuntu and Fedora transition to Python
3. It also makes sense, given that the docs default to 3.3.

-- 
Ross Lagerwall


Re: [Python-Dev] [Distutils] Is is worth disentangling distutils?

2012-12-13 Thread Antonio Cavallo

I'll have a look into distutils2; I thought it was (another) dead end.
In every case my target is py2k (2.7.x) and I have no case for 
transitioning to py3k (too much risk).





Lennart Regebro wrote:

On Mon, Dec 10, 2012 at 8:22 AM, Antonio Cavallo
a.cava...@cavallinux.eu  wrote:

Hi,
I wonder if is it worth/if there is any interest in trying to clean up
distutils: nothing in terms to add new features, just a *major* cleanup
retaining the exact same interface.


I'm not planning anything like *adding features* or rewriting rpm/rpmbuild
here, simply cleaning up that unholy code mess. Yes it served well, don't
get me wrong, and I think it did work much better than anything it was meant
to replace.

I'm not into py3 at all, so I wonder how it could possibly fit into or
collide with the big plan.

Or I'll be wasting my time?


The effort of making something that replaces distutils is, as far as I
can understand, currently at the level of taking the best bits out of
distutils2 and putting them into Python 3.4 under the name packaging.
I'm sure that effort could use more help.

//Lennart



Re: [Python-Dev] [Distutils] Is is worth disentangling distutils?

2012-12-13 Thread Nick Coghlan
On Fri, Dec 14, 2012 at 10:10 AM, Antonio Cavallo
a.cava...@cavallinux.eu wrote:

 I'll have a look into distutils2; I thought it was (another) dead end.
 In every case my target is py2k (2.7.x) and I have no case for
 transitioning to py3k (too much risk).


distutils2 started as a copy of distutils, so it's hard to tell the
difference between the parts which have been fixed and the parts which are
still just distutils legacy components (this is why the merge back was
dropped from 3.3 - too many pieces simply weren't ready and would have
perpetuated problems inherited from distutils).

distlib (https://distlib.readthedocs.org/en/latest/overview.html) is a
successor project that takes a different view of building up the low level
pieces without inheriting the bad parts of the distutils legacy (a problem
suffered by both setuptools/distribute and distutils2). distlib also runs
natively on both 2.x and 3.x, as the idea is that these interoperability
standards should be well supported in *current* Python versions, not just
those where the stdlib has caught up (i.e. now 3.4 at the earliest).

The aim is to get to a situation more like that with wsgiref, where the
stdlib defines the foundation and key APIs and data formats needed for
interoperability, while allowing a flourishing ecosystem of user-oriented
tools (like pip, bento, zc.buildout, etc) that still solve the key problems
addressed by setuptools/distribute without the opaque and hard to extend
distutils core that can make the existing tools impossible to debug when
they go wrong.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


[Python-Dev] Mercurial workflow question...

2012-12-13 Thread Trent Nelson
Scenario: I'm working on a change that I want to actively test on a
bunch of Snakebite hosts.  Getting the change working is going to be
an iterative process -- lots of small commits trying to attack the
problem one little bit at a time.

Eventually I'll get to a point where I'm happy with the change.  So,
it's time to do all the necessary cruft that needs to be done before
making the change public.  Updating docs, tweaking style, Misc/NEWS,
etc.  That'll involve at least a few more commits.  Most changes
will also need to be merged to other branches, too, so that needs to
be taken care of.  (And it's a given that I would have been pulling
and merging from hg.p.o/cpython during the whole process.)

Then, finally, it's time to push.

Now, if I understand how Mercurial works correctly, using the above
workflow will result in all those little intermediate hacky commits
being forever preserved in the global/public cpython repo.  I will
have polluted the history of all affected files with all my changes.

That just doesn't feel right.  But, it appears as though it's an
intrinsic side-effect of how Mercurial works.  With git, you have a
bit more flexibility over how your final public commits look via
merge fast-forwarding.  Subversion gives you the ultimate control of
how your final commit looks (albeit at the expense of having to do
the merging in a much more manual fashion).

As I understand it, even if I contain all my intermediate commits in
a server-side cloned repo, that doesn't really change anything; all
commits will eventually be reflected in cpython via the final `hg
push`.

So, my first question is this: is this actually a problem?  Is the
value I'm placing on pristine log histories misplaced in the DVCS
world?  Do we, as a collective, care?

I can think of two alternate approaches I could use:

- Use a common NFS mount for each source tree on every Snakebite
  box (coercing each build to be done in a separate area).
  Get everything perfect and then do a single commit of all
  changes.  The thing I don't like about this approach is that
  I can't commit/rollback/tweak/bisect intermediate commits as
  I go along -- some changes are complex and take a few attempts
  to get right.

- Use a completely separate clone to house all the intermediate
  commits, then generate a diff once the final commit is ready,
  then apply that diff to the main cpython repo, then push that.
  This approach is fine, but it seems counter-intuitive to the
  whole concept of DVCS.

Thoughts?


Trent.


Re: [Python-Dev] Mercurial workflow question...

2012-12-13 Thread Larry Hastings

On 12/13/2012 05:21 PM, Trent Nelson wrote:

 Thoughts?


% hg help rebase


//arry/


Re: [Python-Dev] Mercurial workflow question...

2012-12-13 Thread Nick Coghlan
On Fri, Dec 14, 2012 at 12:02 PM, Larry Hastings la...@hastings.org wrote:

  On 12/13/2012 05:21 PM, Trent Nelson wrote:

 Thoughts?


 % hg help rebase


And also the histedit extension (analogous to git rebase -i).

Both Git and Hg recognise there is a difference between interim commits and
ones you want to publish, and provide tools to revise a series of commits
into a simpler set for publication to an official repo. The difference is
that in Git this is allowed by default for all branches (which can create
fun and games if someone upstream of you edits the history of a branch
you used as a base for your own work), while Hg makes a distinction between
different phases (secret -> draft -> public) and disallows operations that
rewrite history if they would affect public changesets.

So the challenge with Mercurial over Git is ensuring the relevant branches
stay in draft mode locally even though you want to push them to a
server-side clone for distribution to the build servers. I know one way to
do that would be to ask that the relevant clone be switched to
non-publishing mode (see
http://mercurial.selenic.com/wiki/Phases#Publishing_Repository). I don't
know if there's another way to do it without altering the config on the
server.

General intro to phases: http://www.logilab.org/blogentry/88203

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Downloads page: Which version of Python should be listed first?

2012-12-13 Thread Eric Snow
On Thu, Dec 13, 2012 at 1:57 PM, Chris Angelico ros...@gmail.com wrote:
 The default version shown on http://docs.python.org/ is now 3.3.0,
 which I think is a Good Thing. However, http://python.org/download/
 puts 2.7 first, and says:

 "If you don't know which version to use, start with Python 2.7; more
 existing third party software is compatible with Python 2 than Python
 3 right now."

 Firstly, is this still true? (I wouldn't have a clue.)

Nope:

http://py3ksupport.appspot.com/
http://python3wos.appspot.com/ (plone and zope skew the results)

-eric


Re: [Python-Dev] Mercurial workflow question...

2012-12-13 Thread R. David Murray
On Thu, 13 Dec 2012 20:21:24 -0500, Trent Nelson tr...@snakebite.org wrote:
 - Use a completely separate clone to house all the intermediate
   commits, then generate a diff once the final commit is ready,
   then apply that diff to the main cpython repo, then push that.
   This approach is fine, but it seems counter-intuitive to the
   whole concept of DVCS.

Perhaps.  But that's exactly what I did with the email package changes
for 3.3.

You seem to have a tension between all those dirty little commits and
clean history and the fact that a dvcs is designed to preserve all
those commits...if you don't want those intermediate commits in the
official repo, then why is a diff/patch a bad way to achieve that?  If
you keep your pulls up to date in your feature repo, the diff/patch
process is simple and smooth.

The repo I worked on the email features in is still available, too, if
anyone is crazy enough to want to know about those intermediate steps...

--David


Re: [Python-Dev] Mercurial workflow question...

2012-12-13 Thread Chris Jerdonek
On Thu, Dec 13, 2012 at 6:48 PM, R. David Murray rdmur...@bitdance.com wrote:
 On Thu, 13 Dec 2012 20:21:24 -0500, Trent Nelson tr...@snakebite.org wrote:
 - Use a completely separate clone to house all the intermediate
   commits, then generate a diff once the final commit is ready,
   then apply that diff to the main cpython repo, then push that.
   This approach is fine, but it seems counter-intuitive to the
   whole concept of DVCS.

 Perhaps.  But that's exactly what I did with the email package changes
 for 3.3.

 You seem to have a tension between all those dirty little commits and
 clean history and the fact that a dvcs is designed to preserve all
 those commits...if you don't want those intermediate commits in the
 official repo, then why is a diff/patch a bad way to achieve that?

Right.  And you usually have to do this beforehand anyway to upload
your changes to the tracker for review.

Also, for the record (not that anyone has said anything to the
contrary), our dev guide says, "You should collapse changesets of a
single feature or bugfix before pushing the result to the main
repository. The reason is that we don’t want the history to be full of
intermediate commits recording the private history of the person
working on a patch. If you are using the rebase extension, consider
adding the --collapse option to hg rebase. The collapse extension is
another choice."

(from http://docs.python.org/devguide/committing.html#working-with-mercurial )

--Chris


Re: [Python-Dev] Mercurial workflow question...

2012-12-13 Thread Barry Warsaw
On Dec 14, 2012, at 12:36 PM, Nick Coghlan wrote:

Both Git and Hg recognise there is a difference between interim commits and
ones you want to publish and provide tools to revise a series of commits
into a simpler set for publication to an official repo.

One of the things I love about Bazaar is that it has a concept of main line
of development that usually makes all this hand-wringing a non-issue.  When I
merge my development branch, with all its interim commits into trunk, all
those revisions go with it.  But it never matters because when you view
history (and bisect, etc.) on trunk, you see the merge as one commit.

Sure, you can descend into the right-hand side if you want to see all those
sub-commits, and the graphical tools allow you to expand them fairly easily,
but usually you just ignore them.

Nothing's completely for free of course, and having a main line of development
does mean you have to be careful about merge directionality, but that's
generally something you ingrain in your workflow once, and then forget about
it.  The bottom line is that Bazaar users rarely feel the need to rebase, even
though you can if you want to.

Cheers,
-Barry




Re: [Python-Dev] Mercurial workflow question...

2012-12-13 Thread Ned Deily
In article 20121214024824.3bccc250...@webabinitio.net,
 R. David Murray rdmur...@bitdance.com wrote:
 On Thu, 13 Dec 2012 20:21:24 -0500, Trent Nelson tr...@snakebite.org wrote:
  - Use a completely separate clone to house all the intermediate
commits, then generate a diff once the final commit is ready,
then apply that diff to the main cpython repo, then push that.
This approach is fine, but it seems counter-intuitive to the
whole concept of DVCS.
 
 Perhaps.  But that's exactly what I did with the email package changes
 for 3.3.
 
 You seem to have a tension between all those dirty little commits and
 clean history and the fact that a dvcs is designed to preserve all
 those commits...if you don't want those intermediate commits in the
 official repo, then why is a diff/patch a bad way to achieve that?  If
 you keep your pulls up to date in your feature repo, the diff/patch
 process is simple and smooth.

Also, if you prefer to go the patch route, hg provides the mq extension 
(inspired by quilt) to simplify managing patches, including version 
controlling the patches.  I find it much easier to deal that way with 
maintenance changes that may have a non-trivial gestation period.

-- 
 Ned Deily,
 n...@acm.org



Re: [Python-Dev] Mercurial workflow question...

2012-12-13 Thread Stephen J. Turnbull
R. David Murray writes:

  those commits...if you don't want those intermediate commits in the
  official repo, then why is a diff/patch a bad way to achieve that?

Because a decent VCS provides TOOWTDI.  And sometimes there are
different degrees of intermediate, or perhaps you even want to slice,
dice, and mince the patches at the hunk level.  Presenting the logic
of the change is often best done in pieces in an ahistorical way,
but debugging often benefits from the context of an exact sequential
history.

That said, diff/patch across repos is not per se evil, and may be
easier for users to visualize than the results of the DAG
transformations (such as rebase) provided by existing dVCSes.



Re: [Python-Dev] cpython: Using 'long double' to force this structure to be worst case aligned is no

2012-12-13 Thread Gregory P. Smith
On Mon, Dec 10, 2012 at 11:16 PM, Antoine Pitrou solip...@pitrou.net wrote:

 On Tue, 11 Dec 2012 03:05:19 +0100 (CET)
 gregory.p.smith python-check...@python.org wrote:
  Using 'long double' to force this structure to be worst case aligned is no
  longer required as of Python 2.5+ when the gc_refs changed from an int (4
  bytes) to a Py_ssize_t (8 bytes) as the minimum size is 16 bytes.
 
  The use of a 'long double' triggered a warning by Clang trunk's
  Undefined-Behavior Sanitizer as on many platforms a long double requires
  16-byte alignment but the Python memory allocator only guarantees 8 byte
  alignment.
 
  So our code would allocate and use these structures with technically
  improper alignment.  Though it didn't matter since the 'dummy' field is
  never used.  This silences that warning.
 
  Spelunking into code history, the double was added in 2001 to force better
  alignment on some platforms and changed to a long double in 2002 to appease
  Tru64.  That issue should no longer be present since the upgrade from int
  to Py_ssize_t where the minimum structure size increased to 16 (unless
  anyone knows of a platform where ssize_t is 4 bytes?)

 What?? Every 32-bit platform has a 4 bytes ssize_t (and size_t).


No they don't.

size_t and ssize_t exist in large part because they are often larger than
an int or long on 32-bit platforms.  They are 64-bit on Linux regardless of
platform (I think there is a way to force a compile in ancient mode that
forces them and the APIs being used to be 32-bit size_t variants, but nobody
does that).


  We can probably get rid of the double and this union hack altogether
  today.  That is a slightly more invasive change that can be left for
  later.

 How do you suggest to get rid of it? Some platforms still have strict
 alignment rules and we must enforce that PyObjects (*) are always
 aligned to the largest possible alignment, since a PyObject-derived
 struct can hold arbitrary C types.

 (*) GC-enabled PyObjects, anyway. Others will be naturally aligned
 thanks to the memory allocator.


 What's more, I think you shouldn't be doing this kind of change in a
 bugfix release. It might break compiled C extensions since you are
 changing some characteristics of object layout (although you would
 probably only break those extensions which access the GC header, which
 is probably not many of them). Resource consumption improvements
 generally go only into the next feature release.


This isn't a resource consumption improvement.  It is a compilation
correctness change with zero impact on the generated code or ABI
compatibility before and after.  The structure, as defined, was flagged
as problematic by Clang's undefined behavior sanitizer because it contains
a 'long double', which requires 16-byte alignment while Python's own memory
allocator was using an 8-byte boundary.

So changing the definition of the dummy side of the union makes zero
difference to already compiled code, as it (a) doesn't change the
structure's size and (b) all existing implementations already align these
on an 8-byte boundary.

-gps
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] cpython: Using 'long double' to force this structure to be worst case aligned is no

2012-12-13 Thread Gregory P. Smith
On Thu, Dec 13, 2012 at 11:27 PM, Gregory P. Smith g...@krypto.org wrote:


 On Mon, Dec 10, 2012 at 11:16 PM, Antoine Pitrou solip...@pitrou.netwrote:

 On Tue, 11 Dec 2012 03:05:19 +0100 (CET)
 gregory.p.smith python-check...@python.org wrote:
  Using 'long double' to force this structure to be worst case aligned is no
  longer required as of Python 2.5+, when gc_refs changed from an int (4
  bytes) to a Py_ssize_t (8 bytes) and the minimum size became 16 bytes.
 
  The use of a 'long double' triggered a warning by Clang trunk's
  Undefined-Behavior Sanitizer, as on many platforms a long double requires
  16-byte alignment but the Python memory allocator only guarantees 8-byte
  alignment.
 
  So our code would allocate and use these structures with technically
  improper alignment, though it didn't matter since the 'dummy' field is
  never used.  This silences that warning.
 
  Spelunking into code history, the double was added in 2001 to force better
  alignment on some platforms and changed to a long double in 2002 to
  appease Tru64.  That issue should no longer be present since the upgrade
  from int to Py_ssize_t, where the minimum structure size increased to 16
  (unless anyone knows of a platform where ssize_t is 4 bytes?)

 What?? Every 32-bit platform has a 4-byte ssize_t (and size_t).


 No they don't.

 size_t and ssize_t exist in large part because they are often larger than
 an int or long on 32-bit platforms.  They are 64-bit on Linux regardless of
 platform (I think there is a way to force a compile in an ancient mode that
 makes them and the APIs being used 32-bit size_t variants, but nobody does
 that).


  We can probably get rid of the double and this union hack altogether
  today.  That is a slightly more invasive change that can be left for
  later.

 How do you suggest to get rid of it? Some platforms still have strict
 alignment rules and we must enforce that PyObjects (*) are always
 aligned to the largest possible alignment, since a PyObject-derived
 struct can hold arbitrary C types.

 (*) GC-enabled PyObjects, anyway. Others will be naturally aligned
 thanks to the memory allocator.


 What's more, I think you shouldn't be doing this kind of change in a
 bugfix release. It might break compiled C extensions since you are
 changing some characteristics of object layout (although you would
 probably only break those extensions which access the GC header, which
 is probably not many of them). Resource consumption improvements
 generally go only into the next feature release.


BTW - This change was done on tip only. The comment about this being 'in a
bugfix release' is wrong.

While I personally believe this is needed in all of the release branches, I
didn't commit it there *just in case* there is some weird platform where
this change actually makes a difference.  I don't believe such a thing
exists in 2012, but as finding that out is not worth my time, I didn't put
it in a bugfix branch.

-gps


 This isn't a resource consumption improvement.  It is a compilation
 correctness change with zero impact on the generated code or ABI
 compatibility before and after.  The structure, as defined, was flagged
 as problematic by Clang's undefined behavior sanitizer because it contains
 a 'long double', which requires 16-byte alignment while Python's own memory
 allocator was using an 8-byte boundary.

 So changing the definition of the dummy side of the union makes zero
 difference to already compiled code, as it (a) doesn't change the
 structure's size and (b) all existing implementations already align these
 on an 8-byte boundary.

 -gps




Re: [Python-Dev] cpython: Using 'long double' to force this structure to be worst case aligned is no

2012-12-13 Thread Gregory P. Smith
On Mon, Dec 10, 2012 at 11:21 PM, Antoine Pitrou solip...@pitrou.netwrote:

 On Tue, 11 Dec 2012 08:16:27 +0100
 Antoine Pitrou solip...@pitrou.net wrote:

  On Tue, 11 Dec 2012 03:05:19 +0100 (CET)
  gregory.p.smith python-check...@python.org wrote:
   Using 'long double' to force this structure to be worst case aligned is
   no longer required as of Python 2.5+, when gc_refs changed from an int
   (4 bytes) to a Py_ssize_t (8 bytes) and the minimum size became 16
   bytes.
  
   The use of a 'long double' triggered a warning by Clang trunk's
   Undefined-Behavior Sanitizer, as on many platforms a long double
   requires 16-byte alignment but the Python memory allocator only
   guarantees 8-byte alignment.
  
   So our code would allocate and use these structures with technically
   improper alignment, though it didn't matter since the 'dummy' field is
   never used.  This silences that warning.
  
   Spelunking into code history, the double was added in 2001 to force
   better alignment on some platforms and changed to a long double in 2002
   to appease Tru64.  That issue should no longer be present since the
   upgrade from int to Py_ssize_t, where the minimum structure size
   increased to 16 (unless anyone knows of a platform where ssize_t is 4
   bytes?)
 
  What?? Every 32-bit platform has a 4-byte ssize_t (and size_t).
 
   We can probably get rid of the double and this union hack altogether
   today.  That is a slightly more invasive change that can be left for
   later.
 
  How do you suggest to get rid of it? Some platforms still have strict
  alignment rules and we must enforce that PyObjects (*) are always
  aligned to the largest possible alignment, since a PyObject-derived
  struct can hold arbitrary C types.

 Ok, I hadn't seen your proposal. I find it reasonable:

 “A more correct non-hacky alternative if any alignment issues are still
 found would be to use a compiler specific alignment declaration on the
 structure and determine which value to use at configure time.”


 However, the commit is still problematic, and I think it should be
 reverted. We can't remove the alignment hack just because it seems to
 be useless on x86(-64).


I didn't remove it.  I made it match what our memory allocator is already
doing.

Thanks for reviewing commits in such detail BTW.  I do appreciate it.

BTW, I didn't notice your replies until now because you didn't include me
in the to/cc list on the responses.  Please do that if you want a faster
response. :)

-gps