[Python-Dev] Mailman 2 - not receiving moderators emails

2021-04-05 Thread david
I recently rebuilt my server (Ubuntu 20.04) and rebuilt mailman 2 - upgrading 
to the latest version 2.1.34. The mail server is postfix 

I run three mailing lists, including "TESTmail". The same issue below affects 
all three lists.

On the General page, I have filled in an administrator and an owner. The list
options are set up so that emails from non-members are held and the moderator
is notified.
* The poster of the email receives a notification that the email is being held
* The moderator does not receive any messages
* But looking at the postfix logs, it appears to me that mailman does try to 
send a message to the moderator

Apr  5 22:12:05 ip-xxx-xxx-xxx-xxx postfix/smtp[9473]: 6D0973EB91: 
to=, 

But, the email message is being sent to the generic owner, rather than the 
specific names filled into either the moderator or administrator boxes on the 
general tab.

Also, I note that the footer of the mailing list has:
   TESTmail list run by testmail-owner at lists.XXX.org.uk
rather than the specific owner.
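
In case it helps, this is how I have been dumping what Mailman thinks the
owner and moderator addresses are. It is only a sketch: the paths are the
stock Ubuntu ones (/usr/lib/mailman for the code), and Mailman 2 is Python 2,
so it has to run under the system python2:

    import sys
    sys.path.insert(0, '/usr/lib/mailman')

    from Mailman import MailList

    ml = MailList.MailList('testmail', lock=False)
    print('owner:     %s' % ml.owner)      # addresses from the General Options page
    print('moderator: %s' % ml.moderator)  # who should get the held-message notices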

Any clues as to what is not quite correct in the mailman setup?
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/YJG4MBWTEUOYXLSPAZLQSZOSTWEIRU5G/
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-Dev] Define a place for code review in Python workflow

2010-07-27 Thread David
> I'd welcome any patch submitted to Rietveld for review.  However, your
> proposed "review.py" module does not exist as far as I know, and unless
> someone writes it, it won't.

Haven't personally tested that it works with Rietveld due to lack of patches
sitting around, but cursory investigation suggests that reports of
non-existence may have been exaggerated ;)

http://pypi.python.org/pypi/review/r537


Love regards etc

David Miller


Re: [Python-Dev] sum(...) limitation

2014-08-02 Thread David Wilson
On Sat, Aug 02, 2014 at 05:39:12PM +1000, Steven D'Aprano wrote:

> Repeated list and str concatenation both have quadratic O(N**2)
> performance, but people frequently build up strings with + and rarely
> do the same for lists. String concatenation with + is an attractive
> nuisance for many people, including some who actually know better but
> nevertheless do it. Also, for reasons I don't understand, many people
> dislike or cannot remember to use ''.join.

join() isn't preferable in cases where it damages readability while
simultaneously providing zero or negative performance benefit, such as
when concatenating a few short strings, e.g. while adding a prefix to a
filename.

Although it's true that join() is automatically the safer option, especially
when dealing with user-supplied data, the net harm caused by teaching rote and
ceremony seems worse than having to fix a trivial slowdown in a script, if that
slowdown ever became apparent.

Another (twisted) interpretation is that since the quadratic behaviour
is a CPython implementation detail, and there are alternatives where
__add__ is constant time, encouraging users to code against
implementation details becomes undesirable. In our twisty world, __add__
becomes *preferable* since the resulting programs more closely resemble
pseudo-code.

$ cat t.py
a = 'this '
b = 'is a string'
c = 'as we can tell'

def x():
    return a + b + c

def y():
    return ''.join([a, b, c])

$ python -m timeit -s 'import t' 't.x()'
100 loops, best of 3: 0.477 usec per loop

$ python -m timeit -s 'import t' 't.y()'
100 loops, best of 3: 0.695 usec per loop
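
For contrast, a rough sketch of the case Steven is warning about, where
repeated + degrades quadratically on CPython while join stays linear (no
timings claimed here, just the shape of the two approaches):

    def concat_plus(parts):
        s = ''
        for p in parts:
            s += p          # may recopy the accumulated string each time: ~O(N**2)
        return s

    def concat_join(parts):
        return ''.join(parts)   # single pass over the parts: O(N) in total length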


David


Re: [Python-Dev] [Python-checkins] cpython: Issue #22003: When initialized from a bytes object, io.BytesIO() now

2014-08-02 Thread David Wilson
Thanks for spotting,

There is a new patch in http://bugs.python.org/issue22125 to fix the
warnings.


David


Re: [Python-Dev] PEP 476: Enabling certificate validation by default!

2014-08-29 Thread David Reid
Alex Gaynor  gmail.com> writes:

> 
> Hi all,
> 
> I've just submitted PEP 476, on enabling certificate validation by default for
> HTTPS clients in Python. Please have a look and let me know what you think.

Yes please.

The two most common answers I get to "Why did you switch to Go?" are
"Concurrency" and "The stdlib HTTP client verifies TLS by default."

In a work related survey of webhook providers I found that only ~7% of HTTPS 
URLs would be affected by a change like this.

-David




Re: [Python-Dev] PEP 476: Enabling certificate validation by default!

2014-09-02 Thread David Reid
Nick Coghlan  gmail.com> writes:

> Creating *new* incompatibilities between Python 2 & Python 3 is a major point
> of concern. 

Clearly this change should be backported to Python2.

-David




Re: [Python-Dev] bytes-like objects

2014-10-05 Thread David Wilson
On Sun, Oct 05, 2014 at 11:32:08PM +0200, Victor Stinner wrote:

> I'm not sure that the term has an unique definition. In some parts of
> Python, I saw explicit checks on the type: bytes or bytearray,
> sometimes memoryview is accepted. The behaviour is different in C
> functions using PyArg API. It probably depends if the function relies
> on the content (read bytes) or on methods (ex: call .find).

Buffers aren't "bytes like" in that many of them aren't immutable, even
when the buffer object itself is hashable. An example of this is the
buffer exposed by mmap.mmap() with MAP_SHARED.
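
A minimal illustration of that point (using an anonymous mapping rather than
MAP_SHARED, purely to keep the sketch portable):

    import mmap

    m = mmap.mmap(-1, 16)        # anonymous, writable mapping
    view = memoryview(m)
    m[:5] = b'hello'
    print(bytes(view[:5]))       # b'hello'
    m[:5] = b'world'
    print(bytes(view[:5]))       # b'world' -- the "bytes" can change under you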

This came up during the StringIO optimization changes a few months back; it
seems to be an oversight in the original design that got carried through
to Python 3. If we're approaching the topic of defining bytes-like
things with improved rigor, it might be worth discussing.


Making bytes-like objects an explicit thing in the language feels like
solidifying what is mostly a CPython implementation detail. For example,
at least until recently, PyPy emulated buffers in ways that often made
them much slower to use than regular bytes.

Another aspect is that the ability to twiddle bits derived from a
buffer mostly implies that the code you are calling is going to always
be written in C, or its future implementation will remain sufficiently
restricted as to always accept buffers (perhaps by first copying them to
bytes -- exactly what PyPy did until recently).


+1 on improving the notion of bytes-like things in Python, but not
necessarily by concretizing the existing interface.


David
> 
> Victor
> 




[Python-Dev] XP buildbot problem cloning from hg.python.org

2014-10-24 Thread David Bolen
Starting yesterday, my XP buildbot began failing to execute clone
operations against hg.python.org.  There's not a lot of data being
given aside from a transaction abort message (and my buildbot log
showing the hg command exiting), and I'm wondering if something may be
amiss on the server or its configuration?

Note that this is a full clone (which for some reason the Windows
buildbots seem to fall back on with some frequency) and can take quite
a while.  My Windows 7 buildbot is ok so far but it's still doing
incremental pulls over the same time period.

I've got two separate Internet connections here and have tried routing
over both so I don't think it's a network issue.  I've completely
flushed the local build trees and rebooted the buildbot.

Is there anything that might be available on the server to see if
there are errors being logged?  Or anything that could have changed
configuration-wise recently (maybe timeout-related or something)?  I'm
running a bit low on items to try to change or reset on the buildbot
side.

Thanks.

-- David



Re: [Python-Dev] XP buildbot problem cloning from hg.python.org

2014-10-24 Thread David Bolen
Donald Stufft  writes:

> Is this using HTTPS or SSH.

Um, good question - whatever the buildbot build process uses.

Looking at the slave log on buildbot.python.org (I don't get the hg
output locally), it appears to be http (it's cloning
http://hg.python.org/cpython) - though I thought I saw it using https
(port 443) in some traffic monitoring I was doing, so maybe it gets
redirected?

Oh yeah, the log also shows "real URL is
https://hg.python.org/cpython" as the first output from hg.

-- David



Re: [Python-Dev] XP buildbot problem cloning from hg.python.org

2014-10-24 Thread David Bolen
Antoine Pitrou  writes:

> Have you tried running the hg clone manually from the buildbot?
> You could try to add --debug to get more info where the thing breaks.

Yes, I had but pretty much got the same output as the buildbot slave.
But I just tried --traceback and it's definitely complaining about the
connection being terminated.

Regular test:

> hg clone --verbose --noupdate http://hg.python.org/cpython test
real URL is https://hg.python.org/cpython
requesting all changes
adding changesets
adding manifests
transaction abort!
rollback completed
abort: connection ended unexpectedly

Traceback:

> hg clone --traceback --verbose --noupdate http://hg.python.org/cpython test
real URL is https://hg.python.org/cpython
requesting all changes
adding changesets
adding manifests
transaction abort!
rollback completed
Traceback (most recent call last):
  File "mercurial\dispatch.pyc", line 54, in _runcatch
  File "mercurial\dispatch.pyc", line 490, in _dispatch
  File "mercurial\dispatch.pyc", line 351, in runcommand
  File "mercurial\dispatch.pyc", line 541, in _runcommand
  File "mercurial\dispatch.pyc", line 495, in checkargs
  File "mercurial\dispatch.pyc", line 488, in 
  File "mercurial\util.pyc", line 420, in check
  File "mercurial\commands.pyc", line 725, in clone
  File "mercurial\hg.pyc", line 334, in clone
  File "mercurial\localrepo.pyc", line 1853, in clone
  File "mercurial\localrepo.pyc", line 1206, in pull
  File "mercurial\localrepo.pyc", line 1695, in addchangegroup
  File "mercurial\revlog.pyc", line 1239, in addgroup
  File "mercurial\changegroup.pyc", line 31, in chunkiter
  File "mercurial\changegroup.pyc", line 20, in getchunk
  File "mercurial\util.pyc", line 924, in read
  File "mercurial\httprepo.pyc", line 22, in zgenerator
IOError: [Errno None] connection ended unexpectedly
abort: connection ended unexpectedly


I also stuck on --debug which generates a metric ton of output, but
the final portion is:

> hg clone --debug --traceback --verbose --noupdate 
> http://hg.python.org/cpython test

(...)
manifests: 5271/93170 chunks (5.66%)
manifests: 5272/93170 chunks (5.66%)
manifests: 5273/93170 chunks (5.66%)
manifests: 5274/93170 chunks (5.66%)
manifests: 5275/93170 chunks (5.66%)
manifests: 5276/93170 chunks (5.66%)
manifests: 5277/93170 chunks (5.66%)
manifests: 5278/93170 chunks (5.66%)
transaction abort!
rollback completed
Traceback (most recent call last):
  File "mercurial\dispatch.pyc", line 54, in _runcatch
  File "mercurial\dispatch.pyc", line 490, in _dispatch
  File "mercurial\dispatch.pyc", line 351, in runcommand
  File "mercurial\dispatch.pyc", line 541, in _runcommand
  File "mercurial\dispatch.pyc", line 495, in checkargs
  File "mercurial\dispatch.pyc", line 488, in 
  File "mercurial\util.pyc", line 420, in check
  File "mercurial\commands.pyc", line 725, in clone
  File "mercurial\hg.pyc", line 334, in clone
  File "mercurial\localrepo.pyc", line 1853, in clone
  File "mercurial\localrepo.pyc", line 1206, in pull
  File "mercurial\localrepo.pyc", line 1695, in addchangegroup
  File "mercurial\revlog.pyc", line 1239, in addgroup
  File "mercurial\changegroup.pyc", line 31, in chunkiter
  File "mercurial\changegroup.pyc", line 20, in getchunk
  File "mercurial\util.pyc", line 924, in read
  File "mercurial\httprepo.pyc", line 22, in zgenerator
IOError: [Errno None] connection ended unexpectedly
abort: connection ended unexpectedly

which appears to die mid-stream while receiving the manifests.

So I'm sort of hoping there might be some record server-side as to why
things are falling apart mid-way.

-- David



Re: [Python-Dev] XP buildbot problem cloning from hg.python.org

2014-10-24 Thread David Bolen
David Bolen  writes:

> which appears to die mid-stream while receiving the manifests.
>
> So I'm sort of hoping there might be some record server-side as to why
> things are falling apart mid-way.

Just to follow up to myself, I get the same error trying to do a
clone from my own personal XP machine rather than the buildbot (which
is a VM).  I've had the issue with hg 1.6.2, 2.5.2 and 3.1.2.

However, the same clones complete successfully under OSX and Linux.

So that's sort of strange.

-- David



Re: [Python-Dev] XP buildbot problem cloning from hg.python.org

2014-10-24 Thread David Bolen
Donald Stufft  writes:

> What version of OpenSSL is it using.

I'm using the pre-built Windows Mercurial installer, but if I unpack
the included library.zip, the SSLEAY32.DLL shows version 0.9.8r.

This is from the 3.1.2 install I just did a few hours ago.  It appears
that hg 2.5.2 on my other XP box also has 0.9.8r.  The prior buildbot
version (1.6.2) looks like it had 0.9.8o.

I also got around to trying a manual clone on the Windows 7 buildbot,
and it worked fine, even with the older hg 1.6.2.

So it seems to correlate with XP more than anything else at the moment.

-- David



Re: [Python-Dev] XP buildbot problem cloning from hg.python.org

2014-10-24 Thread David Bolen
Do you mean your local repo?  If so, I don't have a local repo at this
point - the failure is during the first clone.

-- David

On Sat, Oct 25, 2014 at 1:19 AM, Steve Dower 
wrote:

>   I was seeing this recently and had to run recover on my repo (not sure
> what the command line is for that - TortoiseHg had a menu). YMMV, but the
> symptoms sound the same.
>
> Cheers,
> Steve
>
>


Re: [Python-Dev] XP buildbot problem cloning from hg.python.org

2014-10-25 Thread David Bolen
Donald Stufft  writes:

> I have an idea, can you run https://bpaste.net/show/c5d7cd102f5b and
> tell me what it outputs? Both on a machine that works and one that
> doesn’t.

All but Linux (so XP/7 buildbots, XP standalone, OSX) return:
   ('DHE-RSA-AES128-SHA', 'TLSv1/SSLv3', 128)
My Linux (Ubuntu 12.04) returns:
   ('ECDHE-RSA-AES128-SHA', 'TLSv1/SSLv3', 128)
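
For anyone without the paste handy, something along these lines reports the
negotiated cipher in that (name, protocol, bits) format; this is a stand-in
sketch, not the actual paste contents:

    import socket, ssl

    s = ssl.create_default_context().wrap_socket(
        socket.create_connection(('hg.python.org', 443)),
        server_hostname='hg.python.org')
    print(s.cipher())
    s.close()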

The script was run under a default Python on each box (2.6 on Windows,
2.7 on OSX and Linux).  I tried 2.6 through 3.1 on my standalone XP with
no change, so I don't think it differs by Python version.  It's not
precisely the same as running hg, since hg has its own embedded Python
under Windows, but I installed hg from source on my XP box under 2.7
and it fails a clone the same way.

In new news though, I just got the same failure on the Win7 buildbot in a
clone test.  In repeated attempts, that's the only one so far.

I also realized that one shared feature is that the XP boxes were using
IPv4 while the other boxes were all IPv6 (an HE tunnel on my side).
Though my earlier Win7 failure was also IPv6.  I manually forced the
Win7 box to use IPv4, but didn't see much difference.  It certainly didn't
start failing like the XP boxes.

Anecdotally, the failing XP attempts appear to be running slower in
general (with lower transfer rates as monitored by my router).  I have
had slow clones work on other boxes, so that's not automatically bad.
But I wonder if it's still some sort of timeout somewhere.

I don't think I currently have an active ssh account, but if there were
a way to test a clone over ssh rather than http perhaps that would be a
useful data point, in terms of eliminating some middlemen processing.

-- David




Re: [Python-Dev] XP buildbot problem cloning from hg.python.org

2014-10-25 Thread David Bolen
As another data point, I've tried cloning randomly selected other
repositories from hg.python.org, and smaller repositories (distutils2,
peps, jython to name a few) are all working fine under XP, even though
with jython, for example, the clone takes longer in wall-clock time than
it often takes the cpython clone to fail.(*)

A test of what I presumed was a more comparably sized repository
(features/cdecimal) dies like cpython.

-- David

(*) Overall clone time is probably unrelated anyway since the XP
buildbot traditionally needed 10+min for clones in the past (such as
when the new build script changes were in place and every test used a
clone) and was working fine with that.



Re: [Python-Dev] XP buildbot problem cloning from hg.python.org

2014-10-27 Thread David Bolen
Ned Deily  writes:

> Update: after consulting with Donald on IRC, it appears that the problem 
> was on the python.org end and is now fixed.  David, is it now working 
> again for you?

Sorry for the delay - yes, it appears to be working again for me as
well.  And it looks like clones during the buildbot tests were working
again as of tests yesterday.

-- David



Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-29 Thread David Cournapeau
On Wed, Oct 29, 2014 at 3:25 PM, Antoine Pitrou  wrote:

> On Thu, 30 Oct 2014 01:09:45 +1000
> Nick Coghlan  wrote:
> >
> > Lots of folks are happy with POSIX emulation layers on Windows, as
> > they're OK with "basically works" rather than "works like any other
> > native application". "Basically works" isn't sufficient for many
> > Python-on-Windows use cases though, so the core ABI is a platform
> > native one, rather than a POSIX emulation.
> >
> > This makes Python fit in more cleanly with other Windows applications,
> > but makes it harder to write Python applications that span both POSIX
> > and Windows.
>
> I don't really understand why that's the case. Only the
> building and packaging may be more difficult, and that assumes you're
> familiar with mingw32. But mingw32, AFAIK, doesn't make the Windows
> runtime magically POSIX-compatible (Cygwin does, to some extent).
>

mingw32 is a more compliant C compiler (VS2008 does not implement much from
C89), and it does implement quite a few things not implemented in the C
runtime, especially for math.

But TBH, those are not compelling cases to build python itself on mingw,
only to better support C extensions with mingw.

David


> Regards
>
> Antoine.
>


Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-29 Thread David Cournapeau
On Wed, Oct 29, 2014 at 5:17 PM, David Cournapeau 
wrote:

>
>
> On Wed, Oct 29, 2014 at 3:25 PM, Antoine Pitrou 
> wrote:
>
>> On Thu, 30 Oct 2014 01:09:45 +1000
>> Nick Coghlan  wrote:
>> >
>> > Lots of folks are happy with POSIX emulation layers on Windows, as
>> > they're OK with "basically works" rather than "works like any other
>> > native application". "Basically works" isn't sufficient for many
>> > Python-on-Windows use cases though, so the core ABI is a platform
>> > native one, rather than a POSIX emulation.
>> >
>> > This makes Python fit in more cleanly with other Windows applications,
>> > but makes it harder to write Python applications that span both POSIX
>> > and Windows.
>>
>> I don't really understand why that's the case. Only the
>> building and packaging may be more difficult, and that assumes you're
>> familiar with mingw32. But mingw32, AFAIK, doesn't make the Windows
>> runtime magically POSIX-compatible (Cygwin does, to some extent).
>>
>
> mingw32 is a more compliant C compiler (VS2008 does not implement much
> from C89)
>

That should read "much from C99", of course, otherwise VS 2008 would have
been a completely useless C compiler!

David


Re: [Python-Dev] Who's using VS/Windows to work on Python?

2014-11-13 Thread David Bolen
Steve Dower  writes:

> Also, who currently owns the Windows buildbots and are you
> willing/able to add a VS 2015 Preview installation (or give me
> access so I can do it)?  (...)

I've got several of the Windows buildbots, and could add this.  Is there
benefit to just starting with one (I've got XP, Win7 and Win8) first to
save some time?

My only real concern is to verify that this will co-exist properly with
the existing VS installations and buildbot build process.  I note that
the download page says not to use on production computers.  That's
probably just a CYA, and I don't mind if VS itself has issues, but if it
risks compromising other aspects of the buildbot, then I think testing
on a different machine would be preferred.

Is anything aside from VS 2015 itself (e.g., the Ultimate 2015 download
on the preview page) needed?  I'm assuming not.

-- David





Re: [Python-Dev] Who's using VS/Windows to work on Python?

2014-11-13 Thread David Bolen
Steve Dower  writes:

> Starting with just the Win7 or Win8 one would be fine. Python 3.5
> won't support XP, and VS 2015 doesn't support XP (though I believe
> it will still be able to build for XP, just not *on* XP).

Ok, I'll probably try the Win8 buildbot first then.  I'll let you know
when it's available.

FYI, the VS 2015 requirements page seems to include XP as a supported
client OS (if I'm reading it correctly), and says the requirements are
the same as for VS 2013.  Of course, as you say it won't matter if
we're dropping XP for Python 3.5.

> VS 2010 should be fine. The most likely issues are with VS 2013 (for
> teams that haven't updated their setup authoring yet), but these
> should have been ironed out already.

Ok, the buildbots are just VS 2008 and VS 2010, so no 2013 to worry about.

> Nope, that's it. (...)

Sounds good.

-- David



Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?

2014-11-23 Thread David Wilson
On Sun, Nov 23, 2014 at 07:39:30PM -0500, Donald Stufft wrote:

> I don’t think this is really all that big of a deal. If we want to
> move off of Github doing so is easy. There are lots of (not nearly as
> good as but probably still better than what we have now) OSS software
> that gives you a github like flow. The only *data* that is really in
> there is what’s stored in the repository itself (since I don’t think
> for anything major we’d ever put issues there or use the wiki) which
> is trivial to move around.

Assuming PRs are enabled on the new repo, how will that interact with
patch review/the issue tracker? Is the goal here to replace the existing
process wholesale in one step? It doesn't seem fair to say that moving
to GitHub will be easy unless interrupting every contributor's flow is
considered "free", or some Rietveld bridge script magically writes
itself.  The former impacts review bandwidth, the latter infrastructure
bandwidth (which already seems quite contended, e.g. given the job board
is still MIA).


David


Re: [Python-Dev] ubuntu buildbot

2014-11-23 Thread David Bolen
Yeah, it definitely needs it.  Historically it was intentional, as my own
servers were all on 8.04, but the last of those moved to 12.04 last year.

I think there's already a 12.04 buildbot, so perhaps 14.04 would be
better?  I do prefer sticking with an LTS.

It'll need to move to 64-bit given the hosting environment, at least for
the kernel.  I could do 32-bit userspace via multiarch if keeping a 32-bit
buildbot in the mix is attractive.

-- David


On Sun, Nov 23, 2014 at 10:18 PM, Benjamin Peterson 
wrote:

> Hi David,
> I noticed you run the "Builder x86 Ubuntu Shared" buildbot. It seems
> it's running a very old version of Ubuntu. Is there any chance of
> getting that updated?
>
> Regards,
> Benjamin
>


Re: [Python-Dev] New Windows installer for Python 3.5

2015-01-09 Thread David Anthoff
> Nothing is merged in yet and everything can still change, so I'm keen to
> hear whatever feedback people have. I've tried to make improvements fairly
> for first-time users through to sysadmins, but if I've missed something big
> I'd like to hear about it before we get too close to alpha 1.

It would be great if the new installer supported silent, portable installs.
What I have in mind is a silent installation that drops all the necessary
files for a working python into a folder, but does not put ANY file anywhere
else and does not register anything anywhere on the system. So no PATH
modification, no registering of this install as one of the available python
interpreters, no uninstall entry or anything like that. By "all the
necessary files" I mean that it drops things like the MSVC runtime dlls etc
into that folder into which I'm installing python, but again doesn't try to
install things like that system or even user wide.

I'll give you my very specific use-case, but I think this might be useful
for others as well: we are trying to build an installer for some other
product X that internally requires a python instance to work. This python
instance should NOT be visible to anything other than product X, i.e. no
user should ever start it directly or anything like that, it really is an
implementation detail of product X. If we had an official python installer
that supported silent, portable installs, I could just call that python
installer inside the setup program for X, and it would drop a fully working
python installation into a sub-directory of the install directory of product
X. And we would be happy :)

The old MSI installer sort of had something like that with the MSI
administrative install option. But it never really worked because the
administrative install didn't drop the MSVC runtime dlls anywhere, as far as
I could tell. At some point I hacked around that by modifying the MSI file
in my setup program to also drop the MSVC runtime dlls, but that was VERY
hacky... 

Thanks,
David


Re: [Python-Dev] New Windows installer for Python 3.5

2015-01-09 Thread David Anthoff
> I'll look into this, but it probably isn't going to work as part of the
> installer. I have previously looked into being able to install arbitrary
> side-by-side copies of Python, but that's near impossible as well. Windows
> Installer doesn't really let you just copy files - it isn't part of its
> intended functionality. It isn't too difficult to build custom MSIs with
> certain parts of Python (such as the DLLs and the standard library, but no
> docs, headers or EXEs) in a way that won't conflict with other installs,
> but you're still using an MSI here which is not necessarily ideal.

Are administrative MSI installs an option, though? They don't register
anything but just drop files, right? But see my comments below about a zip
drop, which would be a much, much nicer option in my opinion. 

> We could release a ZIP file containing all the Python files.

That would be absolutely FANTASTIC and would solve all problems around this
topic. In theory we could handle this ourselves on our end, but this is for
a small open source project and we are really hesitant to take on another
software packaging job. Doing this right just for our product would be a
fair amount of work, we would have to do this with every new Python release,
we might mix things up etc., and really we are more interested in our piece
of software than packaging Python ;) I think it would be a much nicer model
if there was a zip to download from python.org. There is another issue that
keeps us from hosting our own files: we would have to figure out licensing
issues, both related to python and to the msvcr*.dll and msvcp*.dll. I would
feel much more comfortable if we didn't distribute python or other binaries,
but just downloaded stuff from python.org as part of the setup process.

> The only reason I hesitate on this is that it could cause significant
> confusion for someone who doesn't really understand the implications, while
> people like yourself who have thought about this are also capable of finding
> workarounds and don't really need the ZIP file apart from convenience.

I was not clear in my previous email. I have NOT found a way to work around
this. I have tried various hacks, but none really works. I got pretty far,
but none really worked in all cases in a robust way. So I would certainly
welcome a downloadable zip file a great deal. Is there maybe a compromise
for now to have such a zip on the server, but not advertise it widely, and
maybe put an "experimental"/"beta" moniker into the filename?

I assume you would include the MS VC runtime files msvcr100.dll and
msvcp100.dll in such a zip file?

Is there any chance this might even be done (as an experimental version) for
Python 3.4?

> Making some of the fixes to
> make python.exe more portable would relieve my concerns here.

I see that. For our cases things seem to work, but I agree, it would be good
if python.exe would try hard to work in a xcopy mode.

> > The old MSI installer sort of had something like that with the MSI
> > administrative install option. But it never really worked because the
> > administrative install didn't drop the MSVC runtime dlls anywhere, as
> > far as I could tell. At some point I hacked around that by modifying
> > the MSI file in my setup program to also drop the MSVC runtime dlls, but
> > that was VERY hacky...
> 
> I'm a little surprised that worked at all for what you were trying to do.
> You'd be better off installing it once and then copying the files yourself.

I got it to drop the msvcr100.dll, but haven't managed to get the
msvcp100.dll out via an administrative install. The whole approach is a
mess, I would MUCH prefer a zip file.
 
> But overall, this is the sort of thing I do want to enable. I firmly believe
> that Python from python.org is for *developers*, and those who just want to
> run a Python application should be able to get a complete package. This is
> very different from the *nix approach, but it makes far more sense to Windows
> users. A good example is TortoiseHg, which bundles everything so that users
> never even know they have Python 2.7 or Mercurial installed on their machine.
> Making it easier for people to bundle Python into their own applications is a
> good thing, as far as I'm concerned.

Yes, those are good examples. Right now doing this in the way these guys do
is too much work for our small project... Anything that makes this easier
would be appreciated.

Thanks! And the new installer looks great in general.

Best,
David


Re: [Python-Dev] ubuntu buildbot

2015-02-25 Thread David Bolen
On Mon, Nov 24, 2014 at 11:07 AM, Benjamin Peterson  wrote:
>
> On Mon, Nov 24, 2014, at 00:33, David Bolen wrote:
>> Yeah, it definitely needs it.  Historically it was intentional as my own
>> servers were all on 8.04, but the last of those moved 12.04 last year.
>>
>> I think there's already a 12.04 buildbot, so perhaps 14.04 would be
>> better?  I do prefer sticking with an LTS.
>
> 14.04 would be good.
>
>> It'll need to move to 64-bit given the hosting environment, at least for
>> the kernel.  I could do 32-bit userspace via multiarch if keeping a 32-bit
>> buildbot in the mix is attractive.
>
> It'd be nice to keep a 32-bit bot around.

Took a bit longer than anticipated, but the slave upgrade is complete.

The bolen-ubuntu slave is now a 32-bit Ubuntu 14.04.2 LTS system.
I've re-run the most recent 2.7, 3.4 and 3.x builds which all pass
(though with a few new compiler warnings in some cases).

-- David


[Python-Dev] Why does python use relative instead of absolute path when calling LoadLibrary*

2015-03-11 Thread David Cournapeau
Hi,

While looking at the import code of python for C extensions, I was
wondering why we pass a relative path instead of an absolute path to
LoadLibraryEx (see bottom for some context).

In python 2.7, the full path existence was even checked before calling into
LoadLibraryEx (
https://github.com/python/cpython/blob/2.7/Python/dynload_win.c#L189), but
it looks like this check was removed in python 3.x branch.

Is there any defined behaviour that depends on this path being relative?

Context
---

The reason why I am interested in this is the potential use of
SetDllDirectory to share dlls between multiple python extensions.
Currently, the only solutions I am aware of are:

1. putting the dlls in the PATH
2. bundling the dlls side by side the .pyd
3. patching packages to use preloading (using e.g. ctypes; see the sketch below)
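
For reference, option 3 usually amounts to something like the following. This
is only a sketch; the directory, DLL and module names are made up:

    import ctypes
    import os

    # Load the shared dependency once, from a "private" directory, before
    # importing the extension that links against it.  Once it is loaded in
    # the process, the .pyd resolves it without any PATH changes.
    _dll_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), '_private_dlls')
    _shared = ctypes.CDLL(os.path.join(_dll_dir, 'shared_dep.dll'))

    import my_extension  # hypothetical .pyd depending on shared_dep.dll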

I am investigating a solution 4, where the dlls would be put in a separate
"private" directory known only to python itself, without the need to modify
PATH.

Patching python to use SetDllDirectory("some private paths specific to a
python interpreter") works perfectly, except that it slightly changes the
semantics of LoadLibraryEx so that it no longer looks for dlls in the current
directory. This breaks importing extensions built in place, unless I modify
the call in https://github.com/python/cpython/blob/2.7/Python/dynload_win.c#L195
from:
from:

hDLL = LoadLibraryEx(pathname, NULL, LOAD_WITH_ALTERED_SEARCH_PATH)

to

hDLL = LoadLibraryEx(pathbuf, NULL, LOAD_WITH_ALTERED_SEARCH_PATH)

That seems to work, but I am quite worried about changing any import
semantics by accident.

David


Re: [Python-Dev] Why does python use relative instead of absolute path when calling LoadLibrary*

2015-03-13 Thread David Cournapeau
Thank you both for your answers.

I will go ahead with this modification, and see how it goes.

David

On Thu, Mar 12, 2015 at 2:41 AM, Wes Turner  wrote:

>
> On Mar 11, 2015 3:36 PM, "David Cournapeau"  wrote:
> >
> > Hi,
> >
> > While looking at the import code of python for C extensions, I was
> > wondering why we pass a relative path instead of an absolute path to
> > LoadLibraryEx (see bottom for some context).
> >
> > In python 2.7, the full path existence was even checked before calling
> > into LoadLibraryEx (
> > https://github.com/python/cpython/blob/2.7/Python/dynload_win.c#L189),
> > but it looks like this check was removed in python 3.x branch.
> >
> > Is there any defined behaviour that depends on this path to be relative?
>
> Just a guess: does it have to do with resolving symlinks (w/ POSIX
> filesystems)?
>


Re: [Python-Dev] Buildbot PPC64 AIX 3.x failures

2015-03-28 Thread David Edelsohn
On Sat, Mar 28, 2015 at 5:57 AM, Victor Stinner
 wrote:
> Hi,
>
> There are many failures on the AIX buildbot. Can someone try to fix
> them? Or would it be possible to turn off the buildbot to quickly see
> regressions?
>
> http://buildbot.python.org/all/builders/PPC64%20AIX%203.x/builds/3426/steps/test/logs/stdio
>
> The buildbot has not enough memory, and some tests are failing since
> more than 3 months.

I have cleaned out the build directories and restarted the buildslave.

There is more than enough memory, and most process memory limits are
unlimited.  One of the tests presumably crashed in a bad way that
prevented normal cleanup by the buildslave.

- David


Re: [Python-Dev] Buildbot x86 XP-4 3.x doesn't compile anymore: drop it?

2015-03-28 Thread David Bolen
Victor Stinner  writes:

> Would it be possible to fix this buildbot, or to turn it off?
(...)
> By the way, do we seriously want to support Windows XP? I mean, *who*
> will maintain it (not me, sorry!).  I saw recent changes to explicitly
> *drop* support for Windows older than Vista (stop using GetTickCount,
> always call GetTickCount64, for time.monotonic).

Disabling the 3.x branch (or 3.5+ going forward) on the XP buildbot
seems plausible to me, assuming the issue is unlikely to be fixed (or
someone found with time to look into it).

If, as Tim suggests in his response, 3.5+ need not be officially
supported on XP anyway, it may not be worth much effort to address.
At some point (such as switching to VS 2015), we aren't going to be
able to compile newer releases under XP anyway.

I'm assuming you aren't suggesting turning off the XP buildbot
entirely, correct?  As long as any fixes are still getting applied to
2.x or earlier 3.x releases, I do think having a buildbot covering
those branches for XP would be useful, if only to catch possible
issues in back-ported fixes.  There's still an amazing number of XP
systems in service.  While I doubt very many of those running XP also
need the latest 3.x release of Python, I expect there are quite a few
active 2.x users.

-- David



Re: [Python-Dev] A macro for easier rich comparisons

2015-04-28 Thread David Malcolm
On Tue, 2015-04-28 at 10:50 -0700, Glenn Linderman wrote:
> On 4/28/2015 2:13 AM, Victor Stinner wrote:
> 
> > > #define Py_RETURN_RICHCOMPARE(val1, val2, op)                              \
> > >     do {                                                                   \
> > >         switch (op) {                                                      \
> > >         case Py_EQ: if ((val1) == (val2)) Py_RETURN_TRUE; Py_RETURN_FALSE; \
> > >         case Py_NE: if ((val1) != (val2)) Py_RETURN_TRUE; Py_RETURN_FALSE; \
> > >         case Py_LT: if ((val1) < (val2)) Py_RETURN_TRUE; Py_RETURN_FALSE;  \
> > >         case Py_GT: if ((val1) > (val2)) Py_RETURN_TRUE; Py_RETURN_FALSE;  \
> > >         case Py_LE: if ((val1) <= (val2)) Py_RETURN_TRUE; Py_RETURN_FALSE; \
> > >         case Py_GE: if ((val1) >= (val2)) Py_RETURN_TRUE; Py_RETURN_FALSE; \
> > >         }                                                                  \
> > >         Py_RETURN_NOTIMPLEMENTED;                                          \
> > >     } while (0)
> > I would prefer a function for that:
> > 
> > PyObject *Py_RichCompare(long val1, long val2, int op);
> Why would you prefer a function?  As a macro, when the op is a
> constant, most of the code would be optimized away by a decent
> compiler.
> 
> I suppose when the op is not a constant, then a function would save
> code space.
> 
> So I suppose it depends on the predominant use cases.

There's also the possibility of wrapping C++ code that uses overloaded
operators: having it as a macro could allow those C++ operators to be
mapped into Python.

Hope this is constructive
Dave



Re: [Python-Dev] Ancient use of generators

2015-05-07 Thread David Mertz
I'm glad to see that everything old is new again.  All the stuff being
discussed nowadays, even up through PEP 492, was largely what I was trying
to show in 2002; the syntax just got nicer in the intervening 13 years
:-).

On Wed, May 6, 2015 at 10:57 AM, Guido van Rossum  wrote:

> For those interested in tracking the history of generators and coroutines
> in Python, I just found out that PEP 342
> <https://www.python.org/dev/peps/pep-0342/> (which introduced
> send/throw/close and made "generators as coroutines" a mainstream Python
> concept) harks back to PEP 288 <https://www.python.org/dev/peps/pep-0288/>,
> which was rejected. PEP 288 also proposed some changes to generators. The
> interesting bit though is in the references: there are two links to old
> articles by David Mertz that describe using generators in state machines
> and other interesting and unconventional applications of generators. All
> these well predated PEP 342, so yield was a statement and could not receive
> a value from the function calling next() -- communication was through a
> shared class instance.
>
> http://gnosis.cx/publish/programming/charming_python_b5.txt
> http://gnosis.cx/publish/programming/charming_python_b7.txt
>
> Enjoy!
>
> --
> --Guido van Rossum (python.org/~guido)
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.


Re: [Python-Dev] Tracker reviews look like spam

2015-05-12 Thread David Wilson
SPF only covers the envelope sender, so it should be possible to set
that to something that validates with SPF, keep the RFC822 From: header
as it is, and maybe(?) include a separate Sender: header matching the
envelope address.


David

On Tue, May 12, 2015 at 06:08:30PM -0400, Terry Reedy wrote:
> Gmail dumps patch review email in my junk box. The problem seems to be the
> spoofed From: header.
> 
> Received: from psf.upfronthosting.co.za ([2a01:4f8:131:2480::3])
> by mx.google.com with ESMTP id
> m1si26039166wjy.52.2015.05.12.00.20.38
> for ;
> Tue, 12 May 2015 00:20:38 -0700 (PDT)
> Received-SPF: softfail (google.com: domain of transitioning
> storch...@gmail.com does not designate 2a01:4f8:131:2480::3 as permitted
> sender) client-ip=2a01:4f8:131:2480::3;
> 
> Tracker reviews are the only false positives in my junk list. Otherwise, I
> might stop reviewing. Verizon does not even deliver mail that fails its junk
> test, so I would not be surprised if there are people who simply do not get
> emailed reviews.
> 
> Tracker posts are sent from Person Name 
> Perhaps reviews could come 'from' Person Name 
> 
> Even direct tracker posts just get a neutral score.
> Received-SPF: neutral (google.com: 2a01:4f8:131:2480::3 is neither permitted
> nor denied by best guess record for domain of
> roundup-ad...@psf.upfronthosting.co.za) client-ip=2a01:4f8:131:2480::3;
> 
> SPF is Sender Policy Framework
> https://en.wikipedia.org/wiki/Sender_Policy_Framework
> 
> Checkins mail, for instance, gets an SPF 'pass' because python.org
> designates mail.python.org as a permitted sender.


Re: [Python-Dev] Why aren't decorators just expressions?

2017-09-16 Thread David Mertz
I always realized the restriction was there, and once in a while mention it
in teaching. But I've NEVER had an actual desire to use anything other than
a simple decorator or a "decorator factory" (which I realize is a decorator
in the grammar, but it's worth teaching how to parameterize custom ones,
which is a factory).
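
For the record, the kind of "decorator factory" I mean is just this (a toy
example, not from the original mail); note that repeat(3) is an ordinary call
on a plain name, so nothing here needs a more liberal grammar:

    import functools

    def repeat(n):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                result = None
                for _ in range(n):
                    result = fn(*args, **kwargs)
                return result
            return wrapper
        return decorator

    @repeat(3)
    def greet():
        print("hi")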

I've used barry_as_FLUFL more often, actually... Albeit always joking
around for students, not in production covfefe.

On Sep 16, 2017 9:46 AM, "Barry Warsaw"  wrote:

On Sep 16, 2017, at 02:39, Larry Hastings  wrote:

> I'm not proposing that we allow arbitrary expressions as decorators...
well, I'm not doing that yet at least.  But like I said, the syntax has
been this way for 13 years and I don't recall anybody complaining.

Indeed, I can’t remember a single time where I’ve needed that, let alone
actually realized the restriction existed.  But now that you mention it, I
do remember discussions in favor of the more restricted syntax when the
feature was originally being debated.  I don’t remember the reasons though
- it well could have been an abundance of caution over how far to take the
new syntax (and understanding of course that it’s easier to relax than
restrict).

-Barry




Re: [Python-Dev] Investigating time for `import requests`

2017-10-08 Thread David Cournapeau
On Mon, Oct 2, 2017 at 6:42 PM, Raymond Hettinger <
raymond.hettin...@gmail.com> wrote:

>
> > On Oct 2, 2017, at 12:39 AM, Nick Coghlan  wrote:
> >
> >  "What requests uses" can identify a useful set of
> > avoidable imports. A Flask "Hello world" app could likely provide
> > another such sample, as could some example data analysis notebooks).
>
> Right.  It is probably worthwhile to identify which parts of the library
> are typically imported but are not ever used.  And likewise, identify a
> core set of commonly used tools that are going to be almost unavoidable in
> sufficiently interesting applications (like using requests to access a REST
> API, running a micro-webframework, or invoking mercurial).
>
> Presumably, if any of this is going to make a difference to end users, we
> need to see if there is any avoidable work that takes a significant
> fraction of the total time from invocation through the point where the user
> first sees meaningful output.  That would include loading from nonvolatile
> storage, executing the various imports, and doing the actual application.
>
> I don't expect to find anything that would help users of Django, Flask,
> and Bottle since those are typically long-running apps where we value
> response time more than startup time.
>
> For scripts using the requests module, there will be some fruit because
> not everything that is imported is used.  However, that may not be
> significant because scripts using requests tend to be I/O bound.  In the
> timings below, 6% of the running time is used to load and run python.exe,
> another 16% is used to import requests, and the remaining 78% is devoted to
> the actual task of running a simple REST API query. It would be interesting
> to see how much of the 16% could be avoided without major alterations to
> requests, to urllib3, and to the standard library.
>

It is certainly true that for a CLI tool that actually does any network
I/O, especially SSL, import times will quickly be negligible. It becomes
tricky for complex tools, because of error management. For example, a
common pattern I have used in the past is to have a high-level "catch all
exceptions" function that dispatches the CLI command:

try:
    main_function(...)
except ErrorKind1:
    ...
except requests.exceptions.SSLError:
    # gives a complete message about options when receiving SSL errors,
    # e.g. an invalid certificate
    ...

This pattern requires importing requests every time the command is run,
even if no network IO is actually done. For complex CLI tools, maybe most
commands don't use network IO (the tool in question was a complete package
manager), but you pay ~100 ms for the requests import on every command.
It is particularly visible because command latency starts to be felt
around 100-150 ms, and while you can do a lot in python in 100-150 ms, you
can't do much in 0-50 ms.
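
The obvious contortion is to defer the import into the handler itself, so the
cost is only paid on the error path; a rough sketch of what I mean (not the
actual tool's code):

    def run_cli(main_function, *args):
        try:
            main_function(*args)
        except Exception as exc:
            import requests  # deferred: only imported if something went wrong
            if isinstance(exc, requests.exceptions.SSLError):
                print("SSL error; see --help for certificate options")
            else:
                raise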

David


> For mercurial, "hg log" or "hg commit" will likely be instructive about
> what portion of the imports actually get used.  A push or pull will likely
> be I/O bound so those commands are less informative.
>
>
> Raymond
>
>
> - Quick timing for a minimal script using the requests module
> ---
>
> $ cat > demo_github_rest_api.py
> import requests
> info = requests.get('https://api.github.com/users/raymondh').json()
> print('%(name)s works at %(company)s. Contact at %(email)s' % info)
>
> $ time python3.6 demo_github_rest_api.py
> Raymond Hettinger works at SauceLabs. Contact at None
>
> real    0m0.561s
> user    0m0.134s
> sys     0m0.018s
>
> $ time python3.6 -c "import requests"
>
> real    0m0.125s
> user    0m0.104s
> sys     0m0.014s
>
> $ time python3.6 -c ""
>
> real    0m0.036s
> user    0m0.024s
> sys     0m0.005s
>


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-22 Thread David Mertz
I worked at a molecular dynamics lab for a number of years. I advocated
switching all our code to using attosecond units (rather than fractional
picoseconds).

However, this had nothing whatsoever to do with the machine clock speeds,
but only with the physical quantities represented and the scaling/rounding
math.

It didn't happen, for various reasons. But if it had, I certainly wouldn't
have expected standard library support for this. The 'time' module is about
wall-clock or calendar time, not about *simulation time*.

FWIW, a very long simulation might cover a millisecond of simulated
time; we're a very long way from looking at molecular behavior over 104
days.
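
For scale: 104 days is roughly where a float64 count of seconds stops being
able to represent individual nanoseconds, which is the arithmetic PEP 564 is
concerned with. A quick sketch, added here purely for illustration:

    t_ns = 1700000000 * 10**9 + 1   # ~54 years after the epoch, plus one nanosecond
    t_s = t_ns / 1e9                # the same instant as a float of seconds
    print(round(t_s * 1e9) - t_ns)  # -1: the extra nanosecond is already lost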

On Oct 22, 2017 8:10 AM, "Wes Turner"  wrote:



On Saturday, October 21, 2017, Nick Coghlan  wrote:

> On 22 October 2017 at 09:32, Victor Stinner 
> wrote:
>
>> On 21 Oct 2017 at 20:31, "francismb"  wrote:
>>
>> I understand that one can just multiply/divide the nanoseconds returned,
>> (or it could be a factory) but wouldn't it help for future enhancements
>> to reduce the number of functions (the 'pico' question)?
>>
>>
>> If you ask me to predict the future, I predict that CPU frequency will be
>> stuck below 10 GHz for the next 10 years :-)
>>
>
> There are actually solid physical reasons for that prediction likely being
> true. Aside from the power consumption, heat dissipation, and EM radiation
> issues that arise with higher switching frequencies, you also start running
> into more problems with digital circuit metastability ([1], [2]): the more
> clock edges you have per second, the higher the chances of an asynchronous
> input changing state at a bad time.
>
> So yeah, for nanosecond resolution to not be good enough for programs
> running in Python, we're going to be talking about some genuinely
> fundamental changes in the nature of computing hardware, and it's currently
> unclear if or how established programming languages will make that jump
> (see [3] for a gentle introduction to the current state of practical
> quantum computing). At that point, picoseconds vs nanoseconds is likely to
> be the least of our conceptual modeling challenges :)
>

There are current applications that need finer-than-nanosecond precision:

- relativity experiments
- particle experiments

Must they always use their own implementations of time., datetime.__init__,
fromordinal, fromtimestamp?!

- https://scholar.google.com/scholar?q=femtosecond
- https://scholar.google.com/scholar?q=attosecond
- GPS now supports nanosecond resolution
- https://en.wikipedia.org/wiki/Quantum_clock#More_accurate_experimental_clocks

> In 2015 JILA evaluated the absolute frequency uncertainty of their latest
> strontium-87 optical lattice clock at 2.1 × 10−18, which corresponds to a
> measurable gravitational time dilation for an elevation change of 2 cm
> (0.79 in)

What about bus latency (and variance)?

From https://www.nist.gov/publications/optical-two-way-time-and-frequency-transfer-over-free-space :

> Optical two-way time and frequency transfer over free space
> Abstract
> The transfer of high-quality time-frequency signals between remote
locations underpins many applications, including precision navigation and
timing, clock-based geodesy, long-baseline interferometry, coherent radar
arrays, tests of general relativity and fundamental constants, and future
redefinition of the second. However, present microwave-based time-frequency
transfer is inadequate for state-of-the-art optical clocks and oscillators
that have femtosecond-level timing jitter and accuracies below 1 × 10−17.
Commensurate optically based transfer methods are therefore needed. Here we
demonstrate optical time-frequency transfer over free space via two-way
exchange between coherent frequency combs, each phase-locked to the local
optical oscillator. We achieve 1 fs timing deviation, residual instability
below 1 × 10^−18 at 1,000 s and systematic offsets below 4 × 10^−19, despite
frequent signal fading due to atmospheric turbulence or obstructions across
the 2 km link. This free-space transfer can enable terrestrial links to
support clock-based geodesy. Combined with satellite-based optical
communications, it provides a path towards global-scale geodesy,
high-accuracy time-frequency distribution and satellite-based relativity
experiments.

How much wider must an epoch-relative time struct be for various realistic
time precisions/accuracies?

10^-6   micro  µ
10^-9   nano   n  -- int64
10^-12  pico   p
10^-15  femto  f
10^-18  atto   a
10^-21  zepto  z
10^-24  yocto  y
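
Back-of-the-envelope (my own arithmetic, nothing from the stdlib): a signed
integer count of units-since-epoch needs roughly this many bits to span
+/- 292 years at each resolution:

import math

SECONDS_IN_292_YEARS = 292 * 365.25 * 24 * 3600

for prefix, exp in [("micro", -6), ("nano", -9), ("pico", -12),
                    ("femto", -15), ("atto", -18), ("zepto", -21),
                    ("yocto", -24)]:
    units = SECONDS_IN_292_YEARS / 10 ** exp
    bits = math.ceil(math.log2(units)) + 1  # +1 for the sign bit
    print("%-5s (10^%d): ~%d bits" % (prefix, exp, bits))

So int64 nanoseconds is already at its limit (~292 years); anything finer
needs a wider -- or split seconds/fraction -- representation.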

I'm at a loss to recommend a library to prefix these with the epoch; but
future compatibility may be a helpful, realistic objective.

Natural keys with such time resolution are still unfortunately likely to
collide.


>
> Cheers,
> Nick.
>
> [1] https://en.wikipedia.org/wiki/Metastability_in_electronics
> [2] https

Re: [Python-Dev] Guarantee ordered dict literals in v3.7?

2017-11-06 Thread David Mertz
I strongly opposed adding an ordered guarantee to regular dicts. If the
implementation happens to keep that, great. Maybe OrderedDict can be
rewritten to use the dict implementation. But the evidence that all
implementations will always be fine with this restraint feels poor, and we
have a perfectly good explicit OrderedDict for those who want that.

On Nov 6, 2017 7:39 PM, "Brett Cannon"  wrote:

>
>
> On Mon, 6 Nov 2017 at 11:08 Paul Sokolovsky  wrote:
>
>> Hello,
>>
>> On Mon, 06 Nov 2017 17:58:47 +
>> Brett Cannon  wrote:
>>
>> []
>>
>> > > Why suddenly once in 25 years there's a need to do something to
>> > > dict's, violating computer science background behind them (one of
>> > > the reason enough people loved Python comparing to other "practical
>> > > hack" languages)?
>> >
>> > I don't understand what "computer science background" is being
>> > violated?
>>
>> I tried to explain that in the previous mail, can try a different
>> angle. So, please open you favorite CS book (better few) and look up
>> "abstract data types", then "mapping/associative array" and "list". We
>> can use Wikipedia too: https://en.wikipedia.org/wiki/Associative_array.
>> So, please look up: "Operations associated with this data type allow".
>> And you'll see, that there're no "ordering" related operations are
>> defined. Vice versa, looking at "sequence" operations, there will be
>> "prev/next", maybe "get n'th" element operations, implying ordering.
>>
>
> I don't think you meant for this to come off as insulting, but telling me
> how to look up the definition of an associative array or map feels like
> you're putting me down. I also have a Ph.D. in computer science so I'm
> aware of the academic definitions of these data structures.
>
>
>>
>> Python used to be a perfect application of these principles. Its dict
>> was a perfect CS implementation of an abstract associative array, and
>> list - of "sequence" abstract type (with additional guarantee of O(1)
>> random element access).
>
>
>> People knew and rejoiced that Python is built on solid science
>> principles, or could *learn* them from it.
>
> That no longer will be true,
>> with a sound concept being replaced with on-the-spot practical hack,
>> choosing properties of a random associative array algorithm
>> implementation over properties of a superset of such algorithms (many
>> of which are again don't offer any orderness guarantees).
>>
>>
> I don't think it's fair to call the current dict implementation a hack.
> It's a sound design that has a certain property that we are discussing the
> masking of. As I said previously, I think this discussion comes down to
> whether we think there are pragmatic benefits to exposing the ordered
> aspects to the general developer versus not.
>
> -Brett
>
>
>>
>>
>> I know though what will be replied (based on the replies below): "all
>> these are implementation details" - no, orderness vs non-orderness of a
>> mapping algorithm is an implementation detail; "users shouldn't know all
>> that" - they should, that's the real knowledge, and up until now, they
>> could learn that from *Python docs*, "we can't predict future" - we
>> don't need, we just need to know the past (25 years in our case), and
>> understand why it was done like that, I don't think Guido couldn't code
>> it ordered in 1991, it's just not natural for a mapping type to be so,
>> and in 2017, it's not more natural than it was in 1991.
>
>
>>
>> MicroPython in particular appeared because Python offered all the
>> CS-sound properties and freedom and alternative choices for
>> implementation (more so than any other scripting language). It's losing
>> it, and not just for MicroPython's surprise.
>>
>>
>> []
>>
>>
>> --
>> Best regards,
>>  Paul  mailto:pmis...@gmail.com
>>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> mertz%40gnosis.cx
>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Guarantee ordered dict literals in v3.7?

2017-11-07 Thread David Mertz
On Nov 6, 2017 9:11 PM, "Raymond Hettinger" 
wrote:

> On Nov 6, 2017, at 8:05 PM, David Mertz  wrote:
> I strongly opposed adding an ordered guarantee to regular dicts. If the
implementation happens to keep that, great. Maybe OrderedDict can be
rewritten to use the dict implementation. But the evidence that all
implementations will always be fine with this restraint feels poor, and we
have a perfectly good explicit OrderedDict for those who want that.

I think this post is dismissive of the value that users would get from
having reliable ordering by default.


Dismissive seems like an overly strong word. I recognize I disagree with
Raymond on best official semantics. Someone else points out that if someday
an "even more efficient unordered dict" is discovered, user-facing "dict"
doesn't strictly have to be the same data structure as "internal dict". The
fact they are is admittedly an implementation detail also.

I've had all those same uses about round-tripping serialization that
Raymond mentions. I know the standard work arounds (which are not
difficult, but DO require a little extra code if we don't have order).
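
For the JSON case, the usual spelling of that workaround is just the
object_pairs_hook knob:

import json
from collections import OrderedDict

text = '{"b": 1, "a": 2, "c": 3}'
data = json.loads(text, object_pairs_hook=OrderedDict)  # keep the file's order
assert json.dumps(data) == text  # writes back out in the original order

Not hard, but it is extra ceremony every single time.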

But like Raymond, I make most of my living TEACHING Python. I feel like the
extra order guarantee would make teaching slightly harder. I'm sure he
feels contrarily. It is true that with 3.6 I can no longer show an example
where the dict display is oddly changed when printed. But then, unordered
sets also wind up sorting small integers on printing, even though that's
not a guarantee.

Ordering by insertion order (possibly "only until first deletion") is
simply not obvious to beginners. If we had, hypothetically, a dict that
"always alphabetized keys" that would be more intuitive to them, for
example. Insertion order feels obvious to us experts, but it really is an
extra cognitive burden to learners beyond understanding "key/value
association".

Having worked with Python 3.6 for a while, it is repeatedly delightful to
encounter the effects of ordering.  When debugging, it is a pleasure to be
able to easily see what has changed in a dictionary.  When creating XML, it
is joy to see the attribs show in the same order you added them.  When
reading a configuration, modifying it, and writing it back out, it is a
godsend to have it written out in about the same order you originally typed
it in.  The same applies to reading and writing JSON.  When adding a VIA
header in a HTTP proxy, it is nice to not permute the order of the other
headers. When generating url query strings for REST APIs, it is nice have
the parameter order match documented examples.

We've lived without order for so long that it seems that some of us now
think data scrambling is a virtue.  But it isn't.  Scrambled data is the
opposite of human friendly.


Raymond


P.S. Especially during debugging, it is often inconvenient, difficult, or
impossible to bring in an OrderedDict after the fact or to inject one into
third-party code that is returning regular dicts.  Just because we have
OrderedDict in collections doesn't mean that we always get to take
advantage of it.  Plain dicts get served to us whether we want them or not.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tricky way of of creating a generator via a comprehension expression

2017-11-22 Thread David Mertz
Inasmuch as I get to opine, I'm +1 on SyntaxError. There is no behavior for
that spelling that I would find intuitive or easy to explain to students.
And as far as I can tell, the ONLY time anything has ever been spelled that
way is in comments saying "look at this weird edge case behavior in Python."

On Nov 22, 2017 10:57 AM, "Jelle Zijlstra"  wrote:



2017-11-22 9:58 GMT-08:00 Guido van Rossum :

> Wow, 44 messages in 4 hours. That must be some kind of record.
>
> If/when there's an action item, can someone summarize for me?
>
> The main disagreement seems to be about what this code should do:

g = [(yield i) for i in range(3)]

Currently, this makes `g` into a generator, not a list. Everybody seems to
agree this is nonintuitive and should be changed.

One proposal is to make it so `g` gets assigned a list, and the `yield`
happens in the enclosing scope (so the enclosing function would have to be
a generator). This was the way things worked in Python 2, I believe.

Another proposal is to make this code a syntax error, because it's
confusing either way. (For what it's worth, that would be my preference.)

There is related discussion about the semantics of list comprehensions
versus calling list() on a generator expression, and of async semantics,
but I don't think there's any clear point of action there.


> --
> --Guido van Rossum (python.org/~guido)
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/jelle.
> zijlstra%40gmail.com
>
>

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/
mertz%40gnosis.cx
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Allow tuple unpacking in return and yield statements

2017-11-25 Thread David Cuthbert
First time contributing back -- if I should be filing a PEP or something like 
that for this, please let me know.

Coming from https://bugs.python.org/issue32117, unparenthesized tuple unpacking 
is allowed in assignments:

rest = (4, 5, 6)
a = 1, 2, 3, *rest

but not in yield or return statements (these result in SyntaxErrors):

return 1, 2, 3, *rest
yield 1, 2, 3, *rest
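
Wrapping the right-hand side in parentheses is the accepted workaround today:

rest = (4, 5, 6)

def f():
    return (1, 2, 3, *rest)

def g():
    yield (1, 2, 3, *rest)

print(f())        # (1, 2, 3, 4, 5, 6)
print(next(g()))  # (1, 2, 3, 4, 5, 6)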

The unpacking in assignments was enabled by a pre-3.2 commit that I haven't yet 
been able to track back to a discussion, but I suspect this asymmetry is 
unintentional. Here's the original commit:
https://github.com/python/cpython/commit/4905e80c3d2f6abb613d212f0313d1dfe09475dc

I've submitted a patch (CLA is signed and submitted, not yet processed), and 
Serhiy said that since it changes the grammar I should have it reviewed here
and have signoff by the BDFL.

While I haven't had a need for this myself, it was brought up by a user on 
StackOverflow 
(https://stackoverflow.com/questions/47272460/python-tuple-unpacking-in-return-statement/47326859).

Thanks!
Dave


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tricky way of of creating a generator via a comprehension expression

2017-11-25 Thread David Mertz
FWIW, on a side point. I use 'yield' and 'yield from' ALL THE TIME in real
code. Probably 80% of those would be fine with yield statements, but a
significant fraction use `gen.send()`.

On the other hand, I have yet to once use 'await' or 'async' outside of
pedagogical contexts. There are a whole lot of generators, including ones
utilizing state injection, that are useful without the scaffolding of an
event loop, in synchronous code.

Of course, I never use them in comprehensions or generator expressions. And
even after reading every post in this thread, the behavior (either existing
or desired by some) such constructs have is murky and difficult for me to
reason about. I strongly support deprecation or even just immediate
SyntaxError in 3.7.

On Nov 25, 2017 12:38 PM, "Guido van Rossum"  wrote:

> On Sat, Nov 25, 2017 at 12:17 PM Brett Cannon  wrote:
>>
> On Fri, Nov 24, 2017, 19:32 Guido van Rossum,  wrote:
>>>
 On Fri, Nov 24, 2017 at 4:22 PM, Guido van Rossum 
 wrote:

> The more I hear about this topic, the more I think that `await`,
> `yield` and `yield from` should all be banned from occurring in all
> comprehensions and generator expressions. That's not much different from
> disallowing `return` or `break`.
>

 From the responses it seems that I tried to simplify things too far.
 Let's say that `await` in comprehensions is fine, as long as that
 comprehension is contained in an `async def`. While we *could* save `yield
 [from]` in comprehensions, I still see it as mostly a source of confusion,
 and the fact that the presence of `yield [from]` *implicitly* makes the
 surrounding `def` a generator makes things worse. It just requires too many
 mental contortions to figure out what it does.

 I still propose to rule out all of the above from generator
 expressions, because those can escape from the surrounding scope.

>>>
>>> +1 from me
>>>
>>
> On Sat, Nov 25, 2017 at 9:21 AM, Yury Selivanov 
> wrote:
>
>> So we are keeping asynchronous generator expressions as long as they are
>> defined in an 'async def' coroutine?
>>
>
> I would be happy to declare that `await` is out of scope for this thread.
> It seems that it is always well-defined and sensible what it does in
> comprehensions and in genexprs. (Although I can't help noticing that PEP
> 530 does not appear to propose `await` in generator expressions -- it
> proposes `async for` in comprehensions and in genexprs, and `await` in
> comprehensions only -- but they appear to be accepted nevertheless.)
>
> So we're back to the original issue, which is that `yield` inside a
> comprehension accidentally makes it become a generator rather than a list,
> set or dict. I believe that this can be fixed. But I don't believe we
> should fix it. I believe we should ban `yield` from comprehensions and from
> genexprs. We don't need it, and it's confused most everyone. And the ban
> should extend to `yield from` in those same contexts. I think we have a
> hope for consensus on this.
>
> (I also think that if we had invented `await` earlier we wouldn't have
> gone down the path of `yield` expressions -- but historically it appears we
> wouldn't have invented `await` at all if we hadn't first tried `yield` and
> then `yield from` to build coroutines, so I don't think this so bad after
> all. :-)
>
> --
> --Guido van Rossum (python.org/~guido)
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> mertz%40gnosis.cx
>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Tricky way of of creating a generator via a comprehension expression

2017-11-25 Thread David Mertz
On Sat, Nov 25, 2017 at 3:37 PM, Guido van Rossum  wrote:

> Maybe you didn't realize async/await don't need an event loop? Driving an
> async/await-based coroutine is just as simple as driving a yield-from-based
> one (`await` does exactly the same thing as `yield from`).
>

I realize I *can*, but it seems far from straightforward.  I guess this is
really a python-list question or something, but what is the async/await
spelling of something toy like:

In [1]: def fib():
   ...:     a, b = 1, 1
   ...:     while True:
   ...:         yield a
   ...:         a, b = b, a+b
   ...:

In [2]: from itertools import takewhile

In [3]: list(takewhile(lambda x: x<200, fib()))
Out[3]: [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]



> Maybe the rest of the discussion should be about deprecation vs.
> SyntaxError in Python 3.7.
>

I vote SyntaxError, of course. :-)

-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Using async/await in place of yield expression

2017-11-26 Thread David Mertz
Changing subject line because this is way off to the side.  Guido and
Nathaniel point out that you can do everything yield expressions do with
async/await *without* an explicit event loop.  While I know that is true,
it feels like the best case is adding fairly considerable ugliness to the
code in the process.


> On Sat, Nov 25, 2017 at 3:37 PM, Guido van Rossum 
> wrote:
> > Maybe you didn't realize async/await don't need an event loop? Driving an
> > async/await-based coroutine is just as simple as driving a
> yield-from-based
> > one (`await` does exactly the same thing as `yield from`).
>


> On Sun, Nov 26, 2017 at 12:29 PM, Nathaniel Smith  wrote:
> Technically anything you can write with yield/yield from could also be
> written using async/await and vice-versa, but I think it's actually
> nice to have both in the language.
>

Here is some code which is definitely "toy", but follows a pattern pretty
similar to things I really code using yield expressions:

In [1]: from itertools import takewhile
In [2]: def injectable_fib(a=1, b=2):
   ...:     while True:
   ...:         new = yield a
   ...:         if new is not None:
   ...:             a, b = new
   ...:         a, b = b, a+b
   ...:
In [3]: f = injectable_fib()
In [4]: list(takewhile(lambda x: x<200, f))
Out[4]: [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
In [5]: f.send((100,200))
Out[5]: 200
In [6]: list(takewhile(lambda x: x<1000, f))
Out[6]: [300, 500, 800]


Imagining that 'yield' vanished from the language tomorrow, and I wanted to
write the same thing with async/await, I think the best I can come up with
is... actually, I just don't know how to do it without any `yield`.

I can get as far as a slightly flawed:

In [9]: async def atakewhile(pred, coro):
   ...:     l = []
   ...:     async for x in coro:
   ...:         if pred(x):
   ...:             return l
   ...:         l.append(x)


But I just have no idea what would go in the body of

async def afib_injectable():


(that is, if I'm prohibited a `yield` in there)
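
The closest I can picture -- purely my own sketch, nothing from the stdlib --
is to bury the `yield` inside a tiny helper awaitable, so the `async def`
itself stays yield-free, and then drive it with plain .send() calls:

class Inject:
    # the one place a bare `yield` still lives: inside __await__
    def __init__(self, value):
        self.value = value
    def __await__(self):
        new = yield self.value  # value pops out of coro.send(...)
        return new              # ...and whatever is sent back comes out of `await`

async def afib_injectable(a=1, b=2):
    while True:
        new = await Inject(a)
        if new is not None:
            a, b = new
        a, b = b, a+b

f = afib_injectable()
print(f.send(None))        # 1
print(f.send(None))        # 2
print(f.send((100, 200)))  # 200
print(f.send(None))        # 300

So the plumbing still bottoms out in a `yield`; it's just hidden one layer
down.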

-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] What's the status of PEP 505: None-aware operators?

2017-11-29 Thread David Mertz
I like much of the thinking in Random's approach. But I still think None
isn't quite special enough to warrant it's own syntax.

However, his '(or None: name.strip()[4:].upper())' makes me realize that
what is being asked in all the '?(', '?.', '?[' syntax ideas is a kind of
ternary expression.  Except the ternary isn't based on whether a predicate
holds, but rather on whether an exception occurs (AttributeError, KeyError,
TypeError).  And the fallback in the ternary is always None rather than
being general.

I think we could generalize this to get something both more Pythonic and
more flexible.  E.g.:

val = name.strip()[4:].upper() except None

This would just be catching all errors, which is perhaps too broad.  But it
*would* allow a fallback other than None:

val = name.strip()[4:].upper() except -1

I think some syntax could be possible to only "catch" some exceptions and
let others propagate.  Maybe:

val = name.strip()[4:].upper() except (AttributeError, KeyError): -1

I don't really like throwing a colon in an expression though.  Perhaps some
other word or symbol could work instead.  How does this read:

val = name.strip()[4:].upper() except -1 in (AttributeError, KeyError)

Where the 'in' clause at the end would be optional, and default to
'Exception'.
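
None of that is real syntax, of course. In today's Python the closest I can
get is a small helper (the name `trying` is just my invention):

def trying(thunk, default=None, catch=(Exception,)):
    try:
        return thunk()
    except catch:
        return default

name = None
val = trying(lambda: name.strip()[4:].upper(),
             default=-1, catch=(AttributeError, KeyError))
print(val)  # -1, since None.strip() raises AttributeError

That works, but it buries the interesting expression inside a lambda.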

I'll note that what this idea DOES NOT get us is:

  val = timeout ?? local_timeout ?? global_timeout

Those values that are "possibly None" don't raise exceptions, so they
wouldn't apply to this syntax.

Yours, David...


On Wed, Nov 29, 2017 at 9:03 AM, Random832  wrote:

> On Tue, Nov 28, 2017, at 15:31, Raymond Hettinger wrote:
> >
> > > I also cc python-dev to see if anybody here is strongly in favor or
> against this inclusion.
> >
> > Put me down for a strong -1.   The proposal would occasionally save a few
> > keystokes but comes at the expense of giving Python a more Perlish look
> > and a more arcane feel.
> >
> > One of the things I like about Python is that I can walk non-programmers
> > through the code and explain what it does.  The examples in PEP 505 look
> > like a step in the wrong direction.  They don't "look like Python" and
> > make me feel like I have to decrypt the code to figure-out what it does.
> >
> > timeout ?? local_timeout ?? global_timeout
> > 'foo' in (None ?? ['foo', 'bar'])
> > requested_quantity ?? default_quantity * price
> > name?.strip()[4:].upper()
> > user?.first_name.upper()
>
> Since we're looking at different syntax for the ?? operator, I have a
> suggestion for the ?. operator - and related ?[] and ?() that appeared
> in some of the proposals. How about this approach?
>
> Something like (or None: ...) as a syntax block in which any operation
> [lexically within the expression, not within e.g. called functions, so
> it's different from simply catching AttributeError etc, even if that
> could be limited to only catching when the operand is None] on None that
> is not valid for None will yield None instead.
>
> This isn't *entirely* equivalent, but offers finer control.
>
> v = name?.strip()[4:].upper() under the old proposal would be more or
> less equivalent to:
>
> v = name.strip()[4:].upper() if name is not None else None
>
> Whereas, you could get the same result with:
> (or None: name.strip()[4:].upper())
>
> Though that would technically be equivalent to these steps:
> v = name.strip if name is not None else None
> v = v()     if v is not None else None
> v = v[4:]   if v is not None else None
> v = v.upper if v is not None else None
> v = v()     if v is not None else None
>
> The compiler could optimize this case since it knows none of the
> operations are valid on None. This has the advantage of being explicit
> about what scope the modified rules apply to, rather than simply
> implicitly being "to the end of the chain of dot/bracket/call operators"
>
> It could also be extended to apply, without any additional syntax, to
> binary operators (result is None if either operand is None) (or None: a
> + b), for example, could return None if either a or b is none.
>
> [I think I proposed this before with the syntax ?(...), the (or None:
> ...) is just an idea to make it look more like Python.]
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> mertz%40gnosis.cx
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Allow tuple unpacking in return and yield statements

2017-11-30 Thread David Cuthbert
Henk-Jaap noted that the grammar section of the language ref for yield and 
return should also be updated from expression_list to starred_list with this 
change. As noted elsewhere, this isn't in-sync with the Grammar file 
(intentionally, if I understand correctly).

I took a look, and I believe that every instance of expression_list (which 
doesn't allow the unparenthesized tuple unpacking) should be changed to 
starred_list. Which might really mean that starred_list should have never 
existed, and the changes should have been put into expression_list in the first 
place (though I understand the desire to be conservative with syntax changes).

Here are the places where expression_list is still allowed (after fixing return 
and yield):

subscription ::= primary "[" expression_list "]"
augmented_assignment_stmt ::=  augtarget augop (expression_list | yield_expression)
for_stmt ::=  "for" target_list "in" expression_list ":" suite
  ["else" ":" suite]

In other words, the following all produce SyntaxErrors today (and enclosing 
them in parentheses avoids this):
a[1, *rest]
a += 1, *rest # and other augops: -= *= /= etc.
for i in 1, *rest:

My hunch is these cases should also be fixed to be consistent. While I can't 
see myself using something like "a += 1, *rest" in the immediate future, it 
seems weird to be inconsistent in these cases (and reinforces the oft-mistaken 
assumption, from Terry's earlier reply, that tuples are defined by parentheses 
instead of commas).

Any reason I shouldn't dig in and fix this while I'm here?

Dave


On 11/25/17, 9:03 PM, Nick Coghlan wrote:

On 26 November 2017 at 09:22, Terry Reedy  wrote:
> Since return and yield are often the first half of a cross-namespace
> assignment, requiring the () is a bit surprising.  Perhaps someone else 
has
> a good reason for the difference.

These kinds of discrepancies tend to arise because there are a few
different grammar nodes for "comma separated sequence of expressions",
which makes it possible to miss some when enhancing the tuple syntax.

Refactoring the grammar to eliminate the duplication isn't especially
easy,  and we don't change the syntax all that often, so it makes
sense to treat cases like this one as bugs in the implementation of
the original syntax change (except that the "don't change the Grammar
in maintenance releases" guideline means they still need to be handled
as new features when it comes to fixing them).

Cheers,
Nick.

P.S. That said, I do wonder if it might be feasible to write a
"Grammar consistency check" test that ensured the known duplicate
nodes at least have consistent definitions, such that missing one in a
syntax update will cause an automated test failure. Unfortunately, the
nodes typically haven't been combined because they have some
*intentional* differences in exactly what they allow, so I also
suspect that this is easier said than done.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Is static typing still optional?

2017-12-17 Thread David Mertz
On Sun, Dec 17, 2017 at 8:22 AM, Guido van Rossum  wrote:

> On Sun, Dec 17, 2017 at 2:11 AM, Julien Salort  wrote:
>
>> Naive question from a lurker: does it mean that it works also if one
>> annotates with something that is not a type, e.g. a comment,
>>
>> @dataclass
>> class C:
>> a: "This represents the amplitude" = 0.0
>> b: "This is an offset" = 0.0
>
>
> I would personally not use the notation for this, but it is legal code.
> However static type checkers like mypy won't be happy with this.
>

Mypy definitely won't like that use of annotation, but documentation
systems might.  For example, in a hover tooltip in an IDE/editor, it's
probably more helpful to see the descriptive message than "int" or "float"
for the attribute.
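
Those annotations are all visible at runtime, which is presumably what a doc
tool or IDE tooltip would key off of:

from dataclasses import dataclass

@dataclass
class C:
    a: "This represents the amplitude" = 0.0
    b: "This is an offset" = 0.0

print(C.__annotations__)
# {'a': 'This represents the amplitude', 'b': 'This is an offset'}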

What about data that isn't built-in scalars? Does this look right to people
(and will mypy be happy with it)?

@dataclass
class C:
    a: numpy.ndarray = numpy.random.random((3,3))
    b: MyCustomClass = MyCustomClass("foo", 37.2, 1+2j)

I don't think those look terrible, but I think this looks better:

@dataclass
class C:
    a: Infer = np.random.random((3,3))
    b: Infer = MyCustomClass("foo", 37.2, 1+2j)

Where the name 'Infer' (or some other spelling) was a name defined in the
`dataclasses` module.  In this case, I don't want to use `typing.Any` since
I really do want "the type of thing the default value has."

-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Is static typing still optional?

2017-12-22 Thread David Mertz
The name Data seems very intuitive to me without suggesting type
declaration as Any does (but it can still be treated as a synonym by actual
type checkers)

On Dec 22, 2017 12:12 PM, "Paul Moore"  wrote:

> On 22 December 2017 at 19:50, Gregory P. Smith  wrote:
>
> > My preference for this is "just use Any" for anyone not concerned about
> the
> > type.  But if we wanted to make it more opaque so that people need not
> > realizing that they are actually type annotations, I suggest adding an
> alias
> > for Any in the dataclasses module (dataclasses.Data = typing.Any)
> >
> > from dataclasses import dataclass, Data
> >
> > @dataclass
> > class Swallow:
> >     weight_in_oz: Data = 5
> >     laden: Data = False
> >     species: Data = SwallowSpecies.AFRICAN
> >
> > the word "Data" is friendlier than "Any" in this context for people who
> > don't need to care about the typing module.
> >
> > We could go further and have Data not be an alias for Any if desired (so
> > that its repr wouldn't be confusing, not that anyone should be looking at
> > its repr ever).
>
> That sounds like a nice simple proposal. +1 from me.
>
> Documentation can say that variables should be annotated with "Data"
> to be recognised by the decorator, and if people are using type
> annotations an actual type can be used in place of "Data" (which acts
> the same as typing.Any. That seems to me to describe the feature in a
> suitably type-hinting-neutral way, while still making it clear how
> data classes interact with type annotations.
>
> Paul
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> mertz%40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Guido's Python 1.0.0 Announcement from 27 Jan 1994

2018-01-27 Thread David Mertz
Does anyone have an archive of the Python 1.0 documentation?  Sadly
http://www.cwi.nl/~guido/Python.html is not a live URL :-).

On Sat, Jan 27, 2018 at 9:08 AM, Chris Angelico  wrote:

> On Sun, Jan 28, 2018 at 3:58 AM, Senthil Kumaran 
> wrote:
> > Someone in HackerNews shared the Guido's Python 1.0.0 announcement from
> 27
> > Jan 1994. That is, on this day, 20 years ago.
> >
> > https://groups.google.com/forum/?hl=en#!original/comp.
> lang.misc/_QUzdEGFwCo/KIFdu0-Dv7sJ
> >
> > It is very entertaining to read.
>
> Yes, it is. In twenty years, some things have not changed at all:
>
> > Python is an interpreted language, and has the usual advantages of
> > such languages, such as run-time checks (e.g. bounds checking),
> > execution of dynamically generated code, automatic memory allocation,
> > high level operations on strings, lists and dictionaries (associative
> > arrays), and a fast edit-compile-run cycle.  Additionally, it features
> > modules, classes, exceptions, and dynamic linking of extensions
> > written in C or C++.  It has arbitrary precision integers.
>
> But some things have:
>
> > (Please don't ask me to mail it to you -- at 1.76 Megabytes it is
> > unwieldy at least...)
>
> hehe.
>
> Thanks for digging that up!
>
> ChrisA
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> mertz%40gnosis.cx
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dataclasses and correct hashability

2018-02-02 Thread David Mertz
I agree with Ethan, Elvis, and a few others. I think 'hash=True,
frozen=False' should be disabled in 3.7.  It's an attractive nuisance.
Maybe not so attractive because of its obscurity, but still with no clear
reason to exist.

If many users of dataclass find themselves defining '__hash__' with a
mutable dataclass, it's perfectly possible to allow the switch combination
later. But taking it out after previously allowing it—even if every use in
the wild is actually a bug in waiting—is harder.

On Feb 2, 2018 2:10 PM, "Ethan Furman"  wrote:

> On 02/02/2018 08:09 AM, Eric V. Smith wrote:
>
>> On 2/2/2018 10:56 AM, Elvis Pranskevichus wrote:
>>
>
> My point is exactly that there is _no_ valid use case, so (hash=True,
>>> frozen=False) should not be a thing!  Why are you so insistent on adding
>>> a dangerous option which you admit is nearly useless?
>>>
>>
>> Because it's not the default, it will be documented as being an advanced
>> use case, and it's useful in rare instances.
>>
>
> Personally, I don't think advanced use-cases need to be supported by flags
> as they can be supported by just writing the __dunder__ methods.
>
> --
> ~Ethan~
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/mertz%
> 40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dataclasses and correct hashability

2018-02-05 Thread David Mertz
Absolutely I agree. 'unsafe_hash' as a name is a clear warning to users.
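
Concretely, the spelling under discussion would look something like this
(a sketch, assuming the flag lands as Guido describes below):

from dataclasses import dataclass, field

@dataclass(unsafe_hash=True)  # "I know it's mutable; hash it anyway"
class Node:
    name: str
    parent: "Node" = None
    # excluded from the generated __eq__/__hash__:
    children: list = field(default_factory=list, compare=False)

n = Node("root")
n.children.append(Node("leaf", parent=n))
print(hash(n) == hash(Node("root")))  # True: the excluded field doesn't affect the hash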

On Feb 4, 2018 10:43 PM, "Chris Barker"  wrote:



On Sun, Feb 4, 2018 at 11:57 PM, Gregory P. Smith  wrote:

> +1 using unsafe_hash as a name addresses my concern.
>
mine too -- anyone surprised by using this deserves what they get :-)

-CHB


On Sun, Feb 4, 2018, 9:50 PM Guido van Rossum  wrote:
>
>> Looks like this is turning into a major flamewar regardless of what I
>> say. :-(
>>
>> I really don't want to lose the ability to add a hash function to a
>> mutable dataclass by flipping a flag in the decorator. I'll explain below.
>> But I am fine if this flag has a name that clearly signals it's an unsafe
>> thing to do.
>>
>> I propose to replace the existing (as of 3.7.0b1) hash= keyword for the
>> @dataclass decorator with a simpler flag named unsafe_hash=. This would be
>> a simple bool (not a tri-state flag like the current hash=None|False|True).
>> The default would be False, and the behavior then would be to add a hash
>> function automatically only if it's safe (using the same rules as for
>> hash=None currently). With unsafe_hash=True, a hash function would always
>> be generated that takes all fields into account except those declared using
>> field(hash=False). If there's already a `def __hash__` in the function I
>> don't care what it does, maybe it should raise rather than quietly doing
>> nothing or quietly overwriting it.
>>
>> Here's my use case.
>>
>> A frozen class requires a lot of discipline, since you have to compute
>> the values of all fields before calling the constructor. A mutable class
>> allows other initialization patterns, e.g. manually setting some fields
>> after the instance has been constructed, or having a separate non-dunder
>> init() method. There may be good reasons for using these patterns, e.g. the
>> object may be part of a cycle (e.g. parent/child links in a tree). Or you
>> may just use one of these patterns because you're a pretty casual coder. Or
>> you're modeling something external.
>>
>> My point is that once you have one of those patterns in place, changing
>> your code to avoid them may be difficult. And yet your code may treat the
>> objects as essentially immutable after the initialization phase (e.g. a
>> parse tree). So if you create a dataclass and start coding like that for a
>> while, and much later you need to put one of these into a set or use it as
>> a dict key, switching to frozen=True may not be a quick option. And writing
>> a __hash__ method by hand may feel like a lot of busywork. So this is where
>> [unsafe_]hash=True would come in handy.
>>
>> I think naming the flag unsafe_hash should take away most objections,
>> since it will be clear that this is not a safe thing to do. People who
>> don't understand the danger are likely to copy a worse solution from
>> StackOverflow anyway. The docs can point to frozen=True and explain the
>> danger.
>>
>> --
>> --Guido van Rossum (python.org/~guido)
>> ___
>> Python-Dev mailing list
>> Python-Dev@python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/greg%
>> 40krypto.org
>>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris.
> barker%40noaa.gov
>
>


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/
mertz%40gnosis.cx
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dataclasses and correct hashability

2018-02-06 Thread David Mertz
Honestly, the name I would most want for the keyword argument is '_hash'.
That carries the semantics I desire.

On Feb 6, 2018 10:13 AM, "Ethan Furman"  wrote:

> On 02/06/2018 09:38 AM, Guido van Rossum wrote:
>
> Where do you get the impression that one would have to explicitly request
>> __hash__ if frozen=True is set? To the
>> contrary, my proposal is for @dataclass to automatically add a __hash__
>> method when frozen=True is set. This is what the
>> code currently released as 3.7.0b1 does if hash=None (the default).
>>
>
> Which is my issue with the naming -- although, really, it's more with the
> parameter/argument:  in a hand-written class,
>
>   __hash__ = None
>
> means the object in is not hashable, but with the decorator:
>
>   @dataclass(..., hash=None, ...)
>
> it means something else.
>
> My preference for "fixing" the issue:
>
> 1) make the default be a custom object (not None), so that `hash=None`
>means disable hashing
>
> 2) change the param name -- maybe to `add_hash` (I agree with D'Aprano
>that `unsafe_hash` can be misleading)
>
> --
> ~Ethan~
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/mertz%
> 40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] The `for y in [x]` idiom in comprehensions

2018-02-23 Thread David Mertz
On Feb 23, 2018 9:26 PM, "Steven D'Aprano"  wrote:

Given a potentially expensive DRY violation like:

[(function(x), function(x)+1) for x in sequence]

there are at least five ways to solve it.


A 6th way is to wrap the expensive function in @lru_cache() to make it
non-expensive.


[(a, a+1) for x in sequence for a in [function(x)]]


It's funny to me how many people, even the BDFL, have said this is tricky
to reason about or recognize. I didn't think of it all by myself, but saw
it somewhere years ago. It seemed obvious once I saw it. Since then it's
something I do occasionally in my code without much need for thought.
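
Side by side, with a toy stand-in for the expensive function:

from functools import lru_cache

def function(x):  # pretend this is expensive
    return x * 10

sequence = range(5)

# the nested "loop over a one-element list" idiom
pairs = [(a, a + 1) for x in sequence for a in [function(x)]]

# the lru_cache route: the second call per x is a cache hit
cached = lru_cache(maxsize=None)(function)
pairs2 = [(cached(x), cached(x) + 1) for x in sequence]

assert pairs == pairs2 == [(0, 1), (10, 11), (20, 21), (30, 31), (40, 41)]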
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] The `for y in [x]` idiom in comprehensions

2018-02-23 Thread David Mertz
FWIW, the nested loop over a single item is already in the language for 15
years or something. It's not that ugly, certainly not enough to need a new
'let' or 'where' keyword that basically does exactly the same thing with 3
fewer characters.

On Feb 23, 2018 10:04 PM, David Mertz  wrote:


On Feb 23, 2018 9:26 PM, "Steven D'Aprano"  wrote:

Given a potentially expensive DRY violation like:

[(function(x), function(x)+1) for x in sequence]

there are at least five ways to solve it.


A 6th way is to wrap the expensive function in @lru_cache() to make it
non-expensive.


[(a, a+1) for x in sequence for a in [function(x)]]


It's funny to me how many people, even the BDFL, have said this is tricky
to reason about or recognize. I didn't think of it all by myself, but saw
it somewhere years ago. It seemed obvious once I saw it. Since then it's
something I do occasionally in my code without much need for thought.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Symmetry arguments for API expansion

2018-03-12 Thread David Mertz
If anyone cares, my vote is to rip out both .as_integer_ratio() and
.is_integer() from Python. I've never used either and wouldn't want to.

Both seem like perfectly good functions for the `math` module, albeit the
former is simply the Fraction() constructor.
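
Concretely:

from fractions import Fraction

print((0.1).as_integer_ratio())  # (3602879701896397, 36028797018963968)
print(Fraction(0.1))             # Fraction(3602879701896397, 36028797018963968)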

I can see no sane reason why anyone would ever call float.is_integer()
actually. That should always be spelled math.isclose(x, int(x)) because
IEEE-754. Attractive nuisance is probably too generous, I'd simply call the
method a bug.

On Mon, Mar 12, 2018, 2:21 PM Tim Peters  wrote:

> [Guido]
> >  as_integer_ratio() seems mostly cute (it has Tim Peters all
> > over it),
>
> Nope!  I had nothing to do with it.  I would have been -0.5 on adding
> it had I been aware at the time.
>
> - I expect the audience is tiny.
>
> - While, ya, _I_ have uses for it, I had a utility function for it
> approximately forever (it's easily built on top of math.frexp()).
>
> - Especially now, fractions.Fraction(some_float) is the same thing
> except for return type.
>
>
> > OTOH it looks like Decimal has it,
>
> Looks like ints got it first, and then spread to Decimal because "why
> not?" ;-)  The first attempt to spread it to Decimal I found was
> rejected (which would have been my vote too):
>
> https://bugs.python.org/issue8947
>
>
> > so I think this ship has sailed too and maybe it's best to add it to the
> > numeric tower just to be done with it.
>
> Or rip it out of everything.  Either way works for me ;-)
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Symmetry arguments for API expansion

2018-03-12 Thread David Mertz
On Mon, Mar 12, 2018, 3:25 PM Tim Peters  wrote:

> [David Mertz ]
> > ...
> > I can see no sane reason why anyone would ever call float.is_integer()
> > actually. That should always be spelled math.isclose(x, int(x)) because
> > IEEE-754. Attractive nuisance is probably too generous, I'd simply call
> the
> > method a bug.
>
> Sometimes it's necessary to know, and especially when _implementing_
> 754-conforming functions.  For example, what negative infinity raised
> to a power needs to return depends on whether the power is an integer
> (specifically on whether it's an odd integer):
>
> >>> (-math.inf) ** 3.1
> inf
>

Weird. I take it that's what IEEE-754 says. NaN would sure be more
intuitive here since inf+infj is not in the domain of Reals. Well,
technically neither is inf, but at least it's the limit of the domain. :-).

>>> (-math.inf) ** 3.0 # NOTE THIS ONE
> -inf
> >>> (-math.inf) ** 2.9
> inf
>
> But, ya, for most people most of the time I agree is_integer() is an
> attractive nuisance.  People implementing math functions are famous
> for cheerfully enduring any amount of pain needed to get the job done
> ;-)
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Deprecating float.is_integer()

2018-03-21 Thread David Mertz
I've been using and teaching python for close to 20 years and I never
noticed that x.is_integer() exists until this thread. I would say the "one
obvious way" is less than obvious.

On the other hand, `x == int(x)` is genuinely obvious... and it immediately
suggests the probably better `math.isclose(x, int(x))`, which is what you
usually mean.

On Wed, Mar 21, 2018, 2:08 PM Mark Dickinson  wrote:

> I'd prefer to see `float.is_integer` stay. There _are_ occasions when one
> wants to check that a floating-point number is integral, and on those
> occasions, using `x.is_integer()` is the one obvious way to do it. I don't
> think the fact that it can be misused should be grounds for deprecation.
>
> As far as real uses: I didn't find uses of `is_integer` in our code base
> here at Enthought, but I did find plenty of places where it _could_
> reasonably have been used, and where something less readable like `x % 1 ==
> 0` was being used instead. For evidence that it's generally useful: it's
> already been noted that the decimal module uses it internally. The mpmath
> package defines its own "isint" function and uses it in several places: see
> https://github.com/fredrik-johansson/mpmath/blob/2858b1000ffdd8596defb50381dcb83de2b6/mpmath/ctx_mp_python.py#L764.
> MPFR also has an mpfr_integer_p predicate:
> http://www.mpfr.org/mpfr-current/mpfr.html#index-mpfr_005finteger_005fp.
>
> A concrete use-case: suppose you wanted to implement the beta function (
> https://en.wikipedia.org/wiki/Beta_function) for real arguments in
> Python. You'll likely need special handling for the poles, which occur only
> for some negative integer arguments, so you'll need an is_integer test for
> those. For small positive integer arguments, you may well want the accuracy
> advantage that arises from computing the beta function in terms of
> factorials (giving a correctly-rounded result) instead of via the log of
> the gamma function. So again, you'll want an is_integer test to identify
> those cases. (Oddly enough, I found myself looking at this recently as a
> result of the thread about quartile definitions: there are links between
> the beta function, the beta distribution, and order statistics, and the
> (k-1/3)/(n+1/3) expression used in the recommended quartile definition
> comes from an approximation to the median of a beta distribution with
> integral parameters.)
>
> Or, you could look at the SciPy implementation of the beta function, which
> does indeed do the C equivalent of is_integer in many places:
> https://github.com/scipy/scipy/blob/11509c4a98edded6c59423ac44ca1b7f28fba1fd/scipy/special/cephes/beta.c#L67
>
> In sum: it's an occasionally useful operation; there's no other obvious,
> readable spelling of the operation that does the right thing in all cases,
> and it's _already_ in Python! In general, I'd think that deprecation of an
> existing construct should not be done lightly, and should only be done when
> there's an obvious and significant benefit to that deprecation. I don't see
> that benefit here.
>
> --
> Mark
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Deprecating float.is_integer()

2018-03-21 Thread David Mertz
On Wed, Mar 21, 2018 at 3:02 PM, Tim Peters  wrote:

> [David Mertz]
> > I've been using and teaching python for close to 20 years and I never
> > noticed that x.is_integer() exists until this thread.
>
> Except it was impossible to notice across most of those years, because
> it didn't exist across most of those years ;-)
>

That's probably some of the reason.  I wasn't sure if someone used the time
machine to stick it back into Python 1.4.


> > On the other hand, `x == int(x)` is genuinely obvious..
>
> But a bad approach:  it can raise OverflowError (for infinite x); it
> can raise ValueError (for x a NaN);


These are the CORRECT answers! Infinity neither is nor is not an integer.
Returning a boolean as an answer is bad behavior; I might argue about
*which* exception is best, but False is not a good answer to
`float('inf').is_integer()`.  Infinity is neither in the Reals nor in the
Integers, but it's just as much the limit of either.

Likewise Not-a-Number isn't any less an integer than it is a real number
(approximated by a floating point number).  It's NOT a number, which is
just as much not an integer.


> and can waste relative mountains
> of time creating huge integers, e.g.,
>

True enough. But it's hard to see where that should matter.  No floating
point number on the order of 1e306 is sufficiently precise as to be an
integer in any meaningful sense.  If you are doing number theory with
integers of that size (or larger is perfectly fine too) the actual test is
`isinstance(x, int)`.  Using a float is just simply wrong for the task to
start with, whether or not those bits happen to represent something
Integral... the only case where you should see this is
"measuring/estimating something VERY big, very approximately."

For example, this can be true (even without reaching inf):

>>> x.is_integer()
True
>>> (math.sqrt(x**2)).is_integer()
False

> The problem there isn't how "is it an integer?" is spelled, it's that
> _any_ way of spelling "is it an integer?" doesn't answer the question
> they're trying to answer.  They're just plain confused about how
> floating point works.  The use of `.is_integer()` (however spelled!)
> isn't the cause of that, it's a symptom.
>

Agreed!

-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Deprecating float.is_integer()

2018-03-21 Thread David Mertz
Ok. I'm wrong on that example.

On Wed, Mar 21, 2018, 9:11 PM Tim Peters  wrote:

> [David Mertz ]
> >> For example, this can be true (even without reaching inf):
> >>
> >> >>> x.is_integer()
> >> True
> >> >>> (math.sqrt(x**2)).is_integer()
> >> False
>
> [Mark Dickinson  ]
> > If you have a moment to share it, I'd be interested to know what value of
> > `x` you used to achieve this, and what system you were on. This can't
> happen
> > under IEEE 754 arithmetic.
>
> I expect it might happen under one of the directed rounding modes
> (like "to +infinity").
>
> But under 754 binary round-nearest/even arithmetic, it's been formally
> proved that sqrt(x*x) == x exactly for all non-negative finite x such
> that x*x neither overflows nor underflows (and .as_integer() has
> nothing to do with that very strong result):
>
> https://hal.inria.fr/hal-01148409/document
>
> OTOH, the paper notes that it's not necessarily true for IEEE decimal
> arithmetic; e.g.,
>
> >>> import decimal
> >>> decimal.getcontext().prec = 4
> >>> (decimal.Decimal("31.66") ** 2).sqrt()  # result is 1 ulp smaller
> Decimal('31.65')
>
> >>> decimal.getcontext().prec = 5
> >>> (decimal.Decimal("31.660") ** 2).sqrt() # result is 1 ulp larger
> Decimal('31.661')
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Sets, Dictionaries

2018-03-29 Thread David Mertz
I agree with everything Steven says. But it's true that even as a 20-year
Python user, this is an error I make moderately often when I want an empty
set... Notwithstanding that I typed it thousands of times before sets even
existed (and still type it when I want an empty dictionary).

That said, I've sort of got in the habit of using the type initializers:

x = set()
y = dict()
z = list()

I feel like those jump out a little better visually. But I'm inconsistent
in my code.

On Thu, Mar 29, 2018, 2:03 AM Steven D'Aprano  wrote:

> Hi Julia, and welcome!
>
> On Wed, Mar 28, 2018 at 09:14:53PM -0700, Julia Kim wrote:
>
> > My suggestion is to change the syntax for creating an empty set and an
> > empty dictionary as following.
> >
> > an_empty_set = {}
> > an_empty_dictionary = {:}
> >
> > It would seem to make more sense.
>
> Indeed it would, and if sets had existed in Python since the beginning,
> that's probably exactly what we would have done. But unfortunately they
> didn't, and {} has meant an empty dict forever.
>
> The requirement to keep backwards-compatibility is a very, very hard
> barrier to cross. I think we all acknowledge that it is sad and a little
> bit confusing that {} means a dict not a set, but it isn't sad or
> confusing enough to justify breaking millions of existing scripts and
> applications.
>
> Not to mention the confusing transition period when the community would
> be using *both* standards at the same time, which could easily last ten
> years.
>
> Given that, I think we just have to accept that having to use set() for
> the empty set instead of {} is a minor wart on the language that we're
> stuck with.
>
> If you disagree, and think that you have a concrete plan that can make
> this transition work, we'll be happy to hear it, but you'll almost
> certainly need to write a PEP before it could be accepted.
>
> https://www.python.org/dev/peps/
>
>
> Thanks,
>
> --
> Steve
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572: Assignment Expressions

2018-04-17 Thread David Mertz
Strongly agree with Nick that only simple name targets should be permitted
(at least initially). NONE of the motivating cases use more complex
targets, and allowing them encourages obscurity and code golf.

On Tue, Apr 17, 2018, 8:20 AM Nick Coghlan  wrote:

> On 17 April 2018 at 17:46, Chris Angelico  wrote:
> > Syntax and semantics
> > 
> >
> > In any context where arbitrary Python expressions can be used, a **named
> > expression** can appear. This is of the form ``target := expr`` where
> > ``expr`` is any valid Python expression, and ``target`` is any valid
> > assignment target.
>
> The "assignment expressions should be restricted to names only"
> subthread from python-ideas finally crystallised for me (thanks in
> part to your own comment that 'With regular assignment (whether it's
> to a simple name or to a subscript/attribute), removing the "target
> :=" part will leave you with the same value - the value of "x := 1" is
> 1.'), and I now have a concrete argument for why I think we want to
> restrict the assignment targets to names only: all complex assignment
> targets create inherent ambiguity around the type of the expression
> result, and exactly which operations are performed as part of the
> assignment.
>
> Initially I thought the problem was specific to tuple unpacking
> syntax, but attempting to explain why subscript assignment and
> attribute assignments were OK made me realise that they're actually
> even worse off (since they can execute arbitrary code on both setting
> and retrieval, whereas tuple unpacking only iterates over iterables).
>
> Tackling those in order...
>
> Tuple unpacking:
>
> What's the result type for "a, b, c := range(3)"? Is it a range()
> object? Or is it a 3-tuple? If it's a 3-tuple, is that 3-tuple "(0, 1,
> 2)" or "(a, b, range(3))"?
> Once you have your answer, what about "a, b, c := iter(range(3))"
> or "a, b, *c := range(10)"?
>
> Whichever answers we chose would be surprising at least some of the
> time, so it seems simplest to disallow such ambiguous constructs, such
> that the only possible interpretation is as "(a, b, range(3))"
>
> Subscript assignment:
>
> What's the final value of "result" in "seq = list(); result =
> (seq[:] := range(3))"? Is it "range(3)"? Or is it "[1, 2, 3]"?
> As for tuple unpacking, does your preferred answer change for the
> case of "seq[:] := iter(range(3))"?
>
> More generally, if I write  "container[k] := value", does only
> "type(container).__setitem__" get called, or does
> "type(container).__getitem__" get called as well?
>
> Again, this seems inherently ambiguous to me, and hence best avoided
> (at least for now), such that the result is always unambiguously
> "range(3)".
>
> Attribute assignment:
>
> If I write  "obj.attr := value", does only "type(obj).__setattr__"
> get called, or does "type(obj).__getattribute__" get called as well?
>
> While I can't think of a simple obviously ambiguous example using
> builtins or the standard library, result ambiguity exists even for the
> attribute access case, since type or value coercion may occur either
> when setting the attribute, or when retrieving it, so it makes a
> difference as to whether a reference to the right hand side is passed
> through directly as the assignment expression result, or if the
> attribute is stored and then retrieved again.
>
> If all these constructs are prohibited, then a simple design principle
> serves to explain both their absence and the absence of the augmented
> assignment variants: "allowing the more complex forms of assignment as
> expressions makes the order of operations (as well as exactly which
> operations are executed) inherently ambiguous".
>
> That ambiguity generally doesn't exist with simple name bindings (I'm
> excluding execution namespaces with exotic binding behaviour from
> consideration here, as the consequences of trying to work with those
> are clearly on the folks defining and using them).
>
> > The value of such a named expression is the same as the incorporated
> > expression, with the additional side-effect that the target is assigned
> > that value::
> >
> > # Handle a matched regex
> > if (match := pattern.search(data)) is not None:
> > ...
> >
> > # A more explicit alternative to the 2-arg form of iter() invocation
> > while (value := read_next_item()) is not None:
> > ...
> >
> > # Share a subexpression between a comprehension filter clause and
> its output
> > filtered_data = [y for x in data if (y := f(x)) is not None]
>
> [snip]
>
> > Style guide recommendations
> > ===
> >
> > As this adds another way to spell some of the same effects as can
> already be
> > done, it is worth noting a few broad recommendations. These could be
> included
> > in PEP 8 and/or other style guides.
> >
> > 1. If either assignment statements or assignment expressions can be
> >    used, prefer statements; they are a clear declaration of intent.

Re: [Python-Dev] PEP 572: Assignment Expressions

2018-04-20 Thread David Mertz
It's horrors like this:

g(items[idx] := idx := f())

That make me maybe +0 if the PEP only allowed simple name targets, but
decisively -1 for any assignment target in the current PEP.

I would much rather never have to read awful constructs like that than get
the minor convenience of:

if (val := some_expensive_func()) > 0:
x = call_something(val)

On Fri, Apr 20, 2018, 3:39 PM Chris Angelico  wrote:

> On Sat, Apr 21, 2018 at 2:17 AM, Christoph Groth
>  wrote:
> > Chris Barker - NOAA Federal wrote:
> >
> >> > Personally, I even slightly prefer
> >> >
> >> > a := 3
> >> >
> >> > to the commonplace
> >> >
> >> > a = 3
> >> > because it visually expresses the asymmetry of the operation.
> >>
> >> Careful here! That’s a fine argument for using := in a new language,
> >> but people using := when they don’t need an expression because they
> >> like the symbol better is a reason NOT to do this.
> >
> > Perhaps you are right and it is indeed unrealistic to expect people to
> > (eventually) shift to using := for simple assignments after 28 years of
> > Python...
>
> It's not just 28 years of Python. It's also that other languages use
> "=" for assignment. While this is by no means a clinching argument, it
> does have some weight; imagine if Python used "=" for comparison and
> ":=" for assignment - anyone who works simultaneously with multiple
> languages is going to constantly type the wrong operator. (I get this
> often enough with comment characters, but my editor will usually tell
> me straight away if I type "// blah" in Python, whereas it won't
> always tell me that I used "x = 1" when I wanted one of the other
> forms.)
>
> > One way or the other, I'd like to underline a point that I made
> > yesterday: I believe that it's important for sanity that taking any
> > existing assignment statement and replacing all occurrences of "=" by
> > ":=" does not have any effect on the program.
> >
> > PEP 572 currently proposes to make ":=" a binary operator that is
> > evaluated from right to left.
>
> This is one of the points that I was halfway through working on when I
> finally gave up on working on a reference implementation for a
> likely-doomed PEP. It might be possible to make := take an entire
> sequence of assignables and then set them left to right; however, this
> would be a lot more complicated, and I'm not even sure I want that
> behaviour. I don't want to encourage people to replace all "=" with
> ":=" just for the sake of it. The consistency is good if it can be
> achieved, but you shouldn't actually DO that sort of thing normally.
>
> Consider: one of the important reasons to define the assignment order
> is so you can reference a subscript and also use it. For instance:
>
> idx, items[idx] = new_idx, new_val
>
> But you don't need that with :=, because you can:
>
> items[idx := new_idx] = new_val
>
> (and you can use := for the second one if you wish). And actually,
> this one wouldn't even change, because it's using tuple unpacking, not
> the assignment order of chained assignments. I cannot think of any
> situation where you'd want to write this:
>
> idx = items[idx] = f()
>
> inside an expression, and thus need to write it as:
>
> g(items[idx] := idx := f())
>
> So I have no problem with a style guide saying "yeah just don't do
> that", and the PEP saying "if you do this, the semantics won't be
> absolutely identical to '='". Which it now does.
>
> Now, if someone else wants to work on the reference implementation,
> they're welcome to create this feature and then see whether they like
> it. But since I can't currently prove it's possible, I'm not going to
> specify it in the PEP.
>
> ChrisA
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572: Assignment Expressions

2018-04-21 Thread David Mertz
It feels very strange that the PEP tries to do two almost entirely
unrelated things. Assignment expressions are one thing, with merits and
demerits discussed at length.

But "fixing" comprehension scoping is pretty much completely orthogonal.
Sure, it might be a good idea. And yes there are interactions between the
behaviors. However, trying to shoehorn the one issue into a PEP on a
different topic makes all of it harder to accept.

The "broken" scoping in some slightly strange edge cases can and has been
shown in lots of examples that don't use assignment expressions. Whether or
not that should be changed needn't be linked to the real purpose of this
PEP.

On Sat, Apr 21, 2018, 10:46 AM Chris Angelico  wrote:

> On Sat, Apr 21, 2018 at 10:26 PM, Steven D'Aprano 
> wrote:
> > On Sat, Apr 21, 2018 at 05:46:44PM +1000, Chris Angelico wrote:
> >> On Sat, Apr 21, 2018 at 5:11 PM, Steven D'Aprano 
> wrote:
> >
> >> > So can you explain specifically what odd function-scope behaviour you
> >> > are referring to? Give an example please?
> >>
> >> doubled_items = [x for x in (items := get_items()) if x * 2 in items]
> >>
> >> This will leak 'items' into the surrounding scope (but not 'x').
> >
> > The "not x" part is odd, I agree, but it's a popular feature to have
> > comprehensions run in a separate scope, so that's working as designed.
> >
> > The "leak items" part is the behaviour I desire, so that's not odd, it's
> > sensible *wink*
> >
> > The reason I want items to "leak" into the surrounding scope is mostly
> > so that the initial value for it can be set with a simple assignment
> > outside the comprehension:
> >
> > items = (1, 2, 3)
> > [ ... items := items*2 ... ]
> >
> > and the least magical way to do that is to just make items an ordinary
> > local variable.
>
> You can't have your cake and eat it too. Iteration variables and names
> bound by assignment expressions are both set inside the comprehension.
> Either they both are local, or they both leak - or else we have a
> weird rule like "the outermost iterable is magical and special".
>
> >> [x for x in x if x] # This works
> >
> > The oddity is that this does work, and there's no assignment expression
> > in sight.
> >
> > Given that x is a local variable of the comprehension `for x in ...` it
> > ought to raise UnboundLocalError, as the expanded equivalent does:
> >
> >
> > def demo():
> > result = []
> > for x in x: # ought to raise UnboundLocalError
> > if x:
> > result.append(x)
> > return result
> >
> >
> > That the comprehension version runs (rather than raising) is surprising
> > but I wouldn't call it a bug. Nor would I say it was a language
> > guarantee that we have to emulate in similar expressions.
>
> See, that's the problem. That is NOT how the comprehension expands. It
> actually expands to this:
>
> def demo(it):
> result = []
> for x in it:
> if x:
> result.append(x)
> return result
> demo(iter(x))
>
> PEP 572 corrects this by making it behave the way that you, and many
> other people, expect. Current behaviour is surprising because the
> outermost iterable is special and magical.
>
> >> (x for x in 5) # TypeError
> >> (x for _ in [1] for x in 5) # Works
> >
> > Now that last one is more than just odd, it is downright bizarre. Or at
> > least it would, if it did work:
> >
> > py> list((x for _ in [1] for x in 5))
> > Traceback (most recent call last):
> >   File "", line 1, in 
> >   File "", line 1, in 
> > TypeError: 'int' object is not iterable
> >
> >
> > Are you sure about this example?
>
> Yes, I'm sure. You may notice that I didn't iterate over the genexps
> in my example. The first one will bomb out, even without iteration;
> the second one gives a valid generator object which, if iterated over
> (or even stepped once), will bomb. This is because, again, the
> outermost iterable is special and magical.
>
> > In any case, since this has no assignment expression in it, I don't see
> > why it is relevant.
>
> Because an assignment expression in the outermost iterable would, if
> the semantics are preserved, bind in the surrounding scope. It would
> be FAR more logical to have it bind in the inner scope. Consider these
> two completely different results:
>
> def f(*prefix):
> print([p + name for p in prefix for name in locals()])
> print([p + name for name in locals() for p in prefix])
>
> >>> f("* ", "$ ")
> ['* .0', '* p', '$ .0', '$ p', '$ name']
> ['* prefix', '$ prefix']
>
> The locals() as seen by the outermost iterable are f's locals, and any
> assignment expression there would be part of f's locals. The locals()
> as seen by any other iterable, by a condition, or by the primary
> expression, are the list comp's locals, and any assignment expression
> there would be part of the list comp's locals.
>
> ChrisA
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev

Re: [Python-Dev] PEP 572: Assignment Expressions

2018-04-21 Thread David Mertz
It could also be postponed simply by saying assignment expressions follow
the same semantics as other bindings in comprehensions... which are subject
to change pending PEP  (i.e. some different number).

On the other hand, I am one who doesn't really care about assignment
expressions in comprehensions and only see the real benefit for 'if' and
'while' statements. I'm sure if it's added, I'll wind up using them in
comprehensions, but I've been perfectly happy with this for years:

stuff = [[y, x/y] for x in range(5) for y in [f(x)]]
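
(Under the PEP, that same trick could be spelled with a binding expression;
a sketch, assuming the same placeholder f() as above:)

    stuff = [[y := f(x), x/y] for x in range(5)]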

There's nothing quite analogous in current Python for:

while (command := input("> ")) != "quit":
print("You entered:", command)



On Sat, Apr 21, 2018, 11:57 AM Nick Coghlan  wrote:

> On 22 April 2018 at 01:44, David Mertz  wrote:
> > It feels very strange that the PEP tries to do two almost entirely
> unrelated
> > things. Assignment expressions are one thing, with merits and demerits
> > discussed at length.
> >
> > But "fixing" comprehension scoping is pretty much completely orthogonal.
> > Sure, it might be a good idea. And yes there are interactions between the
> > behaviors. However, trying to shoehorn the one issue into a PEP on a
> > different topic makes all of it harder to accept.
> >
> > The "broken" scoping in some slightly strange edge cases can and has been
> > shown in lots of examples that don't use assignment expressions. Whether
> or
> > not that should be changed needn't be linked to the real purpose of this
> > PEP.
>
> The reason it's covered in the PEP is because the PEP doesn't want to
> lock in the current "binds the name in the surrounding scope"
> semantics when assignment expressions are used in the outermost
> iterable in a comprehension.
>
> However, resolving that question *could* be postponed more simply by
> making that a SyntaxError, rather than trying to move the expression
> evaluation inside the implicitly nested scope.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572: Assignment Expressions

2018-04-24 Thread David Mertz
I do think the pronunciation issue that Greg notices is important.  I teach
Python for most of my living, and reading and discussing code segments is
an important part of that.  When focussing on how Python actually *spells*
something, you can't always jump to the higher-level meaning of a
construct.  For some complex expression—whether or not "binding
expressions" are added—sometimes it makes sense to give a characterization
of the *meaning* of the expression, but other times you want to say aloud
the entire spelling of the expression.

Although feelings are mixed about this, I like the "dunder" contraction for
this purpose.  It's less of a mouthful to say "dunder-init" than
"underscore-underscore-init-underscore-underscore" aloud.  And once you
learn that shorthand, it's unambiguous.

I think I'd pronounce:

if (diff := x - x_base) and (g := gcd(diff, n)) > 1:
return g

As:

"If diff bound to x minus x_base (is non-zero), and g bound to gcd of diff
comma n is greater than 1, return g"

But having a convention for pronouncing this would be nice, rather than it
being my idiosyncrasy.


On Mon, Apr 23, 2018 at 8:23 PM, Tim Peters  wrote:

> [Tim]
> >> if (diff := x - x_base) and (g := gcd(diff, n)) > 1:
> >> return g
>
> [Greg Ewing ]
> > My problem with this is -- how do you read such code out loud?
>
> In the message in which I first gave that example:
>
> if the diff isn't 0 and gcd(diff, n) > 1, return the gcd.
> That's how I _thought_ of it from the start.
>
> In my mind, `x - x_base` doesn't even exist except as a low-level
> definition of what "diff" means.  It's different for the other test:
> _there_ `g` doesn't exist except as a shorthand for "the gcd".  In one
> case it's the name that's important to me, and in the other case the
> expression.  The entire function from which this came is doing all
> arithmetic modulo `n`, so `n` isn't in my mind either - it's a
> ubiquitous part of the background in this specific function.
>
> But you did ask how_I_ would read that code ;-)  Anyone else is free
> to read it however they like.  I naturally read it in the way that
> makes most sense to me in its context.
>
>
> > From my Pascal days I'm used to reading ":=" as "becomes". So
> > this says:
> >
> >"If diff becomes x - base and g becomes gcd(diff, n) is
> > greater than or equal to 1 then return g."
> >
> > But "diff becomes x - base" is not what we're testing!
>
> I don't really follow that.  In Python,
>
> if f() and g > 1:
>
> first tests whether `f()` "is truthy", regardless of whether it does
> or doesn't appear in a binding expression.  Because this code is
> working with integers, there's an _implied_ "!= 0" comparison.
>
>
> > That makes it sound like the result of x - base may or may not
> > get assigned to diff, which is not what's happening at all.
>
> Then I suggest the problem you're having doesn't stem from the binding
> expression, but from that you're omitting to fill in the != 0 part:
> if you're not thrown by "greater than 1", I can't see how you can be
> thrown by "not zero".
>
>
> > The "as" variant makes more sense when you read it as an
> > English sentence:
> >
> >if ((x - x_base) as diff) and ...
> >
> >"If x - x_base (and by the way, I'm going to call that
> > diff so I can refer to it later) is not zero ..."
>
> So read the original as "if diff (which is x - x_base) is not zero ...".
>
> Regardless, Guido has already said "as" is DOA (Dead On Arrival)
> (illustrating that it's also common enough in English to give a short
> name before its long-winded meaning ;-) ).
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> mertz%40gnosis.cx
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572: Assignment Expressions

2018-04-25 Thread David Shawley
On Apr 24, 2018, at 2:10 PM, MRAB  wrote:
> 
> On 2018-04-21 03:15, Tim Peters wrote:
>> [Tim]
>> >> And I'll take this opportunity to repeat the key point for me:  I
>> >> tried hard, but never found a single case based on staring at real
>> >> code where allowing _fancier_ (than "plain name") targets would be a
>> >> real improvement.  In every case I thought it _might_ help, it turned
>> >> out that it really didn't unless Python _also_ grew an analog to C's
>> >> "comma operator" (take only the last result from a sequence of
>> >> expressions).  I'll also note that I asked if anyone else had a
>> >> real-life example, and got no responses.
>> 
>> [MRAB ]
>> > Could a semicolon in a parenthesised expression be an equivalent to C's
>> > "comma operator"?
>> 
>> I expect it could, but I it's been many years since I tried hacking
>> Python's grammar, and I wouldn't want a comma operator anyway ;-)
> [snip]
> Just reading this:
> 
> https://www.bfilipek.com/2018/04/refactoring-with-c17-stdoptional.html
> 
> about C++17, and what did I see? An example with a semicolon in parentheses!

A similar pattern shows up in Go's if statement syntax.  It is interesting to 
note that it is part of the grammar specifically for the if statement and *not* 
general expression syntax.

IfStmt = "if" [ SimpleStmt ";" ] Expression Block [ "else" ( IfStmt | Block 
) ] .

Bindings that occur inside of `SimpleStmt` are only available within the 
`Expression` and blocks that make up the if statement.

https://golang.org/ref/spec#If_statements
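
(For comparison, the nearest PEP 572 spelling of that Go pattern would be
something like the sketch below; compute() and use() are made-up names, and
unlike Go's SimpleStmt binding, v remains visible after the if block:)

    # Go:  if v, err := compute(); err == nil { use(v) }
    if (v := compute()) is not None:
        use(v)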

This isn't a good reason to parrot the syntax in Python though.  IMO, I 
consider the pattern to be one of the distinguishing features of golang and 
would be happy leaving it there.

I have often wondered if adding the venerable for loop syntax from C (and many 
other languages) would solve some of the needs here though.  The for loop 
syntax in golang is interesting in that it serves as both a standard multipart 
for statement as well as a while statement.

Changing something like this is more of a Python 4 feature and I think that I 
would be -0 on the concept.  I did want to mention the similarities for the 
posterity though.

ChrisA - we might want to add explicit mentions of golang's if statement and 
for loop as "considered" syntaxes since they are in a sibling programing 
language (e.g., similar to async/await in PEP 492).

- dave
--
"Syntactic sugar causes cancer of the semicolon" - Alan Perlis___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (name := expression) doesn't fit the narrative of PEP 20

2018-04-26 Thread David Mertz
>
> [Raymond Hettinger ]
> > Python is special, in part, because it is not one of those languages.
> > It has virtues that make it suitable even for elementary school children.
> > We can show well-written Python code to non-computer folks and walk
> > them through what it does without their brains melting (something I can't
> > do with many of the other languages I've used).  There is a virtue
> > in encouraging simple statements that read like English sentences
> > organized into English-like paragraphs, presenting itself like
> > "executable pseudocode".


While this is true and good for most Python code, can you honestly explain
asyncio code with async/await to these non-programmers?! What about the
interfaces between async and synchronous portions?

I've been programming for 40 years, in Python for 20 of them. I cannot read
any block of async code without thinking VERY SLOWLY about what's going on,
then getting it wrong half the time. I even teach Python almost as much as
Raymond does.

There's a certain hand-waving approach to teaching async/await where you
say not to worry about those keywords, and just assume the blocks are
coordinated "somehow, behind the scenes." That's not awful for reading
*working* code, but doesn't let you write it.

I'm not saying binding expressions are likewise reserved for a special but
important style of programming. If included, I expect them to occur
more-or-less anywhere. So Raymond's concern about teachability is more
pressing (I've only taught async twice, and I know Raymond's standard
course doesn't do it, all the other code is unaffected by that unused
'await' lurking in the syntax). Still, there are good reasons why not all
Python code is aimed at non-computer folks.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (name := expression) doesn't fit the narrative of PEP 20

2018-04-26 Thread David Mertz
FWIW, the combination of limiting the PEP to binding expressions and the
motivating example of sequential if/elif tests that each need to utilize an
expression in their body (e.g. matching various regexen by narrowing,
avoiding repeated indent) gets me to +1.
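
(Concretely, the motivating shape is something like this sketch, with
made-up patterns:)

    import re

    def classify(line):
        if (m := re.match(r'(\d+)-(\d+)', line)) is not None:
            return ('range', m.group(1), m.group(2))
        elif (m := re.match(r'(\d+)', line)) is not None:
            return ('number', m.group(1))
        elif (m := re.match(r'[A-Za-z]+', line)) is not None:
            return ('word', m.group(0))
        return ('unknown', line)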

I still think the edge-case changes to comprehension semantics are needless
for this PEP. However, it concerns a situation I don't think I've ever
encountered in the wild, and certainly never relied on the old admittedly
odd behavior.

On Thu, Apr 26, 2018, 2:01 AM Tim Peters  wrote:

> Yes, binding expressions in the current PEP support an extremely
> limited subset of what Python's assignment statements support.[...]
> Guido's if/elif/elif/elif/ ... complex text-processing example didn't,
> but because the current lack of an ability to bind-and-test in one
> gulp forced the `elif` parts to be ever-more-deeply-indented `if`
> blocks instead.
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dealing with tone in an email

2018-05-05 Thread David Mertz
The below is really just making this whole situation worse.

On Sat, May 5, 2018 at 8:22 PM, Ivan Pozdeev via Python-Dev <
python-dev@python.org> wrote:

> As I suspected. This is a classic scenario that is occasionally seen
> anywhere: "everyone is underestimating a problem until a disaster strikes".
> The team's perception of Tkinter is basically: "well, there are slight
> issues, and the docs are lacking, but no big deal."
>
Well, this _is_ a big deal. As in, "with 15+ years of experience, 5+ with
> Python, I failed to produce a working GUI in a week; no-one on the Net,
> regardless of experience, (including Terry) is ever sure how to do things
> right; every online tutorial says: "all the industry-standard and expected
> ways are broken/barred, we have to resort to ugly workarounds to accomplish
> just about anything"" big deal. This is anything but normal, and all the
> more shocking in Python where the opposite is the norm.
>

This is simply objectively wrong, and still rather insulting to the core
developers.

The real-world fact is that many people—including the authors of IDLE,
which is included with Python itself—use Tkinter to develop friendly,
working GUIs.  Obviously, there *is* a way to make Tkinter work.  I
confess I haven't worked with it for a while, and even when I had, they were
fairly toy apps.  I never saw any terrible problems, but I confess I also
never pushed the edges of it.

It's quite possible, even likely, that some sufficiently complicated GUI
apps are better off eschewing Tkinter and using a different GUI library.
It's also quite possible that the documentation around Tkinter could be
improved to convey more accurate messaging around this (and to convey the
common pattern of "GUI in one thread, workers in other threads").
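
(That pattern usually looks something like this minimal sketch: a background
worker posts its result on a queue, and the GUI thread polls the queue with
after(); the widget names and timings here are arbitrary.)

    import queue
    import threading
    import tkinter as tk

    def worker(q):
        # Long-running work happens off the GUI thread.
        q.put(sum(range(10_000_000)))

    def poll(root, q, label):
        try:
            label['text'] = 'result: {}'.format(q.get_nowait())
        except queue.Empty:
            root.after(100, poll, root, q, label)   # check again in 100 ms

    root = tk.Tk()
    label = tk.Label(root, text='working...')
    label.pack()
    q = queue.Queue()
    threading.Thread(target=worker, args=(q,), daemon=True).start()
    root.after(100, poll, root, q, label)
    root.mainloop()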


> And now, a disaster struck. Not knowing this, I've relied on Tkinter with
> very much at stake (my income for the two following months, basically), and
> lost. If that's not a testament just how much damage Tkinter's current
> state actually does, I dunno what is.
>

I've sunk two months each into trying to wrestle quite a large number of
frameworks or libraries to do what I want.  Sometimes I finally made it
work, other times not.  That's the reality of software development.
Sometimes the problems were bugs per se, other times limits of my
understanding.  Often the problems were with extremely widely used and
"solid" libraries (not just in Python, across numerous languages).

There are a few recurring posters here and on python-ideas at whom I roll
my eyes when I see a post is from them... I think most actual core
contributors simply have them on auto-delete filters by now.  I don't know
where the threshold is exactly, but I suspect you're getting close to that
with this post.

Yours, David...


-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Slow down...

2018-05-08 Thread David Mertz
This seems like a rather bad idea. None of the core changes in the last few
versions were on the radar 10 years in advance. And likewise, no one really
knows what new issues will become critical over the next 10.

The asyncio module and the async/await keywords only developed as important
concerns for a year or two before they became part of the language.
Likewise for type annotations. Neither is used by most Python programmers,
but for a subset, they are very important.

I suppose f-strings are incidental. We could have lived without them (I
myself doughtily opposed "another way to do it"). But they do make code nicer
at the cost of incompatible syntax. Likewise underscores in numbers like
17_527_103. Not everyone needs the __matmul__() operator. But for linear
algebra, 'a @ b.T' is better than 'np.dot(a, b.T)'.

On Mon, May 7, 2018, 3:10 PM Craig Rodrigues  wrote:

>
>
> On Sun, May 6, 2018 at 7:35 PM Nick Coghlan  wrote:
>
>>
>> I'm inclined to agree that a Python 3.8 PEP in the spirit of the PEP 3003
>> language moratorium could be a very good idea. Between matrix
>> multiplication, enhanced tuple unpacking, native coroutines, f-strings, and
>> type hinting for variable assignments, we've had quite a bit of syntactic
>> churn in the past few releases, and the rest of the ecosystem really hasn't
>> caught up on it all yet (and that's not just other implementations - it's
>> training material, online courses, etc, etc).
>>
>> If we're going to take such a step, now's also the time to do it, since
>> 3.8 feature development is only just getting under way, and if we did
>> decide to repeat the language moratorium, we could co-announce it with the
>> Python 3.7 release.
>>
>>
> Would it be reasonable to request a 10 year moratorium on making changes
> to the core Python language,
> and for the next 10 years only focus on things that do not require core
> language changes,
> such as improving/bugfixing existing libraries, writing new libraries,
> improving tooling, improving infrastructure (PyPI),
> improving performance, etc., etc.?
>
> There are still many companies still stuck on Python 2, so giving 10 years
> of breathing room
> for these companies to catch up to Python 3 core language, even past 2020
> would be very helpful.
>
> --
> Craig
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-25 Thread David Mertz
On Mon, Jun 25, 2018 at 5:14 PM Steve Holden  wrote:

> I'd like to ask: how many readers of this email have ever deliberately
> taken advantage of the limited Python 3 scope in comprehensions and
> generator expressions to use what would otherwise be a conflicting local
> variable name?
>

I have never once *deliberately* utilized the Python 3 local scoping in
comprehensions.  There were a few times in Python 2 where I made an error
of overwriting a surrounding name by using it in a comprehension, and
probably Python 3 has saved me from that a handful of times.
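
(The accidental clobbering in question is just this:)

    x = 'outer'
    squares = [x * x for x in range(5)]
    print(x)
    # Python 3 prints 'outer' -- the comprehension's x is local to it.
    # Under Python 2 the same lines printed 4, because the list
    # comprehension's loop variable leaked into, and overwrote, the
    # surrounding x.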

Where I ever made such an error, it was with names like 'x' and 'i' and
'n'.  They are useful for quick use, but "more important" variables always
get more distinctive names anyway.  Had the Python 2 behavior remained, I
would have been very little inconvenienced; and I suppose comprehensions
would have been slightly less "magic" (but less functional-programming).


>
>
> I appreciate that the scope limitation can sidestep accidental naming
> errors, which is a good thing.
>
> Unfortunately, unless we anticipate Python 4 (or whatever) also making for
> loops have an implicit scope, I am left wondering whether it's not too
> large a price to pay. After all, special cases aren't special enough to
> break the rules, and unless the language is headed towards implicit scope
> for all uses of "for" one could argue that the scope limitation is a
> special case too far. It certainly threatens to be yet another confusion
> for learners, and while that isn't the only consideration, it should be
> given due weight.
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>


-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python and Linux Standard Base

2018-06-27 Thread David Mertz
The main wiki page was last touched in 2016. The mailing list in Jan
2018 had about 8 comments, none of them actually related to LSB. They
stopped archiving the ML altogether in Feb 2018. I think it's safe to say
the parrot is dead.

On Wed, Jun 27, 2018, 9:50 AM Antoine Pitrou  wrote:

> On Wed, 27 Jun 2018 09:18:24 -0400 (EDT)
> Charalampos Stratakis  wrote:
> >
> > My question is, if there is any incentive to try and ask for
> modernization/amendment  of the standards?
> > I really doubt that any linux distro at that point can be considered lsb
> compliant at least from the
> > python side of things.
>
> One question: who maintains the LSB?
>
> The fact that the Python portion was never updated may hint that nobody
> uses it...
>
> Regards
>
> Antoine.
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Can we clean up the buildbots please?

2015-05-23 Thread David Bolen
Larry Hastings  writes:

>> Is MSVS 2015 the only supported compiler for Python 3.5 on Windows?
>> What's the other buildbot using MSVS 2015?

For a while I think the only buildbot was my 8.1 slave, but I believe
at this point Jeremy may also have it on his 7 slave.  The latest on
my 7 slave is still 2010 (which is still working, sans recent test
failures).

> I'll answer my own question here.  According to PCbuild/readme.txt:
>
>This script will use the env.bat script to detect one of Visual
>Studio 2015, 2013, 2012, or 2010, any of which may be used to build
>Python, though only Visual Studio 2015 is officially supported.
>
> I'll admit I'm puzzled by the wisdom of using unsupported compilers on
> buildbots.  I guess it's a historical thing.  But I gently suggest
> that we should either upgrade those buildbots to a supported compiler
> or remove them entirely.  Definitely we should remove the two
> unsupported platforms from the buildbots--that's just crazy.

To be fair, VS 2015 hasn't been officially released yet.  It only
recently (as in a few weeks ago) reached RC stage.  Given the size of
installing it, and earlier uncertainty about upgrading during the
pre-release cycle, plus some early issues with the build process, for
my part I've opted to hold off with my older slaves until it hits
release status, using only the 8.1 slave until then.  (Arguably the
current RC is supposed to be at most a minor update away from full
release, so we're probably close)

Along the way it was concluded that XP just wasn't worth making work
for the 3.5+ development, but the slave was still valuable for the 2.7
branch, so would be left around for now for that purpose.  It is a bit
misleading to still be trying to build the 3.x branch on it but I
suspect eliminating the branch from that slave is just an oversight,
or nobody with the proper access has had time yet.

-- David

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Single-file Python executables (was: Computed Goto dispatch for Python 2)

2015-05-28 Thread David Cournapeau
On Fri, May 29, 2015 at 1:28 AM, Chris Barker  wrote:

> On Thu, May 28, 2015 at 9:23 AM, Chris Barker 
> wrote:
>
>> Barry Warsaw wrote:
>> >> I do think single-file executables are an important piece to Python's 
>> >> long-term
>> competitiveness.
>>
>> Really? It seems to me that desktop development is dying. What are the
>> critical use-cases for a single file executable?
>>
>
> oops, sorry -- I see this was addressed in another thread. Though I guess
> I still don't see why "single file" is critical, over "single thing to
> install" -- like a OS-X app bundle that can just be dragged into the
> Applications folder.
>

It is much simpler to deploy in an automated, recoverable way (and also
much faster), because you can't have parts of the artefact "unsynchronized"
with another part of the program. Note also that moving a python
installation in your fs is actually quite unlikely to work in interesting
usecases on unix because of the relocatability issue.

Another advantage: it makes it impossible for users to tamper an
application's content and be surprised things don't work anymore (a very
common source of issues, familiar to anybody deploying complex python
applications in the "enterprise world").

I recently started using some services written in go, and the single file
approach is definitely a big +. It makes *using* applications written in it
so much easier than python, even though I am complete newbie in go and
relatively comfortable with python.

One should keep in mind that go has some inherent advantages over python in
those contexts even if python were to gain single file distribution
tomorrow. Most of go stdlib is written in go now I believe, and it is much
more portable across linux systems on a given CPU arch compared to python.
IOW, it is more robust against ABI variability.

David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] What's New editing

2015-07-05 Thread David Mertz
On Sun, Jul 5, 2015 at 6:06 PM, Nick Coghlan  wrote:

> On 6 July 2015 at 03:52, R. David Murray  wrote:
> > Just so people aren't caught unawares, it is very unlikely that I will
> have
> > time to be the final editor on "What's New for 3.5" the way I was for
> 3.3 and
> > 3.4.
>
> And thank you again for your work on those!
>
> > I've tried to encourage people to keep What's New up to date, but
> > *someone* should make a final editing pass.  Ideally they'd do at least
> the
> > research Serhiy did last year on checking that there's a mention for all
> of the
> > versionadded and versionchanged 3.5's in the docs.  Even better would be
> to
> > review the NEWS and/or commit history...but *that* is a really big job
> these
> > days
>
> What would your rough estimate of the scope of work be? As you note,
> the amount of effort involved in doing a thorough job of that has
> expanded beyond what can reasonably be expected of volunteer
> contributors, so I'm wondering if it might make sense for the PSF to
> start offering a contract technical writing gig to finalise the What's
> New documentation for each new release.
>

I think I might be able to "volunteer" for the task of writing/editing the
"What's New in 3.5" docs.  I saw David's comment on it today, so obviously
haven't yet had a chance to run it by my employer (Continuum Analytics),
but I have a hunch they would allow me to do it at least in large part as
paid time.  I am experienced as a technical writer, follow python-dev,
write about new features, but am *not*, however, myself an existing core
developer.

If there is interest in this, or at least it seems plausible, I can run it
by my employer tomorrow to see about getting enough time allocated (using
David Murray's past experience as a guideline for what's likely to be
needed).

Yours, David...

-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] What's New editing

2015-07-06 Thread David Mertz
Hi Folks,

I hereby volunteer to write "What's New for Python 3.5?" if folks on
python-dev are fine with me taking the job (i.e. I ran it by Travis, my
boss at Continuum, and he's happy to allow me to do that work within my
salaried hours... so having time isn't a problem).

If this is OK with the powers-that-be, I'll coordinate with David Murray on
how best to take over this task from him.

Thanks, David...

On Sun, Jul 5, 2015 at 8:51 PM, Nick Coghlan  wrote:

> On 6 July 2015 at 12:42, David Mertz  wrote:
> > I think I might be able to "volunteer" for the task of writing/editing
> the
> > "What's New in 3.5" docs.  I saw David's comment on it today, so
> obviously
> > haven't yet had a chance to run it by my employer (Continuum Analytics),
> but
> > I have a hunch they would allow me to do it at least in large part as
> paid
> > time.  I am experienced as a technical writer, follow python-dev, write
> > about new features, but am *not*, however, myself an existing core
> > developer.
>
> I think the last point may be a positive rather than a negative when
> it comes to effectively describing new features :)
>
> > If there is interest in this, or at least it seems plausible, I can run
> it
> > by my employer tomorrow to see about getting enough time allocated (using
> > David Murray's past experience as a guideline for what's likely to be
> > needed).
>
> That would be very helpful! I'd definitely be able to find the time to
> review and merge updates, it's the research-and-writing side that
> poses a problem for me (appreciating a task is worth doing isn't the
> same thing as wanting to do it myself!).
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-17 Thread David Mertz
Nothing huge to add, and I'm not experienced using mock.  But the special
handling of 'assret' as a "misspelling of 'assert'" definitely strikes me
as a wart also.  That sort of thing really has no place in a library
itself, but rather only in a linter.
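
(For readers who haven't been bitten by it, the failure mode under discussion
looks roughly like this; the sketch assumes unittest.mock as shipped with
Python 3.5, where only attribute names starting with "assert" or "assret" are
rejected:)

    from unittest import mock

    m = mock.Mock()
    m.do_something(42)

    # Real check: passes, and would raise AssertionError on a mismatch.
    m.do_something.assert_called_with(42)

    # A typo that does not start with "assert"/"assret": attribute access
    # silently auto-creates a child Mock, so this "assertion" checks
    # nothing and the test passes no matter what was called.
    m.do_something.asert_called_with(99)

    # The special case being debated: "assret_*" names raise AttributeError
    # instead of silently passing.
    try:
        m.do_something.assret_called_with(99)
    except AttributeError as exc:
        print('caught:', exc)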

On Fri, Jul 17, 2015 at 9:20 AM, Steven D'Aprano 
wrote:

> On Fri, Jul 17, 2015 at 04:37:04PM +1000, Nick Coghlan wrote:
>
> > The specific typo that is checked is the only one that changes the
> > spelling without also changing the overall length and shape of the
> > word.
>
> I don't think your comment above is correct.
>
> assert => aasert aseert azzert essert assort
>
> all have the same overall length and shape.
>
> Not all spelling errors are typos (hitting the wrong key). I've seen
> spelling errors this bad, or worse, from native English writers. Poor
> spelling, bad keyboards, distraction, and dyslexia can all contribute.
> And those who aren't fluent in English will make their own spelling
> errors, and may not even notice if the length of the word changes:
>
> assert => asert
>
> For those who are dyslexic, there are spelling errors and typos that may
> be difficult to tell apart even though the shape of the word changes:
>
> assert => assery asserh
>
> (perhaps -- I'm not dyslexic, I'm just going by what I've read about
> their experience).
>
> In my opinion, this sets a bad precedent for adding special case after
> special case, and the risk is that people will feel slighted if they are
> told that their typos aren't important enough to be made a special case
> too.
>
> If Michael wishes to argue that this is a useful feature rather than an
> ugly DWIM wart, that's his perogative, but the justification that
> "assret" is the *only* such plausible typo is just plain wrong. We've
> already heard from Robert Collins that he found a bunch of silently
> failing assertions in his mocks, and none of them started with "assret".
>
>
> All-spelling-errors-are-deliberate-to-provide-new-and-exciting-ways-to-spell-old-words-ly
> y'rs,
>
> --
> Steve
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-25 Thread David Mertz
At the risk of being off-topic, I realize I really DO NOT currently
understand datetime in its current incarnation.  It's too bad PEP 431
proves so difficult to implement.

Even using `pytz` is there any way currently to get sensible answers to,
e.g.:

from datetime import *
from pytz import timezone
pacific = timezone('US/Pacific')
pacific.localize(datetime(2015, 11, 1, 1, 30))  # Ambiguous time
pacific.localize(datetime(2015, 3, 8, 2, 30))   # Non-existent time


That is, what if I had *not* just looked up when the time change happens,
and was innocently trying to define one of those datetimes above?  Is there
ANY existing way to have an error raised—or check in some other way—for the
fact that one of the times occurs twice on my clock, and the other never
occurs at all?
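
(For what it's worth, pytz can already be asked to raise here; a minimal
sketch, relying on pytz's is_dst=None mode and its normalize() convention
rather than anything in the stdlib datetime API:)

    from datetime import datetime, timedelta
    from pytz import timezone
    from pytz.exceptions import AmbiguousTimeError, NonExistentTimeError

    pacific = timezone('US/Pacific')

    # is_dst=None tells localize() to raise instead of guessing.
    try:
        pacific.localize(datetime(2015, 11, 1, 1, 30), is_dst=None)
    except AmbiguousTimeError:
        print('1:30 on 2015-11-01 occurs twice on a US/Pacific clock')

    try:
        pacific.localize(datetime(2015, 3, 8, 2, 30), is_dst=None)
    except NonExistentTimeError:
        print('2:30 on 2015-03-08 never occurs on a US/Pacific clock')

    # And the arithmetic wrinkle discussed below: after adding a timedelta
    # you must normalize() to get the wall-clock fields right again.
    dt = pacific.localize(datetime(2015, 3, 7, 12, 0))
    later = pacific.normalize(dt + timedelta(days=1))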


On Sat, Jul 25, 2015 at 8:31 PM, Lennart Regebro  wrote:

> On Sun, Jul 26, 2015 at 2:56 AM, Tim Peters  wrote:
> > However, the _body_ of the PEP said nothing whatsoever about altering
> > arithmetic.  The body of the PEP sounds like it's mainly just
> > proposing to fold the pytz package into the core.  Perhaps doing
> > _just_ that much would get this project unstuck?  Hope springs eternal
> > :-)
>
> The pytz package has an API and a usage that is different from the
> datetime module's. One of the things you need to do is normalize the
> result after each arithmetic operation. This is done
> because the original API design did not realize the difficulties and
> complexities of timezone handling and therefore left out things like
> ambiguous times.
>
> The PEP attempts to improve the datetime module's API so that it can
> handle the ambiguous times. It also says that the implementation will
> be based on pytz, because it was my assumption that this would be
> easy, since pytz already handles ambiguous times. During my attempt at
> implementing it I realized it wasn't easy at all, and it wasn't as
> easy as folding pytz into the core.
>
> Yes, the PEP gives that impression, because that was the assumption
> when I wrote the draft. Just folding pytz into the core without
> modifying the API defeats the whole purpose of the PEP, since
> installing pytz is a trivial task.
>
> > Like what?  I'm still looking for a concrete example of what "the
> > problem" is (or even "a" problem).
>
> A problem is that you have a datetime, and add a timedelta to it, and
> it should then result in a datetime that is actually that timedelta
> later. And if you subtract the same timedelta from the result, it
> should return a datetime that is equal to the original datetime.
>
> This sounds ridiculously simple, and is ridiculously difficult to make
> happen in all cases that we want to support (Riyadh time zone and leap
> seconds not included). That IS the specific, concrete problem, and if
> you don't believe me, there is nothing I can do to convince you.
> Perhaps I am a complete moron and simply incompetent to do this, and
> in that case I'm sure you could implement this over a day, and then
> please do so, but for the love of the founders of computing I'm not
> going to spend more time repeating it on this mailing list, because
> then we would do better in having you implement this instead of
> reading emails. Me repeating this a waste of time for everyone
> involved, and I will now stop.
>
> > 
>
> I was not involved in the discussion then, and even if I had been,
> that's still before I knew anything about the topic. I don't know what
> the arguments were, and I don't think it's constructive to try to
> figure out exactly why that decision was made. That is all to similar
> to assigning blame, which only makes people feel bad. Those who get
> blamed feel bad, and those who blame feel like dicks and onlookers get
> annoyed. Let us look forward instead.
>
> I am operating both without any need to defend that decision, as I was
> not involved in it, and I am operating with 20/20 hindsight as I am
> one of the few people having tried to implement a timezone
> implementation that supports ambiguous datetimes based on that
> decision. And then it is perfectly clear and obvious that the decision
> was a mistake and that we should rectify it.
>
> The only question for me is how and when.
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com

Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-09 Thread David Mertz
On Sun, Aug 9, 2015 at 11:22 AM, Eric V. Smith  wrote:
>
> I think it has to do with the nature of the programs that people write.
> I write software for internal use in a large company. In the last 13
> years there, I've written literally hundreds of individual programs,
> large and small. I just checked: literally 100% of my calls to
> %-formatting (older code) or str.format (in newer code) could be
> replaced with f-strings. And I think every such use would be an
> improvement.
>

I'm sure that pretty darn close to 100% of all the uses of %-formatting and
str.format I've written in the last 13 years COULD be replaced by the
proposed f-strings (I suppose about 16 years for me, actually).  But I
think that every single such replacement would make the programs worse.
I'm not sure if it helps to mention that I *did* actually "write the book"
on _Text Processing in Python_ :-).

The proposal just continues to seem far too magical to me.  In the training
I now do for Continuum Analytics (I'm in charge of the training program
with one other person), I specifically have a (very) little bit of the
lessons where I mention something like:

  print("{foo} is {bar}".format(**locals()))

But I give that entirely as a negative example of abusing code and
introducing fragility.  f-strings are really the same thing, only even more
error-prone and easier to get wrong.  Relying on implicit context of the
runtime state of variables that are merely in scope feels very break-y to
me still.  If I had to teach f-strings in the future, I'd teach it as a
Python wart.

That said, there *is* one small corner where I believe f-strings add
something helpful to the language.  There is no really concise way to spell:

  collections.ChainMap(locals(), globals(), __builtins__.__dict__).

If we could spell that as, say `lgb()`, that would let str.format() or
%-formatting pick up the full "what's in scope".  To my mind, that's the
only good thing about the f-string idea.

Yours, David...

-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-09 Thread David Mertz
Y'know, I just read a few more posts over on python-ideas that I had missed
somehow.  I saw Guido's point about `**locals()` being too specialized and
magical for beginners, which I agree with.  And it's the other aspect of
"magic" that makes me not like f-strings.  The idea of *implicitly* getting
values from the local scope (or really, the global_local_builtin scope)
makes me worry about readers of code very easily missing what's really
going on within an f-string.

I don't actually care about the code injection issues and that sort of
thing.  I mean, OK I care a little bit, but my actual concern is purely
explicitness and readability.

Which brought to mind a certain thought.  While I don't like:

f'My name is {name}, my age next year is {age+1}'

I wouldn't have any similar objection to:

   'My name is {name}, my age next year is {age+1}'.scope_format()

Or

  scope_format('My name is {name}, my age next year is {age+1}')

I realize that these could be completely semantically equivalent... but the
function or method call LOOKS LIKE a runtime operation, while a one letter
prefix just doesn't look like that (especially to beginners whom I might
teach).

The name 'scope_format' is ugly, and something shorter would be nicer, but
I think this conveys my idea.
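
As a sketch only (again, the names are placeholders and this is untested),
such a function could be written today for the simple-name case:

    import builtins
    import sys
    from collections import ChainMap

    def scope_format(template):
        # Format against whatever names are visible in the caller's scope.
        caller = sys._getframe(1)
        names = ChainMap(caller.f_locals, caller.f_globals, vars(builtins))
        return template.format_map(names)

    name = "David"
    print(scope_format("My name is {name}"))

Note that this only resolves plain names like {name}; supporting expressions
such as {age+1} is exactly the extra machinery the f-string proposal adds.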

Yours, David...

On Sun, Aug 9, 2015 at 6:14 PM, David Mertz  wrote:

> On Sun, Aug 9, 2015 at 11:22 AM, Eric V. Smith  wrote:
>>
>> I think it has to do with the nature of the programs that people write.
>> I write software for internal use in a large company. In the last 13
>> years there, I've written literally hundreds of individual programs,
>> large and small. I just checked: literally 100% of my calls to
>> %-formatting (older code) or str.format (in newer code) could be
>> replaced with f-strings. And I think every such use would be an
>> improvement.
>>
>
> I'm sure that pretty darn close to 100% of all the uses of %-formatting
> and str.format I've written in the last 13 years COULD be replaced by the
> proposed f-strings (I suppose about 16 years for me, actually).  But I
> think that every single such replacement would make the programs worse.
> I'm not sure if it helps to mention that I *did* actually "write the book"
> on _Text Processing in Python_ :-).
>
> The proposal just continues to seem far too magical to me.  In the
> training I now do for Continuum Analytics (I'm in charge of the training
> program with one other person), I specifically have a (very) little bit of
> the lessons where I mention something like:
>
>   print("{foo} is {bar}".format(**locals()))
>
> But I give that entirely as a negative example of abusing code and
> introducing fragility.  f-strings are really the same thing, only even more
> error-prone and easier to get wrong.  Relying on implicit context of the
> runtime state of variables that are merely in scope feels very break-y to
> me still.  If I had to teach f-strings in the future, I'd teach it as a
> Python wart.
>
> That said, there *is* one small corner where I believe f-strings add
> something helpful to the language.  There is no really concise way to spell:
>
>   collections.ChainMap(locals(), globals(), __builtins__.__dict__).
>
> If we could spell that as, say `lgb()`, that would let str.format() or
> %-formatting pick up the full "what's in scope".  To my mind, that's the
> only good thing about the f-string idea.
>
> Yours, David...
>
> --
> Keeping medicines from the bloodstreams of the sick; food
> from the bellies of the hungry; books from the hands of the
> uneducated; technology from the underdeveloped; and putting
> advocates of freedom in prisons.  Intellectual property is
> to the 21st century what the slave trade was to the 16th.
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-10 Thread David Mertz
I know. I elided including the nonexistent `nonlocals()` in there. But it
*should* be `lngb()`.  Or call it scope(). :-)
On Aug 10, 2015 10:09 AM, "Steven D'Aprano"  wrote:

> On Sun, Aug 09, 2015 at 06:14:18PM -0700, David Mertz wrote:
>
> [...]
> > That said, there *is* one small corner where I believe f-strings add
> > something helpful to the language.  There is no really concise way to
> spell:
> >
> >   collections.ChainMap(locals(), globals(), __builtins__.__dict__).
>
> I think that to match the normal name resolution rules, nonlocals()
> needs to slip in there between locals() and globals(). I realise that
> there actually isn't a nonlocals() function (perhaps there should be?).
>
> > If we could spell that as, say `lgb()`, that would let str.format() or
> > %-formatting pick up the full "what's in scope".  To my mind, that's the
> > only good thing about the f-string idea.
>
> I like the concept, but not the name. Initialisms tend to be hard
> to remember and rarely self-explanatory. How about scope()?
>
>
> --
> Steve
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] What's New editing

2015-09-05 Thread David Mertz
I have to apologize profusely here.  Just after I offered to do this (and
work even said it was OK in principle to do it on work time), my work load
went through the roof.  And now it is already later than most of it
should have been done.  I'd still very much like to work on this, but I

wonder if maybe someone else would like to be co-author since the increased
workload doesn't actually seem likely to diminish soon.

On Wed, Sep 2, 2015 at 7:03 PM, Yury Selivanov 
wrote:

>
>
> On 2015-07-06 11:38 AM, David Mertz wrote:
>
>> Hi Folks,
>>
>> I hereby volunteer to write "What's New for Python 3.5?" if folks on
>> python-dev are fine with me taking the job (i.e. I ran it by Travis, my
>> boss at Continuum, and he's happy to allow me to do that work within my
>> salaried hours... so having time isn't a problem).
>>
>> If this is OK with the powers-that-be, I'll coordinate with David Murray
>> on how best to take over this task from him.
>>
>
> Hi David,
>
> Are you still going to work on what's new for 3.5?
>
> Yury
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Conversion of a standard unicode string to a bit string in Python

2015-10-17 Thread David Mertz
This list is for discussion of development of the Python core language and
standard libraries, not for development *using* Python.  It sounds like you
should probably do your homework problem on your own, actually, but if you
seek advice, something like StackOverflow or python-list are likely to be
more appropriate.

On Sat, Oct 17, 2015 at 6:22 AM, Nemo Nautilius 
wrote:

> Hi All,
> I'm currently programming a set of crypto challenges in order to get a
> deeper understanding of python and crypto. The problem is to break a
> repeating key xor data (in a file). In order to do that I need a function
> to calculate the hamming distance between two strings.  To find that one
> needs to find the differing number of *bits* in a string. Any ideas on how
> to manipulate the string at bit level?
>
> This is my first time in writing a question to the mailing list so please
> let me know anything that I need to keep in mind while asking questions.
> Thanks in advance.
>
> Gracias
> Nemo
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
>


-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] type(obj) vs. obj.__class__

2015-10-18 Thread David Mertz
This recipe looks like a bad design to me to start with.  It's
too-clever-by-half, IMO.

If I were to implement RingBuffer, I wouldn't futz around with the
__class__ attribute to change it into another thing when it was full.  A
much more obvious API for users would be simply to implement a
RingBuffer.isfull() method, perhaps supported by an underlying
RingBuffer._full boolean attribute.  That's much friendlier than expecting
people to introspect the type of the thing for a question that only
occasionally matters; and when it does matter, the question is always
conceived exactly as "Is it full?" not "What class is this currently?"

So I think I'm still waiting for a compelling example where type(x) !=
x.__class__ would be worthwhile (yes, of course it's *possible*)
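
For readers following along, here is a minimal illustration of *how* the two
can differ at all (this shows the mechanism only, not that it is worthwhile):

    class Sneaky:
        # __class__ defined as a property: instances lie about their class.
        @property
        def __class__(self):
            return int

    s = Sneaky()
    print(type(s))             # <class '__main__.Sneaky'>
    print(s.__class__)         # <class 'int'>
    print(isinstance(s, int))  # True -- isinstance() trusts __class__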

On Sat, Oct 17, 2015 at 10:55 PM, Steven D'Aprano 
wrote:

> On Sat, Oct 17, 2015 at 03:45:19PM -0600, Eric Snow wrote:
> > In a recent tracker issue about OrderedDict [1] we've had some
> > discussion about the use of type(od) as a replacement for
> > od.__class__.
> [...]
> > The more general question of when we use type(obj) vs. obj.__class__
> > applies to both the language and to all the stdlib as I expect
> > consistency there would result in fewer surprises.  I realize that
> > there are some places where using obj.__class__ makes more sense (e.g.
> > for some proxy support).  There are other places where using type(obj)
> > is the way to go (e.g. special method lookup).  However, the
> > difference is muddled enough that usage is inconsistent in the stdlib.
> > For example, C-implemented types use Py_TYPE() almost exclusively.
> >
> > So, would it make sense to establish some concrete guidelines about
> > when to use type(obj) vs. obj.__class__?  If so, what would those be?
> > It may also be helpful to enumerate use cases for "type(obj) is not
> > obj.__class__".
>
> I for one would like to see a definitive explanation for when they are
> different, and when you should use one or the other. The only
> obvious example I've seen is the RingBuffer from the Python Cookbook:
>
> http://code.activestate.com/recipes/68429-ring-buffer/
>
>
>
> --
> Steve
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] type(obj) vs. obj.__class__

2015-10-18 Thread David Mertz
I'm not sure what benchmark you used to define the speed of RingBuffer.
I'm sure you are reporting numbers accurately for your tests, but there are
"lies, damn lies, and benchmarks", so "how fast" has a lot of nuance to it.

In any case, redefining a method in a certain situation feels a lot less
magic to me than redefining .__class__, and clarity and good API are much
more important than micro-optimization for something unlikely to be on a
critical path.

That's interesting about the `self._full` variable slowing it down, I think
I'm not surprised (but obviously it depends on just how it's used).  But
one can also simply define RingBuffer.isfull() using
`self.max==len(self.data)` if you prefer that approach.  I doubt
`myringbuffer.isfull()` is something you need to call in an inner loop.

That said, I think my implementation of RingBuffer would probably look more
like (completely untested):

class RingBuffer(object):
    def __init__(self, size_max):
        self.data = [None] * size_max
        self.size_max = size_max
        self.used = 0
        self.cur = 0
    def append(self, val):
        self.data[self.cur] = val
        self.cur = (self.cur + 1) % self.size_max
        self.used = min(self.used + 1, self.size_max)
    def isfull(self):
        return self.used == self.size_max

Feel free to try this version against whatever benchmark you have in mind.
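
For instance, a quick sanity check of the sketch above:

    rb = RingBuffer(3)
    for i in range(5):
        rb.append(i)
    print(rb.isfull(), rb.data)   # True [3, 4, 2]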


On Sun, Oct 18, 2015 at 5:09 PM, Peter Ludemann 
wrote:

> I re-coded the "too clever by half" RingBuffer to use the same design but
> with delegation ... and it ran 50% slower. (Code available on request)
> Then I changed it to switch implementations of append() and get() when it
> got full (the code is below) and it ran at essentially the same speed as
> the original. So, there's no need to be so clever with __class__. Of
> course, this trick of replacing a method is also "too clever by half"; but
> an instance variable for "full" slows it down by 15%.
>
> class RingBuffer(object):
>     def __init__(self, size_max):
>         self.max = size_max
>         self.data = []
>         self.cur = 0
>     def append(self, x):
>         self.data.append(x)
>         if len(self.data) == self.max:
>             self.append = self.append_full
>     def append_full(self, x):
>         self.data[self.cur] = x
>         self.cur = (self.cur + 1) % self.max
>     def get(self):
>         return self.data[self.cur:] + self.data[:self.cur]
>
>
>
> On 18 October 2015 at 08:45, David Mertz  wrote:
>
>> This recipe looks like a bad design to me to start with.  It's
>> too-clever-by-half, IMO.
>>
>> If I were to implement RingBuffer, I wouldn't futz around with the
>> __class__ attribute to change it into another thing when it was full.  A
>> much more obvious API for users would be simply to implement a
>> RingBuffer.isfull() method, perhaps supported by an underlying
>> RingBuffer._full boolean attribute.  That's much friendlier than expecting
>> people to introspect the type of the thing for a question that only
>> occasionally matters; and when it does matter, the question is always
>> conceived exactly as "Is it full?" not "What class is this currently?"
>>
>> So I think I'm still waiting for a compelling example where type(x) !=
>> x.__class__ would be worthwhile (yes, of course it's *possible*)
>>
>> On Sat, Oct 17, 2015 at 10:55 PM, Steven D'Aprano 
>> wrote:
>>
>>> On Sat, Oct 17, 2015 at 03:45:19PM -0600, Eric Snow wrote:
>>> > In a recent tracker issue about OrderedDict [1] we've had some
>>> > discussion about the use of type(od) as a replacement for
>>> > od.__class__.
>>> [...]
>>> > The more general question of when we use type(obj) vs. obj.__class__
>>> > applies to both the language and to all the stdlib as I expect
>>> > consistency there would result in fewer surprises.  I realize that
>>> > there are some places where using obj.__class__ makes more sense (e.g.
>>> > for some proxy support).  There are other places where using type(obj)
>>> > is the way to go (e.g. special method lookup).  However, the
>>> > difference is muddled enough that usage is inconsistent in the stdlib.
>>> > For example, C-implemented types use Py_TYPE() almost exclusively.
>>> >
>>> > So, would it make sense to establish some concrete guidelines about
>>> > when to use type(obj) vs. obj.__class__?  If so, what would those be?
>>> > It may also be helpful to enumerate use cases for "type(obj) is not
>>> > obj.__class__".
>>>

Re: [Python-Dev] type(obj) vs. obj.__class__

2015-10-18 Thread David Mertz
My intuition differs from Steven's here.  But that's fine.  In any case, my
simple implementation of RingBuffer in this thread avoids either rebinding
methods or changing .__class__.

And yes, of course collections.deque is better than any of these
implementations.  I was just trying to show that any such magic is unlikely
to be necessary... and in particular that the recipe given as an example
doesn't show it is.
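
For anyone who just wants the behaviour rather than the exercise, a tiny
example of the standard-library approach:

    from collections import deque

    buf = deque(maxlen=3)
    for i in range(5):
        buf.append(i)
    print(list(buf))               # [2, 3, 4] -- oldest entries dropped
    print(len(buf) == buf.maxlen)  # True once the buffer is full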

But still, you REALLY want your `caterpillar = Caterpillar()` to become
something of type "Butterfly" later?! Obviously I understand the biological
metaphor.  But I'd much rather have an API that provided me with
.has_metamorphosed() than have to look for the type as something new.

Btw. Take a look at Alex' talk with Anna at PyCon 2015.  They discuss
various "best practices" that have been superseded by improved language
facilities.  They don't say anything about this "mutate the __class__
trick", but I somehow suspect he'd put that in that category.

On Sun, Oct 18, 2015 at 6:47 PM, Steven D'Aprano 
wrote:

> On Sun, Oct 18, 2015 at 05:35:14PM -0700, David Mertz wrote:
>
> > In any case, redefining a method in a certain situation feels a lot less
> > magic to me than redefining .__class__
>
> That surprises me greatly. As published in the Python Cookbook[1], there
> is a one-to-one correspondence between the methods used by an object and
> its class. If you want to know what instance.spam() method does, you
> look at the class type(instance) or instance.__class__, and read the
> source code for spam.
>
> With your suggestion of re-defining the methods on the fly, you no
> longer have that simple relationship. If you want to know what
> instance.spam() method does, first you have to work out what it actually
> is, which may not be that easy. In the worst case, it might not be
> possible at all:
>
> class K:
>     def method(self):
>         if condition:
>             self.method = random.choice([lambda self: ...,
>                                          lambda self: ...,
>                                          lambda self: ...])
>
>
> Okay, that's an extreme example, and one can write bad code using any
> technique. But even with a relatively straight-forward version:
>
> def method(self):
>     if condition:
>         self.method = self.other_method
>
>
> I would classify "change the methods on the fly" as self-modifying code,
> which strikes me as much more hacky and hard to maintain than something
> as simple as changing the __class__ on the fly.
>
> Changing the __class__ is just a straight-forward metamorphosis: what
> was a caterpillar, calling methods defined in the Caterpillar class, is
> now a butterfly, calling methods defined in the Butterfly class.
>
> (The only change I would make from the published recipe would be to make
> the full Ringbuffer a subclass of the regular one, so isinstance() tests
> would work as expected. But given that the recipe pre-dates the
> wide-spread use of isinstance, the author can be forgiven for not
> thinking of that.)
>
> If changing the class on the fly is a metamorphosis, then it seems to me
> that self-modifying methods are like something from The Fly, where a
> horrible teleporter accident grafts body parts and DNA from one object
> into another object... or at least *repurposes* existing methods, so
> that what was your leg is now your arm.
>
> I've done that, and found it harder to reason about than the
> alternative:
>
> "okay, the object is an RingBuffer, but is the append method the
> RingBuffer.append method or the RingBuffer.full_append method?"
>
> versus
>
> "okay, the object is a RingBuffer, therefore the append method is the
> RingBuffer.append method".
>
>
> In my opinion, the only tricky thing about the metamorphosis tactic is
> that:
>
> obj = Caterpillar()
> # later
> assert type(obj) is Caterpillar
>
> may fail. You need a runtime introspection to see what the type of obj
> actually is. But that's not exactly unusual: if you consider Caterpillar
> to be a function rather than a class constructor (a factory perhaps?),
> then it's not that surprising that you can't know what *specific*
> type a function returns until runtime. There are many functions with
> polymorphic return types.
>
>
>
>
>
> [1] The first edition of the Cookbook was edited by Python luminaries
> Alex Martelli and David Ascher, so this recipe has their stamp of
> approval. This isn't some dirty hack.
>
>
> --
> Steve
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev

Re: [Python-Dev] PEP-8 wart... it recommends short names because of DOS

2015-10-20 Thread David Mertz
DOS Python programmers probably can't use `concurrent` or
`multiprocessing`. ☺
On Oct 20, 2015 6:26 PM, "Gregory P. Smith"  wrote:

> https://www.python.org/dev/peps/pep-0008/#names-to-avoid
>
> *"Since module names are mapped to file names, and some file systems are
> case insensitive and truncate long names, it is important that module names
> be chosen to be fairly short -- this won't be a problem on Unix, but it may
> be a problem when the code is transported to older Mac or Windows versions,
> or DOS."*
>
> There haven't been computers with less than 80 character file or path name
> element length limits in wide use in decades... ;)
>
> -gps
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 8 recommends short module names because FAT is still common today (was: PEP-8 wart... it recommends short names because of DOS)

2015-10-20 Thread David Mertz
Even thumb drives use VFAT. Yes it's an ugly hack, but the names aren't
limited to 8.3.
On Oct 20, 2015 6:59 PM, "Ben Finney"  wrote:

> "Gregory P. Smith"  writes:
>
> > There haven't been computers with less than 80 character file or path
> > name element length limits in wide use in decades... ;)
>
> Not true, your computer will happily mount severely-limited filesystems.
> Indeed, I'd wager it has done so many times this year.
>
> It is *filesystems* that limit the length of filesystem entries, and the
> FAT filesystem is still in very widespread use — on devices mounted by
> the computers you use today.
>
> Yes, we have much better filesystems today, and your primary desktop
> computer will almost certainly use something better than FAT for its
> primary storage's filesystem.
>
> That does not mean Python programmers should assume your computer will
> never mount a FAT filesystem (think small flash storage), nor that a
> program you run will never need to load Python modules from that
> filesystem.
>
>
> You'd like FAT to go away forever? Great, me too. Now we need to
> convince all the vendors of every small storage device – USB thumb
> drives, network routers, all manner of single-purpose devices – to use
> modern filesystems instead.
>
> Then, maybe after another human generation has come and gone, we can
> finally expect every filesystem, in every active device that might run
> any Python code, to be using something with a reasonably-large limit for
> filesystem entries.
>
> Until then, the advice in PEP 8 to keep module names short is reasonable.
>
> --
>  \   “The most common of all follies is to believe passionately in |
>   `\the palpably not true. It is the chief occupation of mankind.” |
> _o__)—Henry L. Mencken |
> Ben Finney
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] 2.7.11 Windows Installer issues on Win2008R2

2016-01-15 Thread Rader, David
Problem 1:
The .manifest information for the VC runtime DLLs has been changed in the
recent versions of the 2.7.x 64-bit installers for Windows. Python fails to
run on a clean Win2008R2 install after running the Python installer to
install "Just for me". The installation succeeds if "Install for all users"
is selected.

After install completes, trying to run python results in:
The application has failed to start because its side-by-side configuration
is incorrect. Please see the application event log or use the command-line
sxstrace.exe tool for more detail.

The event viewer log shows:
Activation context generation failed for "C:\Python27\python.exe".Error in
manifest or policy file "C:\Python27\Microsoft.VC90.CRT.MANIFEST" on line
4. Component identity found in manifest does not match the identity of the
component requested. Reference is
Microsoft.VC90.CRT,processorArchitecture="amd64",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8".
Definition is
Microsoft.VC90.CRT,processorArchitecture="amd64",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.30729.1".
Please use sxstrace.exe for detailed diagnosis.

This means that the VC2008 SP1 DLL and manifest are included in the
installer, but python.exe is compiled against VC2008 (_not_ SP1).
Replacing the installed manifest and VC90 with one pulled from an older
distribution with the correct 9.0.21022.8 version enables python to run.

Problem 2:
The compiled DLLs in the DLLs folder incorrectly have the VC manifest
included in them as well. This breaks the side-by-side lookup, since the
VC90 DLL is not in the DLLs folder. So if you try to import socket, you get
an error message like:
Traceback (most recent call last):
  File "hub\scripts\pgc.py", line 9, in 
import socket
  File "C:\Python27\lib\socket.py", line 47, in 
import _socket
ImportError: DLL load failed: The application has failed to start because
its side-by-side configuration is incorrect. Please see the application
event log or use the command-line sxstrace.exe tool for more detail.

Previous versions of Python for windows have had this problem but it was
corrected. It looks like it has crept back in.

--
David Rader
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Update PEP 7 to require curly braces in C

2016-01-19 Thread David Malcolm
On Mon, 2016-01-18 at 19:18 -0500, Terry Reedy wrote:
> On 1/18/2016 6:20 PM, Brett Cannon wrote:
> >
> >
> > On Sun, 17 Jan 2016 at 11:10 Brett Cannon  > > wrote:
> >
> > While doing a review of http://bugs.python.org/review/26129/ I asked
> > to have curly braces put around all `if` statement bodies. Serhiy
> > pointed out that PEP 7 says curly braces are optional:
> > https://www.python.org/dev/peps/pep-0007/#id5. I would like to
> > change that.
> >
> > My argument is to require them to prevent bugs like the one Apple
> > made with OpenSSL about two years ago:
> > https://www.imperialviolet.org/2014/02/22/applebug.html. Skipping
> > the curly braces is purely an aesthetic thing while leaving them out
> > can lead to actual bugs.
> >
> > Anyone object if I update PEP 7 to remove the optionality of curly
> > braces in PEP 7?
> >
> >
> > Currently this thread stands at:
> >
> > +1
> >Brett
> >Ethan
> >Robert
> >Georg
> >Nick
> >Maciej Szulik
> > +0
> >Guido
> > -0
> >Serhiy
> >MAL
> > -1
> >Victor (maybe; didn't specifically vote)
> >Larry
> >Stefan
> 
> Though I don't write C anymore, I occasionally read our C sources.  I 
> dislike mixed bracketing in a multiple clause if/else statement,  and 
> would strongly recommend against that.  On the other hand, to my 
> Python-trained eye, brackets for one line clauses are just noise.  +-0.
> 
> If coverity's scan does not flag the sort of misleading bug bait 
> formatting that at least partly prompted this thread
> 
> if (a):
> b;
> c;
> 
> then I think we should find or write something that does and run it over 
> existing code as well as patches.

FWIW, for the forthcoming gcc 6, I've implemented a new
-Wmisleading-indentation warning that catches this.  It's currently
enabled by -Wall:

sslKeyExchange.c: In function 'SSLVerifySignedServerKeyExchange':
sslKeyExchange.c:631:8: warning: statement is indented as if it were guarded 
by... [-Wmisleading-indentation]
goto fail;
^~~~
sslKeyExchange.c:629:4: note: ...this 'if' clause, but it is not
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
^~


(not that I've had time for core Python development lately, but FWIW in
gcc-python-plugin I mandate braces for single-statement clauses).

Dave

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 515: Underscores in Numeric Literals

2016-02-11 Thread David Mertz
Great PEP overall. We definitely don't want the restriction to grouping
numbers only in threes. South Asian crore use grouping in twos.

https://en.m.wikipedia.org/wiki/Crore
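
For example, under the proposed syntax (as later accepted for Python 3.6),
the Indian system groups the leading digits in twos:

    ten_lakh  = 10_00_000      # == 1_000_000
    one_crore = 1_00_00_000    # == 10_000_000
    print(ten_lakh == 1000000, one_crore == 10000000)   # True True
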
On Feb 11, 2016 7:04 PM, "Glenn Linderman"  wrote:

> On 2/11/2016 4:16 PM, Steven D'Aprano wrote:
>
> On Thu, Feb 11, 2016 at 06:03:34PM +, Brett Cannon wrote:
>
> On Thu, 11 Feb 2016 at 02:13 Steven D'Aprano  
>  wrote:
>
>
> On Wed, Feb 10, 2016 at 08:41:27PM -0800, Andrew Barnert wrote:
>
>
> And honestly, are you really claiming that in your opinion, "123_456_"
> is worse than all of their other examples, like "1_23__4"?
>
>
> Yes I am, because 123_456_ looks like you've forgotten to finish typing
> the last group of digits, while 1_23__4 merely looks like you have no
> taste.
>
>
>
> OK, but the keyword in your sentence is "taste".
>
>
> I disagree. The key *idea* in my sentence is that the trailing
> underscore looks like a programming error. In my opinion, avoiding that
> impression is important enough to make trailing underscores a syntax
> error.
>
> I've seen a few people vote +1 for things like 123_j and 1.23_e99, but I
> haven't seen anyone in favour of trailing underscores. Does anyone think
> there is a good case for allowing trailing underscores?
>
>
>
> If we update PEP 8 for our
> needs to say "Numerical literals should not have multiple underscores in a
> row or have a trailing underscore" then this is taken care of. We get a
> dead-simple rule for when underscores can be used, the implementation is
> simple, and we get to have more tasteful usage in the stdlib w/o forcing
> our tastes upon everyone or complicating the rules or implementation.
>
>
> I think this is a misrepresentation of the alternative. As I see it, we
> have two alternatives:
>
> - one or more underscores can appear AFTER the base specifier or any digit;
> - one or more underscores can appear BETWEEN two digits.
>
> To describe the second alternative as "complicating the rules" is, I
> think, grossly unfair. And if Serhiy's proposal is correct, the
> implementation is also no more complicated:
>
> # underscores after digits
> octinteger: "0" ("o" | "O") "_"* octdigit (octdigit | "_")*
> hexinteger: "0" ("x" | "X") "_"* hexdigit (hexdigit | "_")*
> bininteger: "0" ("b" | "B") "_"* bindigit (bindigit | "_")*
>
>
> # underscores after digits
> octinteger: "0" ("o" | "O") (octdigit | "_")*
> hexinteger: "0" ("x" | "X") (hexdigit | "_")*
> bininteger: "0" ("b" | "B") (bindigit | "_")*
>
>
> An extra side effect is that there are more ways to write zero.  0x, 0b,
> 0o, 0X, 0B, 0O, 0x_, 0b_, 0o_, etc.
> But most people write   0   anyway, so those would be bad style, anyway,
> but it makes the implementation simpler.
>
>
>
> # underscores between digits
> octinteger: "0" ("o" | "O") octdigit (["_"] octdigit)*
> hexinteger: "0" ("x" | "X") hexdigit (["_"] hexdigit)*
> bininteger: "0" ("b" | "B") bindigit (["_"] bindigit)*
>
>
> The idea that the second alternative "forc[es] our tastes on everyone"
> while the first does not is bogus. The first alternative also prohibits
> things which are a matter of taste:
>
> # prohibited in both alternatives
> 0_xDEADBEEF
> 0._1234
> 1.2e_99
> -_1
> 1j_
>
>
> I think that there is broad agreement that:
>
> - the basic idea is sound
> - leading underscores followed by digits are currently legal
>   identifiers and this will not change
> - underscores should not follow the sign - +
> - underscores should not follow the decimal point .
> - underscores should not follow the exponent e|E
> - underscores will not be permitted inside the exponent (even if
>   it is harmless, it's silly to write 1.2e9_9)
> - underscores should not follow the complex suffix j
>
> and only minor disagreement about:
>
> - whether or not underscores will be allowed after the base
>   specifier 0x 0o 0b
>
>
> +1 to allow underscores after the base specifier.
>
> - whether or not underscores will be allowed before the decimal
>   point, exponent and complex suffix.
>
>
> +1 to allow them. There may be cases where they are useful, and if it is
> not useful, it would not be used.  I really liked someone's style guide
> proposal: use of underscore within numeric constants should only be done to
> aid readability.  However, pre-judging what aids readability to one
> person's particular taste is inappropriate.
>
> Can we have a show of hands, in favour or against the above two? And
> then perhaps Guido can rule on this one way or the other and we can get
> back to arguing about more important matters? :-)
>
> In case it isn't obvious, I prefer to say No to allowing underscores
> after the base specifier, or before the decimal point, exponent and
> complex suffix.
>
> I think it was obvious :)  And I think we disagree. And yes, there are
> more important matters. But it was just a couple days ago when I wrote a
> big constant in some new code that I was thinking how nice it would be if I
> could put a delimiter in there... so I'll be glad for the feat

Re: [Python-Dev] PEP 514: Python environment registration in the Windows Registry

2016-03-01 Thread David Cournapeau
Hi Steve,

I have looked into this PEP to see what we need to do on Enthought's side of
things. I have a few questions:

1. Is it recommended to follow this for any Python version we may provide,
or just new versions (3.6 and above)? Most of our customers still heavily
use 2.7, and I wonder whether backporting this to 2.7 would cause more
trouble than it is worth.
2. The main issue for us in practice has been the `PythonPath` entry used to
build `sys.path`. I understand this is not the point of the PEP, but would
it make sense to give more precise recommendations for third-party providers
there?

IIUC, PEP 514 would recommend for us to do the following:

1. Use HKLM for "system install" or HKCU for "user install" as the root key
2. Register under "\Software\Python\Enthought"
3. We should patch our pythons to look in 2. and not in
"\Software\Python\PythonCore", especially for `sys.path`
constructions.
4. When a Python from Enthought is installed, it should never register
anything in the ``PythonCore`` key (as opposed to the Enthought key defined in 2.)

Is this correct ?
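
For what it's worth, here is a rough, untested sketch (Windows-only) of how
an external tool might walk the layout the PEP describes; only the
"Software\Python" key comes from the PEP, the rest is illustrative:

    import winreg

    def subkeys(root, path):
        # Yield the names of the subkeys of root\path, if the key exists.
        try:
            key = winreg.OpenKey(root, path)
        except OSError:
            return
        try:
            i = 0
            while True:
                try:
                    yield winreg.EnumKey(key, i)
                except OSError:
                    return
                i += 1
        finally:
            winreg.CloseKey(key)

    for root, label in ((winreg.HKEY_CURRENT_USER, "HKCU"),
                        (winreg.HKEY_LOCAL_MACHINE, "HKLM")):
        for company in subkeys(root, r"Software\Python"):
            for tag in subkeys(root, r"Software\Python\%s" % company):
                print(label, company, tag)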

I am not clear about 3., especially on what should be changed. I know that
for 2.7, we need to change PC\getpathp.c for sys.path, but are there any
other places where the registry is used by python itself ?

Thanks for working on this,

David

On Sat, Feb 6, 2016 at 9:01 PM, Steve Dower  wrote:

> I've posted an updated version of this PEP that should soon be visible at
> https://www.python.org/dev/peps/pep-0514.
>
> Leaving aside the fact that the current implementation of Python relies on
> *other* information in the registry (that is not specified in this PEP),
> I'm still looking for feedback or concerns from developers who are likely
> to create or use the keys that are described here.
>
> 
>
> PEP: 514
> Title: Python registration in the Windows registry
> Version: $Revision$
> Last-Modified: $Date$
> Author: Steve Dower 
> Status: Draft
> Type: Informational
> Content-Type: text/x-rst
> Created: 02-Feb-2016
> Post-History: 02-Feb-2016
>
> Abstract
> 
>
> This PEP defines a schema for the Python registry key to allow third-party
> installers to register their installation, and to allow applications to
> detect
> and correctly display all Python environments on a user's machine. No
> implementation changes to Python are proposed with this PEP.
>
> Python environments are not required to be registered unless they want to
> be
> automatically discoverable by external tools.
>
> The schema matches the registry values that have been used by the official
> installer since at least Python 2.5, and the resolution behaviour matches
> the
> behaviour of the official Python releases.
>
> Motivation
> ==
>
> When installed on Windows, the official Python installer creates a
> registry key
> for discovery and detection by other applications. This allows tools such
> as
> installers or IDEs to automatically detect and display a user's Python
> installations.
>
> Third-party installers, such as those used by distributions, typically
> create
> identical keys for the same purpose. Most tools that use the registry to
> detect
> Python installations only inspect the keys used by the official installer.
> As a
> result, third-party installations that wish to be discoverable will
> overwrite
> these values, resulting in users "losing" their Python installation.
>
> By describing a layout for registry keys that allows third-party
> installations
> to register themselves uniquely, as well as providing tool developers
> guidance
> for discovering all available Python installations, these collisions
> should be
> prevented.
>
> Definitions
> ===
>
> A "registry key" is the equivalent of a file-system path into the
> registry. Each
> key may contain "subkeys" (keys nested within keys) and "values" (named and
> typed attributes attached to a key).
>
> ``HKEY_CURRENT_USER`` is the root of settings for the currently logged-in
> user,
> and this user can generally read and write all settings under this root.
>
> ``HKEY_LOCAL_MACHINE`` is the root of settings for all users. Generally,
> any
> user can read these settings but only administrators can modify them. It is
> typical for values under ``HKEY_CURRENT_USER`` to take precedence over
> those in
> ``HKEY_LOCAL_MACHINE``.
>
> On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a
> special key
> that 32-bit processes transparently read and write to rather than
> accessing the
> ``Software`` key directly.
>
> Structure
> =
>
> We consider there to be a single collection of Python environments on a
> mac

Re: [Python-Dev] PEP 514: Python environment registration in the Windows Registry

2016-03-01 Thread David Cournapeau
On Tue, Mar 1, 2016 at 5:46 PM, Steve Dower  wrote:

> On 01Mar2016 0524, Paul Moore wrote:
>
>> On 1 March 2016 at 11:37, David Cournapeau  wrote:
>>
>>> I am not clear about 3., especially on what should be changed. I know
>>> that
>>> for 2.7, we need to change PC\getpathp.c for sys.path, but are there any
>>> other places where the registry is used by python itself ?
>>>
>>
>> My understanding from the earlier discussion was that you should not
>> patch Python at all. The sys.path building via PythonPath is not
>> covered by the PEP and you should continue as at present. The new keys
>> are all for informational purposes - your installer should write to
>> them, and read them if looking for your installations. But the Python
>> interpreter itself should not know or care about your new keys.
>>
>> Steve can probably clarify better than I can, but that's how I recall
>> it being intended to work.
>> Paul
>>
>
> Yes, the intention was to not move sys.path building out of the PythonCore
> key. It's solely about discovery by external tools.
>

Right. For us, continuing to populate sys.path from the registry key "owned"
by the python.org official installers is more and more untenable, because
every distribution writes there, and this is especially problematic when you
have both 32-bit and 64-bit distributions on the same machine.


> If you want to patch your own distribution to move the paths you are
> welcome to do that - there is only one string literal in getpathp.c that
> needs to be updated - but it's not a requirement and I deliberately avoided
> making a recommendation either way. (Though as discussed earlier in the
> thread, I'm very much in favour of deprecating and removing any use of the
> registry by the runtime itself in 3.6+, but still working out the
> implications of that.)
>

Great, I just wanted to make sure removing it ourselves does not put us in a
corner or further away from where Python itself is going.

Would it make sense to indicate in the PEP that doing so is allowed
(neither recommended nor frowned upon)?

David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Counting references to Py_None

2016-03-20 Thread David Wilson
On Sun, Mar 20, 2016 at 01:43:27PM -0300, Facundo Batista wrote:
> Hello!
> 
> I'm seeing that our code increases the reference counting to Py_None,
> and I find this a little strange: isn't Py_None eternal and will never
> die?
> 
> What's the point of counting its references?

Avoiding a branch on every single Py_DECREF / Py_INCREF?
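
A small illustration from the Python side (CPython-specific; note that since
CPython 3.12, PEP 683 makes None immortal, so its reported refcount no longer
changes):

    import sys

    before = sys.getrefcount(None)
    hold = [None] * 1000          # a thousand new references to None
    after = sys.getrefcount(None)
    print(after - before)         # roughly 1000 on CPython versions of this era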

> 
> Thanks!
> 
> -- 
> .Facundo
> 
> Blog: http://www.taniquetil.com.ar/plog/
> PyAr: http://www.python.org/ar/
> Twitter: @facundobatista
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/dw%2Bpython-dev%40hmmz.org
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Challenge: Please break this! (a.k.a restricted mode revisited)

2016-04-12 Thread David Wilson
On Tue, Apr 12, 2016 at 11:12:27PM +1000, Steven D'Aprano wrote:

> I can think of one possible threat. Suppose that the locale library
> has a bug, so that calling "aardvark".isdigit seg faults, potentially
> executing arbitrary C code, but at the very least crashing the
> application. Is that the sort of attack you're concerned by?

This thread already covered the need to address SEGV at length. For a
truly evil user, almost any kind of crash is an opportunity to take
control of the system, and a security solution ignoring this is no
security solution at all.


> Maybe so. And then Jon will fix that vulnerability. And somebody will
> find a new one. And he'll fix that too, or decide that it is too hard
> to fix and give up.
> 
> That's how security works. Even software designed for security can
> have exploitable bugs:
> 
> It seems unfair to me to hold Jon to a higher standard than we hold 
> people like Apple, or the Linux kernal devs.

I don't believe that's what is happening here. In the OS analogy, Jon is
generating busywork trying to secure an environment similar to Windows
3.1 that was simply never designed with e.g. memory protection in mind
to begin with, and there is no evidence after numerous attempts spanning
many years by multiple people that such an environment can be secured
meaningfully while still remaining generally useful.


> I fully accept and respect your personal opinion, based on your
> experience, that Jon's tactic is doomed to failure. But if he needs to
> learn this for himself, just as you had to learn it for yourself
> (otherwise you wouldn't have started your own sandbox project), I can
> respect that too. Progress depends on the unreasonable person who
> thinks they can overturn the conventional wisdom.

I'd deeply prefer it if this turned into an investigation or patchset
making CPython work nicely with seccomp, sandbox(7), pledge(2) or
whatever capability minimization mechanisms exist on Windows; they are
all mechanisms to make it much safer for random code to be executing on
your system, designed by folk who at all times expressly had security
in mind.

But that's not what's happening, instead a dead horse is being flogged
over a hundred messages in our inboxes and IMHO it is excruciating to
watch.


> Even if the only thing we learn from Jon's experiment is a new set of
> tricks for breaking out of the sandbox, that's still interesting, if
> not useful.

Don't forget the worst case: a fundamentally broken security module
heavily marketed to the naive using claims the core team couldn't break
it.


David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Yearly PyPI breakage

2016-05-05 Thread David Wilson
This is mostly just observational, and not meant primarily as criticism
of the fabulous work of Donald and others (ignoring pypa, also the
prompt, reliable, and skilled support responses common on such places as
IRC); however, I can't help but notice that PyPI governance seems to come
under fire vastly more often than similar and much more popular
packaging systems, and some choices that have been made particularly in
recent years have caused a noticeable amount of dissent with what might
be considered the typical developer.

When a contributor to the core language is having repeated issues
maintaining some basic element of the function of the packaging system,
might it be fair to reflect on how changes to those functions are being
managed?

There are PEPs covering a great deal of the work done to PyPI recently,
but, and I say this as someone who has bumped into friction with the
packaging tooling in the relatively recent past, even I, despite my
motivations to the contrary, have not read most of them. It seems the
current process is observed by few, does not sufficiently address the
range of traditional use cases that were possible in the past, and the
first the common user learns of a change is when pip (after insisting it
must be upgraded) fails to function as it previously did.

The usual course then is some complaint that leads to distutils-sig,
which ultimately leads to pointing at some design work that was only
observed by perhaps 50 people at most and that turns out to have had some
edge cases that hurt in a common use case.

Is there something to contemplate in here? I dislike posting questions
instead of answers, but it seems apparent there is a problem here and it
continues to remain unaddressed.


David


On Tue, May 03, 2016 at 07:06:12PM +, Stefan Krah wrote:
> 
> Hello,
> 
> Could someone enlighten me which hoops I have to jump through
> this year in order to keep pip downloads working?
> 
> Collecting cdecimal
>   Could not find a version that satisfies the requirement cdecimal (from
> versions: )
> No matching distribution found for cdecimal
> You are using pip version 7.1.2, however version 8.1.1 is available.
> You should consider upgrading via the 'pip install --upgrade pip' command.
> 
> 
> If this continues, I'm going to release a premium version that's
> 50% faster and only available from bytereef.org or Anaconda.
> 
> 
> 
> Stefan Krah
> 
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/dw%2Bpython-dev%40hmmz.org
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Yearly PyPI breakage

2016-05-05 Thread David Wilson
On Fri, May 06, 2016 at 12:03:48AM +, Brett Cannon wrote:

> Is there something to contemplate in here? I dislike posting questions
> instead of answers, but it seems apparent there is a problem here and it
> continues to remain unaddressed.

> This is whole thread is off-topic precisely because all of this is
> discussed -- in the open -- on distutils-sig and decided there. If
> people feel changes need to be made like broadcasting to a wider
> audience when a change occurs, then please bring it up on
> distutils-sig.

I respectfully disagree, as this has been the default response applied
for years, and it seems friction and dissemination have not been
improved by it. Packaging is not some adjunct technicality, anyone
learning Python in the past few years at least has been taught pip
within the first week.


> But if people choose not to participate then they are implicitly
> delegating decision powers to those who do participate

I believe this is also practically rhetorical in nature. I've watched
the wars on distutils-sig for many years now, and the general strategy
is that beyond minor outside influence, the process there is occupied by
a few individuals who are resistant to outside change. Outside influence
is regularly met with essay-length responses and tangential minutiae until
the energy of the challenge is expended.

As an example, one common argument is that "Donald is overworked";
yet I offered a very long time ago to implement full-text indexing for
PyPI search. At the time I believe I was told such things weren't
necessary, only to learn a few years later that Donald himself
implemented the same function, and in the meantime it has suffered from
huge latency and accuracy issues. The solution to those problems is
of course the ever-delayed rewrite.

Over on distutils-sig, one will learn that a large amount of effort has
been poured into a rewrite of PyPI (an effort going on for years now);
however, the original codebase was not far from rescue (I had a local
copy almost entirely ported to Flask in a few days). There is no reason
why this effort, nor any other (like full-text search), should be used, as
it often is, as an argument in the decision-making process that largely
governs how PyPI and pip have worked in recent years, yet it only
takes a few glances at the archives to demonstrate that it regularly is.


David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Add Gentoo packagers of external modules to Misc/ACKS

2013-12-09 Thread David Malcolm
On Sun, 2013-12-08 at 05:29 -0500, R. David Murray wrote:
> As far as we have been able to determine, Tae Wong is in fact a bot
> (note the 'seo' in the email address...a tip of the hand, as far as
> I can see).  We have removed all access permissions (including email)
> from the related account on the bug tracker already.  IMO this address
> should be blocked from posting to all python lists.

FWIW the address has also been posting to the gcc lists "helpfully"
asking for spam posts to be removed (with *links* to the posts), plus
some (apparently) random-harvested paragraphs of text from various other
mailing lists, presumably to try to get past filters.

See e.g. the URL obtained by running:
 echo uggc://tpp.tah.bet/zy/tpp/2013-12/zft00097.ugzy | rot13


Hope this is constructive
Dave

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How long the wrong type of argument should we limit (or not) in the error message (C-api)?

2013-12-13 Thread David Hutto
Since Python is, to me, a prototyping language, every possible
outcome should be presented to the end user.

A full range of explanations should be presented to the programmer.



On Fri, Dec 13, 2013 at 11:56 PM, Vajrasky Kok
wrote:

> Greetings,
>
> When fixing/adding error message for wrong type of argument in C code,
> I am always confused, how long the wrong type is the ideal?
>
> The type of error message that I am talking about:
>
> "Blabla() argument 1 must be integer not wrong_type".
>
> We have inconsistency in CPython code, for example:
>
> Python/sysmodule.c
> ===
> PyErr_Format(PyExc_TypeError,
> "can't intern %.400s", s->ob_type->tp_name);
>
> Modules/_json.c
> 
> PyErr_Format(PyExc_TypeError,
>  "first argument must be a string, not %.80s",
>  Py_TYPE(pystr)->tp_name);
>
>
> Objects/typeobject.c
> ===
> PyErr_Format(PyExc_TypeError,
>  "can only assign string to %s.__name__, not '%s'",
>  type->tp_name, Py_TYPE(value)->tp_name);
>
> So is it %.400s or %.80s or %s? I vote for %s.
>
> Other thing is which one is more preferable? Py_TYPE(value)->tp_name
> or value->ob_type->tp_name? I vote for Py_TYPE(value)->tp_name.
>
> Or this is just a matter of taste?
>
> Thank you.
>
> Vajrasky Kok
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/dwightdhutto%40gmail.com
>



-- 
Best Regards,
David Hutto
*CEO:* *http://www.hitwebdevelopment.com <http://www.hitwebdevelopment.com>*
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How long the wrong type of argument should we limit (or not) in the error message (C-api)?

2013-12-14 Thread David Hutto
Susskind... me too, but the refinement of the error messages is the point.
We should be looking at the full assessment of the error, which the
prototyping nature of Python should present.

I've seen others reply that Python won't be around, or that there are
other forms I've seen before that will take the forefront.

The point should be to align the prototyping of Python with the updates in
technology taking place.

It should be like it usually is: line-for-line error assessments, even
followed by further info to inform the prototyper who is looking to
translate to a lower-level language.


On Sat, Dec 14, 2013 at 4:07 PM, Greg Ewing wrote:

> David Hutto wrote:
>
>> Being that python is, to me, a prototyping language, then every possible
>> outcome should be presented to the end user.
>>
>
> So we should produce a quantum superposition of
> error messages? :-)
>
> (Sorry, I've been watching Susskind's lectures on
> QM and I've got quantum on the brain at the moment.)
>
> --
> Greg
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> dwightdhutto%40gmail.com
>



-- 
Best Regards,
David Hutto
*CEO:* *http://www.hitwebdevelopment.com <http://www.hitwebdevelopment.com>*
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How long the wrong type of argument should we limit (or not) in the error message (C-api)?

2013-12-14 Thread David Hutto
We all strive to be Python programmers, and some of the responses are that
it might not be around in the future.

Now we are all probably conversational in other languages, but I'm thinking
of keeping around a great prototyping language.

So the topic becomes how to integrate it with not just the expected,
but the unexpected technologies. Although the topic is error messages, it
should apply to all possibilities that could be derived from a prototyping
language like Python.


On Sat, Dec 14, 2013 at 11:09 PM, David Hutto wrote:

> Susskind... me too, but the refinement of the error messages is the point.
> We should be looking at the full assessment of the error, which the
> prototyping nature of Python should present.
>
> I've seen others reply that Python won't be around, or that there are
> other forms I've seen before that will take the forefront.
>
> The point should be to align the prototyping of Python with the updates in
> technology taking place.
>
> It should be like it usually is: line-for-line error assessments, even
> followed by further info to inform the prototyper who is looking to
> translate to a lower-level language.
>
>
> On Sat, Dec 14, 2013 at 4:07 PM, Greg Ewing 
> wrote:
>
>> David Hutto wrote:
>>
>>> Being that python is, to me, a prototyping language, then every possible
>>> outcome should be presented to the end user.
>>>
>>
>> So we should produce a quantum superposition of
>> error messages? :-)
>>
>> (Sorry, I've been watching Susskind's lectures on
>> QM and I've got quantum on the brain at the moment.)
>>
>> --
>> Greg
>>
>> ___
>> Python-Dev mailing list
>> Python-Dev@python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
>> dwightdhutto%40gmail.com
>>
>
>
>
> --
> Best Regards,
> David Hutto
> *CEO:* *http://www.hitwebdevelopment.com
> <http://www.hitwebdevelopment.com>*
>



-- 
Best Regards,
David Hutto
*CEO:* *http://www.hitwebdevelopment.com <http://www.hitwebdevelopment.com>*
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [RELEASED] Python 3.4.0 release candidate 1

2014-02-11 Thread David Robinow
On Tue, Feb 11, 2014 at 5:45 AM, Terry Reedy  wrote:
> On 2/11/2014 5:13 AM, Terry Reedy wrote:
>> ...
>> I installed 64 bit 3.3.4 yesterday with no problem. I reran it today in
>> repair mode and again, no problem.
>>
>> With 64 bit 3.4.0, I get
>> "There is a problem with this Windows Installer package. A program
>> required for the install to complete could not be run."
>>
>> No, the generic message does not bother to say *which* program :-(.
>>
>> 34 bit 3.4.0 installed fine.
>
>
> I wrote too soon.
>
> Python 3.4.0rc1 (v3.4.0rc1:5e088cea8660, Feb 11 2014, 05:54:25) [MSC
> >>> import tkinter
> Traceback ...
> import _tkinter
> ImportError: DLL load failed: %1 is not a valid Win32 application.
>
> So tkinter, Idle, turtle fail and the corresponding tests get skipped.
32 bit and 64 bit both work for me.  Windows 7.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


  1   2   3   4   5   6   7   8   9   10   >