[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-06 Thread Steven D'Aprano
Although I am cautiously and tentatively in favour of setting limits 
if the benefits Mark suggests are correct, I have thought of at least 
one case where a million classes may not be enough.

I've seen people write code like this:

from collections import namedtuple

for attributes in list_of_attributes:
    obj = namedtuple("Spam", "fe fi fo fum")(*attributes)
    values.append(obj)


not realising that every obj is a singleton instance of a unique class. 
They might end up with a million dynamically created classes, each with 
a single instance, when what they wanted was a single class with a 
million instances.

Could there be people doing this deliberately? If so, it must be nice 
to have so much RAM that we can afford to waste it so prodigiously: a 
namedtuple with ten items uses 64 bytes, but the associated class uses 
444 bytes, plus the sizes of the methods etc. But I suppose there could 
be a justification for such a design.

(Quoted sizes on my system running 3.5; YMMV.)
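For anyone who wants to reproduce those measurements, a minimal sketch
(the exact numbers vary by Python version and platform, and the class and
field names here are just placeholders):

import sys
from collections import namedtuple

# One namedtuple class with ten fields, and a single instance of it.
Spam = namedtuple("Spam", "a b c d e f g h i j")
instance = Spam(*range(10))

print(sys.getsizeof(instance))  # the lone instance
print(sys.getsizeof(Spam))      # the class object itself (methods not included)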



-- 
Steven


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-06 Thread Paul Moore
On Fri, 6 Dec 2019 at 09:33, Steven D'Aprano  wrote:
>
> Although I am cautiously and tentatively in favour of setting limits
> if the benefits Mark suggests are correct, I have thought of at least
> one case where a million classes may not be enough.
>
> I've seen people write code like this:
>
> for attributes in list_of_attributes:
>     obj = namedtuple("Spam", "fe fi fo fum")(*attributes)
>     values.append(obj)
>
>
> not realising that every obj is a singleton instance of a unique class.
> They might end up with a million dynamically created classes, each with
> a single instance, when what they wanted was a single class with a
> million instances.

But isn't that the point here? A limit would catch this and prompt
them to rewrite the code as

cls = namedtuple("Spam", "fe fi fo fum")
for attributes in list_of_attributes:
    obj = cls(*attributes)
    values.append(obj)

> Could there be people doing this deliberately? If so, it must be nice
> to have so much RAM that we can afford to waste it so prodigiously: a
> namedtuple with ten items uses 64 bytes, but the associated class uses
> 444 bytes, plus the sizes of the methods etc. But I suppose there could
> be a justification for such a design.

You're saying that someone might have a justification for deliberately
creating a million classes, based on an example that on the face of it
is a programmer error (creating multiple classes when a single shared
class would be better) and presuming that there *might* be a reason
why this isn't an error? Well, yes - but I could just as easily say
that someone might have a justification for creating a million classes
in one program, and leave it at that. Without knowing (roughly) what
the justification is, there's little we can take from this example.

Having said that, I don't really have an opinion on this change.
Basically, I feel that it's fine, as long as it doesn't break any of
my code (which I can't imagine it would) but that's not very helpful!
https://xkcd.com/1172/ ("Every change breaks someone's workflow")
comes to mind here.

Paul


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-06 Thread Nathaniel Smith
On Thu, Dec 5, 2019 at 5:38 AM Mark Shannon  wrote:
>  From my limited googling, linux has a hard limit of about 600k file
> descriptors across all processes. So, 1M is well past any reasonable
> per-process limit. My impression is that the limits are lower on
> Windows, is that right?

Linux does limit the total number of file descriptors across all
processes, but the limit is configurable at runtime. 600k is the
default limit, but you can always make it larger (and people do).

In my limited experimentation with Windows, it doesn't seem to impose
any a priori limit on how many sockets you can have open. When I wrote
a simple process that opens as many sockets as it can in a loop, I
didn't get any error; eventually the machine just locked up. (I guess
this is another example of why it can be better to have explicit
limits!)
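
For reference, a minimal sketch of the kind of probe described above --
hypothetical code, and be warned that it really will exhaust file
descriptors (or, on some systems, destabilise the machine):

import socket

# Open sockets until the OS says no. On Linux you would expect an
# OSError (EMFILE) once the per-process fd limit is reached; on
# Windows, per the report above, you may get no error at all.
sockets = []
try:
    while True:
        sockets.append(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
except OSError as exc:
    print(f"hit a limit after {len(sockets)} sockets: {exc}")
finally:
    for s in sockets:
        s.close()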

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-06 Thread Thomas Wouters
On Fri, Dec 6, 2019 at 12:14 PM Paul Moore  wrote:

> On Fri, 6 Dec 2019 at 09:33, Steven D'Aprano  wrote:
> >
> > Although I am cautiously and tentatively in favour of setting limits
> > if the benefits Mark suggests are correct, I have thought of at least
> > one case where a million classes may not be enough.
> >
> > I've seen people write code like this:
> >
> > for attributes in list_of_attributes:
> >     obj = namedtuple("Spam", "fe fi fo fum")(*attributes)
> >     values.append(obj)
> >
> >
> > not realising that every obj is a singleton instance of a unique class.
> > They might end up with a million dynamically created classes, each with
> > a single instance, when what they wanted was a single class with a
> > million instances.
>
> But isn't that the point here? A limit would catch this and prompt
> them to rewrite the code as
>
> cls = namedtuple("Spam", "fe fi fo fum")
> for attributes in list_of_attributes:
>     obj = cls(*attributes)
>     values.append(obj)
>

This assumes two things: that you actually hit the limit while testing or
developing the code (how likely is that?), and that the person hitting the
limit has control over the code that does this. If it's in a library where
it's usually not a problem -- a library you don't directly control, but
that you're using in a way that triggers the limit, for example by mixing
it with *other* library code you don't control -- the limit is just a
burden.


>
> > Could there be people doing this deliberately? If so, it must be nice
> > to have so much RAM that we can afford to waste it so prodigiously: a
> > namedtuple with ten items uses 64 bytes, but the associated class uses
> > 444 bytes, plus the sizes of the methods etc. But I suppose there could
> > be a justification for such a design.
>
> You're saying that someone might have a justification for deliberately
> creating a million classes, based on an example that on the face of it
> is a programmer error (creating multiple classes when a single shared
> class would be better) and presuming that there *might* be a reason
> why this isn't an error? Well, yes - but I could just as easily say
> that someone might have a justification for creating a million classes
> in one program, and leave it at that. Without knowing (roughly) what
> the justification is, there's little we can take from this example.
>
> Having said that, I don't really have an opinion on this change.
> Basically, I feel that it's fine, as long as it doesn't break any of
> my code (which I can't imagine it would) but that's not very helpful!
> https://xkcd.com/1172/ ("Every change breaks someone's workflow")
> comes to mind here.
>
> Paul


-- 
Thomas Wouters 

Hi! I'm an email virus! Think twice before sending your email to help me
spread!


[Python-Dev] Re: PEP 611: The one million limit.

2019-12-06 Thread Rhodri James

Apologies again for commenting in the wrong place.

On 05/12/2019 16:38, Mark Shannon wrote:

> Memory access is usually a limiting factor in the performance of
> modern CPUs. Better packing of data structures enhances locality and
> reduces memory bandwidth, at a modest increase in ALU usage (for
> shifting and masking).


I don't think this assertion holds much water:

1. Caching makes memory access much less of a limit than you would expect.
2. Non-aligned memory accesses vary from inefficient to impossible,
depending on the processor.
3. Shifting and masking isn't free, and again on some processors can be
very expensive. (A small sketch of the shifting and masking at issue
follows this list.)
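
To make point 3 concrete, here is a rough sketch of the shift-and-mask
work that packed fields imply (the 20/12-bit split is only an
illustration, not something taken from the PEP):

LINE_BITS, COL_BITS = 20, 12  # 2**20 > 1,000,000 lines; 12 bits for columns

def pack(line, col):
    # Build one 32-bit word out of two fields: a shift and an OR.
    assert line < (1 << LINE_BITS) and col < (1 << COL_BITS)
    return (line << COL_BITS) | col

def unpack(word):
    # Getting the fields back out costs a shift and a mask.
    return word >> COL_BITS, word & ((1 << COL_BITS) - 1)

assert unpack(pack(999_999, 80)) == (999_999, 80)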


Mark wrote:
>>> There is also the potential for a more efficient instruction format,
>>> speeding up interpreter dispatch.
I replied:
>> This is the ARM/IBM mistake all over again.
Mark challenged:
> Could you elaborate? Please bear in mind that this is software
> dispatching and decoding, not hardware.

Hardware generally has a better excuse for instruction formats, because 
for example you know that an ARM only has sixteen registers, so you only 
need four bits for any register operand in an instruction.  Except that 
when they realised that they needed the extra address bits in the PC 
after all, they had to invent a seventeenth register to hold the status 
bits, and had to pretend it was a co-processor to get opcodes to access 
it.  Decades later, status manipulation on modern ARMs is, in 
consequence, neither efficient nor pretty.


You've talked some about not making the 640k mistake (and all the others
we could and have pointed to), and said that one million is a ridiculously
generous limit.  You don't seem to have taken on board that when those
limits were set, they *were* ridiculous.  I remember when we couldn't
source 20Mb hard discs any more, and were worried that 40Mb was far too
much... to share between twelve computers.  More recently there were
serious discussions of how to manage transferring terabyte-sized datasets
(by van, it turned out).


Sizes in computing projects have a habit of going up by orders of 
magnitude.  Text files were just a few kilobytes, so why worry about 
only using sixteen bit sizes?  Then flabby word processors turned that 
into megabytes, audio put another order of magnitude or two on that, 
video is up in the multiple gigabytes, and the amount of data involved 
in the Human Genome Project is utterly ludicrous.  Have we hit the limit 
with Big Data?  I'm not brave enough to say that, and when you start 
looking at the numbers involved, one million anythings doesn't look so 
ridiculous at all.


--
Rhodri James *-* Kynesim Ltd


[Python-Dev] Re: Rejecting PEP 606 and 608

2019-12-06 Thread Victor Stinner
Hi,

On Wed, 4 Dec 2019 at 00:58, Barry Warsaw  wrote:
>
> At the Steering Council’s November 12th meeting, we unanimously agreed to 
> reject PEPs 606 and 608:
>
> * 606 - Python Compatibility Version
> (...)
> It was our opinion that neither PEP will effectively improve compatibility as 
> proposed.  Additionally, PEP 606 had the problem of using global state to 
> request compatibility, (...)

Yeah, I hesitated to assign a PEP number. I created a PR just to have a place
to put comments, if people prefer to comment on a PR rather than replying on
python-ideas, but someone merged my PR :-) It's my fault, it was
too early to propose a PR (and assign a PEP number).

I concur with the feedback received on python-ideas: it's not a good idea
:-) The blocking point is really the global state and being able to change
the compatibility version between imports :-(


> * 608 - Coordinated Python Release
> (...)
> It was our opinion that neither PEP will effectively improve compatibility as 
> proposed.  Additionally, (...), and PEP 608 puts external projects in the 
> critical path for Python releases.

We tried to put concrete constraints in the PEP to avoid proposing an
"empty PEP" which could be completely ignored in practice if nothing
were mandatory.

But the feedback was a strong rejection of the main idea proposed in
the PEP: blocking a release while "selected projects" remain
incompatible.

My plan is now to work on the CI idea to provide feedback on broken projects.


> We want to thank Victor Stinner and Miro Hrončok for their contributions and 
> authorship of these PEPs.

You're welcome ;-)

Victor
--
Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: Travis CI for backports not working.

2019-12-06 Thread Victor Stinner
Hello,

On Tue, 26 Nov 2019 at 20:40, Brett Cannon  wrote:
> I have turned Travis off as a required check on the 3.7, 3.8, and master 
> branches until someone is able to get a fix in.

That makes me sad :-( Is there an issue at bugs.python.org to track
it? What's the status? Right now, I see Travis CI jobs passing on 3.7,
3.8 and master branches so I don't understand the problem. Maybe the
issue has been fixed and Travis CI can be made mandatory again?

Victor
-- 
Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: Rejecting PEP 606 and 608

2019-12-06 Thread Guido van Rossum
Let's try to avoid having PEP discussions in the peps tracker, period. That
repo's tracker is only meant to handle markup and grammar.

But if you want to prevent a PR from being merged, in general you should
put [WIP] in the subject.

On Fri, Dec 6, 2019 at 06:21 Victor Stinner  wrote:

> Hi,
>
> On Wed, 4 Dec 2019 at 00:58, Barry Warsaw  wrote:
> >
> > At the Steering Council’s November 12th meeting, we unanimously agreed
> to reject PEPs 606 and 608:
> >
> > * 606 - Python Compatibility Version
> > (...)
> > It was our opinion that neither PEP will effectively improve
> compatibility as proposed.  Additionally, PEP 606 had the problem of using
> global state to request compatibility, (...)
>
> Yeah, I hesitated to assign a PEP number. I created a PR just to have a
> place to put comments, if people prefer to comment on a PR rather than
> replying on python-ideas, but someone merged my PR :-) It's my fault, it
> was too early to propose a PR (and assign a PEP number).
>
> I concur with the feedback received on python-ideas: it's not a good idea
> :-) The blocking point is really the global state and being able to change
> the compatibility version between imports :-(
>
>
> > * 608 - Coordinated Python Release
> > (...)
> > It was our opinion that neither PEP will effectively improve
> compatibility as proposed.  Additionally, (...), and PEP 608 puts external
> projects in the critical path for Python releases.
>
> We tried to put concrete constraints in the PEP to avoid
> proposing an "empty PEP" which could be completely ignored in practice
> if nothing were mandatory.
>
> But the feedback was a strong rejection of the main idea proposed
> in the PEP: blocking a release while "selected projects" remain
> incompatible.
>
> My plan is now to work on the CI idea to provide feedback on broken
> projects.
>
>
> > We want to thank Victor Stinner and Miro Hrončok for their contributions
> and authorship of these PEPs.
>
> You're welcome ;-)
>
> Victor
> --
> Night gathers, and now my watch begins. It shall not end until my death.
-- 
--Guido (mobile)


[Python-Dev] Re: Rejecting PEP 606 and 608

2019-12-06 Thread Chris Angelico
On Sat, Dec 7, 2019 at 2:01 AM Guido van Rossum  wrote:
>
> Let's try to avoid having PEP discussions in the peps tracker, period. That 
> repo's tracker is only meant to handle markup and grammar.
>
> But if you want to prevent a PR from being merged, in general you should put 
> [WIP] in the subject.
>

Yes, and you can also create a draft PR on GitHub, which you can then
choose to mark as ready-for-review later.

ChrisA


[Python-Dev] PEP 587

2019-12-06 Thread Ayush Das
Hi. I want to download Python 3.8.0 on my mobile. Please help me to download it.


[Python-Dev] Re: Should we require all deprecations to have a removal version that we follow through on?

2019-12-06 Thread Victor Stinner
Hi,

On Wed, 27 Nov 2019 at 19:40, Brett Cannon  wrote:
> What do people think of the idea of requiring all deprecations to specify a
> version that the feature will be removed in (which under our annual release
> cadence would be at least the third release from the start of the
> deprecation, hence the deprecation being public for 2 years)? And that we
> also follow through with that removal?

Isn't it already the current unwritten deprecation process? The common
case is to schedule the removal. Previously, it was common to wait 2
releases before removing anything: pending deprecation in 3.6,
deprecation in 3.7, removal in 3.8. It means that developers are
warned during 2 releases: 3 years (I'm optimistic and consider that
developers pay attention to PendingDeprecationWarning). So deprecation
during 3 releases with the new release cadence would fit the previous
practice: 3 years ;-)

Are you suggesting to start with a PendingDeprecationWarning? I recall
a debate about the value of this warning which is quiet by default,
whereas DeprecationWarning is now displayed in the __main__ module.
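
For readers following along, a small sketch of the difference between the
two warning categories (the function names are hypothetical):

import warnings

def old_api():
    # Shown by default when triggered from code run under __main__ (3.7+).
    warnings.warn("old_api() is deprecated; use new_api() instead",
                  DeprecationWarning, stacklevel=2)

def shaky_api():
    # Quiet by default everywhere; only visible with -W or a warnings filter.
    warnings.warn("shaky_api() may be deprecated in a future release",
                  PendingDeprecationWarning, stacklevel=2)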

I'm not sure about enforcing the removal. Some APIs were deprecated,
but it was unclear if we really wanted to remove them, or we had no
idea how long it would take for developers to upgrade to the new
functions.

For example, the C API for the old Unicode API using "Py_UNICODE*"
type is deprecated since Python 3.3 but there is no concrete plan to
remove it (just a vague "Python 4.0 removal" plan).

I deprecated the support for bytes filenames on Windows in Python 3.3,
but Steve Dower implemented his PEP 529 (use UTF-8 for bytes on
Windows) in Python 3.6 and so the deprecation warnings have been
simply removed!

I would suggest to schedule removal, but not require it too strongly ;-)


> Now I'm not suggesting it **must** be in three feature releases. A 
> deprecation could be set to Python 4 (...)

I know that it's an unpopular opinion, but I would strongly prefer
Python 4 to behave exactly like any other Python 3.x release in terms
of deprecation. If I exaggerate, I would prefer that Python 4.0 would
have *less* incompatible changes than the last Python 3.x release. I
hope that upgrading from Python 3 to Python 4 will be as smooth as
possible. Changing sys.version_info.major and the "python4" program
name are already going to cause enough trouble.

Recently, I looked at all deprecation changes scheduled for removal in
Python 4.0. I removed these features in Python 3.9, especially for
features deprecated for more than 2 releases. See:
https://docs.python.org/dev/whatsnew/3.9.html#removed

For example, I removed the "U" mode of open() which was deprecated
since Python 3.4 and scheduled for removal in Python 4.0.

My plan is to revert these changes if too many users complain (see my
rejected PEP 606 and 608 :-D).

--

I like to remove deprecated features at the beginning of a dev cycle
to get users feedback earlier. If they are removed late in the dev
cycle, there is a higher risk that a problematic incompatible change
goes through a 3.x.0 final release and so cannot be reverted anymore.

Victor
-- 
Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: Rejecting PEP 606 and 608

2019-12-06 Thread Victor Stinner
On Fri, 6 Dec 2019 at 16:00, Guido van Rossum  wrote:
> Let's try to avoid having PEP discussions in the peps tracker, period. That 
> repo's tracker is only meant to handle markup and grammar.

I recall that some PEPs have been discussed in length in GitHub PRs.
But I'm fine with keeping the discussion on mailing lists. Whatever
works :-)

Victor


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-06 Thread Michael
On 03/12/2019 17:15, Mark Shannon wrote:
> Hi Everyone,
>
> I am proposing a new PEP, still in draft form, to impose a limit of
> one million on various aspects of Python programs, such as the lines
> of code per module.
>
> Any thoughts or feedback?
>
> The PEP:
> https://github.com/markshannon/peps/blob/one-million/pep-100.rst
>
> Cheers,
> Mark. 

I've shortened the mail, as I want my comment to be short. There are many
longish ones, and I have not gotten through them all.

One guiding principle I learned from a professor (whose name I sadly forget):

A program has exactly zero (0) of something, one (1) of something, or
infinitely many. The moment it gets set to X, the case for X+1 appears.

Since we are not talking about zero or one, I guess my comment is: make
sure it can be used to infinity.

Regards,

Michael

p.s. If this has already been suggested - my apologies for any noise.






[Python-Dev] Re: Travis CI for backports not working.

2019-12-06 Thread Steve Dower

On 06Dec2019 0620, Victor Stinner wrote:
> What's the status? Right now, I see Travis CI jobs passing on 3.7,
> 3.8 and master branches so I don't understand the problem. Maybe the
> issue has been fixed and Travis CI can be made mandatory again?


They've been passing fine for me too, I'm not quite sure what the issue was.

As a related aside, I've been getting GitHub Actions support together 
(which I started at the sprints). My test PR is at 
https://github.com/zooba/cpython/pull/7 if anyone wants to check out 
what it could look like. Feel free to leave comments there.


(One of these days I'll have to join core-workflow I guess...)

Cheers,
Steve


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-06 Thread Joao S. O. Bueno
> * The number of live coroutines in a running interpreter: Implicitly
> limited by operating system limits until at least 3.11.

Does the OS limit anything on a coroutine? What for? As far as I know it
is a minimal Python-only object, unless you have each coroutine holding a
reference to a TCP socket - but that has nothing to do with Python's own
limits: a coroutine by itself is a small Python object with no external
resources referenced - unlike a thread - and code with tens of thousands
of coroutines can run perfectly well without a single glitch.
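
A quick sketch of that claim (mine, not from the PEP): tens of thousands
of live coroutines, with no per-coroutine OS resource in sight:

import asyncio

async def worker(i):
    await asyncio.sleep(0)  # suspend once; no fd, no thread, no OS handle
    return i

async def main():
    # 50,000 live coroutines, all awaited concurrently.
    results = await asyncio.gather(*(worker(i) for i in range(50_000)))
    print(len(results))  # 50000

asyncio.run(main())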

Of all the limits mentioned in the PEP, this is the one I find no reason
to exist, and one that could eventually lead to needless breaking of
otherwise perfectly harmless code. (The limit on the number of classes
also seems strange to me, as I've written in other mails.)



On Fri, 6 Dec 2019 at 13:39, Michael  wrote:

> On 03/12/2019 17:15, Mark Shannon wrote:
> > Hi Everyone,
> >
> > I am proposing a new PEP, still in draft form, to impose a limit of
> > one million on various aspects of Python programs, such as the lines
> > of code per module.
> >
> > Any thoughts or feedback?
> >
> > The PEP:
> > https://github.com/markshannon/peps/blob/one-million/pep-100.rst
> >
> > Cheers,
> > Mark.
>
> Shortened the mail - as I want my comment to be short. There are many
> longish ones, and have not gotten through them all.
>
> One guiding principle I learned from a professor (forgot his name sadly).
>
> A program has exactly - zero (0) of something, one (1) of something, or
> infinite. The moment it gets set to X, the case for X+1 appears.
>
> Since we are not talking about zero, or one - I guess my comment is make
> sure it can be used to infinity.
>
> Regards,
>
> Michael
>
> p.s. If this has already been suggested - my apologies for any noise.
>
>


[Python-Dev] Re: Travis CI for backports not working.

2019-12-06 Thread Brett Cannon
Victor Stinner wrote:
> Hello,
> On Tue, 26 Nov 2019 at 20:40, Brett Cannon [email protected] wrote:
> > I have turned Travis off as a required check on the
> > 3.7, 3.8, and master branches until someone is able to get a fix in.
> That makes me sad :-( Is there an issue at bugs.python.org to track
> it?

Nope. I had people personally asking me to deal with it and I didn't have time 
to investigate so to unblock folks I just flipped it off.

> What's the status?

🤷‍♂️

> Right now, I see Travis CI jobs passing on 3.7,
> 3.8 and master branches so I don't understand the problem. Maybe the
> issue has been fixed and Travis CI can be made mandatory again?

🤷‍♂️

-Brett

> Victor
> Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: PEP 587

2019-12-06 Thread Brett Cannon
This mailing list is for discussing the making _of_ Python, not development
_with_ it. You would be better off asking this over on python-list.


[Python-Dev] Re: Should we require all deprecations to have a removal version that we follow through on?

2019-12-06 Thread Brett Cannon
Victor Stinner wrote:
> Hi,
> On Wed, 27 Nov 2019 at 19:40, Brett Cannon [email protected] wrote:
> > What do people think of the idea of requiring all deprecations to
> > specify a version that the feature will be removed in (which under our
> > annual release cadence would be at least the third release from the
> > start of the deprecation, hence the deprecation being public for 2
> > years)? And that we also follow through with that removal?
> Isn't it already the current unwritten deprecation process?

If it is, we are not all doing a great job of following it. ;)

> The common
> case is to schedule the removal. Previously, it was common to wait 2
> releases before removing anything: pending deprecation in 3.6,
> deprecation in 3.7, removal in 3.8.

And this is why unwritten rules are bad: for me the rule was deprecation,
then removal, which was 1.5 years and will now be 2 years as we shift to
at least a two-release deprecation cycle.

> It means that developers are
> warned during 2 releases: 3 years (I'm optimistic and consider that
> developers pay attention to PendingDeprecationWarning). So deprecation
> during 3 releases with the new release cadence would fit the previous
> practice: 3 years ;-)

> Are you suggesting to start with a PendingDeprecationWarning?

No.

> I recall
> a debate about the value of this warning which is quiet by default,
> whereas DeprecationWarning is now displayed in the __main__ module.
> I'm not sure about enforcing the removal. Some APIs were deprecated,
> but it was unsure if we really wanted to remove them. Or because we
> had no idea how long it would take for developers to upgrade to the
> new functions.

Deprecations can always be extended, but leaving it perpetually open-ended 
seems like a bad idea to me.

> For example, the C API for the old Unicode API using "Py_UNICODE*"
> type is deprecated since Python 3.3 but there is no concrete plan to
> remove it (just a vague "Python 4.0 removal" plan).
> I deprecated the support for bytes filenames on Windows in Python 3.3,
> but Steve Dower implemented his PEP 529 (use UTF-8 for bytes on
> Windows) in Python 3.6 and so the deprecation warnings have been
> simply removed!
> I would suggest to schedule removal, but not require it too strongly ;-)
> > Now I'm not suggesting it must be in
> > three feature releases. A deprecation could be set to Python 4 (...)
> I know that it's an unpopular opinion, but I would strongly prefer
> Python 4 to behave exactly like any other Python 3.x release in terms
> of deprecation.

It will, it just might have a lot more code removed than a typical release.

> If I exaggerate, I would prefer that Python 4.0 would
> have less incompatible changes than the last Python 3.x release.

Have "less incompatible changes" compared to what? By definition a shift in 
major version means there will be a difference.

> I
> hope that upgrading from Python 3 to Python 4 will be as smooth as
> possible.

I think we all do. And if people follow standard procedures to check for 
deprecations in the last release before Py4K then it will be like any other 
release for upgrading (at least in that regard).

-Brett

> Changing sys.version_info.major and "python4" program name
> are already going to cause enough troubles.
> Recently, I looked at all deprecation changes scheduled for removal in
> Python 4.0. I removed these features in Python 3.9, especially for
> features deprecated for more than 2 releases. See:
> https://docs.python.org/dev/whatsnew/3.9.html#removed
> For example, I removed the "U" mode of open() which was deprecated
> since Python 3.4 and scheduled for removal in Python 4.0.
> My plan is to revert these changes if too many users complain (see my
> rejected PEP 606 and 608 :-D).
> --
> I like to remove deprecated features at the beginning of a dev cycle
> to get users feedback earlier. If they are removed late in the dev
> cycle, there is a higher risk that a problematic incompatible change
> goes through a 3.x.0 final release and so cannot be reverted anymore.
> Victor
> Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: Travis CI for backports not working.

2019-12-06 Thread Kyle Stanley
Steve Dower wrote:
> As a related aside, I've been getting GitHub Actions support together
> (which I started at the sprints).

Would adding support for GitHub Actions make it easier/faster to
temporarily disable and re-enable specific CI services when they're having
external issues? IIUC, that seems to be the primary concern to address.

Note that I'm not particularly well acquainted with GitHub Actions, other
than briefly looking over https://github.com/features/actions.

On Fri, Dec 6, 2019 at 1:01 PM Steve Dower  wrote:

> On 06Dec2019 0620, Victor Stinner wrote:
> > What's the status? Right now, I see Travis CI jobs passing on 3.7,
> > 3.8 and master branches so I don't understand the problem. Maybe the
> > issue has been fixed and Travis CI can be made mandatory again?
>
> They've been passing fine for me too, I'm not quite sure what the issue
> was.
>
> As a related aside, I've been getting GitHub Actions support together
> (which I started at the sprints). My test PR is at
> https://github.com/zooba/cpython/pull/7 if anyone wants to check out
> what it could look like. Feel free to leave comments there.
>
> (One of these days I'll have to join core-workflow I guess...)
>
> Cheers,
> Steve


[Python-Dev] Re: Rejecting PEP 606 and 608

2019-12-06 Thread Brett Cannon
Victor Stinner wrote:
> On Fri, 6 Dec 2019 at 16:00, Guido van Rossum [email protected] wrote:
> > Let's try to avoid having PEP discussions in the peps
> > tracker, period. That repo's tracker is only meant to handle markup and 
> > grammar.
> > I recall that some PEPs have been discussed in length in GitHub PRs.

That doesn't mean they should have been discussed there to begin with. ;)

-Brett

> But I'm fine with keeping the discussion on mailing lists. Whatever
> works :-)
> Victor


[Python-Dev] Summary of Python tracker Issues

2019-12-06 Thread Python tracker

ACTIVITY SUMMARY (2019-11-29 - 2019-12-06)
Python tracker at https://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open7180 (+10)
  closed 43559 (+37)
  total  50739 (+47)

Open issues with patches: 2832 


Issues opened (32)
==================

#38942: Possible assertion failures in csv.Dialect()
https://bugs.python.org/issue38942  opened by ZackerySpytz

#38943: Idle autocomplete window doesn't show up
https://bugs.python.org/issue38943  opened by JohnnyNajera

#38944: Idle autocomplete window doesn't close on Escape key
https://bugs.python.org/issue38944  opened by JohnnyNajera

#38945: Remove newline characters from uu encoding methods
https://bugs.python.org/issue38945  opened by stealthcopter

#38946: IDLE on macOS 10.15 Catalina does not open double-clicked file
https://bugs.python.org/issue38946  opened by RM

#38947: dataclass defaults behave inconsistently for init=True/init=Fa
https://bugs.python.org/issue38947  opened by Kevin Shweh

#38948: os.path.ismount() returns False for current working drive
https://bugs.python.org/issue38948  opened by jainankur

#38949: incorrect prefix, exec_prefix in distutils.command.install
https://bugs.python.org/issue38949  opened by xdegaye

#38952: asyncio cannot handle Python3 IPv4Address
https://bugs.python.org/issue38952  opened by Max Coplan

#38953: Untokenize and retokenize does not round-trip
https://bugs.python.org/issue38953  opened by Zac Hatfield-Dodds

#38955: Non indemnpotent behavior of asyncio.get_event_loop and asynci
https://bugs.python.org/issue38955  opened by mbussonn

#38956: argparse.BooleanOptionalAction should not add the default valu
https://bugs.python.org/issue38956  opened by Antony.Lee

#38958: asyncio REPL swallows KeyboardInterrupt while editing
https://bugs.python.org/issue38958  opened by iomintz

#38959: Parboil -- OpenMP CUTCP performance regression when upgrade fr
https://bugs.python.org/issue38959  opened by jiebinsu

#38960: DTrace FreeBSD build fix
https://bugs.python.org/issue38960  opened by David Carlier

#38961: Flaky detection of compiler vendor
https://bugs.python.org/issue38961  opened by jmaargh

#38963: multiprocessing processes seem to "bleed" user information (GI
https://bugs.python.org/issue38963  opened by romanofski

#38964: Output of syntax error in f-string contains wrong filename
https://bugs.python.org/issue38964  opened by Erik Cederstrand

#38967: Improve error message in enum for member name surrounded by un
https://bugs.python.org/issue38967  opened by Rubén Jesús García Hernández

#38970: [PDB] NameError in list comprehension in PDB
https://bugs.python.org/issue38970  opened by castix

#38971: codecs.open leaks file descriptor when invalid encoding is pas
https://bugs.python.org/issue38971  opened by Brock Mendel

#38972: Link to instructions to change PowerShell execution policy for
https://bugs.python.org/issue38972  opened by brett.cannon

#38973: Shared Memory List Returns 0 Length
https://bugs.python.org/issue38973  opened by Derek Frombach

#38974: using tkinter.filedialog.askopenfilename() freezes python 3.8
https://bugs.python.org/issue38974  opened by Daniel Preston

#38975: Add direct anchors to regex syntax documentation
https://bugs.python.org/issue38975  opened by bmispelon

#38976: Add support for HTTP Only flag in MozillaCookieJar
https://bugs.python.org/issue38976  opened by Jacob Taylor

#38978: Implement __class_getitem__ for Future, Task, Queue
https://bugs.python.org/issue38978  opened by asvetlov

#38979: ContextVar[str] should return ContextVar class, not None
https://bugs.python.org/issue38979  opened by asvetlov

#38980: Compile libpython with -fno-semantic-interposition
https://bugs.python.org/issue38980  opened by vstinner

#38986: Suppport TaskWakeupMethWrapper.__self__ to conform asyncio _fo
https://bugs.python.org/issue38986  opened by asvetlov

#38987: 3.8.0 on GNU/Linux fails to find shared library
https://bugs.python.org/issue38987  opened by buchs

#38988: Killing asyncio subprocesses on timeout?
https://bugs.python.org/issue38988  opened by dontbugme



Most recent 15 issues with no replies (15)
==========================================

#38988: Killing asyncio subprocesses on timeout?
https://bugs.python.org/issue38988

#38987: 3.8.0 on GNU/Linux fails to find shared library
https://bugs.python.org/issue38987

#38976: Add support for HTTP Only flag in MozillaCookieJar
https://bugs.python.org/issue38976

#38975: Add direct anchors to regex syntax documentation
https://bugs.python.org/issue38975

#38963: multiprocessing processes seem to "bleed" user information (GI
https://bugs.python.org/issue38963

#38961: Flaky detection of compiler vendor
https://bugs.python.org/issue38961

#38960: DTrace FreeBSD build fix
https://bugs.python.org/issue38960

#38958: asyncio REPL swallows KeyboardInterrupt while editing
https://bugs.python.org/issue38958

#38953: Untokenize and retokenize does not round-trip
https://bugs.python.org/issue38953

[Python-Dev] Re: Travis CI for backports not working.

2019-12-06 Thread Steve Dower

On 06Dec2019 1023, Kyle Stanley wrote:
> Steve Dower wrote:
> > As a related aside, I've been getting GitHub Actions support together
> > (which I started at the sprints).
>
> Would adding support for GitHub Actions make it easier/faster to
> temporarily disable and re-enable specific CI services when they're
> having external issues? IIUC, that seems to be the primary concern to
> address.
>
> Note that I'm not particularly well acquainted with GitHub Actions,
> other than briefly looking over https://github.com/features/actions.


GitHub Actions *is* a CI service now, so my PR is actually using their 
machines for Windows/macOS/Ubuntu build and test.


It doesn't change the ease of enabling/disabling anything - that's still 
very easy if you're an administrator on our repo and impossible if 
you're not :)


However, it does save jumping to an external website to view logs, and 
over time we can improve the integration so that error messages are 
captured properly (but you can easily view each step). And you don't 
need a separate login to rerun checks - it's just a simple button in the 
GitHub UI.


Cheers,
Steve



[Python-Dev] Re: Travis CI for backports not working.

2019-12-06 Thread Victor Stinner
In that case, would you mind making Travis CI mandatory again?

Victor

On Fri, 6 Dec 2019 at 19:10, Brett Cannon  wrote:
>
> Victor Stinner wrote:
> > Hello,
> > On Tue, 26 Nov 2019 at 20:40, Brett Cannon [email protected] wrote:
> > > I have turned Travis off as a required check on the
> > > 3.7, 3.8, and master branches until someone is able to get a fix in.
> > That makes me sad :-( Is there an issue at bugs.python.org to track
> > it?
>
> Nope. I had people personally asking me to deal with it and I didn't have 
> time to investigate so to unblock folks I just flipped it off.
>
> > What's the status?
>
> 🤷‍♂️
>
> > Right now, I see Travis CI jobs passing on 3.7,
> > 3.8 and master branches so I don't understand the problem. Maybe the
> > issue has been fixed and Travis CI can be made mandatory again?
>
> 🤷‍♂️
>
> -Brett
>
> > Victor
> > Night gathers, and now my watch begins. It shall not end until my death.



-- 
Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: Should we require all deprecations to have a removal version that we follow through on?

2019-12-06 Thread Kyle Stanley
Victor Stinner wrote:
> Isn't it already the current unwritten deprecation process?

Personally, I don't think we should rely on an unwritten process for
something as important and potentially breaking as a deprecation process.
Regardless of the outcome of this discussion, I think we should try to
fully write out the process in the devguide, while still providing some
amount of flexibility.

Victor Stinner wrote:
> I'm not sure about enforcing the removal. Some APIs were deprecated,
> but it was unsure if we really wanted to remove them. Or because we
> had no idea how long it would take for developers to upgrade to the
> new functions.
...
> I would suggest to schedule removal, but not require it too strongly ;-)

Would it be reasonable to require a minimum number of versions to be
specified (such as n versions ahead), but provide flexibility in terms of
delaying the removal, as needed? IMO, it would be more convenient for users
to have a "minimum removal" version in mind, rather than an unscheduled
deprecation that could have a version+2 (deprecation and removal in 2
versions) assigned at any point in time.

This can be difficult sometimes when we're not sure what n versions from
now will actually be called (such as with 3.10 and 4.0), but I don't think
it would be an issue to state something like this:
"func() has been deprecated, and will be removed in no sooner than 4
versions after 3.9"

instead of:
"func() has been deprecated, and will be removed in 3.13"

for long term deprecations. By "long term", I mean more than two versions.
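
As a sketch of how that minimum-removal wording could be attached to a
deprecation (everything below is hypothetical, not an existing API):

import functools
import warnings

def deprecated(since, removal_not_before):
    # Warn at call time, naming the earliest release the removal could
    # happen in rather than a hard-coded version number.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{func.__name__}() has been deprecated since {since} and "
                f"will be removed no sooner than {removal_not_before}",
                DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated(since="3.9", removal_not_before="4 versions after 3.9")
def func():
    pass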

Victor Stinner wrote:
> I know that it's an unpopular opinion, but I would strongly prefer
> Python 4 to behave exactly like any other Python 3.x release in terms
> of deprecation. If I exaggerate, I would prefer that Python 4.0 would
> have *less* incompatible changes than the last Python 3.x release.

My understanding was that we were specifically waiting on considering a
Python 4.0 until there were major enough improvements/changes (such as the
"GILectomy", for example) to justify some degree of incompatibility. If it
would behave exactly the same as a standard 3.x release in terms to
incompatible changes (or have less), what would be the purpose of making a
major version change in the first place? I don't see an issue with
indefinitely continuing the 3.x line in the meantime.

Victor Stinner wrote:
> I like to remove deprecated features at the beginning of a dev cycle
> to get users feedback earlier. If they are removed late in the dev
> cycle, there is a higher risk that a problematic incompatible change
> goes through a 3.x.0 final release and so cannot be reverted anymore.

I agree, the earlier a deprecation occurs in a given release cycle, the
easier it is for users to provide feedback and then potentially prepare for
the change. Perhaps we could establish some form of guideline, for example:
"When possible, the removal of previously deprecated features should occur
as early as possible in a version's dev cycle, preferably during the alpha
phases. This provides users more time to provide feedback and plan for the
potential removal".

On Fri, Dec 6, 2019 at 10:58 AM Victor Stinner  wrote:

> Hi,
>
> On Wed, 27 Nov 2019 at 19:40, Brett Cannon  wrote:
> > What do people think of the idea of requiring all deprecations to
> > specify a version that the feature will be removed in (which under our
> > annual release cadence would be at least the third release from the
> > start of the deprecation, hence the deprecation being public for 2
> > years)? And that we also follow through with that removal?
>
> Isn't it already the current unwritten deprecation process? The common
> case is to schedule the removal. Previously, it was common to wait 2
> releases before removing anything: pending deprecation in 3.6,
> deprecation in 3.7, removal in 3.8. It means that developers are
> warned during 2 releases: 3 years (I'm optimistic and consider that
> developers pay attention to PendingDeprecationWarning). So deprecation
> during 3 releases with the new release cadence would fit the previous
> practice: 3 years ;-)
>
> Are you suggesting to start with a PendingDeprecationWarning? I recall
> a debate about the value of this warning which is quiet by default,
> whereas DeprecationWarning is now displayed in the __main__ module.
>
> I'm not sure about enforcing the removal. Some APIs were deprecated,
> but it was unclear if we really wanted to remove them, or we had no
> idea how long it would take for developers to upgrade to the new
> functions.
>
> For example, the C API for the old Unicode API using "Py_UNICODE*"
> type is deprecated since Python 3.3 but there is no concrete plan to
> remove it (just a vague "Python 4.0 removal" plan).
>
> I deprecated the support for bytes filenames on Windows in Python 3.3,
> but Steve Dower implemented his PEP 529 (use UTF-8 for bytes on
> Windows) in Python 3.6 and so the deprecation warnings have been
> simply removed!
>
> I would suggest to schedule removal, but not require it too strongly ;-)

[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-06 Thread Abdur-Rahmaan Janhangeer
Abdur-Rahmaan Janhangeer
http://www.pythonmembers.club | https://github.com/Abdur-rahmaanJ
Mauritius

On Wed, 4 Dec 2019, 06:52 Chris Angelico,  wrote:

> Python made the choice to NOT limit
> its integers, and I haven't heard of any non-toy examples where an
> attacker causes you to evaluate 2**2**100 and eats up all your RAM.
>

This happened with an IRC bot of mine, which allowed calculations (hey, you
need to keep rolling out new features). Someone crashed the bot with a
calculation that overstepped that limit (that was the day I learnt that
feature of Python).
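
For anyone writing such a bot today, a hedged sketch of one possible
guard (the cap is entirely made up): estimate the size of a power before
computing it.

MAX_BITS = 100_000  # arbitrary cap on the size of the result

def safe_pow(base, exp):
    # base**exp needs roughly exp * base.bit_length() bits; refuse
    # anything enormous instead of letting it eat all the RAM.
    if exp > 0 and exp * max(base.bit_length(), 1) > MAX_BITS:
        raise ValueError("result would be too large")
    return base ** exp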



[Python-Dev] Re: Fun with Python 3.8 and Qt

2019-12-06 Thread Cristián Maureira-Fredes

Hey,

we also found the same issue last year; the solution was to un-define,
use, and re-define:
https://code.qt.io/cgit/qt-creator/plugin-pythonextensions.git/tree/docs/plugin.md#n137

Here you have the code example of the workaround:
https://code.qt.io/cgit/qt-creator/plugin-pythonextensions.git/tree/plugins/pythonextensions/pyutil.cpp#n40

Cheers

On 10/22/19 5:11 AM, Kacvinsky, Tom wrote:

Today I discovered that this struct

typedef struct{
 const char* name;
 int basicsize;
 int itemsize;
 unsigned int flags;
 PyType_Slot *slots; /* terminated by slot==0. */
} PyType_Spec;

with "PyTypeSlot *slots" being on line 190 of object.h causes a problem when 
compiled with code that brings in Qt.  Qt has macro definitions of slots.

With a cursory preprocessing of the file I was working with, using the handy 
gcc options -dM -E, I found that
slots was defined to nothing

#define slots

and hence caused problems when object.h was brought into the mix.

I will try to make a simple reproducer tomorrow.  I know this probably could be 
solved by header file inclusion re-ordering,
or in some cases #undef'ing slots before including Python.h, but I also thought 
the Python dev team would like to know
about this issue.

Tom



--
Dr. Cristián Maureira-Fredes
https://maureira.xyz


[Python-Dev] Re: PEP 611: The one million limit.

2019-12-06 Thread Greg Ewing

On 7/12/19 2:54 am, Rhodri James wrote:


> You've talked some about not making the 640k mistake


I think it's a bit unfair to call it a "mistake". They had a 1MB
address space limit to work with, and that was a reasonable place
to put the division between RAM and display/IO space. If anything
is to be criticised, it's Intel's decision to only add 4 more
address bits when going from an 8-bit to a 16-bit architecture.

--
Greg


[Python-Dev] Re: PEP 611: The one million limit.

2019-12-06 Thread Chris Angelico
On Sat, Dec 7, 2019 at 9:58 AM Greg Ewing  wrote:
>
> On 7/12/19 2:54 am, Rhodri James wrote:
>
> > You've talked some about not making the 640k mistake
>
> I think it's a bit unfair to call it a "mistake". They had a 1MB
> address space limit to work with, and that was a reasonable place
> to put the division between RAM and display/IO space. If anything
> is to be criticised, it's Intel's decision to only add 4 more
> address bits when going from an 8-bit to a 16-bit architecture.
>

And to construct a bizarre segmented system that means that 16 + 16 =
20, thus making it very hard to improve on it later. If it hadn't been
for the overlapping segment idea, it would have been easy to go to 24
address lines later, and eventually 32. But since the 16:16 segmented
system was built the way it was, every CPU afterwards had to remain
compatible with it.

Do you know when support for the A20 gate was finally dropped? 2013.
Yes. THIS DECADE. If they'd decided to go for 32-bit addressing (even
with 20 address lines), it would have been far easier to improve on it
later.

I'm sure there were good reasons for what they did (and hey, it did
mean TSRs could be fairly granular in their memory requirements), but
it's still a lesson to be learned from about not unnecessarily
restricting something that follows Moore's Law.

ChrisA


[Python-Dev] Re: PEP 611: The one million limit.

2019-12-06 Thread David Malcolm
On Thu, 2019-12-05 at 16:38 +, Mark Shannon wrote:
> Hi Everyone,
> 
> Thanks for all your feedback on my proposed PEP. I've editing the PEP
> in 
> light of all your comments and it is now hopefully more precise and
> with 
> better justification.
> 
> https://github.com/python/peps/pull/1249

Other programming languages have limits in their standards.  For example:

Values for #line in the C preprocessor:
"If lineno is 0 or greater than 32767 (until C99) 2147483647 (since
C99), the behavior is undefined."
  https://en.cppreference.com/w/c/preprocessor/line

Similar limits apply to C++'s preprocessor (but with C++11 rather than C99 as the cutoff)
  https://en.cppreference.com/w/cpp/preprocessor/line
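
For concreteness, here's a tiny sketch of the directive in question --
the argument to #line is the value the standards cap:

    /* #line resets the presumed line number (and, optionally, file
       name) used for diagnostics and __LINE__.  Per the quoted text,
       C99 permits values from 1 to 2147483647; 0 or anything larger
       is undefined behavior. */
    #include <stdio.h>

    #line 100 "generated.c"
    int main(void)               /* this is now presumed "line 100" */
    {
        printf("%s:%d\n", __FILE__, __LINE__);  /* generated.c:102 */
        return 0;
    }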


(These days I maintain GCC's location-tracking code, and we have a
number of implementation-specific limits and heuristics for packing
file/line/column data into a 32-bit type; see
https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=libcpp/include/line-map.h 
and in particular LINE_MAP_MAX_LOCATION_WITH_COLS,
LINE_MAP_MAX_LOCATION, LINE_MAP_MAX_COLUMN_NUMBER, etc)


Hope this is constructive
Dave
___
Python-Dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/[email protected]/message/7N3CF4MDOBSPKANRZJSZOY6JVAGOCHXF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 611: The one million limit.

2019-12-06 Thread Gregory P. Smith
I'd prefer it if we stayed on topic here...

On Fri, Dec 6, 2019 at 3:15 PM Chris Angelico  wrote:

> On Sat, Dec 7, 2019 at 9:58 AM Greg Ewing 
> wrote:
> >
> > On 7/12/19 2:54 am, Rhodri James wrote:
> >
> > > You've talked some about not making the 640k mistake
> >
> > I think it's a bit unfair to call it a "mistake". They had a 1MB
> > address space limit to work with, and that was a reasonable place
> > to put the division between RAM and display/IO space. If anything
> > is to be criticised, it's Intel's decision to only add 4 more
> > address bits when going from an 8-bit to a 16-bit architecture.
> >
>
> And to construct a bizarre segmented system that means that 16 + 16 =
> 20, thus making it very hard to improve on it later. If it hadn't been
> for the overlapping segment idea, it would have been easy to go to 24
> address lines later, and eventually 32. But since the 16:16 segmented
> system was built the way it was, every CPU afterwards had to remain
> compatible with it.
>
> Do you know when support for the A20 gate was finally dropped? 2013.
> Yes. THIS DECADE. If they'd decided to go for 32-bit addressing (even
> with 20 address lines), it would have been far easier to improve on it
> later.
>
> I'm sure there were good reasons for what they did (and hey, it did
> mean TSRs could be fairly granular in their memory requirements), but
> it's still a lesson to be learned from about not unnecessarily
> restricting something that follows Moore's Law.
>
> ChrisA
> ___
> Python-Dev mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/[email protected]/message/SW5HDQ27KBNYFMWAIMPVCGLVLNGQVNOK/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/[email protected]/message/2WHKCV6JZDTEWENX2LQTPH4JWU6PYGRI/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Travis CI for backports not working.

2019-12-06 Thread Kyle Stanley
Steve Dower wrote:
> GitHub Actions *is* a CI service now, so my PR is actually using their
> machines for Windows/macOS/Ubuntu build and test.

Oh I see, that makes more sense. Thanks for the clarification.

Steve Dower wrote:
> It doesn't change the ease of enabling/disabling anything - that's still
> very easy if you're an administrator on our repo and impossible if
> you're not :)

Yeah, I was aware that it required administrative permissions; I was just
under the impression that it wasn't a simple on/off toggle, and that there
wasn't an easy way to confirm that failures were occurring due to external
factors. I suppose that's a separate issue, though.

Steve Dower wrote:
> And you don't need a separate login to rerun checks - it's just a
> simple button in the GitHub UI.

That would be especially great, and much better than closing+reopening the
PR. Restarting the checks is typically a last resort since intermittent
regression test failures are a concern, but it's highly useful when the
failure is occurring due to external factors (such as a recently merged PR
or issues with the CI service itself).

On Fri, Dec 6, 2019 at 1:33 PM Steve Dower  wrote:

> On 06Dec2019 1023, Kyle Stanley wrote:
> > Steve Dower wrote:
> >  > As a related aside, I've been getting GitHub Actions support together
> >  > (which I started at the sprints).
> >
> > Would adding support for GitHub Actions make it easier/faster to
> > temporarily disable and re-enable specific CI services when they're
> > having external issues? IIUC, that seems to be the primary concern to
> > address.
> >
> > Note that I'm not particularly well acquainted with GitHub Actions,
> > other than briefly looking over https://github.com/features/actions.
>
> GitHub Actions *is* a CI service now, so my PR is actually using their
> machines for Windows/macOS/Ubuntu build and test.
>
> It doesn't change the ease of enabling/disabling anything - that's still
> very easy if you're an administrator on our repo and impossible if
> you're not :)
>
> However, it does save jumping to an external website to view logs, and
> over time we can improve the integration so that error messages are
> captured properly (but you can easily view each step). And you don't
> need a separate login to rerun checks - it's just a simple button in the
> GitHub UI.
>
> Cheers,
> Steve
>
>
>
___
Python-Dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/[email protected]/message/KB7F5JNUILUH3H2DJ3T7NGZAKQXMK76D/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 611: The one million limit.

2019-12-06 Thread Ethan Furman

On 12/06/2019 03:19 PM, Gregory P. Smith wrote:

> I'd prefer it if we stayed on topic here...

I find discussion of other computing limits, and how and why they failed (and 
the hassles of working around them), very relevant.

--
~Ethan~
___
Python-Dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/[email protected]/message/QEL4KJDPKZ7J3W36UTDCYLWNZBRRUXSE/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-06 Thread Steven D'Aprano
On Fri, Dec 06, 2019 at 11:10:49AM +, Paul Moore wrote:
[...]
> > They might end up with a million dynamically created classes, each with
> > a single instance, when what they wanted was a single class with a
> > million instances.
> 
> But isn't that the point here? A limit would catch this and prompt
> them to rewrite the code as
> 
> cls = namedtuple("Spam", "fe fi fo fum")
> for attributes in list_of_attributes:
> obj = cls(*attributes)
> values.append(obj)

Indeed, and maybe in the long term their code would be better for it, 
but in the meantime code that was working is now broken. It's a 
backwards-incompatible change.

This PEP isn't about forcing people to write better code against 
their wishes :-)

This leads me to conclude that:

(1) Regardless of what we do for the other resources, "number of 
classes" may have to be excluded from the PEP.

(2) Any such limit on classes needs to be bumped up.

(3) Or we need a deprecation period before adding a limit:

In release 3.x, the interpreter raises a *warning* when the 
number of classes reaches a million; in release 3.x+2 or 3.x+5 
or whenever, that gets changed to an error.

It will need a long deprecation period for the reasons Thomas mentions: 
the person seeing the warnings might not be the developer who can do 
anything about it. We have to give people plenty of time to see the 
warnings and hassle the developers into fixing it.

For classes, it might be better for the PEP to increase the desired limit 
from a million to, let's say, 2**32 (4 billion). Most people are going 
to run into other limits before they hit that: a bare class created by 
calling type() uses about 500 bytes on a 32-bit system, and twice that 
on a 64-bit system:

py> sys.getsizeof(type('Klass', (), {}))
1056

so a billion classes would use about a terabyte. In comparison, Windows 10 
Home only supports 128 GB of memory in total, and while Windows Server 2016 
supports up to 26 TB, we surely can agree that we aren't required to 
allow Python scripts to fill the entire RAM with nothing but classes :-)


I think Mark may have been too optimistic in hoping for a single limit 
that is suitable for all seven of his listed resources:

* The number of source code lines in a module.
* The number of bytecode instructions in a code object.
* The sum of local variables and stack usage for a code object.
* The number of distinct names in a code object.
* The number of constants in a code object.
* The number of classes in a running interpreter.
* The number of live coroutines in a running interpreter.

A million seems reasonable for lines of source code, if we're prepared 
to tell people using machine-generated code to split their humongous .py 
files into multiple scripts. A small imposition on a small subset of 
Python users, for the benefit of all. I'm okay with that.

Likewise, I guess a million is reasonable for the next four resources, 
but I'm not an expert and my guess is probably worthless :-)

A million seems like it might be too low for the number of classes; 
perhaps 2**32 is acceptable.

And others have suggested that a million is too low for coroutines.


> > Could there be people doing this deliberately? If so, it must be nice
> > to have so much RAM that we can afford to waste it so prodigiously: a
> > namedtuple with ten items uses 64 bytes, but the associated class uses
> > 444 bytes, plus the sizes of the methods etc. But I suppose there could
> > be a justification for such a design.
> 
> You're saying that someone might have a justification for deliberately
> creating a million classes,

Yes. I *personally* cannot think of an example that wouldn't (in my 
opinion) be better written another way, but I don't think I'm quite 
knowledgeable enough to categorically state that ALL such uses are bogus.



-- 
Steven
___
Python-Dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/[email protected]/message/DJZMLDGKV2FKIGQDCBALOEJDPNKAWDVO/
Code of Conduct: http://python.org/psf/codeofconduct/