[Python-Dev] Re: Python 3.8 problem with PySide

2019-12-05 Thread Abdur-Rahmaan Janhangeer
No idea why Gmail landed such an important email in the spam folder (I grit
my teeth if PySide freezes on Python < 3.7).

Abdur-Rahmaan Janhangeer
http://www.pythonmembers.club | https://github.com/Abdur-rahmaanJ
Mauritius

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MPQQBY5TXAI7O3MCXJ2OLEZZMK47UHRN/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Python 3.8 problem with PySide

2019-12-05 Thread Christian Tismer
Hi guys,

during the last few weeks I have been struggling quite a bit
to make PySide run with Python 3.8 at all.

The expected problems were refcounting leaks due to changed
handling of heaptypes. But in fact, the runtime behavior was
much worse, because I always got negative refcounts!

After exhaustively searching through the different 3.8 commits, I was
able to isolate the three problems by logarithmic search (bisection).

The hard problem was this:
Whenever PySide creates a new type, it crashes in PyType_Ready.
The reason is the existence of the Py_TPFLAGS_METHOD_DESCRIPTOR
flag.
During the PyType_Ready call, the function mro() is called.
This mro() call results in a negative refcount, because something
behaves differently now that this flag is set by default on mro().

When I patched this flag away during the type_new call, everything
worked fine. I don't understand why this problem affects PySide
at all. Here is the code, which would normally be just the newType line:


// PYSIDE-939: This is a temporary patch that circumvents the problem
// with Py_TPFLAGS_METHOD_DESCRIPTOR until this is finally solved.
PyObject *ob_PyType_Type = reinterpret_cast<PyObject *>(&PyType_Type);
PyObject *mro = PyObject_GetAttr(ob_PyType_Type,
                                 Shiboken::PyName::mro());
auto hold = Py_TYPE(mro)->tp_flags;
Py_TYPE(mro)->tp_flags &= ~Py_TPFLAGS_METHOD_DESCRIPTOR;
auto *newType = reinterpret_cast<SbkObjectType *>(type_new(metatype,
                                                           args, kwds));
Py_TYPE(mro)->tp_flags = hold;


I would really like to understand the reason for this unexpected effect.
Does this ring a bell? I have no clue what is wrong with PySide, if it
is wrong at all.
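For what it's worth, the flag in question can be inspected from pure Python. A small sketch; the numeric value (1 << 17) comes from PEP 590, everything else here is illustrative:

```python
# PEP 590 defines Py_TPFLAGS_METHOD_DESCRIPTOR as bit 17 of tp_flags.
Py_TPFLAGS_METHOD_DESCRIPTOR = 1 << 17

# On 3.8+, unbound methods such as type.mro are method descriptors and
# carry the flag; a type's tp_flags is exposed to Python as __flags__.
flag_set = bool(type(type.mro).__flags__ & Py_TPFLAGS_METHOD_DESCRIPTOR)
print(flag_set)
```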

Thanks -- Chris

-- 
Christian Tismer :^)   tis...@stackless.com
Software Consulting  : http://www.stackless.com/
Karl-Liebknecht-Str. 121 : https://github.com/PySide
14482 Potsdam: GPG key -> 0xFB7BEE0E
phone +49 173 24 18 776  fax +49 (30) 700143-0023
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/U46CQTHQDSFMJ3QXHJZT6FXVVKXRF2JB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] PEP 611: The one million limit.

2019-12-05 Thread Mark Shannon

Hi Everyone,

Thanks for all your feedback on my proposed PEP. I've edited the PEP in 
light of all your comments, and it is now hopefully more precise and 
better justified.


https://github.com/python/peps/pull/1249

Cheers,
Mark.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/KHCXDKDGYNI3PBQRBEYFLAAHRBTLMMG6/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-05 Thread Mark Shannon

Hi,

On 05/12/2019 12:54 pm, Tal Einat wrote:
On Tue, Dec 3, 2019 at 6:23 PM Mark Shannon wrote:


Hi Everyone,

I am proposing a new PEP, still in draft form, to impose a limit of one
million on various aspects of Python programs, such as the lines of
code
per module.

Any thoughts or feedback?


My two cents:

I find the arguments for security (malicious code) and ease of 
implementation compelling. I find the point about efficiency, on the 
other hand, to be almost a red herring in this case. In other words, 
there are great reasons to consider this regardless of efficiency, and 
IMO those should guide this.


I do find the 1 million limit low, especially for bytecode instructions 
and lines of code. I think 1 billion / 2^32 / 2^31 (we can choose the 
bikeshed color later) would be much more reasonable, and the effect on 
efficiency compared to 2^20 should be negligible.


The effect of changing the bytecode size limit from 2^20 to 2^31 would 
be significant. Bytecode must incorporate jumps and a lower limit for 
bytecode size means that a fixed size encoding is possible. A fixed size 
encoding simplifies and localizes opcode decoding, which impacts the 
critical path of the interpreter.
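The decoding trade-off can be sketched in Python. This is only an illustration: the fixed 8/24-bit word split and the helper names are invented here; the EXTENDED_ARG opcode value (144) matches CPython 3.8's dis module.

```python
EXTENDED_ARG = 144  # dis.EXTENDED_ARG in CPython 3.8

def decode_fixed(word):
    """With operands capped at 2**20, one 32-bit word per instruction
    suffices (here: 8-bit opcode, 24-bit operand) and decoding is a
    single shift and mask."""
    return word >> 24, word & 0xFFFFFF

def decode_variable(words, i):
    """16-bit wordcode (opcode byte + arg byte) needs a loop to fold in
    EXTENDED_ARG prefixes, lengthening the interpreter's critical path."""
    arg = 0
    while words[i] >> 8 == EXTENDED_ARG:
        arg = (arg | (words[i] & 0xFF)) << 8
        i += 1
    return words[i] >> 8, arg | (words[i] & 0xFF), i + 1

print(decode_fixed((100 << 24) | 70000))
print(decode_variable([(EXTENDED_ARG << 8) | 1, (100 << 8) | 4], 0))
```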




I like the idea of making these configurable, e.g. adding a compilation 
option to increase the limit to 10^18 / 2^64 / 2^63.


While theoretically possible, it would be awkward to implement and very 
hard to test effectively.




Mark, I say go for it, write the draft PEP, and try to get a wider 
audience to tell whether they know of cases where these limits would 
have been hit.


- Tal Einat

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/KRPN3RXLUHWK6HKV5NIPZWJ626J23UMB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Deprecating the "u" string literal prefix

2019-12-05 Thread Victor Stinner
On Thu, Dec 5, 2019 at 12:14, Thomas Wouters wrote:
> It should, but it doesn't always :) If you forget how your data is flawed, 
> the 'smarter' decision can easily be wrong, instead. I do think it's a good 
> idea to reject ideas that would break a certain number of PyPI packages, say, 
> but just because it won't break them doesn't mean it won't break a 
> significant number of others.

What I proposed in the *rejected* PEP is to check the state at each
Python release to see the progress, especially during release
candidates. My intent is not to prevent incompatible changes, but more
the opposite: better drive transitions to be able to make *more*
incompatible changes.

The PEP is nothing new; core developers are already helping a lot by
providing pull requests to projects broken by incompatible changes. See,
for example, how PEP 570 has been handled in third-party code.

As far as I recall, we never reverted incompatible changes. So I don't
expect that suddenly, we would revert all incompatible changes because
of a single obscure PyPI project that was made incompatible.

Maybe one good example is that the removal of the u-prefix for strings
(u"unicode") was reverted in Python 3.3 for practicality :-)

I don't think that we live in a black & white world, it's all a matter
of tradeoffs ;-)

Victor
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/S7MZJHF2QKJTLMJD5TGHAHQO54CR2RGM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-05 Thread Joao S. O. Bueno
>
> Assuming a code base of 50M loc, *and* that all the code would be loaded
> into a single application (I sincerely hope that isn't the case) *and*
> that each class is only 100 lines, even then there would only be 500,000
> classes.
> If a single application has 500k classes, I don't think that a limit of
> 1M classes would be its biggest problem :)


It is more like 1 million calls to `type`, adding some linear
combination of attributes to a base class.

Think of a persistently running server that creates dynamic named
tuples lazily.
(I am working on code that does that, but currently with 5-6 attributes,
which gives me up to 64 classes; with 20 attributes this code would hit
that limit, if one used the lib in a persistent server, that is :-) )

Anyway, not happening soon - I am just writing to say that one million
classes does not mean 1 million hard-coded 100-LoC classes;
rather, it is 1 million calls to "namedtuple".
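The pattern can be sketched in a few lines (a hypothetical helper; the caching mirrors how a server might lazily create one class per distinct field combination):

```python
from collections import namedtuple
from functools import lru_cache

@lru_cache(maxsize=None)
def record_type(fields):
    # One new class per distinct combination of field names; 20 optional
    # attributes give up to 2**20 combinations, brushing a 1M class limit.
    return namedtuple("Record", fields)

a = record_type(("name", "price"))(name="spam", price=2)
b = record_type(("name", "price"))(name="eggs", price=3)
assert type(a) is type(b)                      # cached: same dynamic class
assert record_type.cache_info().misses == 1    # only one class was created
```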

On Thu, 5 Dec 2019 at 11:30, Mark Shannon  wrote:

> Hi Guido,
>
> On 04/12/2019 3:51 pm, Guido van Rossum wrote:
> > I am overwhelmed by this thread (and a few other things in real life)
> > but here are some thoughts.
> >
> > 1. It seems the PEP doesn't sufficiently show that there is a problem to
> > be solved. There are claims of inefficiency but these aren't
> > substantiated and I kind of doubt that e.g. representing line numbers in
> > 32 bits rather than 20 bits is a problem.
>
> Fundamentally this is not about the immediate performance gains, but
> about the potential gains from not having to support huge, vaguely
> defined limits that are never needed in practice.
>
> Regarding line numbers,
> decoding the line number table for exception tracebacks, profiling and
> debugging is expensive and the cost is linear in the size of the code
> object. So, the performance benefit would be largest for the code that
> is nearest to the limits.
>
> >
> > 2. I have handled complaints in the past about existing (accidental)
> > limits that caused problems for generated code. People occasionally
> > generate *really* wacky code (IIRC the most recent case was a team that
> > was generating Python code from machine learning models they had
> > developed using other software) and as long as it works I don't want to
> > limit such applications.
>
> The key word here is "occasionally". How much do we want to increase the
> costs of every Python user for the very rare code generator that might
> bump into a limit?
>
> >
> > 3. Is it easy to work around a limit? Even if it is, it may be a huge
> > pain. I've heard of a limit of 65,000 methods in Java on Android, and my
> > understanding was that it was actually a huge pain for both the
> > toolchain maintainers and app developers (IIRC the toolchain had special
> > tricks to work around it, but those required app developers to change
> > their workflow). Yes, 65,000 is a lot smaller than a million, but in a
> > different context the same concern applies.
>
> 64k *methods* is much, much less than 1M *classes*. At 6 methods per
> class, it is 100 times less.
>
> The largest Python code bases, that I am aware of, are at JP Morgan,
> with something like 36M LOC and Bank of America with a similar number.
>
> Assuming a code base of 50M loc, *and* that all the code would be loaded
> into a single application (I sincerely hope that isn't the case) *and*
> that each class is only 100 lines, even then there would only be 500,000
> classes.
> If a single application has 500k classes, I don't think that a limit of
> 1M classes would be its biggest problem :)
>
> >
> > 4. What does Python currently do if you approach or exceed one of these
> > limits? I tried a simple experiment, eval(str(list(range(2000000)))),
> > and this completes in a few seconds, even though the source code is a
> > single 16 Mbyte-long line.
>
> You can have lines as long as you like :)
>
> >
> > 5. On the other hand, the current parser cannot handle more than 100
> > nested parentheses, and I've not heard complaints about this. I suspect
> > the number of nested indent levels is similarly constrained by the
> > parser. The default function call recursion limit is set to 1000 and
> > bumping it significantly risks segfaults. So clearly some limits exist
> > and are apparently acceptable.
> >
> > 6. In Linux and other UNIX-y systems, there are many per-process or
> > per-user limits, and they can be tuned -- the user (using sudo) can
> > change many of those limits, the sysadmin can change the defaults within
> > some range, and sometimes the kernel can be recompiled with different
> > absolute limits (not an option for most users or even sysadmins). These
> > limits are also quite varied -- the maximum number of open file
> > descriptors is different than the maximum pipe buffer size. This is of
> > course as it should be -- the limits exist to protect the OS and other
> > users/processes from runaway code and intentional attacks on resources.
> > (And yet, fork bombs exist, and it's easy to fill up a filesystem...)

[Python-Dev] Re: Deprecating the "u" string literal prefix

2019-12-05 Thread Victor Stinner
Please try to get an email client which is able to reply in a thread,
rather than creating a new thread each time you send an email.

You might want to try HyperKitty:
https://mail.python.org/archives/list/python-dev@python.org/

Victor

On Thu, Dec 5, 2019 at 10:50, Anders Munch wrote:
>
> >> I'm struggling to see what i-strings would do for i18n that str.format 
> >> doesn't do better.
> Serhiy Storchaka [mailto:storch...@gmail.com]
> > You would not need to repeat yourself.
> > _('{name} costs ${price:.2f}').format(name=name, price=price)
>
> A small price to pay for having a well-defined interface with the translator.
>
> Security is one reason: A translator could sneak {password} or {signing_key} 
> into an unrelated string, if those names happen to be present in the 
> namespace.  That may not seem like a big issue if you've only ever used 
> gettext/.po-file translation, where the translation is pre-built with the 
> program, but in a more dynamic setting where end-users can supply 
> translations, it's a different story.
>
> You could parse the strings and require translations to have the same 
> variables, but that is limiting. E.g. that would mean you couldn't translate
> 'Welcome, {first_name}'
> into
> 'Willkommen, {title} {last_name}'
>
> Another reason is that you don't want variable names in your program and 
> translations to have to change in lock-step.
> E.g. you might change your code to:
>  _('{name} costs ${price:.2f}').format(name=prod.short_name,
> 
> price=context.convert_to_chosen_currency(price))
> without needing to change any translations.
>
> regards, Anders
>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/A2PDRFFJWK2XYPHLVGKC26QR2DJ6H4QM/
> Code of Conduct: http://python.org/psf/codeofconduct/
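Anders's well-defined-interface argument can be illustrated with a short sketch (made-up strings and names; it relies only on `str.format` raising KeyError for placeholders missing from the supplied mapping):

```python
# Translator-supplied templates may reorder or drop placeholders, as
# long as they use only names from the agreed interface:
ok = 'Willkommen, {title} {last_name}'
names = {"first_name": "Ada", "title": "Dr.", "last_name": "Lovelace"}
print(ok.format(**names))

# A template that sneaks in an out-of-interface name fails loudly
# instead of silently leaking data from the surrounding namespace:
evil = 'Welcome, {password}'
try:
    evil.format(**names)
except KeyError as exc:
    print("rejected placeholder:", exc)
```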



-- 
Night gathers, and now my watch begins. It shall not end until my death.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/2F3O3CMR2A5NZJ6UQMYW4QJB2MKSEFBH/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-05 Thread Mark Shannon

Hi Guido,

On 04/12/2019 3:51 pm, Guido van Rossum wrote:
I am overwhelmed by this thread (and a few other things in real life) 
but here are some thoughts.


1. It seems the PEP doesn't sufficiently show that there is a problem to 
be solved. There are claims of inefficiency but these aren't 
substantiated and I kind of doubt that e.g. representing line numbers in 
32 bits rather than 20 bits is a problem.


Fundamentally this is not about the immediate performance gains, but 
about the potential gains from not having to support huge, vaguely 
defined limits that are never needed in practice.


Regarding line numbers,
decoding the line number table for exception tracebacks, profiling and 
debugging is expensive and the cost is linear in the size of the code 
object. So, the performance benefit would be largest for the code that 
is nearest to the limits.
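As a small illustration of the table in question, `dis.findlinestarts` decodes the line-number table (co_lnotab in 3.8) by scanning it from the start:

```python
import dis

def sample():
    x = 1
    y = 2
    return x + y

# Each (bytecode offset, source line) pair is recovered by walking the
# table from the beginning; mapping an offset deep inside a huge code
# object therefore costs time linear in the code object's size.
pairs = list(dis.findlinestarts(sample.__code__))
print(pairs)
```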




2. I have handled complaints in the past about existing (accidental) 
limits that caused problems for generated code. People occasionally 
generate *really* wacky code (IIRC the most recent case was a team that 
was generating Python code from machine learning models they had 
developed using other software) and as long as it works I don't want to 
limit such applications.


The key word here is "occasionally". How much do we want to increase the 
costs of every Python user for the very rare code generator that might 
bump into a limit?




3. Is it easy to work around a limit? Even if it is, it may be a huge 
pain. I've heard of a limit of 65,000 methods in Java on Android, and my 
understanding was that it was actually a huge pain for both the 
toolchain maintainers and app developers (IIRC the toolchain had special 
tricks to work around it, but those required app developers to change 
their workflow). Yes, 65,000 is a lot smaller than a million, but in a 
different context the same concern applies.


64k *methods* is much, much less than 1M *classes*. At 6 methods per 
class, it is 100 times less.


The largest Python code bases, that I am aware of, are at JP Morgan, 
with something like 36M LOC and Bank of America with a similar number.


Assuming a code base of 50M loc, *and* that all the code would be loaded 
into a single application (I sincerely hope that isn't the case) *and* 
that each class is only 100 lines, even then there would only be 500,000 
classes.
If a single application has 500k classes, I don't think that a limit of 
1M classes would be its biggest problem :)




4. What does Python currently do if you approach or exceed one of these 
limits? I tried a simple experiment, eval(str(list(range(2000000)))), 
and this completes in a few seconds, even though the source code is a 
single 16 Mbyte-long line.


You can have lines as long as you like :)



5. On the other hand, the current parser cannot handle more than 100 
nested parentheses, and I've not heard complaints about this. I suspect 
the number of nested indent levels is similarly constrained by the 
parser. The default function call recursion limit is set to 1000 and 
bumping it significantly risks segfaults. So clearly some limits exist 
and are apparently acceptable.


6. In Linux and other UNIX-y systems, there are many per-process or 
per-user limits, and they can be tuned -- the user (using sudo) can 
change many of those limits, the sysadmin can change the defaults within 
some range, and sometimes the kernel can be recompiled with different 
absolute limits (not an option for most users or even sysadmins). These 
limits are also quite varied -- the maximum number of open file 
descriptors is different than the maximum pipe buffer size. This is of 
course as it should be -- the limits exist to protect the OS and other 
users/processes from runaway code and intentional attacks on resources. 
(And yet, fork bombs exist, and it's easy to fill up a filesystem...) I 
take from this that limits are useful, may have to be overridable, and 
should have values that make sense given the resource they guard.


Being able to dynamically *reduce* a limit from one million seems like a 
good idea.





--
--Guido van Rossum (python.org/~guido )
/Pronouns: he/him //(why is my pronoun here?)/ 


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/Z4QO3SJDKXMWOP5H5XBFSPSIEFH6BJBS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Macros instead of inline functions?

2019-12-05 Thread Rhodri James

On 04/12/2019 18:22, Serhiy Storchaka wrote:
In these files (symtable.c, compile.c, ast_opt.c, etc) there are 
sequences of calls for child nodes. Every call can return an error, so 
you need to check every call and return an error immediately after the 
call. With inline functions you would need to write


     if (!VISIT(...)) {
     return 0;
     }
     if (!VISIT(...)) {
     return 0;
     }
     if (!VISIT(...)) {
     return 0;
     }

instead of just

     VISIT(...);
     VISIT(...);
     VISIT(...);


Can I just say as a C programmer by trade that I hate this style of 
macro with the burning passion of a thousand fiery suns?  Code like the 
second example is harder to comprehend because you can't simply see that 
it can change your flow of control.   It comes as a surprise that if 
something went wrong in the first VISIT(), the remaining VISIT()s don't 
get called.


It's not so bad in the case you've demonstrated of bombing out on 
errors, but I've seen the idiom used much less coherently in real-world 
applications to produce code that is effectively unreadable.  It took me 
a long time to wrap my brain around what the low-level parsing code in 
Expat was doing, for example.  I strongly recommend not starting down 
that path.


--
Rhodri James *-* Kynesim Ltd
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IRNQHQUE7PX5RM63IHXGFOODBOFCZGAI/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-05 Thread Mark Shannon




On 03/12/2019 5:22 pm, Steve Dower wrote:

On 03Dec2019 0815, Mark Shannon wrote:

Hi Everyone,

I am proposing a new PEP, still in draft form, to impose a limit of 
one million on various aspects of Python programs, such as the lines 
of code per module.


I assume you're aiming for acceptance in just under four months? :)


Why not? I'm an optimist at heart :)




Any thoughts or feedback?


It's actually not an unreasonable idea, to be fair. Picking an arbitrary 
limit less than 2**32 is certainly safer for many reasons, and very 
unlikely to impact real usage. We already have some real limits well 
below 10**6 (such as if/else depth and recursion limits).


That said, I don't really want to impact edge-case usage, and I'm all 
too familiar with other examples of arbitrary limits (no file system 
would need a path longer than 260 characters, right? :o) ).


Some comments on the specific items, assuming we're not just going to 
reject this out of hand.



Specification
=

This PR proposes that the following language features and runtime 
values be limited to one million.


* The number of source code lines in a module


This one feels the most arbitrary. What if I have a million blank lines 
or comments? We still need the correct line number to be stored, which 
means our lineno fields still have to go beyond 10**6. Limiting total 
lines in a module to 10**6 is certainly too small.



* The number of bytecode instructions in a code object.


Seems reasonable.


* The sum of local variables and stack usage for a code object.


I suspect our effective limit is already lower than 10**6 here anyway - 
do we know what it actually is?



* The number of distinct names in a code object


SGTM.


* The number of constants in a code object.


SGTM.


* The number of classes in a running interpreter.


I'm a little hesitant on this one, but perhaps there's a way to use a 
sentinel for class_id (in your later struct) for when someone exceeds 
this limit? The benefits seem worthwhile here even without the rest of 
the PEP.



* The number of live coroutines in a running interpreter.


SGTM. At this point we're probably putting serious pressure on kernel 
wait objects/FDs anyway, and if you're not waiting then you're probably 
not efficiently using coroutines anyway.


From my limited googling, Linux has a hard limit of about 600k file 
descriptors across all processes. So, 1M is well past any reasonable 
per-process limit. My impression is that the limits are lower on 
Windows; is that right?





Having 20 bit operands (21 bits for relative branches) allows 
instructions
to fit into 32 bits without needing additional ``EXTENDED_ARG`` 
instructions.
This improves dispatch, as the operand is strictly local to the 
instruction.

Using super-instructions would make that the 32 bit format
almost as compact as the 16 bit format, and significantly faster.


We can measure this - how common are EXTENDED_ARG instructions? ISTR we 
checked this when switching to 16-bit instructions and it was worth it, 
but I'm not sure whether we also considered 32-bit instructions at that 
time.


The main benefit of 32 bit instructions is super-instructions, but 
removing EXTENDED_ARG does streamline instruction decoding a bit.





Total number of classes in a running interpreter


This limit has to the potential to reduce the size of object headers 
considerably.


This would be awesome, and I *think* it's ABI compatible (as the 
affected fields are all put behind the PyObject* that gets returned, 
right?). If so, I think it's worth calling that out in the text, as it's 
not immediately obvious.


Cheers,
Steve

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/4XK733PE7FC64V3RUTA5DNQKYB3PE74J/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-05 Thread Karthikeyan
On Thu, Dec 5, 2019 at 6:23 PM Mark Shannon  wrote:

>
>
> On 05/12/2019 12:45 pm, Karthikeyan wrote:
> > On Thu, Dec 5, 2019, 5:53 PM Mark Shannon wrote:
> >
> >
> >
> > On 04/12/2019 2:31 am, Gregory P. Smith wrote:
> >  >
> >  >
> >  > On Tue, Dec 3, 2019 at 8:21 AM Mark Shannon wrote:
> >  >
> >  > Hi Everyone,
> >  >
> >  > I am proposing a new PEP, still in draft form, to impose a
> > limit of one
> >  > million on various aspects of Python programs, such as the
> > lines of
> >  > code
> >  > per module.
> >  >
> >  > Any thoughts or feedback?
> >  >
> >
> > [snip]
> >
> >  >
> >  >
> >  > Overall I /like/ the idea of limits... /But.../ in my experience,
> > limits
> >  > like this tend to impact generated source code or generated
> > bytecode,
> >  > and thus any program that transitively uses those.
> >  >
> >  > Hard limits within the Javaish world have been *a major pain* on
> the
> >  > Android platform for example.  I wouldn't call workarounds
> >  > straightforward when it comes to total number of classes or
> > methods in a
> >  > process.
> >
> > Do you have any numbers? 1M is a lot bigger then 64K, but real world
> > numbers would be helpful.
> >
> >
> > I guess the relevant case in question is with Facebook patching the
> > limit of 65,000 classes in Android :
> >
> https://m.facebook.com/notes/facebook-engineering/under-the-hood-dalvik-patch-for-facebook-for-android/10151345597798920
>
> Is that the correct link? That seems to be an issue with an internal
> buffer size, not the limit on the number of classes.
>

Sorry, it should have been about the limit on the number of methods in
Android, which is around 65,000 methods:
https://developer.android.com/studio/build/multidex . I guess Facebook
worked around the limit, but I couldn't find a reliable source for it.
There was also a post on the Facebook iOS app having a very large number
of classes, though without essentially hitting a limit on the iOS platform:
https://quellish.tumblr.com/post/126712999812/how-on-earth-the-facebook-ios-application-is-so
.
I guess it's the number referred to, but I could be mistaken.


>
> >
> >  >
> >  > If we're to adopt limits where there were previously none, we
> > need to do
> >  > it via a multi-release deprecation cycle feedback loop to give
> > people a
> >  > way to find report use cases that exceed the limits in real world
> >  > practical applications.  So the limits can be reconsidered or the
> >  > recommended workarounds tested and agreed upon.
> >  >
> >  > -gps
> > ___
> > Python-Dev mailing list -- python-dev@python.org
> > 
> > To unsubscribe send an email to python-dev-le...@python.org
> > 
> > https://mail.python.org/mailman3/lists/python-dev.python.org/
> > Message archived at
> >
> https://mail.python.org/archives/list/python-dev@python.org/message/ECKES7IPWGD74DAKFYV7JEWNOBAFEWYF/
> > Code of Conduct: http://python.org/psf/codeofconduct/
> >
>


-- 
Regards,
Karthikeyan S
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/EHHH44J6KDRQGZCEUNMCGUUIZ5JUX6UV/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-05 Thread Tal Einat
 On Tue, Dec 3, 2019 at 6:23 PM Mark Shannon  wrote:

> Hi Everyone,
>
> I am proposing a new PEP, still in draft form, to impose a limit of one
> million on various aspects of Python programs, such as the lines of code
> per module.
>
> Any thoughts or feedback?
>

My two cents:

I find the arguments for security (malicious code) and ease of
implementation compelling. I find the point about efficiency, on the other
hand, to be almost a red herring in this case. In other words, there are
great reasons to consider this regardless of efficiency, and IMO those
should guide this.

I do find the 1 million limit low, especially for bytecode instructions and
lines of code. I think 1 billion / 2^32 / 2^31 (we can choose the bikeshed
color later) would be much more reasonable, and the effect on efficiency
compared to 2^20 should be negligible.

I like the idea of making these configurable, e.g. adding a compilation
option to increase the limit to 10^18 / 2^64 / 2^63.

Mark, I say go for it, write the draft PEP, and try to get a wider audience
to tell whether they know of cases where these limits would have been hit.

- Tal Einat
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/TWACAUZCTR4TANEI3DK5IKPG5RWJNJJM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-05 Thread Mark Shannon



On 05/12/2019 12:45 pm, Karthikeyan wrote:
On Thu, Dec 5, 2019, 5:53 PM Mark Shannon wrote:




On 04/12/2019 2:31 am, Gregory P. Smith wrote:
 >
 >
 > On Tue, Dec 3, 2019 at 8:21 AM Mark Shannon wrote:
 >
 >     Hi Everyone,
 >
 >     I am proposing a new PEP, still in draft form, to impose a
limit of one
 >     million on various aspects of Python programs, such as the
lines of
 >     code
 >     per module.
 >
 >     Any thoughts or feedback?
 >

[snip]

 >
 >
 > Overall I /like/ the idea of limits... /But.../ in my experience,
limits
 > like this tend to impact generated source code or generated
bytecode,
 > and thus any program that transitively uses those.
 >
 > Hard limits within the Javaish world have been *a major pain* on the
 > Android platform for example.  I wouldn't call workarounds
 > straightforward when it comes to total number of classes or
methods in a
 > process.

Do you have any numbers? 1M is a lot bigger then 64K, but real world
numbers would be helpful.


I guess the relevant case in question is with Facebook patching the 
limit of 65,000 classes in Android : 
https://m.facebook.com/notes/facebook-engineering/under-the-hood-dalvik-patch-for-facebook-for-android/10151345597798920


Is that the correct link? That seems to be an issue with an internal 
buffer size, not the limit on the number of classes.
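
Mark's request for real-world numbers can be approximated locally. The following is a rough, hypothetical sketch (my illustration, not part of the thread): it scans a directory tree for `.py` files and reports the longest module, to get a feel for how close real code comes to a one-million-lines-per-module limit.

```python
# Hypothetical helper (my sketch, not from the thread): measure how close
# real-world code comes to the proposed one-million-lines-per-module limit
# by scanning a directory tree for .py files and reporting the worst case.
import os

LIMIT = 1_000_000  # per-module line limit proposed in the draft PEP

def max_module_lines(root):
    """Return (path, line_count) of the longest .py file under *root*."""
    worst_path, worst_count = None, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:      # bytes: sidestep decoding errors
                count = sum(1 for _ in f)
            if count > worst_count:
                worst_path, worst_count = path, count
    return worst_path, worst_count
```

Pointing it at `sysconfig.get_paths()['stdlib']` or a site-packages directory gives a quick impression of how far typical modules are from the limit.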




 >
 > If we're to adopt limits where there were previously none, we
need to do
 > it via a multi-release deprecation cycle feedback loop to give
people a
 >  > way to find and report use cases that exceed the limits in real world
 > practical applications.  So the limits can be reconsidered or the
 > recommended workarounds tested and agreed upon.
 >
 > -gps
___
Python-Dev mailing list -- python-dev@python.org

To unsubscribe send an email to python-dev-le...@python.org

https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at

https://mail.python.org/archives/list/python-dev@python.org/message/ECKES7IPWGD74DAKFYV7JEWNOBAFEWYF/
Code of Conduct: http://python.org/psf/codeofconduct/


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IXJV7FQU4YPF6GWP5EP2ZUUX65N2CTFK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-05 Thread Karthikeyan
On Thu, Dec 5, 2019, 5:53 PM Mark Shannon  wrote:

>
>
> On 04/12/2019 2:31 am, Gregory P. Smith wrote:
> >
> >
> > On Tue, Dec 3, 2019 at 8:21 AM Mark Shannon  wrote:
> >
> > Hi Everyone,
> >
> > I am proposing a new PEP, still in draft form, to impose a limit of
> one
> > million on various aspects of Python programs, such as the lines of
> > code
> > per module.
> >
> > Any thoughts or feedback?
> >
>
> [snip]
>
> >
> >
> > Overall I /like/ the idea of limits... /But.../ in my experience, limits
> > like this tend to impact generated source code or generated bytecode,
> > and thus any program that transitively uses those.
> >
> > Hard limits within the Javaish world have been *a major pain* on the
> > Android platform for example.  I wouldn't call workarounds
> > straightforward when it comes to total number of classes or methods in a
> > process.
>
> Do you have any numbers? 1M is a lot bigger than 64K, but real world
> numbers would be helpful.
>

I guess the relevant case in question is with Facebook patching the limit
of 65,000 classes in Android:
https://m.facebook.com/notes/facebook-engineering/under-the-hood-dalvik-patch-for-facebook-for-android/10151345597798920

>
> > If we're to adopt limits where there were previously none, we need to do
> > it via a multi-release deprecation cycle feedback loop to give people a
> > way to find and report use cases that exceed the limits in real world
> > practical applications.  So the limits can be reconsidered or the
> > recommended workarounds tested and agreed upon.
> >
> > -gps
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/ECKES7IPWGD74DAKFYV7JEWNOBAFEWYF/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/DYW4WQKNEKUGK526OPP7XQ3XF25WZCJU/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-05 Thread Mark Shannon



On 04/12/2019 2:31 am, Gregory P. Smith wrote:



On Tue, Dec 3, 2019 at 8:21 AM Mark Shannon wrote:


Hi Everyone,

I am proposing a new PEP, still in draft form, to impose a limit of one
million on various aspects of Python programs, such as the lines of
code
per module.

Any thoughts or feedback?



[snip]




Overall I /like/ the idea of limits... /But.../ in my experience, limits 
like this tend to impact generated source code or generated bytecode, 
and thus any program that transitively uses those.


Hard limits within the Javaish world have been *a major pain* on the 
Android platform for example.  I wouldn't call workarounds 
straightforward when it comes to total number of classes or methods in a 
process.


Do you have any numbers? 1M is a lot bigger than 64K, but real world 
numbers would be helpful.




If we're to adopt limits where there were previously none, we need to do 
it via a multi-release deprecation cycle feedback loop to give people a 
way to find and report use cases that exceed the limits in real world 
practical applications.  So the limits can be reconsidered or the 
recommended workarounds tested and agreed upon.


-gps

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ECKES7IPWGD74DAKFYV7JEWNOBAFEWYF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Deprecating the "u" string literal prefix

2019-12-05 Thread Thomas Wouters
On Wed, Dec 4, 2019 at 6:37 PM Victor Stinner  wrote:

> Le mer. 4 déc. 2019 à 14:49, Thomas Wouters  a écrit :
> >> (...)
> >> It's very different if an incompatible change break 1% or 90% of
> >> Python projects.
> >
> > Unfortunately there is a distinctive bias if you select popular
> projects, or even packages from PyPI. There is a very large body of work
> that never appears there, but is nonetheless used, useful and maintained
> lacklusterly enough to pose a big problem for changes like these.
> Tutorials, examples in documentation, random github repos, plugins for
> programs that embed Python, etc. The latter also represents an example of
> cases where you can't just decide to use an older version of Python to use
> something that wasn't updated yet.
>
> My point is that currently, we have no data to take decisions. We can
> only make assumptions. Having more data than nothing should help to
> take smarter decisions ;-)
>

It should, but it doesn't always :) If you forget how your data is flawed,
the 'smarter' decision can easily be wrong, instead. I do think it's a good
idea to reject ideas that would break a certain number of PyPI packages,
say, but just because it won't break them doesn't mean it won't break a
significant number of others.


>
> I know that there is closed source and unpublished projects. But if
> 20% (for example) of the most popular projects on PyPI are broken by
> an incompatible change, it's not hard to extrapolate that *at least*
> 20% of unpublished projects will be broken as well. Usually, closed source and
> unpublished projects are getting less attention and so are less up to
> date than PyPI projects.
>
> Even if you restrict the scope to PyPI: most PyPI top 100 modules are
> the most common dependencies in projects. It's easy to extrapolate
> that if 20% of these PyPI top 100 modules are broken, all applications
> using at least one of these broken projects will be broken as well.
>
> Another point of the PEP 608 is that there are not often resources to
> fix the most popular dependencies on PyPI, it's likely better to
> *revert* the incompatible change causing the issue. Again, to be able
> to revert a change, we need the information that we broke something.
> If a change goes through the final release, usually we prefer to
> acknowledge that the "ship has sailed" and deal with it, rather than
> reverting the annoying change.
>
> Victor
> --
> Night gathers, and now my watch begins. It shall not end until my death.
>


-- 
Thomas Wouters 

Hi! I'm an email virus! Think twice before sending your email to help me
spread!
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/EZRSZQVS4AB6AJQYT4XV4B2H4OCOE4LB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Deprecating the "u" string literal prefix

2019-12-05 Thread Anders Munch
>> I'm struggling to see what i-strings would do for i18n that str.format 
>> doesn't do better.
Serhiy Storchaka [mailto:storch...@gmail.com]
> You would not need to repeat yourself.
> _('{name} costs ${price:.2f}').format(name=name, price=price)

A small price to pay for having a well-defined interface with the translator.
 
Security is one reason: A translator could sneak {password} or {signing_key} 
into an unrelated string, if those names happen to be present in the namespace. 
 That may not seem like a big issue if you've only ever used gettext/.po-file 
translation, where the translation is pre-built with the program, but in a more 
dynamic setting where end-users can supply translations, it's a different story.
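
The leak described above can be demonstrated in a few lines. This is a hypothetical illustration (not from the original mail): a translator-supplied string formatted against a whole namespace can read any variable in it, while an explicit keyword-argument interface fails loudly on unexpected fields.

```python
# Hypothetical demonstration (not from the mail) of the leak described above.
namespace = {
    "name": "Widget",
    "password": "hunter2",   # sensitive value merely *present* in scope
}

malicious = "Welcome, {name}! key={password}"  # sneaked into a translation

# Unsafe: formatting against everything in the namespace leaks the secret.
leaked = malicious.format_map(namespace)
assert "hunter2" in leaked

# Safer: an explicit interface passes only the agreed-upon variables,
# so the unexpected field raises KeyError instead of leaking.
try:
    malicious.format(name=namespace["name"])
    safe_result = None
except KeyError as exc:
    safe_result = f"rejected unknown field {exc}"
```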

You could parse the strings and require translations to have the same 
variables, but that is limiting. E.g. that would mean you couldn't translate
'Welcome, {first_name}'
into
'Willkommen, {title} {last_name}'
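
The variable-set check described above can be sketched with the stdlib (my illustration, not part of the original mail): `string.Formatter().parse` extracts the replacement-field names, so a validator can require a translation's fields to be drawn from the source's. As the example shows, exactly this kind of check is what forbids the legitimate German translation.

```python
# Illustrative sketch (not from the original mail): validate that a
# translation only uses replacement fields the source string defines.
import string

def field_names(fmt):
    """Return the set of replacement-field names in a format string."""
    return {name for _, name, _, _ in string.Formatter().parse(fmt)
            if name is not None}

def translation_ok(source, translation):
    """Allow the translation to use a subset of the source's fields."""
    return field_names(translation) <= field_names(source)
```

This accepts reorderings and omissions, but, per the limitation noted above, it rejects `'Willkommen, {title} {last_name}'` as a translation of `'Welcome, {first_name}'`, since `title` and `last_name` do not appear in the source.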

Another reason is that you don't want variable names in your program and 
translations to have to change in lock-step.
E.g. you might change your code to:
 _('{name} costs ${price:.2f}').format(name=prod.short_name,
 price=context.convert_to_chosen_currency(price))
without needing to change any translations.

regards, Anders

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/A2PDRFFJWK2XYPHLVGKC26QR2DJ6H4QM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Deprecating the "u" string literal prefix

2019-12-05 Thread Anders Munch
Barry Warsaw [mailto:ba...@python.org] wrote:
> str.format() really isn’t enough to do proper i18n; it’s actually a fairly 
> complex topic.

Obviously.

> I’m not convinced that PEP 501 would provide much benefit on the technical 
> side.

My point exactly.

> flufl.i18n - https://flufli18n.readthedocs.io/en/latest/

This seems to retrieve variables from the surrounding scope, in a manner
reminiscent of f-strings (though obviously implemented very differently).  I
would never use that for translatable strings, partly on security grounds, and
partly because it adds coupling between UI texts and code. 

regards, Anders

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/3GENPV6OVV772BTGSYX3KHH7SMWFBARW/
Code of Conduct: http://python.org/psf/codeofconduct/