[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-19 Thread Nathaniel Smith
On Tue, May 18, 2021 at 2:49 PM Pablo Galindo Salgado
 wrote:
> * It actually doesn't have more advantages. The current solution in the PEP 
> can do exactly the same as this solution if you allow reparsing when
> displaying tracebacks. This is because with the start line, end line, start 
> offset and end offset and the original file, you can extract the source that
> is associated with the instruction, parse it (and this
> is much faster because you just need to parse the tiny fragment) and then you 
> get an AST node that you can use for whatever you want.

Excellent point! Do you know how reliable this is in practice, i.e.
what proportion of bytecode source spans are something you can
successfully pass to ast.parse? If it works it's obviously nicer, but
I can't tell how often it works. E.g. anything including
return/break/continue/yield/await will fail, since those require an
enclosing context to be legal. I doubt return/break/continue will
raise exceptions often, but yield/await do all the time.

You could kluge it by wrapping the source span in a dummy 'async def'
before parsing, since that makes yield/await legal, but OTOH it makes
'yield from' and 'from X import *' illegal.

I guess you could have a helper that attempts passing the string to
ast.parse, and if that fails tries wrapping it in a loop/sync
def/async def/etc. until one of them succeeds. Maybe that would be a
useful utility to add to the traceback module?
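Something like this sketch, maybe (the helper name and the exact set of wrapping contexts are made up; this is not an existing traceback-module API):

```python
import ast
import textwrap

def parse_span(source):
    """Best-effort parse of an out-of-context source span.

    Tries ast.parse directly, then retries with the span wrapped in
    dummy enclosing contexts that legalize more syntax. Returns the
    list of statement nodes, or None if every attempt fails.
    """
    source = textwrap.dedent(source)
    try:
        return ast.parse(source).body
    except SyntaxError:
        pass
    for template in ("def _ctx():\n{body}",
                     "async def _ctx():\n{body}",
                     "for _ in ():\n{body}"):
        wrapped = template.format(body=textwrap.indent(source, "    "))
        try:
            # Peel the dummy wrapper back off before returning.
            return ast.parse(wrapped).body[0].body
        except SyntaxError:
            continue
    return None
```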

Or add a PyCF_YOLO flag that tries to make sense of an arbitrary
out-of-context string.

(Are there any other bits of syntax that require specific contexts
that I'm not thinking of? If __enter__/__exit__ raise an exception,
then what's the corresponding span? The entire 'with' block, or just
the 'with' line itself?)

-n

PS: this is completely orthogonal to PEP 657, but if you're excited
about making tracebacks more readable, another piece of low-hanging
fruit would be to print method __qualname__s instead of __name__s in
the traceback output. The reason we don't do that now is that
__qualname__ lives on the function object, but in a traceback, we
can't get the function object. The traceback only has access to the
code object, and the code object doesn't have __qualname__, just
__name__. Probably the cleanest way to do this would be to make the
traceback or code object have a pointer back to the function object.
See also https://bugs.python.org/issue12857.
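The gap is easy to see from a live traceback object (a minimal illustration):

```python
import sys

class Spam:
    def eggs(self):
        try:
            raise ValueError("boom")
        except ValueError:
            return sys.exc_info()[2]  # the traceback object

tb = Spam().eggs()
code = tb.tb_frame.f_code

# The traceback machinery can only reach the code object's bare name...
assert code.co_name == "eggs"
# ...while the dotted path lives on the function object, out of reach:
assert Spam.eggs.__qualname__ == "Spam.eggs"
```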

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Python-Dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/[email protected]/message/JWZHMW6WQOQMSAGWKMRTEHHRSZRMNW3C/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-19 Thread Pablo Galindo Salgado
>
> Excellent point! Do you know how reliable this is in practice, i.e.
> what proportion of bytecode source spans are something you can
> successfully pass to ast.parse? If it works it's obviously nicer, but
> I can't tell how often it works. E.g. anything including
> return/break/continue/yield/await will fail, since those require an
> enclosing context to be legal. I doubt return/break/continue will
> raise exceptions often, but yield/await do all the time.


All those limitations are compile-time limitations, because they involve
scoping. A valid AST can be produced from any piece of the converted parse
tree, or any piece of the PEG sub-grammar:

>>> ast.dump(ast.parse("yield"))
'Module(body=[Expr(value=Yield())], type_ignores=[])'
>>> ast.dump(ast.parse("return"))
'Module(body=[Return()], type_ignores=[])'
>>> ast.dump(ast.parse("continue"))
'Module(body=[Continue()], type_ignores=[])'
>>> ast.dump(ast.parse("await x"))
"Module(body=[Expr(value=Await(value=Name(id='x', ctx=Load())))], type_ignores=[])"
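In other words, the scoping error only appears once the tree is handed to the compiler, not at parse time (a quick illustration):

```python
import ast

# Parsing an out-of-context fragment succeeds: scoping is not checked yet.
tree = ast.parse("return 42")
assert isinstance(tree.body[0], ast.Return)

# Only compilation to bytecode enforces the scoping rules.
try:
    compile(tree, "<span>", "exec")
except SyntaxError as exc:
    print(exc.msg)  # 'return' outside function
```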

Message archived at 
https://mail.python.org/archives/list/[email protected]/message/VLTCIVKY3PBZC6LAMQ7EFZBO53KSGWYD/


[Python-Dev] Re: PEP 659: Specializing Adaptive Interpreter

2021-05-19 Thread Terry Reedy

On 5/13/2021 4:18 AM, Mark Shannon wrote:

Hi Terry,

On 13/05/2021 5:32 am, Terry Reedy wrote:

On 5/12/2021 1:40 PM, Mark Shannon wrote:

This is an informational PEP about a key part of our plan to improve 
CPython performance for 3.11 and beyond.



As always, comments and suggestions are welcome.


The claim that starts the Motivation section, "Python is widely 
acknowledged as slow.", has multiple problems. While some people 
believe, or at least claim to believe, that "Python is slow", others know 
that, as stated, the claim is false.  Languages do not have a speed; 
only implementations running code for particular applications have a 
speed, or a speed relative to equivalent code in another language with 
a different runtime.


I broadly agree, but CPython is largely synonymous with Python and 
CPython is slower than it could be.


The phrase was not meant to upset anyone.
How would you rephrase it, bearing in mind that it needs to be short?


Others have given some fine suggestions.  Take your pick.

[snip]
We want people to be able to write code in Python and have it perform at 
the level they would get from a good JavaScript or Lua implementation.


I agree with others that this is a good way to state the goal.  It also 
seems on the face of it reasonable, though not trivial.  I get the 
impression that you are proposing to use Python-adapted variations of 
techniques already used for such implementations.




It is still important to speed up Python though.


I completely agree.  Some application areas are amenable to speedup by 
resorting to C libraries, often already available.  Others are not.  The 
latter involve lots of idiosyncratic business logic, individual numbers 
rather than arrays of numbers, and strings.


NumPy-based applications gain firstly from using unboxed arrays of 
machine ints and floats instead of lists (and lists of lists) of boxed 
ints and floats, and secondly from C- or assembly-coded routines.


Python strings are already arrays of machine ints (codepoints).  Basic 
operations on strings, such as 'substring in string' are already coded 
in C working on machine values.  So the low-hanging fruit has already 
been picked.
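A quick illustration of both points (the code points for a short string, and the C-level membership test):

```python
s = "héllo"
# A str is conceptually an array of code points -- machine integers:
assert [ord(c) for c in s] == [104, 233, 108, 108, 111]
# Basic operations such as substring search run in C:
assert "llo" in s
```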


If a program does 95% of its work in a C++ library and 5% in Python, it 
can easily spend the majority of its time in Python because CPython is a 
lot slower than C++ (in general).


I believe the ratio for the sort of numerical computing getting bogus 
complaints is sometimes more like 95% of *time* in compiled C and only, 
say, 5% of *time* in the Python interpreter.  So even if the interpreter 
ran instantly, it would make almost no difference -- for such applications.
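To put numbers on the difference: the 95/5 quoted above splits *work*, while this paragraph splits *time*.  A toy calculation, assuming, say, a 50x interpreter slowdown (an arbitrary figure), shows both at once:

```python
# Illustrative only: assume CPython is ~50x slower per unit of work.
slowdown = 50

# Scenario 1: 95% of the *work* in C++, 5% in Python.
time_cpp = 0.95
time_py = 0.05 * slowdown
frac_py = time_py / (time_cpp + time_py)
print(f"{frac_py:.0%} of wall-clock time spent in Python")  # ~72%

# Scenario 2: 95% of the *time* already in compiled C, so even an
# infinitely fast interpreter gives at most:
print(f"{1 / 0.95:.2f}x overall speedup")  # ~1.05x
```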


--
Terry Jan Reedy


Message archived at 
https://mail.python.org/archives/list/[email protected]/message/GFUISO3ZDYS7E3FV457AVRW2Q7B5BAVW/


[Python-Dev] Question for potential python development contributions on Windows

2021-05-19 Thread pjfarley3
The Python Developer's Guide specifically says to get VS2017 for developing
or enhancing Python on a Windows system.


Is it still correct to specifically use VS2017, or is VS2019 also
acceptable?


I ask this because I know that the *.vcproj files and other
build-environment files have changed format pretty dramatically over the
many releases of VS.  If a potential new contribution targeted at current
and future Python requires new build-environment files, I wouldn't want
to have to "down-release" those files (or expect core devs to do it) at or
after submission to the community for approval.  Much better to use the same
setup as the core devs than to introduce up-level differences in the build
environment.


IOW, for new releases on Windows, are core devs still using VS2017 (so
potential contributors should also use that version), or has the core devs'
process for Windows releases been updated to use VS2019?


Peter

--


Message archived at 
https://mail.python.org/archives/list/[email protected]/message/AFHJ2GWCINPKAV7DMMIRZBTAB6AXCMNK/