[Python-Dev] Re: Proto-PEP part 4: The wonderful third option

2022-05-01 Thread Larry Hastings


On 5/1/22 15:44, Paul Bryan wrote:
Can someone state what's currently unpalatable about 649? It seemed to 
address the forward-referencing issues, certainly all of the cases I 
was expecting to encounter.



Carl's talk was excellent here; it would be lovely if he would chime in 
and reply.  Here is my almost-certainly-faulty recollection of what he said.


 * PEP 649 doesn't work for code bases that deliberately use
   un-evaluatable expressions but still examine them at runtime. Some
   code bases would have a major circular import problem if they had to
   import every module they use in annotations.  By only importing
   those in "if TYPE_CHECKING" blocks, mypy (which inspects each module
   in isolation) can resolve the references, so it works fine at static
   analysis time.  Occasionally they /also/ need to examine the
   annotation at runtime, but this is only a rudimentary check, so a
   string works fine for them.  So 563 works, but 649 throws e.g. a
   NameError.  Carl proposes a mitigation strategy here: run the
   co_annotations code object with a special globals dict that
   creates fake objects instead of failing on lookups.  (A rough
   sketch of that idea appears just after this list.)
 * PEP 649 is a pain point for libraries using annotations for
   documentation purposes.  The annotation as written may be very
   readable, but evaluating it may produce a very complicated object,
   and the repr() or str() of that object may be complicated and
   obscure the original intent.  Carl proposes using much the same
   strategy here; also it /might/ work to use ast.unparse to pull the
   original expression out of the source code, though this seems like
   it would be less reliable.
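
To make Carl's first mitigation concrete, here's a rough sketch of the
idea as I understand it--not Carl's actual code, and all the names here
are made up.  The heart of it is a dict subclass whose __missing__
manufactures stand-in objects, so evaluating an annotation never raises
NameError:

    class _StandIn:
        """Placeholder for a name that couldn't be resolved at runtime."""
        def __init__(self, name):
            self.name = name
        def __repr__(self):
            return self.name
        def __getattr__(self, attr):
            # Dotted lookups (e.g. numpy.ndarray) also become stand-ins.
            return _StandIn(f"{self.name}.{attr}")
        def __getitem__(self, item):
            # Subscripting (e.g. Sequence[Frame]) keeps the textual form.
            return _StandIn(f"{self.name}[{item!r}]")

    class _ForgivingGlobals(dict):
        """A globals mapping that never raises NameError."""
        def __missing__(self, key):
            return _StandIn(key)

    def evaluate_annotation(expr, real_globals=None):
        # Name lookups fall back to __missing__ because the globals is a
        # dict subclass, so unknown names become _StandIn objects instead
        # of raising NameError.
        return eval(expr, _ForgivingGlobals(real_globals or {}))

    print(evaluate_annotation("Sequence[Frame]"))   # prints: Sequence[Frame]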

That's everything I remember... but I was operating on two hours' sleep 
that day.



You might also consult Brett's thread about finding edge cases in PEPs 
484, 563, and 649:


   
https://discuss.python.org/t/finding-edge-cases-for-peps-484-563-and-649-type-annotations/14314/18


Cheers,


//arry/


[Python-Dev] Re: Proto-PEP part 4: The wonderful third option

2022-05-01 Thread Larry Hastings


FWIW, I'm in agreement.  My "forward class" proposal(s) were me trying 
to shine a light to find a way forward; I'm in no way adamant that we go 
that direction.  If we can make 649 palatable without introducing 
forward declarations for classes, that's great!  If in the future we 
discover more edge cases that Carl's approach doesn't easily solve, we 
could always revisit it later. For now it goes in the freezer of "ideas 
we aren't moving forward with".



//arry/

On 4/29/22 19:08, Guido van Rossum wrote:
 FWIW, Carl presented a talk about his proposed way forward using PEP 
649 with some small enhancements to handle cases like dataclasses (*), 
and it was well received by those present. I personally hope that this 
means the end of the "forward class declarations" proposals (no matter 
how wonderful), but the final word is up to the SC.


(*) Mostly fixing the edge cases of the "eval __code__ with tweaked 
globals" hack that Carl came up with previously, see 
https://github.com/larryhastings/co_annotations/issues/2#issuecomment-1092432875.


--
--Guido van Rossum (python.org/~guido)
Pronouns: he/him (why is my pronoun here?)





[Python-Dev] Re: Proto-PEP part 4: The wonderful third option

2022-04-26 Thread Larry Hastings


On 4/26/22 09:31, MRAB wrote:

On 2022-04-26 06:32, Larry Hastings wrote:


Note that this spelling is also viable:

    class C


I don't like that because it looks like you've just forgotten the colon.

Perhaps:

    class C: ...



That's not a good idea.  Every other place in Python where there's a 
statement that ends in a colon, it's followed by a nested block of 
code.  But the point of this statement is to forward-declare C, and this 
statement /does not have/ a class body.  Putting a colon there is 
misleading.


Also, your suggestion is already legal Python syntax; it creates a class 
with no attributes.  So changing this existing statement to mean 
something else would potentially (and I think likely) break existing code.



Consider C++'s forward-declared class statement:

   class C;

You could say about that, "I don't like that because it looks like 
you've just forgotten the curly braces."  But we didn't forget anything, 
it's just new syntax for a different statement.



//arry/


[Python-Dev] Re: Proto-PEP part 4: The wonderful third option

2022-04-26 Thread Larry Hastings


On 4/25/22 23:56, Ronald Oussoren wrote:
A problem with this trick is that you don’t know how large a class 
object can get because a subclass of type might add new slots. This is 
currently not possible to do in Python code (non-empty ``__slots__`` 
in a type subclass is rejected at runtime), but you can do this in C 
code.


Dang it!  __slots__!  Always there to ruin your best-laid plans. *shakes 
fist at heavens*


I admit I don't know how __slots__ is currently implemented, so I wasn't 
aware of this.  However!  The first part of my proto-PEP already 
proposes changing the implementation of __slots__, to allow adding 
__slots__ after the class is created but before it's instantiated.  
Since this is so late-binding, it means the slots wouldn't be allocated 
at the same time as the type, so happily we'd sidestep this problem.  On 
the other hand, this raises the concern that we may need to change the C 
interface for creating __slots__, which might break C extensions that 
use it.  (Maybe we can find a way to support the old API while 
permitting the new late-binding behavior, though from your description 
of the problem I'm kind of doubtful.)
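
(For anyone following along, the restriction Ronald describes is easy to
see in current CPython; this is just a demonstration, not part of the
proposal:)

    # A Python-level subclass of type can't declare non-empty __slots__.
    try:
        class Meta(type):
            __slots__ = ('extra',)
    except TypeError as exc:
        print(exc)   # nonempty __slots__ not supported for subtype of 'type'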



Cheers,


//arry/


[Python-Dev] Proto-PEP part 4: The wonderful third option

2022-04-25 Thread Larry Hastings


Sorry, folks, but I've been busy the last few days--the Language Summit 
is Wednesday, and I had to pack and get myself to SLC for PyCon.  
I'll circle back and read the messages on the existing threads 
tomorrow.  But for now I wanted to post "the wonderful third option" for 
forward class definitions we've been batting around for a couple of days.


The fundamental tension in the proposal: we want to /allocate/ the 
object at "forward class" time so that everyone can take a reference to 
it, but we don't want to /initialize/ the class (e.g. run the class 
body) until "continue class" time.  However, the class might have a 
metaclass with a custom __new__, which would be responsible for 
allocating the object, and that isn't run until after the "class body".  
How do we allocate the class object early while still supporting custom 
metaclass.__new__ calls?


So here's the wonderful third idea.  I'm going to change the syntax and 
semantics a little, again because we were batting them around quite a 
bit, so I'm going to just show you our current thinking.


The general shape of it is the same.  First, we have some sort of 
forward declaration of the class.  I'm going to spell it like this:


   forward class C

just for clarity in the discussion.  Note that this spelling is also viable:

   class C

That is, a "class" statement without parentheses or a colon. (This is 
analogous to how C++ does forward declarations of classes, and it was 
survivable for them.)  Another viable spelling:


   C = ForwardClass()

This spelling is nice because it doesn't add new syntax.  But maybe it's 
less obvious what is going on from a user's perspective.


Whichever spelling we use here, the key idea is that C is bound to a 
"ForwardClass" object.  A "ForwardClass" object is /not/ a class, it's a 
forward declaration of a class.  (I suspect ForwardClass is similar to a 
typing.ForwardRef, though I've never worked with those so I couldn't say 
for sure.)  Anyway, all it really has is a name, and the promise that it 
might get turned into a class someday.  To be explicit about it, 
"isinstance(C, type)" is False.


I'm also going to call instances of ForwardClass "immutable".  C won't 
be immutable forever, but for now you're not permitted to set or change 
attributes of C.
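
A rough pure-Python sketch of what such an object could look like--purely
illustrative, since the real thing needs the interpreter-level allocation
trick described below:

    class ForwardClass:
        __slots__ = ('_name',)

        def __init__(self, name):
            object.__setattr__(self, '_name', name)

        def __repr__(self):
            return f"<forward class {self._name!r}>"

        def __setattr__(self, name, value):
            # "Immutable" until the matching "continue class" runs.
            raise AttributeError("forward-declared class is not yet mutable")

    C = ForwardClass("C")
    print(isinstance(C, type))   # False: it's a declaration, not a class
    print(C)                     # <forward class 'C'>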



Next we have the "continue" class statement.  I'm going to spell it like 
this:


   continue class C(BaseClass, ..., metaclass=MyMetaclass):
    # class body goes here
    ...

I'll mention other possible spellings later.  The first change I'll 
point out here: we've moved the base classes and the metaclass from the 
"forward" statement to the "continue" statement.  Technically we could 
put them either place if we really cared to.  But moving them here seems 
better, for reasons you'll see in a minute.


Other than that, this "continue class" statement is similar to what I 
(we) proposed before.  For example, here C is an expression, not a name.


Now comes the one thing that we might call a "trick".  The trick: when 
we allocate the ForwardClass instance C, we make it as big as a class 
object can ever get.  (Mark Shannon assures me this is simply "heap 
type", and he knows far more about CPython internals than I ever will.)  
Then, when we get to the "continue class" statement, we convince the 
metaclass.__new__ call to reuse this memory, preserving the reference 
count, but changing the type of the object to "type" (or 
what-have-you).  C has now been changed from a "ForwardClass" object 
into a real type.  (Which almost certainly means C is now mutable.)


These semantics let us preserve the entire existing class creation 
mechanism.  We can call all the same externally-visible steps in the 
same externally-visible order.  We don't add any new dunder methods, we 
don't remove any dunder methods, we don't expose a new dunder attribute 
for users to experiment with.


What mechanism do we use to achieve this?  metaclass.__new__ always has 
to do one of these two things to create the class object: either it 
calls "super().__new__", or what we usually call "three-argument type".  
In both cases, it passes through the **kwargs that it received into the 
super().__new__ call or the three-argument type call.  So the "continue 
class C" statement will internally add a new kwarg: "__forward__ = C".  
If super().__new__ or three-argument type get this kwarg, they won't 
allocate a new object, they'll reuse C.  They'll preserve the current 
reference count, but otherwise overwrite C with all the juicy vitamins 
and healthy minerals packed into a Python class object.
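
For illustration, here's the shape of a well-behaved metaclass __new__
today; under the proposal, the (hypothetical) __forward__ keyword would
simply ride along in **kwargs:

    class MyMetaclass(type):
        def __new__(mcls, name, bases, namespace, **kwargs):
            # Today kwargs holds any class keyword arguments; under the
            # proposal it could also hold __forward__=C.  Passing it
            # through unchanged is all the metaclass needs to do.
            return super().__new__(mcls, name, bases, namespace, **kwargs)

        def __init__(cls, name, bases, namespace, **kwargs):
            super().__init__(name, bases, namespace)

    class Widget(metaclass=MyMetaclass):
        pass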


So, technically, this means we could spell the "continue class" step 
like so:


   class C(BaseClass, ..., metaclass=MyMetaclass, __forward__=C):
    ...

Which means that, combined with the "C = ForwardClass()" statement 
above, we could theoretically implement this idea without changing the 
syntax of the language.  And since we already don't have to change the 
underlying 

[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-23 Thread Larry Hastings


On 4/23/22 08:57, Eric V. Smith wrote:

On 4/23/2022 9:55 AM, Jelle Zijlstra wrote:


However, it doesn't solve the problem for base classes. For example, 
str is conceptually defined as `class str(Sequence["str"]):`. A 
forward reference can't make `str` defined when the bases are 
evaluated, because bases are resolved at the `forward class` stage.


Larry's second email "Proto-PEP part 2: Alternate implementation 
proposal for "forward class" using a proxy object" discusses a 
possibility to move the bases and metaclasses to the "continue class" 
stage. It also has the advantage of not changing the behavior of 
__new__, and I think is in general easier to reason about.




Let me expound on Eric's statement for a bit.  Moving the base classes 
and metaclass to the "continue" statement might permit the 
self-referential "str" definition suggested above:


   forward class str

   ...

   continue class str(Sequence[str]):
    ...

Though I suspect this isn't viable, or at least not today.  I'm willing 
to bet a self-referential definition like this would result in an 
infinite loop when calculating the MRO.



I don't have a strong sense of whether it'd be better to have the base 
classes and metaclass defined with the "forward" declaration or the 
"continue" declaration.  The analogous syntax in C++ has the base 
classes defined with the "continue" class.


Actually, that reminds me of something I should have mentioned in the 
proto-PEP.  If the "class proxy" version is viable and desirable, and we 
considered moving the base classes and metaclass down to the "continue" 
statement, that theoretically means we could drop the "forward" and 
"continue" keywords entirely.  I prefer them, simply because it makes it 
so explicit what you're reading.  But the equivalent syntax in C++ 
doesn't bother with extra keywords for either the "forward" declaration 
or the "continue" declaration, and people seem to like it fine.  Using 
that variant of the syntax, the toy example from the PEP would read as 
follows:


   class A

   class B:
       value: A

   class A:
       value: B

If you're speed-reading here and wondering "wait, how is this not 
ambiguous?", note that in the first line of the example, the "class" 
statement has no colon.  Also, it would never have parentheses or 
decorators.  The forward declaration of a class would always be just 
"class", followed by a name, followed by a newline (or comment).



//arry/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-23 Thread Larry Hastings


On 4/23/22 06:55, Jelle Zijlstra wrote:

So to reiterate, your proposal would be to write this as:

forward class B:
    pass

class A:
    value: B

continue class B:
    value: A


Not quite; the "forward class" statement doesn't have a colon or a class 
body.  This would be written as:


   forward class B

   class A:
    value: B

   continue class B:
    value: A



While the current workaround is:

class A:
    value: "B"

class B:
     value: "A"


In this example, with two toy classes in one file, it shouldn't be 
necessary to quote the annotation in B.  So all you need is the quotes 
around the first annotation:


   class A:
       value: "B"

   class B:
       value: A


I don't think I would write the "forward class" version if I had the 
choice. It's clunkier and requires more refactoring if I change my 
mind about whether the `value` attribute should exist.


In this toy example, it adds an extra line.  Describing that as "clunky" 
is a matter of opinion; I disagree and think it's fine.


But the real difference is when it comes to larger codebases.  If 
classes "A" and "B" are referenced dozens or even hundreds of times, 
you'd have to add quote marks around every annotation that references 
one (both?).  Manual stringization of large codebases was sufficiently 
disliked as to have brought about the creation and acceptance of PEP 
563.  Judicious use of the "forward class" statement should obviate most 
(all?) the manual stringizing in these codebases.



//arry/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-23 Thread Larry Hastings
I should have said "numpy_forward", not "numpy.forward".  I changed my mind
at the last second as I was writing that email, and momentarily forgot that
when you import x.y you implicitly import x.


/arry

On Sat, Apr 23, 2022, 01:53 Larry Hastings  wrote:

>
> On 4/23/22 01:14, Steven D'Aprano wrote:
>
> On Sat, Apr 23, 2022 at 12:46:37AM -0700, Larry Hastings wrote:
>
>
> But rather than speculate further, perhaps someone who works on one of
> the static type analysis checkers will join the discussion and render an
> informed opinion about how easy or hard it would be to support "forward
> class" and "continue class".
>
>
> No offense Larry, but since this proto-PEP is designed to help the
> typing community (I guess...) shouldn't you have done that before
> approaching Python-Dev with the proposal?
>
> The perfect is the enemy of the good.  Like I said, I wanted to get this
> out there before the Language Summit, and I just ran out of time.  I think
> there's also some sort of typing summit next week?  I'm not really plugged
> in to the static type analysis world--I don't use it in any of my projects.
>
>
> Wouldn't that be a massively breaking change? Anyone who does:
>
> from numpy import ndarray
>
> will get the forward-declared class object instead of the fully
> initialised class object, leading to all sorts of action-at-a-distance
> bugs.
>
> I wasn't recommending The Famous numpy Project do this exact thing, it was
> an abstract example using the name "numpy".  I didn't think this was a real
> example anyway, as I was assuming that most people who import numpy don't
> do so in an "if TYPE_CHECKING:" block.
>
> Separating the forward class declaration from the continue class
> implementation in the actual "numpy" module itself is probably not in the
> cards for a while, if ever.  But perhaps numpy could do this:
>
> import numpy.forward
> if TYPE_CHECKING:
>     import numpy
>
> In this case, the "numpy" module would also internally "import
> numpy.forward", and would contain the "continue class" statements for the
> forward-declared classes in "numpy.forward".
>
> There are lots of ways to solve problems with the flexibility afforded by
> the proposed "forward class" / "continue class" syntax.  Perhaps in the
> future you'll suggest some of them!
>
>
> */arry*
>


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-23 Thread Larry Hastings


On 4/23/22 03:10, Terry Reedy wrote:

On 4/22/2022 11:16 PM, Larry Hastings wrote:


So I still prefer "forward class".

I don't think it's as clear as "forward class"


'forward class' for an incomplete class is not at all clear to me.  It 
is not clear to me which part of speech you intend it to be: noun, 
verb, adjective, or adverb.  You must have some experience with 
'forward' in a different context that makes it clearer to you.



It's a reference to the term "forward declaration":

   https://en.wikipedia.org/wiki/Forward_declaration


//arry/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-23 Thread Larry Hastings


On 4/23/22 01:14, Steven D'Aprano wrote:

On Sat, Apr 23, 2022 at 12:46:37AM -0700, Larry Hastings wrote:


But rather than speculate further, perhaps someone who works on one of
the static type analysis checkers will join the discussion and render an
informed opinion about how easy or hard it would be to support "forward
class" and "continue class".

No offense Larry, but since this proto-PEP is designed to help the
typing community (I guess...) shouldn't you have done that before
approaching Python-Dev with the proposal?


The perfect is the enemy of the good.  Like I said, I wanted to get this 
out there before the Language Summit, and I just ran out of time.  I 
think there's also some sort of typing summit next week?  I'm not really 
plugged in to the static type analysis world--I don't use it in any of 
my projects.




Wouldn't that be a massively breaking change? Anyone who does:

 from numpy import ndarray

will get the forward-declared class object instead of the fully
initialised class object, leading to all sorts of action-at-a-distance
bugs.


I wasn't recommending The Famous numpy Project do this exact thing, it 
was an abstract example using the name "numpy".  I didn't think this was 
a real example anyway, as I was assuming that most people who import 
numpy don't do so in an "if TYPE_CHECKING:" block.


Separating the forward class declaration from the continue class 
implementation in the actual "numpy" module itself is probably not in 
the cards for a while, if ever.  But perhaps numpy could do this:


   import numpy.forward
   if TYPE_CHECKING:
    import numpy

In this case, the "numpy" module would also internally "import 
numpy.forward", and would contain the "continue class" statements for 
the forward-declared classes in "numpy.forward".


There are lots of ways to solve problems with the flexibility afforded 
by the proposed "forward class" / "continue class" syntax.  Perhaps in 
the future you'll suggest some of them!



//arry/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-23 Thread Larry Hastings


On 4/23/22 00:53, Steven D'Aprano wrote:

It's a "forward-declared class object".  It's the real class object, but
it hasn't been fully initialized yet, and won't be until the "continue
class" statement.

The only thing that makes it not fully initialised is that it has a bozo
bit dunder "__forward__" instructing the interpreter to disallow
instantiation. Yes?

If I take that class object created by `forward class X`, and delete the
dunder, there is no difference between it and other regular classes. Am
I correct?


No, there are several differences.

 * It still has the "dict-like object" returned by
   metaclass.__prepare__, rather than its final dict.
 * Its class body hasn't been called yet, so it likely doesn't have any
   of its important dunder methods.
 * It hasn't had its  BaseClass.__init_subclass__ called yet.
 * It hasn't had its metaclass.__init__ called yet.

The "forward-declared class object" is in a not-yet-fully initialized 
state, and is not ready for use as a class.
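
For readers less familiar with this machinery, here's a small runnable
demonstration (ordinary Python, nothing from the proposal) showing the
order in which those steps normally fire:

    class Meta(type):
        @classmethod
        def __prepare__(mcls, name, bases, **kwargs):
            print("1. Meta.__prepare__ returns the namespace")
            return super().__prepare__(name, bases, **kwargs)

        def __new__(mcls, name, bases, namespace, **kwargs):
            print("3. Meta.__new__ (the class body already ran as step 2)")
            return super().__new__(mcls, name, bases, namespace, **kwargs)

        def __init__(cls, name, bases, namespace, **kwargs):
            print("5. Meta.__init__")
            super().__init__(name, bases, namespace)

    class Base:
        def __init_subclass__(cls, **kwargs):
            print("4. Base.__init_subclass__ (called from type.__new__)")
            super().__init_subclass__(**kwargs)

    class C(Base, metaclass=Meta):
        print("2. the class body executes")

The bullet list above is essentially saying that steps 2, 4, and 5 haven't
happened yet for a forward-declared class object.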


From my perspective, the "__forward__" attribute is an internal 
implementation detail, and something that user code should strictly 
leave alone.  But if it's considered too dangerous to expose to users, 
we could hide it in the class object and not expose it to users.  I'm 
not convinced that's the right call; I think the Consenting Adults rule 
still applies.  Python lets you do crazy things like assigning to 
__class__, and resurrecting objects from inside their __del__; manually 
removing __forward__ seems like it falls into the same category.  It's 
not recommended, and we might go so far as to say doing that results in 
undefined behavior.  But Python shouldn't stand in your way if you 
really think you need to do it for some reason.



//arry/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-23 Thread Larry Hastings


On 4/22/22 23:41, Mehdi2277 wrote:

My main question for this approach is how would this work with type checkers?


It would be new syntax for Python, so type checkers would have to 
understand it.




Is there any restriction that forward class's continuation must appear in same 
module?


No.



If it's allowed that a forward class may be continued in a different module I 
do not see how type checker like mypy/pyright could handle that. Classes are 
generally viewed as closed and fully defined within type checker. Monkey 
patching at runtime later is not supported.


If it became official Python syntax, I suspect they'd figure out a way 
to support it.


They might require that the expression used in the "continue class" 
statement map to the original "forward class" declaration, e.g. they 
might stipulate that they don't support this:


   forward class X
   random_name = X

   continue class random_name:
    ...

But rather than speculate further, perhaps someone who works on one of 
the static type analysis checkers will join the discussion and render an 
informed opinion about how easy or hard it would be to support "forward 
class" and "continue class".




One other edge case here is how would you forward declare an annotation for a 
type that like this,

if TYPE_CHECKING:
   import numpy

def f(x: numpy.ndarray) -> None:
  ...

forward declaring ndarray here would not make numpy.ndarray available.


In this case, adding forward declarations for classes in "numpy" would 
be up to the "numpy" module.  One approach might look more like this:


   import numpy # contains forward declarations
   if TYPE_CHECKING:
    import numpy.impl

Though numpy presumably couldn't do this while they still supported 
older versions of Python.  That's one downside of using new syntax--you 
can't use it until you stop support for old versions of Python that 
predate it.




Would you forward declare modules? Is that allowed?


I haven't proposed any syntax for forward-declaring modules, only classes.



I'm confused in general how if TYPE_CHECKING issue is handled by this approach. 
Usually class being imported in those blocks is defined normally (without 
continue class) somewhere else.


My proposal should mate well with "if TYPE_CHECKING".  You would define 
your forward classes in a module that does get imported, but leave the 
continue classes in a separate module that is only imported "if 
TYPE_CHECKING", as per my example with "numpy" above.



//arry/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-22 Thread Larry Hastings


On 4/22/22 22:03, Chris Angelico wrote:

Anyhow, [a forward-defined class object is] a class, with some special
features (notably that you can't instantiate it).


Yes.  Specifically, here's my intention for "forward-defined class 
objects": you can examine some generic dunder values (__name__, 
__mro__), and you can take references to it.  You can't instantiate it 
or meaningfully examine its contents, because it hasn't been fully 
initialized yet.




It seems odd that you define a blessed way of monkeypatching a class,
but then demand that it can only be done once unless you mess with
dunders. Why not just allow multiple continuations?


I think monkeypatching is bad, and I'm trying to avoid Python condoning it.

On that note, the intent of my proposal is that "continue class" is not 
viewed as "monkeypatching" the class, it's the second step in defining 
the class.


I considered attempting to prevent the user modifying the 
"forward-declared class object".  But a) that just seemed like an arms 
race with the user--"oh yeah? well watch THIS!" and b) I thought the 
Consenting Adults rule applied.


Still, it's not the intent of my PEP to condone or facilitate 
monkeypatching.



//arry/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-22 Thread Larry Hastings


On 4/22/22 20:58, Steven D'Aprano wrote:

On Fri, Apr 22, 2022 at 06:13:57PM -0700, Larry Hastings wrote:


This PEP proposes an additional syntax for declaring a class which splits
this work across two statements:
* The first statement is `forward class`, which declares the class and binds
   the class object.
* The second statement is `continue class`, which defines the contents
   of the class in the "class body".

To be clear: `forward class` creates the official, actual class object.
Code that wants to take a reference to the class object may take references
to the `forward class` declared class, and interact with it as normal.
However, a class created by `forward class` can't be *instantiated*
until after the matching `continue class` statement finishes.

Since the "forward class" is a real class,


It's a "forward-declared class object".  It's the real class object, but 
it hasn't been fully initialized yet, and won't be until the "continue 
class" statement.




it doesn't need any new
syntax to create it. Just use plain old regular class syntax.

 class X(object):
 """Doc string"""
 attribute = 42

And now we have our X class, ready to use in annotations.

To add to it, or "continue class" in your terms, we can already do this:

 X.value = "Hello World"


But if X has a metaclass that defines __new__, "value" won't be defined 
yet, so metaclass.__new__ won't be able to react to and possibly modify 
it.  Similarly for metaclass.__init__ and BaseClass.__init_subclass__.


So, while your suggested technique doesn't "break" class creation per 
se, it prevents the user from benefiting from metaclasses and base 
classes using these advanced techniques.
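
A quick demonstration of the difference (ordinary Python): creation-time
hooks only see what was in the class body, not attributes bolted on
afterwards.

    class Base:
        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            print("at creation time, has 'value':", hasattr(cls, "value"))

    class X(Base):
        """Doc string"""
        attribute = 42            # no 'value' in the class body

    X.value = "Hello World"       # added later; __init_subclass__ never saw it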




Counter proposal:

`continue class expression:` evaluates expression to an existing class
object (or raises an exception) and introduces a block. The block is
executed inside that class' namespace, as the `class` keyword does,
except the class already exists.


If "continue class" is run on an already-created class, this breaks the 
functionality of __prepare__, which creates the namespace used during 
class body execution and is thrown away afterwards.  The "dict-like 
object" returned by __prepare__ will have been thrown away by the time 
"continue class" is executed.


Also, again, this means that the contents added to the class in the 
"continue class" block won't be visible to metaclass.__new__, 
metaclass.__init__, and BaseClass.__init_subclass__.


Also, we would want some way of preventing the user from running 
"continue class" multiple times on the same class--else we accidentally 
condone monkey-patching in Python, which we don't want to do.




Isn't this case solved by either forward references:

 class A:
 value: "B"

or by either of PEP 563 or PEP 649?


It is, but:

a) manual stringizing was rejected by the community as too tiresome and 
too error-prone (the syntax of the string isn't checked until you run 
your static type analysis tool).  Also, if you need the actual Python 
value at runtime, you need to eval() it, which causes a lot of headaches.


b) PEP 649 doesn't solve this only-slightly-more-advanced case:

   @dataclass
   class A:
       value: B

   @dataclass
   class B:
       value: A

as the dataclass decorator examines the contents of the class, including 
its annotations.


c) PEP 563 has the same "what if you need the actual Python value at 
runtime" problem as manual stringization, which I believe is why the SC 
has delayed its becoming default behavior.


Perhaps my example for b) would be a better example for the PEP.



That could become:

 class A:
 pass

 class B:
 value: A  # This is fine, A exists.

 A.value: B  # And here B exists, so this is fine too.

No new syntax is needed. This is already legal.


It's legal, but it doesn't set the annotation of "value" on A. Perhaps 
this is just a bug and could be fixed.  (TBH I'm not sure what the 
intended semantics of this statement are, or where that annotation ends 
up currently.  I couldn't find it in A.__annotations__ or the module's 
__annotations__.  Is it just forgotten?)


I assert this approach will be undesirable to Python programmers.  This 
makes for two very different-feeling approaches to defining the members 
of a class.  One of the goals of my PEP was to preserve the existing 
"feel" of Python as much as possible.


Also, as previously mentioned, your technique prevents "A.value", and 
all other attributes and methods set using this technique, from being 
visible to metaclass.__new__, metaclass.__init__, and 
BaseClass.__init_subclass__.




This proposed `forward class` / `continue class` syntax should permit
solving *every* forward-reference and circular-reference problem faced
in Python, using an elegant and Pythonic new syntax.

[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-22 Thread Larry Hastings


On 4/22/22 19:36, Terry Reedy wrote:

On 4/22/2022 9:13 PM, Larry Hastings wrote:

 forward class X()


New keywords are a nuisance.  And the proposed implementation seems 
too complex.


My proposed implementation seemed necessary to handle the complexity of 
the problem.  I would welcome a simpler solution that also worked for 
all the same use cases.



How about a 'regular' class statement with a special marker of some 
sort.  Example: 'body=None'.


It's plausible.  I take it "body=None" would mean the declaration would 
not be permitted to have a colon and a class body.  So now we have two 
forms of the "class" statement, and which syntax you're using is 
controlled by a named parameter argument.


I think this "body=None" argument changing the syntax of the "class" 
statement is clumsy.  It lacks the visibility and clarity of "forward 
class"; the keyword here makes it pretty obvious that this is not a 
conventional class declaration.  So I still prefer "forward class".


In my PEP I proposed an alternate syntax for "forward class": "def 
class", which has the feature that it doesn't require adding a new 
keyword.  But again, I don't think it's as clear as "forward class", and 
I think clarity is vital here.



Either __new__ or __init__ could raise XError("Cannot instantiate 
until this is continued."), so no special instantiation code would be 
needed and X could be a real class, with a special limitation.


Yes, my proposal already suggests that __new__ raise an exception.  
That's not the hard part of the problem.


The problem with X being a "real class" is that creating a "real class" 
means running all the class creation code, and a lot of the class 
creation code is designed with the assumption that the namespace has 
already been filled by executing the class body.


For example, Enum in enum.py relies on EnumMeta, which defines __new__, 
which examines the already-initialized namespace of the class you want 
to create.  If you propose a "body=None" class be a "real class object", 
then how do you declare a class that inherits from Enum using 
"body=None"?  Creating the class object will call EnumMeta.__new__, 
which needs to examine the namespace, which hasn't been initialized yet.
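
Concretely (ordinary Python, nothing proposed here):

    # EnumMeta.__new__ walks the class namespace to turn these assignments
    # into members, so the namespace must be populated before the class
    # object is created.
    import enum

    class Color(enum.Enum):
        RED = 1
        GREEN = 2

    print(list(Color))   # [<Color.RED: 1>, <Color.GREEN: 2>]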


Changing the class object creation code so we can construct a class 
object in two steps, with the execution of the class body being part of 
the second step, was the major sticking point--and the source of most of 
the complexity of my proposal.




     continue class X:
         # class body goes here
         def __init__(self, key):
             self.key = key


'continue' is already a keyword. 


I'm aware.  I'm not sure why you mentioned it.


Given that X is a real class, could implementation be 
X.__dict__.update(new-body-dict)


That's what the proof-of-concept does.  But the proof-of-concept fails 
with a lot of common use cases:


 * metaclasses that override __new__ or __init__
 * base classes that implement __init_subclass__

because these methods assume the namespace of the class has already been 
filled in, but it doesn't get filled in until the @continue_() class 
decorator.  Handling these cases is why my proposal is sadly as complex 
as it is, and why in practice the proof-of-concept doesn't work a lot of 
the time.



//arry/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-22 Thread Larry Hastings


On 4/22/22 19:17, Chris Angelico wrote:

I'm unsure about the forward class. How is it different from subclassing an ABC?


They're just different objects.  A subclass of an ABC is either itself 
another abstract base class, which will never be instantiatable, or a 
non-abstract class, which is immediately instantiatable.  A 
forward-declared class object is not currently instantiatable, and is 
not fully defined, but will become fully defined and instantiatable 
after the matching "continue class" statement.




What happens if you try to continue a non-forward class?


From the proto-PEP:

   Executing a `continue class` statement with a class defined by the
   `class` statement raises a `ValueError` exception.

And also:

   It's expected that knowledgeable users will be able to trick Python
   into executing `continue class` on the same class multiple times by
   interfering with "dunder" attributes.  The same tricks may also
   permit users to trick Python into executing `continue class` on a
   class defined by the `class` statement. This is undefined and
   unsupported behavior, but Python will not prevent it.


//arry/


[Python-Dev] Proto-PEP part 3: Closing thoughts on "forward class", etc.

2022-04-22 Thread Larry Hastings



Just a quick note from me on the proto-PEP and the two proposed 
implementations.  When I started exploring this approach, I didn't 
suspect it'd require such sweeping changes to be feasible. Specifically, 
I didn't think I was going to propose changing the fundamental mechanism 
used to create class objects.  That's an enormous change, and it makes 
me uncomfortable; I suspect I won't be alone in having that reaction.


The alternate implementation with proxy objects was born of my 
reaction, but it's worrisome too.  It's a hack--though whether it's a 
"big" hack or a "small" hack is debatable.  Anyway, I'm specifically 
worried about the underlying class object escaping the proxy and 
becoming visible inside Python somehow.  If that happened, we'd have two 
objects representing the same "type" at runtime, a situation that could 
quickly become confusing.


Also, as I hopefully made clear in the "alternate implementation" 
approach using a class proxy object, I'm not 100% certain that the proxy 
will work in all cases.  I ran out of time to investigate it more--I 
wanted to post this idea with some lead time before the 2022 Language 
Summit, so that folks had time to read and digest it and discuss it 
before the Summit.  I have some implementation ideas--the "class proxy" 
class may need its own exotic metaclass.


Ultimately I'm posting this proto-PEP to foster discussion.  I'm 
confident that "forward class" / "continue class" could solve all our 
forward-reference and circular-reference problems; the questions we need 
to collectively answer are:


 * how should the implementation work, and
 * is the cost of the implementation worth it?


Best wishes,


//arry/


[Python-Dev] Proto-PEP part 2: Alternate implementation proposal for "forward class" using a proxy object

2022-04-22 Thread Larry Hastings


Here's one alternate idea for how to implement the "forward class" syntax.

The entire point of the "forward class" statement is that it creates
the real actual class object.  But what if it wasn't actually the
"real" class object?  What if it was only a proxy for the real object?

In this scenario, the syntax of "forward class" remains the same.
You define the class's bases and metaclass.  But all "forward class"
does is create a simple, lightweight class proxy object.  This object
has a few built-in dunder values, __name__ etc.  It also allows you
to set attributes, so let's assume (for now) it calls
metaclass.__prepare__ and uses the returned "dict-like object" as
the class proxy object __dict__.

"continue class" internally performs all the rest of the
class-creation machinery.  (Everything except __prepare__, as we
already called that.)  The first step is metaclass.__new__, which
returns the real class object.  "continue class" takes that
object and calls a method on the class proxy object that says
"here's your real class object".  From that moment on, the proxy
becomes a pass-through for the "real" class object, and nobody
ever sees a reference to the "real" class object ever again.
Every interaction with the class proxy object is passed through
to the underlying class object.  __getattribute__ calls on the
proxy look up the attribute in the underlying class object.  If
the object returned is a bound method object, it rebinds that
callable with the class proxy instead, so that the "self" passed
in to methods is the proxy object.  Both base_cls.__init_subclass__
and cls.__init__ see the proxy object during class creation.  As far
as Python user code is concerned, the class proxy *is* the class,
in every way, important or not.
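
A very rough pure-Python sketch of the proxy idea, just to make the
pass-through behavior concrete.  Illustrative only: the real mechanism
would live in the interpreter, this uses __getattr__ rather than
__getattribute__, and it skips the bound-method rebinding and all the
hard cases discussed below (C metaclasses, identity, __del__, ...).

    class ClassProxy:
        def __init__(self, name):
            self._name = name
            self._target = None            # filled in at "continue class" time

        def _bind(self, real_cls):
            # The "here's your real class object" step.
            self._target = real_cls

        def __getattr__(self, attr):
            # Only called for names not found on the proxy itself.
            if self._target is None:
                raise AttributeError(
                    f"forward class {self._name!r} has not been continued")
            return getattr(self._target, attr)

        def __call__(self, *args, **kwargs):
            # Instantiating the proxy instantiates the underlying class.
            if self._target is None:
                raise TypeError(
                    f"cannot instantiate forward class {self._name!r}")
            return self._target(*args, **kwargs)

    X = ClassProxy("X")

    class _RealX:                          # what "continue class X:" would build
        def greet(self):
            return "hello from X"

    X._bind(_RealX)
    print(X.greet(X()))                    # prints "hello from X"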

The upside: this moves all class object creation code into "continue
class" call.  We don't have to replace __new__ with two new calls.

The downside: a dinky overhead to every interaction with a "forward
class" class object and with instances of a "forward class" class
object.


A huge concern: how does this interact with metaclasses implemented
in C?  If you make a method call on a proxy class object, and that
calls a C function from the metaclass, we'd presumably have to pass
in the "real class object", not the proxy class object.  Which means
references to the real class object could leak out somewhere, and
now we have a real-class-object vs proxy-class-object identity crisis.
Is this a real concern?


A possible concern: what if metaclass.__new__ keeps a reference to
the object it created?  Now we have two objects with an identity
crisis.  I don't know if people ever do that.  Fingers crossed that
they don't.  Or maybe we add a new dunder method:

    @special_cased_staticmethod
    metaclass.__bind_proxy__(metaclass, proxy, cls)

This tells the metaclass "bind cls to this proxy object", so
metaclasses that care can update their database or whatever.
The default implementation uses the appropriate mechanism,
whatever it is.

One additional probably-bad idea: in the case where it's just a
normal "class" statement, and we're not binding it to a proxy,
should we call this?

    metaclass.__bind_proxy__(metaclass, None, cls)

The idea there being "if you register the class objects you create,
do the registration in __bind_proxy__, it's always called, and you'll
always know the canonical object in there".  I'm guessing probably not,
in which case we tell metaclasses that track the class objects we
create "go ahead and track the object you return from __new__, but
be prepared to update your tracking info in case we call __bind_proxy__
on you".


A small but awfully complicated wrinkle here: what do we do if the
metaclass implements __del__?  Obviously, we have to call __del__
with the "real" class object, so it can be destroyed properly.
But __del__ might resurrect that object, which means someone took a 
reference to it.




One final note.  Given that, in this scenario, all real class creation
happens in "continue class", we could move the bases and metaclass
declaration down to the "continue class" statement.  The resulting
syntax would look like:

    forward class X

    ...

    continue class X(base1, base2, metaclass=AmazingMeta, rocket="booster")

Is that better? worse? doesn't matter?  I don't have an intuition about
it right now--I can see advantages to both sides, and no obvious
deciding factor.  Certainly this syntax prevents us from calling
__prepare__ so early, so we'd have to use a real dict in the "forward
class" proxy object until we reached continue, then copy the values from
that dict into the "dict-like object", etc.


[Python-Dev] Proto-PEP part 1: Forward declaration of classes

2022-04-22 Thread Larry Hastings



This document is a loose proto-PEP for a new "forward class" / "continue 
class" syntax.  Keep in mind, the formatting is a mess. If I wind up 
submitting it as a real PEP I'll be sure to clean it up first.



/arry

--


PEP : Forward declaration of classes

Overview
--------


Python currently has one statement to define a class, the `class` statement:

```Python
    class X():
        # class body goes here
        def __init__(self, key):
            self.key = key
```

This single statement declares the class, including its bases and metaclass,
and also defines the contents of the class in the "class body".

This PEP proposes an additional syntax for declaring a class which splits
this work across two statements:
* The first statement is `forward class`, which declares the class and binds
  the class object.
* The second statement is `continue class`, which defines the contents
  of the class in the "class body".

To be clear: `forward class` creates the official, actual class object.
Code that wants to take a reference to the class object may take references
to the `forward class` declared class, and interact with it as normal.
However, a class created by `forward class` can't be *instantiated*
until after the matching `continue class` statement finishes.

Defining class `X` from the previous example using this new syntax would
read as follows:

```
    forward class X()

    continue class X:
        # class body goes here
        def __init__(self, key):
            self.key = key
```

This PEP does not propose altering or removing the traditional `class`
statement; it would continue to work as before.


Rationale
---------

Python programmers have had a minor problem with classes for years: there's
no way to have early-bound circular dependencies between objects. If A
depends on B, and B depends on A, there's no linear order that allows
you to cleanly declare both.

Most of the time, the dependencies were in late-binding code, e.g. A refers
to B inside a method.  So this was rarely an actual problem at runtime.
When this problem did arise, in code run at definition-time, it was usually
only a minor headache and could be easily worked around.

But the explosion of static type analysis in Python, particularly with
the `typing` module and the `mypy` tool, has made circular definition-time
dependencies between classes commonplace--and much harder to solve.  Here's
one simple example:

```Python
    class A:
        value: B

    class B:
        value: A
```

An attribute of `B` is defined using a type annotation of `A`, and an
attribute of `A` is defined using a type annotation of `B`. There's
no order to these two definitions that works; either `A` isn't defined
yet, or `B` isn't defined yet.

Various workarounds and solutions have been proposed to solve this problem,
including two PEPs: PEP 563 (automatic stringized annotations) and PEP 649
(delayed evaluation of annotations using functions).
But nothing so far has been both satisfying and complete; either it
is wordy and clumsy to use (manually stringizing annotations), or it
adds restrictions and causes massive code breakage for runtime use of
annotations (PEP 563), or it simply doesn't solve every problem (PEP 649).
This proposed  `forward class` / `continue class` syntax should permit
solving *every* forward-reference and circular-reference problem faced
in Python, using an elegant and Pythonic new syntax.

As a side benefit, `forward class` and `continue class` syntax enables
rudimentary separation of "interface" from "implementation", at least for
classes.  A user seeking to "hide" the implementation details of their
code could put their class definitions in one module, and the
implementations of those classes in a different module.

This new syntax is not intended to replace the traditional `class`
declaration syntax in Python.  If this PEP were accepted, the `class`
statement would still be the preferred mechanism for creating classes
in Python; `forward class` should only be used when it confers some
specific benefit.


Syntax
------

The `forward class` statement is the same as the `class` statement,
except it doesn't end with a colon and is not followed by an indented block.
Without any base classes or metaclass, the `forward class` statement is
as follows:

```
    forward class X
```

This would declare class `X`.

If `X` needs base classes or metaclass, the corresponding `forward class`
statement would be as follows, rendered in a sort of "function prototype"
manner:

```
    forward class X(*bases, metaclass=object, **kwargs)
```

The `continue class` statement is similar to a `class` statement
without any bases or metaclass.  It ends with a colon,
and is followed by the "class body":

```
    continue class X:
        # class body goes here
        pass
```

One important difference: the `X` in `continue class X:` is not a *name*,
it's an *expression*.  This code is valid:

```
    forward 

[Python-Dev] Re: Defining tiered platform support

2022-03-15 Thread Larry Hastings


On 3/14/22 20:31, Brett Cannon wrote:



On Fri, Mar 11, 2022 at 5:17 PM Victor Stinner  
wrote:


It would be great to have the list of supported platforms per
Python version!


I could see the table in PEP 11 being copied into the release PEPs.



By "release PEPs", you mean the release schedule PEPs, like PEP 619 
"Python 3.10 Release Schedule"?


   https://peps.python.org/pep-0619/

Because, yeah, I think that would be the best place for it. Though then 
maybe the name of the PEP should change, as the document is doing double 
duty.



//arry/


[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount" (round 2)

2022-02-22 Thread Larry Hastings

On 2/22/22 6:00 PM, Eric Snow wrote:

On Sat, Feb 19, 2022 at 12:46 AM Eric Snow  wrote:

Performance
---

A naive implementation shows `a 4% slowdown`_.
Several promising mitigation strategies will be pursued in the effort
to bring it closer to performance-neutral.  See the `mitigation`_
section below.

FYI, Eddie has been able to get us back to performance-neutral after
applying several of the mitigation strategies we discussed. :)



Are these optimizations specifically for the PR, or are these 
optimizations we could apply without taking the immortal objects? Kind 
of like how Sam tried to offset the nogil slowdown by adding 
optimizations that we went ahead and added anyway ;-)



//arry/



[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount"

2022-02-21 Thread Larry Hastings


On 2/21/22 22:06, Chris Angelico wrote:

On Mon, 21 Feb 2022 at 16:47, Larry Hastings  wrote:

While I don't think it's fine to play devil's advocate, given the choice between "this will 
help a common production use-case" (pre-fork servers) and "this could hurt a hypothetical 
production use case" (long-running applications that reload modules enough times this could 
waste a significant amount of memory), I think the former is more important.


Can the cost be mitigated by reusing immortal objects? So, for
instance, a module-level constant of 60*60*24*365 might be made
immortal, meaning it doesn't get disposed of with the module, but if
the module gets reloaded, no *additional* object would be created.

I'm assuming here that any/all objects unmarshalled with the module
can indeed be shared in this way. If that isn't always true, then that
would reduce the savings here.



It could, but we don't have any general-purpose mechanism for that.  We 
have "interned strings" and "small ints", but we don't have e.g. 
"interned tuples" or "frequently-used large ints and floats".


That said, in this hypothetical scenario wherein someone is constantly 
reloading modules but we also have immortal objects, maybe someone could 
write a smart reloader that lets them somehow propagate existing 
immortal objects to the new module. It wouldn't even have to be that 
sophisticated, just some sort of hook into the marshal step combined 
with a per-module persistent cache of unmarshalled constants.



//arry/


[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount"

2022-02-21 Thread Larry Hastings


On 2/21/22 21:44, Larry Hastings wrote:


While I don't think it's fine to play devil's advocate,"



Oh!  Please ignore the word "don't" in the above sentence.  I /do/ think 
it's fine to play devil's advocate.


Sheesh,


//arry/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/TABGFU4OFTUDPGF72LY5QMSDTKDUUHHY/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount"

2022-02-20 Thread Larry Hastings


While I don't think it's fine to play devil's advocate, given the choice 
between "this will help a common production use-case" (pre-fork servers) 
and "this could hurt a hypothetical production use case" (long-running 
applications that reload modules enough times this could waste a 
significant amount of memory), I think the former is more important.



//arry/

On 2/20/22 06:01, Antoine Pitrou wrote:

On Sat, 19 Feb 2022 12:05:22 -0500
Larry Hastings  wrote:

On 2/19/22 04:41, Antoine Pitrou wrote:

On Fri, 18 Feb 2022 14:56:10 -0700
Eric Snow   wrote:

On Wed, Feb 16, 2022 at 11:06 AM Larry Hastings   wrote:

   He suggested(*) all the constants unmarshalled as part of loading a module should be 
"immortal", and if we could rejigger how we allocated them to store them in 
their own memory pages, that would dovetail nicely with COW semantics, cutting down on 
the memory use of preforked server processes.

Cool idea.  I may mention it in the PEP as a possibility.  Thanks!

That is not so cool if for some reason an application routinely loads
and unloads modules.

Do applications do that for some reason?  Python module reloading is
already so marginal, I thought hardly anybody did it.

I have no data point, but I would be surprised if there wasn't at least
one example of such usage somewhere in the world, for example to
hotload fixes in specific parts of an application without restarting it
(or as part of a plugin / extension / mod system).

There's also the auto-reload functionality in some Web servers or
frameworks, but that is admittedly more of a development feature.

Regards

Antoine.


___
Python-Dev mailing list --python-dev@python.org
To unsubscribe send an email topython-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived 
athttps://mail.python.org/archives/list/python-dev@python.org/message/C2MWXHPOFFH5CLLPKJCVEQD4EGHKTD24/
Code of Conduct:http://python.org/psf/codeofconduct/___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/VYGLHB4JXSYKTNJ2AOLDFUKO4GDHWVIV/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount"

2022-02-19 Thread Larry Hastings


On 2/19/22 04:41, Antoine Pitrou wrote:

On Fri, 18 Feb 2022 14:56:10 -0700
Eric Snow  wrote:

On Wed, Feb 16, 2022 at 11:06 AM Larry Hastings  wrote:

  He suggested(*) all the constants unmarshalled as part of loading a module should be 
"immortal", and if we could rejigger how we allocated them to store them in 
their own memory pages, that would dovetail nicely with COW semantics, cutting down on 
the memory use of preforked server processes.

Cool idea.  I may mention it in the PEP as a possibility.  Thanks!

That is not so cool if for some reason an application routinely loads
and unloads modules.



Do applications do that for some reason?  Python module reloading is 
already so marginal, I thought hardly anybody did it.


Anyway, my admittedly-dim understanding is that COW is most helpful for 
the "pre-fork" server model, and I bet those folks never bother to 
unload modules.



//arry/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/N7UFJCMQLO6W4HUJ6DL5M55JOU4CEX4K/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount"

2022-02-16 Thread Larry Hastings


I experimented with this at the EuroPython sprints in Berlin years ago.  
I was sitting next to MvL, who had an interesting observation about it.  
He suggested(*) all the constants unmarshalled as part of loading a 
module should be "immortal", and if we could rejigger how we allocated 
them to store them in their own memory pages, that would dovetail nicely 
with COW semantics, cutting down on the memory use of preforked server 
processes.



//arry/

(*) Assuming I remember what he said accurately, of course.  If any of 
this is dumb assume it's my fault.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/E2AVH3BSINO7Z55BGQ47LSIE5VKTOGFB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 674 – Disallow using macros as l-value (version 2)

2022-01-26 Thread Larry Hastings

On 1/26/22 3:02 PM, Victor Stinner wrote:

Hi,

My PEP 674 proposed to change PyDescr_TYPE() and PyDescr_NAME()
macros. This change breaks M2Crypto and mecab-python3 projects in code
generated by SWIG. I tried two solutions to prevent SWIG accessing
PyDescrObject members directly:
https://bugs.python.org/issue46538

At the end, IMO it's too much work, whereas there is no need in the
short term to modify the PyDescrObject structure, or structure
inheriting from PyDescrObject.

So I excluded PyDescr_TYPE() and PyDescr_NAME() macros from PEP 674 to
leave them unchanged:
https://python.github.io/peps/pep-0674/#pydescr-name-and-pydescr-type-are-left-unchanged



Just so I understand: is this effectively permanent?  My first thought 
was, we get SWIG to change its code generation, and in a couple of years 
we can follow up and make this change.  But perhaps there's so much SWIG 
code out there already, and the change would propagate so slowly, that 
it's not even worth considering.  Is it something like that?  Or is it 
just that it's such a minor change that it's not worth fighting about?



//arry/

p.s. thanks for doing this!

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/KQD4MW2DIFQYDYP25Z6A5XHQHZW75NEV/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: The current state of typing PEPs

2021-11-29 Thread Larry Hastings


On 11/29/21 7:10 PM, Inada Naoki wrote:

Anyone against making a statement that "PEP 563 will never be the
default behavior"?


I think only the SC is empowered to make such a statement.



Then, we do not need to decide "PEP 563 or 649".
We can focus on whether we can replace "stock semantics + opt-in PEP
563" with PEP 649 or not.


I doubt the current status quo--keeping PEP 563 as optional behavior, 
permanently--is viable long-term.  It causes problems for most runtime 
uses of annotations, which we were collectively basically ignoring until 
now.  As runtime use of annotations /and/ typing both become more 
pervasive, these problems are only going to grow.



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/JHOJWKZZWW4TB6TS7GX26WERKAEAC7VH/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: The current state of typing PEPs

2021-11-29 Thread Larry Hastings


On 11/29/21 2:56 PM, Barry Warsaw wrote:

PEP 563 and 649 have visible effects that even within that domain can have 
important side effects.  For example, PEP 563’s loss of local scope, which even 
“de-stringify-ing” can’t recover.  This is what we need help with.


Well, sure.  If PEP 563 and 649 didn't have visible effects, there'd be 
no point in doing them.  That said, I suggest 649 does a lovely job of 
avoiding /undesirable/ side-effects.


Sure, 649 has observable side effects.  For example, you can detect 
whether or not 649 is active by rebinding a name after it's used in an 
annotation but before examining that annotation at runtime.  This seems 
harmless--unlikely to happen in production code, and easily remedied if 
someone did trip over it.
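
To make that concrete, here's a tiny example of the kind of rebinding I 
mean (module-level code, names invented):

   Marker = int
   def f(x: Marker): ...
   Marker = str

   # Stock semantics evaluate the annotation at definition time, so it's int;
   # under PEP 649 it's evaluated lazily, right here, and comes back as str.
   print(f.__annotations__['x'])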


A more credible side effect: if you use an undefined name in an 
annotation, you won't notice at compile-time.  Now, this is actually 
649's major feature!  But there are also scenarios where this feature 
could cover up a bug, like if you misspell a name--you won't notice 
until you examine the annotation at runtime (or, more likely, until you 
run your static analyzer).  563 has this same behavior--and it wasn't 
enough to prevent 563 being accepted.  So I assume this wouldn't be 
enough to prevent accepting 649 either.


649 has effects on memory usage and performance, but honestly I'm not 
worried about these.  I don't think the memory usage and performance of 
the prototype were particularly bad.  Anyway, as I've said many times: 
we should figure out the semantics we want first, and then we can worry 
about optimization.  The Python core dev community has no end of smart 
people who love optimizing things--I'm sure if 649 was accepted and 
merged, the optimizations would start rolling in.


Then of course there are also things 649 simply doesn't do, e.g. resolve 
the "if TYPE_CHECKING" situation.  But it's not appropriate to call that 
a "side effect" per se.


And that's my list.  If anybody knows of other visible side effects from 
649, naturally you should contact the SC.  And/or me, if you think we 
could change 649 to mitigate it without losing its major features.



Happy holidays,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/XD6A3GG3BUW7V57FS36B5JPBH5OLN347/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Oh look, I've been subscribed to python/issues-test-2 notifications again

2021-11-04 Thread Larry Hastings


I guess this is part of the migration from bpo to GitHub issues? Maybe 
the initial work could be done in a private repo, to cut down on the 
spurious email notifications to literally everybody subscribed to 
cpython?  Which is a lot of people.



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/H5KW6GRHIF2VWOGNRH5367WB3K2GPARO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Naming convention for AST types

2021-10-29 Thread Larry Hastings


Hey, as public mailing list mistakes go, that one's pretty mild.


//arry/


On 10/28/21 6:35 PM, Jeremiah Vivian wrote:

Sorry for the two replies, I didn't think the first one would be sent.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/DNZMANF45N6WGGWYYN5JIQ67IDTCIL4Q/
Code of Conduct: http://python.org/psf/codeofconduct/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/EZBJIT4CYPVYRCBCPB2VA5PE2SB4OEI2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Type annotations, PEP 649 and PEP 563

2021-10-26 Thread Larry Hastings


On 10/26/21 5:22 PM, Bluenix wrote:

* Functions having the same signature share the same annotation tuple.

Is this true with code that have a mutable default?
[... examples deleted...]



You're confusing two disjoint concepts.

First of all, all your examples experiment with default values which are 
unrelated to their annotations.  None of your examples use or examine 
annotations.


Second, Inada-san was referring to the tuple of strings used to 
initialize the annotations for a function when PEP 563 (stringized 
annotations) is active.  This is a clever implementation tweak that 
first shipped with Python 3.10, which makes stringized annotations very 
efficient.  Since all the names and annotations are strings, rather than 
creating the dictionary at function binding time, they're stored in a 
tuple, and the dictionary is created on demand.  This tuple is a 
constant object, and marshalling a module automatically collapses 
duplicate constants into the same constant.  So identical PEP 563 
annotation tuples are collapsed into the same tuple.  Very nice!
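
For illustration, assuming CPython 3.10 or later with stringized 
annotations active:

   from __future__ import annotations

   def f(a: int, b: str) -> None: ...
   def g(a: int, b: str) -> None: ...

   # The values are plain strings; the dict is only built on first access.
   print(f.__annotations__)   # {'a': 'int', 'b': 'str', 'return': 'None'}

   # In a compiled (.pyc) module, f and g can wind up sharing the very same
   # constant tuple of names and annotation strings, because marshal
   # collapses duplicate constants.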



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/D7UKJYT533PAI4F6WV3EDWPFVUD2QVJW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Type annotations, PEP 649 and PEP 563

2021-10-23 Thread Larry Hastings


On 10/22/21 1:45 AM, Steven D'Aprano wrote:

Any other runtime annotation tool has to support strings, otherwise the
"from __future__ import annotations" directive will have already broken
it. If the tool does type-checking, then it should support stringified
annotations. They have been a standard part of type-hinting since 2014
and Python 3.5:

https://www.python.org/dev/peps/pep-0484/#forward-references

Any type-checking tool which does not already support stringified
references right now is broken.



It's a debatable point since "from __future__" behavior is always off by 
default.  I'd certainly agree that libraries /should/ support stringized 
annotations by now, considering they were nearly on by default in 3.10.  
But I wouldn't say stringized annotations were a "standard" part of 
Python, yet.  As yet they are optional.  Optional things aren't 
standard, and standard things aren't optional.



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/I4AC6DQL6FB3RG6DIRUVUDBJ2MP3BKLD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Type annotations, PEP 649 and PEP 563

2021-10-21 Thread Larry Hastings


On 10/21/21 1:17 AM, Steven D'Aprano wrote:

On Thu, Oct 21, 2021 at 04:48:28PM -0400, Larry Hastings wrote:


In Python, if you evaluate an undefined name, Python raises a
NameError.  This is so consistent I'm willing to call it a "rule".
Various folks have proposed an exception to this "rule": evaluating an
undefined name in an PEP 649 delayed annotation wouldn't raise
NameError, instead evaluating to some yet-to-be-determined value
(ForwardRef, AnnotationName, etc).  I don't think annotations are
special enough to "break the rules" in this way.

Can we have a code snippet illustrating that? I think this is what you
mean. Please correct me if I have anything wrong.

If I have this:

 from typing import Any
 def function(arg:Spam) -> Any: ...

then we have four sets of (actual or proposed) behaviour:

1. The original, current and standard behaviour is that Spam raises a
NameError at function definition time, just as it would in any other
context where the name Spam is undefined, e.g. `obj = Spam()`.

2. Under PEP 563 (string annotations), there is no NameError, as the
annotations stay as strings until you attempt to explicitly resolve them
using eval. Only then would it raise NameError.

3. Under PEP 649 (descriptor annotations), there is no NameError at
function definition time, as the code that resolves the name Spam (and
Any for that matter) is buried in a descriptor. It is only on inspecting
`function.__annotations__` at runtime that the code in the descriptor is
run and the name Spam will generate a NameError.

4. Guido would(?) like PEP 649 to be revised so that inspecting the
annotations at runtime would not generate a NameError. Since Spam is
unresolvable, some sort of proxy or ForwardRef (or maybe just a
string?) would have to be returned instead of raising.

Am I close?



Your description of the four behaviors is basically correct.



So if I have understood the options correctly, I like the idea of a
hybrid descriptor + stringy annotations solution.

- defer evaluation of the annotations using descriptors (PEP 649);

- on runtime evaluation, if a name does not resolve, stringify it (as
PEP 563 would have done implicitly);

- anyone who really wants to force a NameError can eval the string.



You might also be interested in my "Great Compromise" proposal from back 
in April:


   
https://mail.python.org/archives/list/python-dev@python.org/thread/WUZGTGE43T7XV3EUGT6AN2N52OD3U7AE/

Naturally I'd prefer PEP 649 as written.  The "compromise" I described 
would have the same scoping limitations as stringized annotations, one 
area where PEP 649 is a definite improvement.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HEAXKK7U2YNGSV6AFVDG7XNIL34DE5TF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Type annotations, PEP 649 and PEP 563

2021-10-21 Thread Larry Hastings


It's certainly not my goal to be misleading.  Here's my perspective.

In Python, if you evaluate an undefined name, Python raises a 
NameError.  This is so consistent I'm willing to call it a "rule".  
Various folks have proposed an exception to this "rule": evaluating an 
undefined name in an PEP 649 delayed annotation wouldn't raise 
NameError, instead evaluating to some yet-to-be-determined value 
(ForwardRef, AnnotationName, etc).  I don't think annotations are 
special enough to "break the rules" in this way.


Certainly this has the potential to be irritating for code using 
annotations at runtime, e.g. Pydantic.  Instead of catching the 
exception, it'd have to check for this substitute value.   I'm not sure 
if the idea is to substitute for the entire annotation, or for just the 
value that raised NameError; if the latter, Pydantic et al would have to 
iterate over every value in an annotation to look for this special value.


As a consumer of annotations at runtime, I'd definitely prefer that they 
raise NameError rather than silently substitute in this alternative value.



//arry

/

On 10/21/21 8:01 PM, Guido van Rossum wrote:
On Thu, Oct 21, 2021 at 10:35 AM Larry Hastings  wrote:


.

Your proposal is one of several suggesting that type annotations
are special enough to break the rules.  I don't like this idea. 
But you'll be pleased to know there are a lot of folks in the
"suppress the NameError" faction, including Guido (IIUC).


Yes, I still want this part of your PEP changed. I find your 
characterization of my position misleading -- there is no rule to 
break here(*), just an API.


(*) The Zen of Python does not have rules.

--
--Guido van Rossum (python.org/~guido <http://python.org/~guido>)
/Pronouns: he/him //(why is my pronoun here?)/ 
<http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/T72R3LFOSW4KFHO3GCZW3F2BCTGGAWD4/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Type annotations, PEP 649 and PEP 563

2021-10-21 Thread Larry Hastings


On 10/21/21 5:42 PM, Damian Shaw wrote:
Sorry for the naive question but why doesn't "TYPE_CHECKING" work 
under PEP 649?


I think I've seen others mention this but as the code object isn't 
executed until inspected then if you are just using annotations for 
type hints it should work fine?


Yes, it works fine in that case.


Is the use case wanting to use annotations for type hints and real 
time inspection but you also don't want to import the objects at run time?


If that's really such a strong use cause couldn't PEP 649 be modified 
to return a repr of the code object when it gets a NameError? Either 
by attaching it to the NameError exception or as part of a ForwardRef 
style object if that's how PEP 649 ends up getting implemented?


That's the use case.

Your proposal is one of several suggesting that type annotations are 
special enough to break the rules.  I don't like this idea. But you'll 
be pleased to know there are a lot of folks in the "suppress the 
NameError" faction, including Guido (IIUC).


See also this PR against co_annotations, proposing returning a new 
AnnotationName object when evaluating the annotations raises a NameError.


   https://github.com/larryhastings/co_annotations/pull/3


Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/O75ZCWHRCZEINYRO2QJFKKWXP4GW2WVM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Type annotations, PEP 649 and PEP 563

2021-10-21 Thread Larry Hastings


On 10/21/21 5:01 AM, Henry Fredrick Schreiner wrote:
PEP 649 was about the same as the current performance, but PEP 563 was 
significantly faster, since it doesn’t instantiate or deal with 
objects at all, which both the current default and PEP 563 do.


I don't understand what you're saying about how PEP 563 both does and 
doesn't instantiate objects.


PEP 649, and the current implementation of PEP 563, are definitely both 
faster than stock behavior when you don't examine annotations; both of 
these approaches don't "instantiate or deal with objects" unless you 
examine the annotations.  PEP 649 is roughly the same as stock when you 
do examine annotations.  PEP 563 is faster if you only ever examine the 
annotations as strings, but becomes /enormously/ slower if you examine 
the annotations as actual Python values.


The way I remember it, most of the negative feedback about PEP 649's 
performance concerned its memory consumption.  I've partially addressed 
that by always lazy-creating the function object.  But, again, I suggest 
that performance is a distraction at this stage.  The important thing is 
to figure out what semantics we want for the language.  We have so many 
clever people working on CPython, I'm sure this team will make whatever 
semantics we choose lean and performant.



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/XRATIM3V543XVD4U22T45UJ43ZPLTUU2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Python multithreading without the GIL

2021-10-12 Thread Larry Hastings
Oops! Sorry everybody, I meant that to be off-list.

Still, I hope you at least enjoyed my enthusiasm!


/arry

On Tue, Oct 12, 2021, 12:55 Larry Hastings  wrote:

>
> (off-list)
>
>
> On 10/11/21 2:09 PM, Sam Gross wrote:
>
> The ccbench results look pretty good: about 18.1x speed-up on "pi
> calculation" and 19.8x speed-up on "regular expression" with 20 threads
> (turbo off). The latency and throughput results look good too.
>
>
> JESUS CHRIST
>
>
>
> */arry*
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/H4QIVWY7RB4A765FYH2JVKJM52V42B4U/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Python multithreading without the GIL

2021-10-12 Thread Larry Hastings


(off-list)


On 10/11/21 2:09 PM, Sam Gross wrote:
The ccbench results look pretty good: about 18.1x speed-up on "pi 
calculation" and 19.8x speed-up on "regular expression" with 20 
threads (turbo off). The latency and throughput results look good too.



JESUS CHRIST



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/CKRGEP64K4YCGV2KJEIO4NN7FASB5ZJA/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Python multithreading without the GIL

2021-10-08 Thread Larry Hastings


On 10/7/21 8:52 PM, Sam Gross wrote:
I've been working on changes to CPython to allow it to run without the 
global interpreter lock.



Before anybody asks: Sam contacted me privately some time ago to pick my 
brain a little.  But honestly, Sam didn't need any help--he'd already 
taken the project further than I'd ever taken the Gilectomy.  I have 
every confidence in Sam and his work, and I'm excited he's revealed it 
to the world!



Best wishes,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/CCGH6COYQGCAFZWD32ROUOHRSE4BUL3P/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 654 except* formatting

2021-10-06 Thread Larry Hastings


On 10/6/21 2:34 PM, Łukasz Langa wrote:
On 6 Oct 2021, at 12:06, Larry Hastings  wrote:


It seems like, for this to work, "group" would have to become a keyword.


No, just like `match` and `case` didn't have to.


This would play havoc with a lot of existing code.

Extraordinary claims require extraordinary evidence, Larry. I maintain 
this will be entirely backwards compatible.



My claim is that making "group" a hard-coded keyword, visible at all 
times, and thus no longer permitting use of "group" as an identifier, 
would play havoc with a lot of existing code.  I don't think it's an 
extraordinary claim to say that "group" is a reasonably popular 
identifier.  For example, I offer the 1,117 uses of the word "group" in 
the Python 3.10.0 Lib/ directory tree.  (I admit I didn't review them 
all to see which ones were actual identifiers, and which ones were in 
strings or documentation.)


If the proposal is to add it as some "it's only a keyword in this 
context" magic thing, a la how "async"/"await" were "soft keywords" in 
3.5, and if we otherwise would permit the word "group" to be used as an 
identifier in perpetuity--okay, it won't cause this problem.



We can even make its error message smarter than the default NameError, 
since -- as I claim -- it's terribly unlikely somebody would mean to 
name their dynamic exception collection "group".


I concede I don't completely understand PEP 654 yet, much less the 
counter-proposals flying around right now.  But it does seem like 
"except group" has the potential to be ambiguous, given that "group" is 
a reasonably popular identifier.



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/WCFDNJKKFHWVIS64FR3ZOQGOKAWRITGT/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 654 except* formatting

2021-10-06 Thread Larry Hastings


It seems like, for this to work, "group" would have to become a 
keyword.  This would play havoc with a lot of existing code.  I can't 
tell you how many times I've used the identifier "group" in my code, 
particularly when dealing with regular expressions.


Even making it a soft keyword, a la "await" in 3.5, would lead to 
ambiguity:


   group = KeyboardInterrupt

   try:
       while True:
           print("thou can only defeat me with Ctrl-C")
   except group as error:
       print("lo, thou hast defeated me")


//arry/

On 10/6/21 2:12 AM, Barry Warsaw wrote:

What do the PEP authors think about `except group`?  Bikeshedding aside, that’s 
still the best alternative I’ve seen.  It’s unambiguous, self-descriptive, and 
can’t be confused with unpacking syntax.

-Barry


On Oct 5, 2021, at 11:15, sascha.schlemmer--- via Python-Dev 
 wrote:

I agree that *(E1, E2) looks like unpacking, how about

except *E1 as error: ...
except (*E1, *E2) as error: ...

even better would be if we could drop the braces:
except *E1, *E2 as error: ...
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PFYQC7XMYFAGOPU5C2YVMND2BQSIJPRC/
Code of Conduct: http://python.org/psf/codeofconduct/


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/SZNDJPKT7WNWJHG4UDJ6D3BU6IN5ZXZO/
Code of Conduct: http://python.org/psf/codeofconduct/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RNPS7637OJLMUR4LWJ4QYJ55BU7VZSOG/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations

2021-08-12 Thread Larry Hastings


On 8/12/21 8:25 AM, Guido van Rossum wrote:
Maybe we could specialize the heck out of this and not bother with a 
function object? In the end we want to execute the code, the function 
object is just a convenient way to bundle defaults, free variables 
(cells) and globals. But co_annotation has no arguments or defaults, 
and is only called once. It does need to have access to the globals of 
the definition site (the call site may be in another module) and 
sometimes there are cells (not sure).


Yes, there are sometimes cells.

   def foo():
       my_type = int
       def bar(a: my_type):
           return a
       return bar

Both bar() and the co_annotations function for bar() are nested 
functions inside foo, and the latter has to refer to the cell for 
my_type.  Also, co_annotations on a class may keep a reference to the 
locals dict, permitting annotations to refer to values defined in the class.



I don't know if it's worth making a specialized object as you suggest.  
I'd forgotten I did this, but late in the development of the 649 
prototype, I changed it so the co_annotations function is /always/ 
lazy-bound, only constructed on demand.


There are three possible blobs of information the descriptor might need 
when binding the co_annotations function:


 * the co_annotations code object,
 * the tuple of cells, when co_annotations is a closure, and
 * the locals dict, when co_annotations is defined inside a class and
   might refer to names defined in the class.

The code object is always necessary, the other two are optional. If we 
only need the code object, we store that code object inside the function 
object and we're done.  If we need either or both of the other two 
blobs, we throw all the blobs we need in a tuple and store the tuple 
instead.  At the point that someone asks for the annotations on that 
object, the descriptor does PyType_ checks to determine which blobs of 
data it has, binds the function appropriately, calls it, and returns the 
result.  I think this approach is reasonable and I'm not sure what a 
custom callable object would get us.
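
For concreteness, here's the binding step in rough Python-level pseudocode, 
assuming a simple (code, cells, class locals) layout for that tuple; the 
real prototype does the equivalent type checks in C:

   import types

   def call_co_annotations(stored, globals_dict):
       # 'stored' is either a bare code object, or a tuple of
       # (code object, closure cells or None, class locals dict or None).
       if isinstance(stored, types.CodeType):
           code, cells, class_locals = stored, None, None
       else:
           code, cells, class_locals = stored
       fn = types.FunctionType(code, globals_dict, closure=cells)
       # (A class-body co_annotations would also need class_locals made
       # visible while fn runs; that detail is elided here.)
       return fn()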



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RIG3THLL5D55ES67RZVWWSYZCUDW42HX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations

2021-08-11 Thread Larry Hastings

On 8/11/21 5:15 AM, Chris Angelico wrote:

On Wed, Aug 11, 2021 at 10:03 PM Larry Hastings  wrote:

This approach shouldn't break reasonable existing code.  That said, this change 
would be observable from Python, and pathological code could notice and break.  
For example:

def ensure_Foo_is_a_class(o):
 assert isinstance(Foo, type)
 return o

class Foo:
 ...

@ensure_Foo_is_a_class
def Foo():
 ...

This terrible code currently would not raise an assertion.  But if we made the 
proposed change to the implementation of decorators, it would.  I doubt anybody 
does this sort of nonsense, I just wanted to fully flesh out the topic.


You would be here declaring that a @monkeypatch decorator is terrible
code. I'm not sure whether you're right or wrong. You may very well be
right.

def monkeypatch(cls):
 basis = globals()[cls.__name__]
 for attr in dir(cls): setattr(basis, attr, getattr(cls, attr))
 return basis

@monkeypatch
class SomeClass:
 def new_method(self): ...

Currently this works, since SomeClass doesn't get assigned yet. This
could be made to work across versions by writing it as
@monkeypatch(SomeClass) instead (and then the actual class name would
become immaterial).



Golly!  I've never seen that.  Is that a common technique?

If we need to preserve that behavior, then this idea is probably a 
non-starter.



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/QNHESUROGJ7V4Q5LFUNXCU2FUWNOW7OE/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations

2021-08-11 Thread Larry Hastings

On 8/11/21 5:21 AM, Inada Naoki wrote:

But memory footprint and GC time is still an issue.
Annotations in PEP 649 semantics can be much heavier than docstrings.



I'm convinced that, if we accept PEP 649 (or something like it), we can 
reduce its CPU and memory consumption.


Here's a slightly crazy idea I had this morning: what if we didn't 
unmarshall the code object for co_annotation during the initial import, 
but instead lazily loaded it on demand?  The annotated object would 
retain knowledge of what .pyc file to load, and what offset the 
co_annotation code object was stored at. (And, if the co_annotations 
function had to be a closure, a reference to the closure tuple.)  If the 
user requested __annotations__ (or __co_annotations__), the code would 
open the .pyc file, unmarshall it, bind it, etc.  Obviously this would 
only work for code loaded from .pyc (etc) files.  To go even crazier, 
the runtime could LRU cache N (maybe == 1) open .pyc file handles as a 
speed optimization, perhaps closing them after some wall-clock timeout.
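
A rough sketch of that bookkeeping, purely hypothetical and ignoring the 
file-handle caching:

   import marshal, types

   class LazyCoAnnotations:
       def __init__(self, pyc_path, offset, closure=None):
           self.pyc_path = pyc_path   # which .pyc to read
           self.offset = offset       # where the co_annotations code object lives
           self.closure = closure
           self._code = None

       def load(self, globals_dict):
           if self._code is None:
               with open(self.pyc_path, "rb") as f:
                   f.seek(self.offset)
                   self._code = marshal.load(f)   # unmarshal just this code object
           fn = types.FunctionType(self._code, globals_dict, closure=self.closure)
           return fn()                            # the annotations dict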


I doubt we'd do exactly this--it's easy to find problems with the 
approach.  But maybe this idea will lead to a better one?



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/JMLXOEV6GRBVPNHQD6CYEW63I7WJBWZC/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations

2021-08-11 Thread Larry Hastings


On 8/11/21 2:48 AM, Jukka Lehtosalo wrote:
On Wed, Aug 11, 2021 at 10:32 AM Thomas Grainger  wrote:


    Larry Hastings wrote:
> On 8/11/21 12:02 AM, Thomas Grainger wrote:
> > I think as long as there's a test case for something like
> > @dataclass
> > class Node:
> >      global_node: ClassVar[Node | None]
> >      left: InitVar[Node | None]
> >      right: InitVar[None | None]
> >
> > the bug https://bugs.python.org/issue33453
<https://bugs.python.org/issue33453> and the current
implementation

https://github.com/python/cpython/blob/bfc2d5a5c4550ab3a2fadeb9459b4bd948ff6.

<https://github.com/python/cpython/blob/bfc2d5a5c4550ab3a2fadeb9459b4bd948ff6.>..
shows this is a tricky problem
> > The most straightforward workaround for this is to skip the
decorator
> syntax.  With PEP 649 active, this code should work:
> class Node:
>          global_node: ClassVar[Node | None]
>          left: InitVar[Node | None]
>          right: InitVar[None | None]
>     Node = dataclass(Node)
> //arry/

the decorator version simply has to work


I also think that it would be unfortunate if the decorator version 
wouldn't work. This is a pretty basic use case.



So, here's an idea, credit goes to Eric V. Smith.  What if we tweak how 
decorators work, /just slightly/, so that they work like the 
workaround code above?


Specifically: currently, decorators are called just after the function 
or class object is created, before it's bound to a variable.  But we 
could change it so that we first bind the variable to the initial value, 
then call the decorator, then rebind.  That is, this code:


   @dekor8
   class C:
       ...

would become equivalent to this code:

   class C:
       ...
   C = dekor8(C)

This seems like it would solve the class self-reference problem--the 
"Node" example above--when PEP 649 is active.


This approach shouldn't break reasonable existing code.  That said, this 
change would be observable from Python, and pathological code could 
notice and break.  For example:


   def ensure_Foo_is_a_class(o):
       assert isinstance(Foo, type)
       return o

   class Foo:
       ...

   @ensure_Foo_is_a_class
   def Foo():
       ...

This terrible code currently would not raise an assertion.  But if we 
made the proposed change to the implementation of decorators, it would.  
I doubt anybody does this sort of nonsense, I just wanted to fully flesh 
out the topic.



If this approach seems interesting, here's one wrinkle to iron out.  
When an object has multiple decorators, would we want to re-bind after 
each decorator call?  That is, would


   @dekor1
   @dekor2
   @dekor3
   class C:
       ...

turn into approach A:

   class C:
       ...
   C = dekor1(dekor2(dekor3(C)))

or approach B:

   class C:
       ...
   C = dekor3(C)
   C = dekor2(C)
   C = dekor1(C)

I definitely think "approach B" makes more sense.


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/OA4J23TU3XACE5NAUUQNQQ52BXGNHUIS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations

2021-08-11 Thread Larry Hastings

On 8/11/21 12:02 AM, Thomas Grainger wrote:

I think as long as there's a test case for something like

```
@dataclass
class Node:
 global_node: ClassVar[Node | None]
 left: InitVar[Node | None]
 right: InitVar[None | None]
```

the bug https://bugs.python.org/issue33453 and the current implementation 
https://github.com/python/cpython/blob/bfc2d5a5c4550ab3a2fadeb9459b4bd948ff61a2/Lib/dataclasses.py#L658-L714
 shows this is a tricky problem



The most straightforward workaround for this is to skip the decorator 
syntax.  With PEP 649 active, this code should work:


   class Node:
       global_node: ClassVar[Node | None]
       left: InitVar[Node | None]
       right: InitVar[None | None]
   Node = dataclass(Node)


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZWTTKKIOVA7UYKX4SFWJ6DZ5X72Z2O4P/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations

2021-08-11 Thread Larry Hastings

On 8/9/21 8:25 PM, Inada Naoki wrote:

Currently, reference implementation of PEP 649 has been suspended.
We need to revive it and measure performance/memory impact.


Perhaps this sounds strange--but I don't actually agree.

PEP 563 was proposed to solve a forward-references problem for the 
typing community.  If you read the PEP, you'll note it contains no 
discussion about its impact on CPU performance or memory consumption.  
The performance of its prototype was fine, and in any case, the 
important thing was to solve the problem in the language.


I think PEP 649 should be considered in the same way.  In my opinion, 
the important thing is to figure out what semantics we want for the 
language.  Once we figure out what semantics we want, we should 
implement them, and only /then/ should we start worrying about 
performance.  Fretting about performance at this point is premature and 
a distraction.


I assert PEP 649's performance and memory use is already acceptable, 
particularly for a prototype.  And I'm confident that if PEP 649 is 
accepted, the core dev community will find endless ways to optimize the 
implementation.




As far as I remember, the reference implementation created a function
object for each methods.


First, it's only for methods with annotations.  Second, it only stores a 
code object on the annotated function; the co_annotations function is 
lazily created.  The exception to this is when co_annotations needs a 
closure; in that case, it currently creates and binds the co_annotation 
function non-lazily.  Obviously this could be done lazily too, I didn't 
bother for the prototype.



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/R6QP2SQCJ3KBKUFXXA5HZW7GY2XAJQRD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations

2021-08-11 Thread Larry Hastings

On 8/10/21 11:15 AM, Thomas Grainger wrote:

Although the co_annoations code could intercept the NameError and replace 
return a ForwardRef object instead of the resolved name



No, it should raise a NameError, just like any other Python code.  
Annotations aren't special enough to break the rules.


I worry about Python-the-language enshrining design choices made by the 
typing module.  Python is now on its fourth string interpolation 
technology, and it ships with three command-line argument parsing 
libraries; in each of these cases, we were adding a New Thing that was 
viewed at the time as an improvement over the existing thing(s).  It'd 
be an act of hubris to assert that the current "typing" module is the 
ultimate, final library for expressing type information in Python.  But 
if we tie the language too strongly to the typing module, I fear we 
could strangle its successors in their cribs.



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5NUVYOLIHEHS373WEK3D2ZGXSWPI5XV7/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations

2021-08-11 Thread Larry Hastings

On 8/10/21 11:09 AM, Damian Shaw wrote:
Could PEP 649 be modified to say that if a NameError is raised the 
result is not cached and therefore you can inspect it later at runtime 
to get the real type once it is defined? Wouldn't that then allow 
users to write code that allows for all use cases under this scenario?



The PEP doesn't say so explicitly, but that was the intent of the 
design, yes.  If you look at the pseudo-code in the "__co_annotations__" 
section:


   https://www.python.org/dev/peps/pep-0649/#co-annotations

you'll see that it doesn't catch NameError, allowing it to bubble up to 
user code.  The prototype intentionally behaves the same way.  Certainly 
it wouldn't hurt to mention that explicitly in the PEP.
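
Something like this minimal sketch, which is not the actual implementation 
(the cache attribute name is invented):

   class AnnotationsAttribute:
       def __get__(self, obj, objtype=None):
           cached = getattr(obj, "_cached_annotations", None)
           if cached is None:
               # If __co_annotations__() raises NameError, we let it propagate;
               # nothing gets cached, so a later access simply retries.
               cached = obj.__co_annotations__()
               obj._cached_annotations = cached
           return cached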



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/Q4QHBAXUUV3KVD2ZLMNZW3K64A6CQV7B/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: enum in the stable ABI (Was: PEP 558: Defined semantics for locals)

2021-07-23 Thread Larry Hastings


On 7/23/21 7:38 AM, Petr Viktorin wrote:
(In both C & C++, the size of an `enum` is implementation-defined. 
That's unlikely to be a problem in practice, but one more point 
against enum.)



True, but there's always the old trick of sticking in a value that 
forces it to be at least 32-bit:


   typedef enum {
       INVALID = 0,
       RED = 1,
       BLUE = 2,
       GREEN = 3,

       UNUSED = 1073741824
   } color_t;


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/3C5ANX5ONLE6OWZ4N24ENDOX2H3R5UC2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: change of behaviour for '.' in sys.path between 3.10.0a7 and 3.10.0b1

2021-06-03 Thread Larry Hastings


On 6/3/21 4:20 AM, Chris Johns wrote:

Might be out of context here, but IMHO "." shouldn't be assumed to be the 
current directory anyway.

As someone who has ported python to a system where it isn't, these assumptions 
tend to cause problems.



That sounds miserable.  What does "." signify on such a system, if not 
the current directory?



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/LVOYZGSOJ3OGDX7OUTDXRQVXZ7YGGPS7/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-18 Thread Larry Hastings

On 5/18/21 5:25 AM, Pablo Galindo Salgado wrote:

Yet another problem that I found:

One integer is actually not enough to assign IDs. One unsigned integer 
can cover 4,294,967,295 AST nodes, but is technically possible to have 
more than that in a single file.



Surely you could use a 64-bit int for the node ID.


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/AEBB7YY2OYSTPJL5K44USRS3WCS2BTMI/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: The repr of a sentinel

2021-05-13 Thread Larry Hastings

On 5/13/21 10:46 AM, Eric V. Smith wrote:

>>> MISSING


I think a repr of just "MISSING", or maybe "dataclasses.MISSING" would 
be better.



I literally just went down this road--for a while there was a special 
sentinel value for the eval_str parameter to inspect.get_annotations().  
The repr I went with was "", e.g "".  It depends on how 
seriously you take the idea that eval(repr(x)) == x.  Certainly most 
objects don't actually support that, e.g., uh, object(), a type which I 
understand is available in most Python implementations.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FJW37TM7JKZOSZEIYYADJLULT7EH6AJN/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-09 Thread Larry Hastings

On 5/9/21 3:00 AM, M.-A. Lemburg wrote:

BTW: For better readability, I'd also not output the  lines
for every stack level in the traceback, but just the last one,
since it's usually clear where the call to the next stack
level happens in the upper ones.



Playing devil's advocate: in the un-usual case, where it may be 
ambiguous where the call came from, outputting the  lines could be a 
real life-saver.


I concede this is rare,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/OTTSTWR4I7AG4J7RJX4PLCGCZO6DVOMZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-07 Thread Larry Hastings

On 5/7/21 2:45 PM, Pablo Galindo Salgado wrote:
Given that column numbers are not very big compared with line numbers, 
we plan to store these as unsigned chars
or unsigned shorts. We ran some experiments over the standard library 
and we found that the overhead of all pyc files is:


* If we use shorts, the total overhead is ~3% (total size 28MB and the 
extra size is 0.88 MB).
* If we use chars. the total overhead is ~1.5% (total size 28 MB and 
the extra size is 0.44MB).


One of the disadvantages of using chars is that we can only report 
columns from 1 to 255 so if an error happens in a column
bigger than that then we would have to exclude it (and not show the 
highlighting) for that frame. Unsigned short will allow

the values to go from 0 to 65535.


Are lnotab entries required to be a fixed size?  If not:

   if column < 255:
       lnotab.write_one_byte(column)
   else:
       lnotab.write_one_byte(255)
       lnotab.write_two_bytes(column)


I might even write four bytes instead of two in the latter case,
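
For what it's worth, a hypothetical reader for that escape scheme is just 
as small (using the four-byte variant for the escaped case):

   def read_column(buf, pos):
       column = buf[pos]
       pos += 1
       if column == 255:                  # escape marker: wide value follows
           column = int.from_bytes(buf[pos:pos + 4], "little")
           pos += 4
       return column, pos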


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/B3SFCZPXIKGO3LM6UJVSJXFIRAZH2R26/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: In what tense should the changelog be written?

2021-04-30 Thread Larry Hastings

On 4/30/21 4:51 AM, Victor Stinner wrote:

On Fri, Apr 30, 2021 at 6:57 AM Larry Hastings  wrote:

Function and class names should not be followed by parentheses, unless 
demonstrating an example call.

Oh, I love putting parentheses when mentionning a function: "foo() now
does thigs new thing". Also, I like to use :func:`foo` Sphinx markup
to add a link in the changelog, and Sphinx adds parentheses for me
*but* I don't add parentheses :-)


I do too, and I definitely do that when I write emails, .txt files, and 
so on.  But--as you point out--Sphinx adds them for you.  NEWS blurbs 
should be written in CPython-docs-compatible ReST, and that includes 
using :func:`foo` to refer to a function called foo.  Sphinx will turn 
this into a hyperlink to the definition of foo in the docs if it knows 
where it is; :func:`inspect.signature` and :func:`getattr` both work great.


I'm still not sure what markup (if any) to use for Git checkins. On 
Github I know to use Markdown, and they provide a Markdown edit box.  
But should I use Markdown when I use "git commit" at the command-line?  
And why am I bringing this up now?  Surely this will only serve to 
fragment the conversation and confuse the larger issue.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/EPTB5KK6NXKHGHVB4HXAGBWTIKNBCJML/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: In what tense should the changelog be written?

2021-04-29 Thread Larry Hastings



D'oh!  I have a second draft already.

   Your NEWS entry should be written in the /present tense,/ and should
   start with a verb:

 * Add foo [...]
 * Change bar [...]
 * Remove bat [...]
 * Fix buffalo.spam [...]

   Function and class names should not be followed by parentheses,
   unless demonstrating an example call.


Slapping my forehead,


//arry/


On 4/29/21 9:50 PM, Larry Hastings wrote:



I'll wait to see if anybody else has contrary opinions, but for now 
here's a first draft:


Your NEWS entry should be written in the present tense, and should
start with a verb:

  * Added foo [...]
  * Changed bar [...]
  * Removed bat [...]
  * Fixed buffalo.spam [...]

Function and class names should not be followed by parentheses,
unless demonstrating an example call.



//arry/

On 4/29/21 9:15 PM, Guido van Rossum wrote:
There’s something in the dev guide, but not about tense. Worth 
adding. (My pet peeve is not to write “The foo module resets the bar 
state when spam happens,” because that isn’t clear about whether 
that’s the before or after behavior.)


On Thu, Apr 29, 2021 at 17:37 Ethan Furman  wrote:


On 4/29/21 7:57 PM, Larry Hastings wrote:

 > When one writes one's "blurb" for the changelog, in what tense
should it be?

Present tense.  :)

--
~Ethan~
___
Python-Dev mailing list -- python-dev@python.org
<mailto:python-dev@python.org>
To unsubscribe send an email to python-dev-le...@python.org
<mailto:python-dev-le...@python.org>
https://mail.python.org/mailman3/lists/python-dev.python.org/
<https://mail.python.org/mailman3/lists/python-dev.python.org/>
Message archived at

https://mail.python.org/archives/list/python-dev@python.org/message/AM3TQUXVNKGOAEC2GBVNUZAZOCLAD6N3/

<https://mail.python.org/archives/list/python-dev@python.org/message/AM3TQUXVNKGOAEC2GBVNUZAZOCLAD6N3/>
Code of Conduct: http://python.org/psf/codeofconduct/
<http://python.org/psf/codeofconduct/>

--
--Guido (mobile)

___
Python-Dev mailing list --python-dev@python.org
To unsubscribe send an email topython-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived 
athttps://mail.python.org/archives/list/python-dev@python.org/message/FNV3QHWZCNAHPNLR3SPTOGUFEYHJSYEK/
Code of Conduct:http://python.org/psf/codeofconduct/





___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/YC6NJT4J6GBD37FJZGP6DC26TWUSSLGE/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: In what tense should the changelog be written?

2021-04-29 Thread Larry Hastings



I'll wait to see if anybody else has contrary opinions, but for now 
here's a first draft:


   Your NEWS entry should be written in the present tense, and should
   start with a verb:

 * Added foo [...]
 * Changed bar [...]
 * Removed bat [...]
 * Fixed buffalo.spam [...]

   Function and class names should not be followed by parentheses,
   unless demonstrating an example call.



//arry/

On 4/29/21 9:15 PM, Guido van Rossum wrote:
There’s something in the dev guide, but not about tense. Worth adding. 
(My pet peeve is not to write “The foo module resets the bar state 
when spam happens,” because that isn’t clear about whether that’s the 
before or after behavior.)


On Thu, Apr 29, 2021 at 17:37 Ethan Furman <et...@stoneleaf.us> wrote:


On 4/29/21 7:57 PM, Larry Hastings wrote:

 > When one writes one's "blurb" for the changelog, in what tense
should it be?

Present tense.  :)

--
~Ethan~
___
Python-Dev mailing list -- python-dev@python.org
<mailto:python-dev@python.org>
To unsubscribe send an email to python-dev-le...@python.org
<mailto:python-dev-le...@python.org>
https://mail.python.org/mailman3/lists/python-dev.python.org/
<https://mail.python.org/mailman3/lists/python-dev.python.org/>
Message archived at

https://mail.python.org/archives/list/python-dev@python.org/message/AM3TQUXVNKGOAEC2GBVNUZAZOCLAD6N3/

<https://mail.python.org/archives/list/python-dev@python.org/message/AM3TQUXVNKGOAEC2GBVNUZAZOCLAD6N3/>
Code of Conduct: http://python.org/psf/codeofconduct/
<http://python.org/psf/codeofconduct/>

--
--Guido (mobile)

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FNV3QHWZCNAHPNLR3SPTOGUFEYHJSYEK/
Code of Conduct: http://python.org/psf/codeofconduct/



___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/DOL55BKC7FNLZCDPKRXLW6PT5EADZXQ3/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] In what tense should the changelog be written?

2021-04-29 Thread Larry Hastings


When one writes one's "blurb" for the changelog, in what tense should it 
be?  I mostly see entries in present tense:


   bpo-43660: Fix crash that happens when replacing sys.stderr with a
   callable that can remove the object while an exception is being
   printed. Patch by Pablo Galindo.

   bpo-41561: Add workaround for Ubuntu’s custom OpenSSL security level
   policy.

But occasionally I see entries in past tense:

   bpo-26053: Fixed bug where the pdb interactive run command echoed
   the args from the shell command line, even if those have been
   overridden at the pdb prompt.

   bpo-40630: Added tracemalloc.reset_peak() to set the peak size of
   traced memory blocks to the current size, to measure the peak of
   specific pieces of code.

I couldn't find any guidance in the Python Dev Guide after sixty seconds 
of poking around.



Obviously this isn't a big deal.  But it might be nice to try and nudge 
everybody in the same direction.  It'd be pleasant if the changelog read 
in a more unified voice, and using the same tense and sentence structure 
would help towards that goal.


If we arrived at a firm decision, maybe "blurb" et al could add a little 
text suggesting the proper style.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/SUEPLPRN2IY7EGC7XBLWWN2BOLM3GYMQ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Let's Fix Class Annotations -- And Maybe Annotations Generally

2021-04-24 Thread Larry Hastings

On 4/24/21 8:09 AM, Petr Viktorin wrote:

On 24. 04. 21 9:52, Larry Hastings wrote:

I've hit a conceptual snag in this.

What I thought I needed to do: set __annotations__= {} in the module 
dict, and set __annotations__= {} in user class dicts. The latter was 
more delicate than the former but I think I figured out a good spot 
for both.  I have this much working, including fixing the test suite.


But now I realize (*head-slap* here): if *every* class is going to 
have annotations, does that mean builtin classes too? StructSequence 
classes like float? Bare-metal type objects like complex?  Heck, what 
about type itself?!


My knee-jerk initial response: yes, those too.  Which means adding a 
new getsetdef to the type object.  But that's slightly complicated.  
The point of doing this is to preserve the existing best-practice of 
peeking in the class dict for __annotations__, to avoid inheriting 
it.  If I'm to preserve that, the get/set for __annotations__ on a 
type object would need to get/set it on tp_dict if tp_dict was not 
NULL, and use internal storage somewhere if there is no tp_dict.


It's worth noticing that builtin types don't currently have 
__annotations__ set, and you can't set them. (Or, at least, float, 
complex, and type didn't have them set, and wouldn't let me set 
annotations on them.)  So presumably people using current best 
practice--peek in the class dict--aren't having problems.


So I now suspect that my knee-jerk answer is wrong.  Am I going too 
far down the rabbit hole?  Should I /just/ make the change for user 
classes and leave builtin classes untouched?  What do you think?


Beware of adding mutable state to bulit-in (C static) type objects: 
these are shared across interpreters, so changing them can “pollute” 
unwanted contexts.


This has been so for a long time [0]. There are some subinterpreter 
efforts underway that might eventually lead to making __annotations__ 
on static types easier to add, but while you're certainly welcome to 
explore the neighboring rabbit hole as well, I do think you're going 
in too far for now :)


[0] 
https://mail.python.org/archives/list/python-dev@python.org/message/KLCZIA6FSDY3S34U7A72CPSBYSOMGZG3/



That's a good point!  The sort of detail one forgets in the rush of the 
moment.


Given that the lack of annotations on builtin types already isn't a 
problem, and given this wrinkle, and generally given the "naw you don't 
have to" vibe I got from you and Nick (and the lack of "yup you gotta" I 
got from anybody else), I'm gonna go with not polluting the builtin 
types for now.


This is not to say that, in the fullness of time, those objects should 
never have annotations.  Even in the three random types I picked in my 
example, there's at least one example: float.imag is a data member and 
might theoretically be annotated.  But we can certainly kick this can 
down the road too.  Maybe by the time we get around to it, we'll have a 
read-only dictionary we can use for the purpose.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/CXJGWWHXK2UUDJQGLSJKU5JKZVKV6TFK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Let's Fix Class Annotations -- And Maybe Annotations Generally

2021-04-24 Thread Larry Hastings

On 4/24/21 7:11 AM, Nick Coghlan wrote:
On Sat, 24 Apr 2021, 5:53 pm Larry Hastings, <la...@hastings.org> wrote:



So I now suspect that my knee-jerk answer is wrong.  Am I going
too far down the rabbit hole? Should I /just/ make the change for
user classes and leave builtin classes untouched?  What do you think?


I'd suggest kicking the can down the road: leave builtin classes alone 
for now, but file a ticket to reconsider the question for 3.11.


In the meantime, inspect.get_annotations can help hide the discrepancy.



The good news: inspect.get_annotations() absolutely can handle it.  
inspect.get_annotations() is so paranoid about examining the object you 
pass in, I suspect you could pass in an old boot and it would pull out 
the annotations--if it had any.


Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/X2AQ5XTXKHPKSNVG6E5F5TROFKWY4CDM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Let's Fix Class Annotations -- And Maybe Annotations Generally

2021-04-24 Thread Larry Hastings



I've hit a conceptual snag in this.

What I thought I needed to do: set __annotations__= {} in the module 
dict, and set __annotations__= {} in user class dicts.  The latter was 
more delicate than the former but I think I figured out a good spot for 
both.  I have this much working, including fixing the test suite.


But now I realize (*head-slap* here): if *every* class is going to have 
annotations, does that mean builtin classes too?  StructSequence classes 
like float? Bare-metal type objects like complex?  Heck, what about type 
itself?!


My knee-jerk initial response: yes, those too.  Which means adding a new 
getsetdef to the type object.  But that's slightly complicated.  The 
point of doing this is to preserve the existing best-practice of peeking 
in the class dict for __annotations__, to avoid inheriting it.  If I'm 
to preserve that, the get/set for __annotations__ on a type object would 
need to get/set it on tp_dict if tp_dict was not NULL, and use internal 
storage somewhere if there is no tp_dict.


It's worth noticing that builtin types don't currently have 
__annotations__ set, and you can't set them. (Or, at least, float, 
complex, and type didn't have them set, and wouldn't let me set 
annotations on them.)  So presumably people using current best 
practice--peek in the class dict--aren't having problems.
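
To make the "peek in the class dict" practice concrete, here's a tiny 
illustrative sketch (not code from the branch):

    class Base:
        x: int

    class Derived(Base):
        pass

    # Plain attribute access falls back to the base class's annotations
    # (on 3.9 and earlier; the lazy-created empty dict discussed in this
    # thread changes this to {} in 3.10):
    print(Derived.__annotations__)
    # {'x': <class 'int'>}

    # Current best practice: peek in the class dict instead, so you only
    # see the annotations the class itself defines:
    print(Base.__dict__.get("__annotations__", {}))      # {'x': <class 'int'>}
    print(Derived.__dict__.get("__annotations__", {}))   # {}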


So I now suspect that my knee-jerk answer is wrong.  Am I going too far 
down the rabbit hole?  Should I /just/ make the change for user classes 
and leave builtin classes untouched?  What do you think?



Cheers,


//arry/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IK7IWUCTESD5OZE47J45EY3FRVM7GEKM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Let's Fix Class Annotations -- And Maybe Annotations Generally

2021-04-23 Thread Larry Hastings

On 4/23/21 9:26 PM, Guido van Rossum wrote:
This is happening, right? Adding a default `__annotations__ = {}` to 
modules and classes. (Though https://bugs.python.org/issue43901 
seems temporarily stuck.)



It's happening, and I wouldn't say it's stuck.  I'm actively working on 
it--currently puzzling my way through some wild unit test failures.  I 
expect to ship my first PR over the weekend.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/TQNOMVBPIH2NSYNDSYCIOCMOZ3VX3SXR/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Keeping Python a Duck Typed Language.

2021-04-22 Thread Larry Hastings


On 4/20/21 10:03 AM, Mark Shannon wrote:
If you guarded your code with `isinstance(foo, Sequence)` then I could 
not use it with my `Foo` even if my `Foo` quacked like a sequence. I 
was forced to use nominal typing; inheriting from Sequence, or 
explicitly registering as a Sequence.



If I'm reading the library correctly, this is correct--but, perhaps, it 
could be remedied by adding a __subclasshook__ to Sequence that looked 
for an __iter__ attribute.  That technique might also apply to other 
ABCs in collections.abc, Mapping for example.  Would that work, or am I 
missing a critical detail?
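
A rough sketch of the kind of hook I mean (purely illustrative--the real 
Sequence ABC has no such hook today, and a real check would presumably 
want more than just __iter__):

    from abc import ABCMeta, abstractmethod

    class DuckSequence(metaclass=ABCMeta):
        """Toy ABC that accepts any class defining __iter__."""

        @abstractmethod
        def __iter__(self): ...

        @classmethod
        def __subclasshook__(cls, C):
            # Mirrors how collections.abc.Iterable's hook works.
            if cls is DuckSequence:
                if any("__iter__" in B.__dict__ for B in C.__mro__):
                    return True
            return NotImplemented

    class Foo:
        def __iter__(self):
            return iter(())

    print(isinstance(Foo(), DuckSequence))   # True -- no inheritance, no register()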


Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/QRS7JWBP3KX3TYSYDVKILSYOXTLOFUY3/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Keeping Python a Duck Typed Language.

2021-04-20 Thread Larry Hastings


On 4/20/21 10:03 AM, Mark Shannon wrote:
Then came PEP 563 and said that if you wanted to access the 
annotations of an object, you needed to call typing.get_type_hints() 
to get annotations in a meaningful form.

This smells a bit like enforced static typing to me.


I'm working to address this.  We're adding a new function to the 
standard library called inspect.get_annotations().  It's a lot less 
opinionated than typing.get_type_hints()--it simply returns the 
un-stringized annotations.  It also papers over some other awkward 
annotations behaviors, such as classes inheriting annotations from base 
classes.  I propose that it be the official new Best Practice for 
accessing annotations (as opposed to type hints, you should still use 
typing.get_type_hints() for that).
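
A quick sketch of the difference, assuming the function as it landed in 
Python 3.10 (with its eval_str keyword):

    import inspect, typing

    class Base:
        x: "int"

    class Derived(Base):
        pass

    # inspect.get_annotations() returns the object's own annotations,
    # unmodified by default; strings are only evaluated if you ask:
    print(inspect.get_annotations(Derived))              # {}
    print(inspect.get_annotations(Base))                 # {'x': 'int'}
    print(inspect.get_annotations(Base, eval_str=True))  # {'x': <class 'int'>}

    # typing.get_type_hints() is more opinionated: it walks the MRO and
    # always evaluates strings as forward references:
    print(typing.get_type_hints(Derived))                # {'x': <class 'int'>}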


   https://bugs.python.org/issue43817

This will get checked in in time for Python 3.10b1.

Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/7TUTEDQ6MDKDV5VQCRLFUYI2XP3DIPDS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Let's Fix Class Annotations -- And Maybe Annotations Generally

2021-04-19 Thread Larry Hastings



As long as I'm gravedigging old conversations...!  Remember this one, 
also from January of this year?  Here's a link to the thread in the 
c.l.p-d Mailman archive.  The first message in the thread is a good 
overview of the problem:


   
https://mail.python.org/archives/list/python-dev@python.org/thread/AWKVI3NRCHKPIDPCJYGVLW4HBYTEOQYL/


Here's kind of where we left it:

On 1/12/21 7:48 PM, Guido van Rossum wrote:
On Tue, Jan 12, 2021 at 6:35 PM Larry Hastings <la...@hastings.org> wrote:


On 1/12/21 5:28 PM, Brett Cannon wrote:

The other thing to keep in mind is we are talking about every
module, class, and function getting 64 bytes ... which I bet
isn't that much.


Actually it's only every module and class.  Functions don't have
this problem because they've always stored __annotations__
internally--meaning, peeking in their __dict__ doesn't work, and
they don't support inheritance anyway.  So the number is even
smaller than that.

If we can just make __annotations__ default to an empty dict on
classes and modules, and not worry about the memory consumption,
that goes a long way to cleaning up the semantics.


I would like that very much. And the exception for functions is 
especially helpful.



First of all, I've proposed a function that should also help a lot:

   https://bugs.python.org/issue43817

The function will be called inspect.get_annotations(o).  It's like 
typing.get_type_hints(o) except less opinionated.  This function would 
become the best practice for everybody who wants annotations**, like so:


   import inspect
   if hasattr(inspect, "get_annotations"):
       how_i_get_annotations = inspect.get_annotations
   else:
       # do whatever it was I did in Python 3.9 and before...


** Everybody who specifically wants /type hints/ should instead call 
typing.get_type_hints(), and good news!, /that/ function has existed for 
several versions now.  So they probably already /do/ call it.



I'd still like to add a default empty __annotations__ dict to all 
classes and modules for Python 3.10, for everybody who doesn't switch to 
using this as-yet-unwritten inspect.get_annotations() function.  The 
other changes I propose in that thread (e.g. deleting __annotations__ 
always throws TypeError) would be nice, but honestly they aren't high 
priority.  They can wait until after Python 3.10.  Just these these two 
things (inspect.get_annotations() and always populating __annotations__ 
for classes and modules) would go a long way to cleaning up how people 
examine annotations.


Long-term, hopefully we can fold the desirable behaviors of 
inspect.get_annotations() into the language itself, at which point we 
could probably deprecate the function.  That wouldn't be until a long 
time from now of course.



Does this need a lot of discussion, or can I just go ahead with the bpo 
and PR and such?  I mean, I'd JFDI, as Barry always encourages, but 
given how much debate we've had over annotations in the last two weeks, 
I figured I should first bring it up here.



Happy two-weeks'-notice,


//arry/

p.s. I completely forgot about this until just now--sorry.  At least I 
remembered before Python 3.10b1!


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/J4LZEIZTYZQWGIM5VZGNMQPWB5ZWVEXP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 563 in light of PEP 649

2021-04-19 Thread Larry Hastings

On 4/19/21 1:37 PM, Ethan Furman wrote:

On 4/19/21 10:51 AM, Larry Hastings wrote:

Something analogous /could/ happen in the PEP 649 branch but 
currently doesn't.  When running Inada Noki's benchmark, there are a 
total of nine possible annotations code objects.  Except, each 
function generated by the benchmark has a unique name, and I 
incorporate that name into the name given to the code object 
(f"{function_name}.__co_annotations__"). Since each function name is 
different, each code object name is different, so each code object 
/hash/ is different, and since they aren't /exact/ duplicates they 
are never consolidated.


I hate anonymous functions, so the name is very important to me. The 
primary code base I work on does have hundreds of methods with the 
same signature -- unfortunately, many of the also have the same name 
(four levels of super() calls is not unusual, and all to the same 
read/write/create parent methods from read/write/create child 
methods).  In such a case would the name make a meaningful difference?


Or maybe the name can be store when running in debug mode, and not 
stored with -O ?



I think it needs to have /a/ name.  But if it made a difference, perhaps 
it could use f"{function_name}.__co_annotations__" normally, and simply 
"__co_annotations__" with -O.


Note also that this is the name of the annotations code object, although 
I think the annotations function object reuses the name too.  Anyway, 
under normal circumstances, the Python programmer would have no reason 
to interact directly with the annotations code/function object, so it's 
not likely it will affect them one way or another.  The only time they 
would see it would be, say, if the calculation of an annotation threw an 
exception, in which case it seems like seeing 
f"{function_name}.__co_annotations__" in the traceback might be a 
helpful clue in diagnosing the problem.



I'd want to see some real numbers before considering changes here.  If 
it has a measurable and beneficial effect on real-world code, okay! 
let's change it!  But my suspicion is that it doesn't really matter.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZAPCP4MFDOF34E3G2TWAVY7JUQRHDOOB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 563 in light of PEP 649

2021-04-19 Thread Larry Hastings



Oops: where I said nine, I should have said, twenty-seven.  3-cubed.  
Should have had my coffee /before/ posting.  Carry on!



//arry/

On 4/19/21 10:51 AM, Larry Hastings wrote:



I noticed something this morning: there's another way in which Inada 
Naoki's benchmark here is--possibly?--unrealistic.


As mentioned, his benchmark generates a thousand functions, each of 
which takes exactly three parameters, and each of those parameters 
randomly chooses one of three annotations.  In current trunk (not in 
my branch, I'm behind), there's an optimization for stringized 
annotations that compiles the annotations into a tuple, and then when 
you pull out __annotations__ on the object at runtime it converts it 
into a dict on demand.


This means that even though there are a thousand functions, they only 
ever generate one of nine possible tuples for these annotation 
tuples.  And here's the thing: our lovely marshal module is smart 
enough to notice that these tuples /are/ duplicates, and it'll throw 
away the duplicates and replace them with references to the original.


Something analogous /could/ happen in the PEP 649 branch but currently 
doesn't.  When running Inada Naoki's benchmark, there are a total of 
nine possible annotations code objects.  Except, each function 
generated by the benchmark has a unique name, and I incorporate that 
name into the name given to the code object 
(f"{function_name}.__co_annotations__"). Since each function name is 
different, each code object name is different, so each code object 
/hash/ is different, and since they aren't /exact/ duplicates they are 
never consolidated.


Inada Naoki has suggested changing this, so that all the annotations 
code objects have the same name ("__co_annotations__").  If we made 
that change, I'm pretty sure the code size delta in this synthetic 
benchmark would drop.  I haven't done it because the current name of 
the code object might be helpful in debugging, and I'm not convinced 
this would have an effect in real-world code.


But... would it?  Someone, and again I think it's Inada Naoki, 
suggests that in real-world applications, there are often many, many 
functions in a single module that have identical signatures.  The 
annotation-tuples optimization naturally takes advantage of that.  PEP 
649 doesn't.  Should it?  Would this really be beneficial to 
real-world code bases?


Cheers,


//arry/


On 4/16/21 12:26 PM, Larry Hastings wrote:



Please don't confuse Inada Naoki's benchmark results with the effect 
PEP 649 would have on a real-world codebase.  His artificial benchmark 
constructs a thousand empty functions that take three parameters with 
randomly-chosen annotations--the results provide some insights but 
are not directly applicable to reality.


PEP 649's effects on code size / memory / import time are contingent 
on the number of annotations and the number of objects annotated, not 
the overall code size of the module.  Expressing it that way, and 
suggesting that Python users would see the same results with 
real-world code, is highly misleading.


I too would be interested to know the effects PEP 649 had on a 
real-world codebase currently using PEP 563, but AFAIK nobody has 
reported such results.



//arry/

On 4/16/21 11:05 AM, Jukka Lehtosalo wrote:
On Fri, Apr 16, 2021 at 5:28 PM Łukasz Langa <luk...@langa.pl> wrote:


[snip] I say "compromise" because as Inada Naoki measured,
there's still a non-zero performance cost of PEP 649 versus PEP 563:

- code size: +63%
- memory: +62%
- import time: +60%


Will this hurt some current users of typing? Yes, I can name you
multiple past employers of mine where this will be the case. Is
it worth it for Pydantic? I tend to think that yes, it is, since
it is a significant community, and the operations on type
annotations it performs are in the sensible set for which
`typing.get_type_hints()` was proposed.


Just to give some more context: in my experience, both import time 
and memory use tend to be real issues in large Python codebases 
(code size less so), and I think that the relative efficiency of PEP 
563 is an important feature. If PEP 649 can't be made more 
efficient, this could be a major regression for some users. Python 
server applications need to run multiple processes because of the 
GIL, and since code objects generally aren't shared between 
processes (GC and reference counting makes it tricky, I understand), 
code size increases tend to be amplified on large servers. Even 
having a lot of RAM doesn't necessarily help, since a lot of RAM 
typically implies many CPU cores, and thus many processes are needed 
as well.


I can see how both PEP 563 and PEP 649 bring significant benefits, 
but typically for different user populations. I wonder if there's a 
way of combining the benefits of both approaches. I don't like the 
idea of having toggles for different perform

[Python-Dev] Re: PEP 563 in light of PEP 649

2021-04-19 Thread Larry Hastings



I noticed something this morning: there's another way in which Inada 
Naoki's benchmark here is--possibly?--unrealistic.


As mentioned, his benchmark generates a thousand functions, each of 
which takes exactly three parameters, and each of those parameters 
randomly chooses one of three annotations.  In current trunk (not in my 
branch, I'm behind), there's an optimization for stringized annotations 
that compiles the annotations into a tuple, and then when you pull out 
__annotations__ on the object at runtime it converts it into a dict on 
demand.


This means that even though there are a thousand functions, they only 
ever generate one of nine possible tuples for these annotation tuples.  
And here's the thing: our lovely marshal module is smart enough to 
notice that these tuples /are/ duplicates, and it'll throw away the 
duplicates and replace them with references to the original.
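
If you want to see that tuple optimization from the outside, here's a 
small probe (my own sketch; the constant layout is a CPython 3.10+ 
implementation detail, so the exact output may vary):

    import textwrap

    src = textwrap.dedent("""
        from __future__ import annotations

        def f(a: int, b: str) -> None: ...
        def g(a: int, b: str) -> None: ...
    """)
    code = compile(src, "<demo>", "exec")

    # Among the module's constants you should find a flat tuple of
    # alternating names and stringized annotations, e.g.
    # ('a', 'int', 'b', 'str', 'return', 'None'), shared by both
    # functions since their signatures are identical.
    for const in code.co_consts:
        if isinstance(const, tuple):
            print(const)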


Something analogous /could/ happen in the PEP 649 branch but currently 
doesn't.  When running Inada Naoki's benchmark, there are a total of nine 
possible annotations code objects.  Except, each function generated by 
the benchmark has a unique name, and I incorporate that name into the 
name given to the code object (f"{function_name}.__co_annotations__"). 
Since each function name is different, each code object name is 
different, so each code object /hash/ is different, and since they 
aren't /exact/ duplicates they are never consolidated.


Inada Naoki has suggested changing this, so that all the annotations 
code objects have the same name ("__co_annotations__").  If we made that 
change, I'm pretty sure the code size delta in this synthetic benchmark 
would drop.  I haven't done it because the current name of the code 
object might be helpful in debugging, and I'm not convinced this would 
have an effect in real-world code.


But... would it?  Someone, and again I think it's Inada Naoki, suggests 
that in real-world applications, there are often many, many functions in 
a single module that have identical signatures.  The annotation-tuples 
optimization naturally takes advantage of that.  PEP 649 doesn't.  
Should it? Would this really be beneficial to real-world code bases?


Cheers,


//arry/


On 4/16/21 12:26 PM, Larry Hastings wrote:



Please don't confuse Inada Naoki's benchmark results with the effect 
PEP 649 would have on a real-world codebase.  His artificial benchmark 
constructs a thousand empty functions that take three parameters with 
randomly-chosen annotations--the results provide some insights but 
are not directly applicable to reality.


PEP 649's effects on code size / memory / import time are contingent 
on the number of annotations and the number of objects annotated, not 
the overall code size of the module.  Expressing it that way, and 
suggesting that Python users would see the same results with 
real-world code, is highly misleading.


I too would be interested to know the effects PEP 649 had on a 
real-world codebase currently using PEP 563, but AFAIK nobody has 
reported such results.



//arry/

On 4/16/21 11:05 AM, Jukka Lehtosalo wrote:
On Fri, Apr 16, 2021 at 5:28 PM Łukasz Langa <luk...@langa.pl> wrote:


[snip] I say "compromise" because as Inada Naoki measured,
there's still a non-zero performance cost of PEP 649 versus PEP 563:

- code size: +63%
- memory: +62%
- import time: +60%


Will this hurt some current users of typing? Yes, I can name you
multiple past employers of mine where this will be the case. Is
it worth it for Pydantic? I tend to think that yes, it is, since
it is a significant community, and the operations on type
annotations it performs are in the sensible set for which
`typing.get_type_hints()` was proposed.


Just to give some more context: in my experience, both import time 
and memory use tend to be real issues in large Python codebases (code 
size less so), and I think that the relative efficiency of PEP 563 is 
an important feature. If PEP 649 can't be made more efficient, this 
could be a major regression for some users. Python server 
applications need to run multiple processes because of the GIL, and 
since code objects generally aren't shared between processes (GC and 
reference counting makes it tricky, I understand), code size 
increases tend to be amplified on large servers. Even having a lot of 
RAM doesn't necessarily help, since a lot of RAM typically implies 
many CPU cores, and thus many processes are needed as well.


I can see how both PEP 563 and PEP 649 bring significant benefits, 
but typically for different user populations. I wonder if there's a 
way of combining the benefits of both approaches. I don't like the 
idea of having toggles for different performance tradeoffs 
indefinitely, but I can see how this might be a necessary compromise 
if we don't want to make things worse for any user groups.


Jukka


[Python-Dev] Re: PEP 563 and 649: The Great Compromise

2021-04-18 Thread Larry Hastings

On 4/17/21 8:43 PM, Larry Hastings wrote:
TBD: how this interacts with PEP 649.  I don't know if it means we 
only do this, or if it would be a good idea to do both this and 649.  
I just haven't thought about it.  (It would be a runtime error to set 
both "o.__str_annotations__" and "o.__co_annotations__", though.)



I thought about it some, and I think PEP 649 would still be a good idea, 
even if this "PEP 1212" proposal (or a variant of it) was workable and 
got accepted.  PEP 649 solves the forward references problem for most 
users without the restrictions of PEP 563 (or "PEP 1212").  So most 
people wouldn't need to turn on the "PEP 1212" behavior.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/BS7QVY7L5UHUQHCFPFCCEORH6JGW2A3Q/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 563 and 649: The Great Compromise

2021-04-18 Thread Larry Hastings



On 4/18/21 9:14 AM, Richard Levasseur wrote:
Alternatively: what if the "trigger" to resolve the expression to an 
object was moved from a module-level setting to the specific 
expression? e.g.


def foo(x: f'{list[int]}') -> f'{str}':
  bar: f'{tuple[int]}' = ()

@pydantic_or_whatever_that_needs_objects_from_annotations
class Foo:
  blah: f'{tuple[int]}' = ()

I picked f-strings above since they're compatible with existing syntax 
and visible to the AST iirc; the point is some syntax/marker at the 
annotation level to indicate "eagerly resolve this / keep the value at 
runtime". Maybe "as", or ":@", or a "magic" 
@typing.runtime_annotations decorator, or some other bikeshed etc. (As 
an aside, Java deals with this problem by making its annotations 
compile-time only unless you mark them to be kept at runtime)


I genuinely don't understand what you're proposing.  Could you elaborate?

I will note however that your example adds a lot of instances of quoting 
and curly braces and the letter 'f'.  Part of the reason that PEP 563 
exists is that users of type hints didn't like quoting them all the 
time.  Also, explicitly putting quotes around type hints means that 
Python didn't examine them at compile-time, so outright syntax errors 
would not be caught at compile-time. PEP 563 meant that syntax errors 
would be caught at compile-time. (Though PEP 563 still delays other 
errors, like NameError and ValueError, until runtime, the same way that 
PEP 649 does.)
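
For example (a trivial sketch of the difference):

    # A quoted annotation is just a string; the typo inside it isn't
    # noticed until something tries to evaluate it:
    def f(x: "int int") -> None: ...     # compiles and imports fine

    # Under PEP 563 ("from __future__ import annotations") the same typo
    # is parsed as an expression before being stringized, so it's a
    # SyntaxError at compile time:
    #
    #     def g(x: int int) -> None: ...   # SyntaxError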




The reasons I suggest this are:

1. A module-level switch reminds me of __future__.unicode_literals. 
Switching that on/off was a bit of a headache due to the action at a 
distance.


__future__.unicode_literals changed the default behavior of strings so 
that they became Unicode.  An important part of my proposal is that it 
minimizes the observable change in behavior at runtime.  PEP 563 changes 
"o.__annotations__" so that it contains stringized annotations, my 
proposal changes that so it returns real values, assuming eval() succeeds.


What if the eval() fails, with a NameError or whatever?  Yes, this 
would change observable behavior.  Without the compile-time flag 
enabled, the annotation fails to evaluate correctly at import time.  
With the compile-time flag enabled, the annotation fails to evaluate 
correctly at the time it's examined.  I think this is generally a 
feature anyway.  As you observe in the next paragraph, the vast majority 
of annotations are unused at runtime.  If a program didn't need an 
annotation at runtime, then making it succeed at import time for 
something it doesn't care about seems like a reasonable change in 
behavior.  The downside is, nested library code might make it hard to 
determine which object had the bad annotation, though perhaps we can 
avoid this by crafting a better error message for the exception.



2. It's my belief that the /vast /majority of annotations are unused 
at runtime, so all the extra effort in resolving an annotation 
expression is just wasted cycles. It makes sense for the default 
behavior to be "string annotations", with runtime-evaluation/retention 
enabled when needed.


The conversion is lazy.  If the annotation is never examined at runtime, 
it's left in the state the compiler defined it in.  Where does it waste 
cycles?



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZJ6YFDWABERFXKI2DVWEEHYGW7DY6G6W/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 563 and 649: The Great Compromise

2021-04-18 Thread Larry Hastings

On 4/18/21 9:10 AM, Damian Shaw wrote:
Hi Larry, all, I was thinking also of a compromise but a slightly 
different approach:


Store annotations as a subclass of string but with the required frames 
attached to evaluate them as though they were in their local context. 
Then have a function "get_annotation_values" that knows how to 
evaluate these string subclasses with the attached frames.


This would allow those who use runtime annotations to access local 
scope like PEP 649, and allow those who use static type checking to 
relax the syntax (as long as they don't try and evaluate the syntax at 
runtime) as per PEP 563.



Something akin to this was proposed and discarded during the discussion 
of PEP 563, although the idea there was to still use actual Python 
bytecode instead of strings:


   
https://www.python.org/dev/peps/pep-0563/#keeping-the-ability-to-use-function-local-state-when-defining-annotations

It was rejected because it would be too expensive in terms of 
resources.  PEP 649's approach uses significantly fewer resources, which 
is one of the reasons it seems viable.


Also, I don't see the benefit of requiring a function like 
"get_annotation_values" to see the actual values.  This would force 
library code that examined annotations to change; I think it's better 
that we preserve the behavior that "o.__annotations__" are real values.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/3CVZKZJZQ35PHA6P7U2ZZLOBTOS7V2AD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 563 and 649: The Great Compromise

2021-04-18 Thread Larry Hastings



You're right, losing visibility into the local scope (and outer function 
scopes) is part of why I suggest the behavior be compile-time 
selectable.  The pro-PEP-563 crowd doesn't seem to care that 563 costs 
them visibility into anything but global scope; accepting this loss of 
visibility is part of the bargain of enabling the behavior.  But people 
who don't need the runtime behavior of 563 don't have to live with it.


As to only offering marginal benefit beyond typing.get_type_hints()--I 
think the benefit is larger than that.  I realize now I should have gone 
into this topic in the original post; sorry, I kind of rushed through 
that.  Let me fix that here.


One reason you might not want to use typing.get_type_hints() is that it 
doesn't return /annotations/ generally, it specifically returns /type 
hints./  This is more opinionated than just returning the annotations, e.g.


 * None is changed to type(None).
 * Values are wrapped with Optional[] sometimes.
 * String annotations are wrapped with ForwardRef().
 * If __no_type_check__ is set on the object, it ignores the
   annotations and returns an empty dict.
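
To make those differences concrete (output shown for Python 3.9/3.10; 
the Optional wrapping in particular varies by version):

    import typing

    def f(x: "int", y: int = None) -> None: ...

    # The raw annotations, exactly as written:
    print(f.__annotations__)
    # {'x': 'int', 'y': <class 'int'>, 'return': None}

    # typing.get_type_hints() evaluates the string, converts None to
    # type(None), and (on 3.9/3.10) wraps y in Optional because its
    # default is None:
    print(typing.get_type_hints(f))
    # {'x': <class 'int'>, 'y': typing.Optional[int], 'return': <class 'NoneType'>}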

I've already proposed addressing this for Python 3.10 by adding a new 
function to the standard library, probably to be called 
inspect.get_annotations():


   https://bugs.python.org/issue43817


But even if you use this new function, there's still some murky ambiguity.

Let's say you're using Python 3.9, and you've written a library function 
that examines annotations. (Again, specifically: annotations, not type 
hints.)  And let's say the annotations dict contains one value, and it's 
the string "34".  What should you do with it?


If the module that defined it imported "from __future__ import 
annotations", then the actual desired value of the annotation was the 
integer 34, so you should eval() it.  But if the module that defined it 
/didn't/ import that behavior, then the user probably wanted the string 
"34".  How can you tell what the user intended?


I think the only actual way to solve it would be to go rooting around in 
the module to see if you can find the future object.  It's probably 
called "annotations".  But it /is/ possible to compile with that 
behavior without the object being visible--it could be renamed, the 
module could have deleted it. Though these are admittedly unlikely.
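
The "rooting around" would look something like this (a sketch; as noted, 
it guesses wrong if the module renamed or deleted the name):

    import __future__

    def module_uses_stringized_annotations(module):
        """Guess whether `module` was compiled with
        "from __future__ import annotations"."""
        # The future statement binds the name "annotations" in the module
        # namespace to the __future__._Feature object.
        return getattr(module, "annotations", None) is __future__.annotations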


By storing stringized annotations in "o.__str_annotations__", we remove 
this ambiguity.  Now we know for certain that these annotations were 
stringized and we should eval() them.  And if a string shows up in 
"o.__annotations__" we know we should leave it alone.


Of course, by making the language do the eval() on the strings, we 
abstract away the behavior completely.  Now library code doesn't need to 
be aware if the module had stringized annotations, or PEP-649-style 
delayed annotations, or "stock" semantics.  Accessing 
"o.__annotations__" always gets you the real annotations values, every time.



Cheers,


//arry/

On 4/18/21 7:06 AM, Jelle Zijlstra wrote:



On Sat, Apr 17, 2021 at 20:45, Larry Hastings <la...@hastings.org> wrote:



The heart of the debate between PEPs 563 and 649 is the question:
what should an annotation be?  Should it be a string or a Python
value?  It seems people who are pro-PEP 563 want it to be a
string, and people who are pro-PEP 649 want it to be a value.

Actually, let me amend that slightly.  Most people who are pro-PEP
563 don't actually care that annotations are strings, per se. 
What they want are specific runtime behaviors, and they get those
behaviors when PEP 563 turns their annotations into strings.

I have an idea--a rough proposal--on how we can mix together
aspects of PEP 563 and PEP 649.  I think it satisfies everyone's
use cases for both PEPs.  The behavior it gets us:

  * annotations can be stored as strings
  * annotations stored as strings can be examined as strings
  * annotations can be examined as values


The idea:

We add a new type of compile-time flag, akin to a "from
__future__" import, but not from the future.  Let's not call it
"from __present__", for now how about "from __behavior__".

In this specific case, we call it "from __behavior__ import
str_annotations".  It behaves much like Python 3.9 does when you
say "from __future__ import annotations", except: it stores the
dictionary with stringized values in a new member on the
function/class/module called "__str_annotations__".

If an object "o" has "__str_annotations__", set, you can access it
and see the stringized values.

If you access "o.__annotations__", and the object has
"o.__str_annotations__" set but "o.__annotations__" is not set, it
builds (and

[Python-Dev] PEP 563 and 649: The Great Compromise

2021-04-17 Thread Larry Hastings


The heart of the debate between PEPs 563 and 649 is the question: what 
should an annotation be?  Should it be a string or a Python value?  It 
seems people who are pro-PEP 563 want it to be a string, and people who 
are pro-PEP 649 want it to be a value.


Actually, let me amend that slightly.  Most people who are pro-PEP 563 
don't actually care that annotations are strings, per se.  What they 
want are specific runtime behaviors, and they get those behaviors when 
PEP 563 turns their annotations into strings.


I have an idea--a rough proposal--on how we can mix together aspects of 
PEP 563 and PEP 649.  I think it satisfies everyone's use cases for both 
PEPs.  The behavior it gets us:


 * annotations can be stored as strings
 * annotations stored as strings can be examined as strings
 * annotations can be examined as values


The idea:

We add a new type of compile-time flag, akin to a "from __future__" 
import, but not from the future.  Let's not call it "from __present__", 
for now how about "from __behavior__".


In this specific case, we call it "from __behavior__ import 
str_annotations".  It behaves much like Python 3.9 does when you say 
"from __future__ import annotations", except: it stores the dictionary 
with stringized values in a new member on the function/class/module 
called "__str_annotations__".


If an object "o" has "__str_annotations__", set, you can access it and 
see the stringized values.


If you access "o.__annotations__", and the object has 
"o.__str_annotations__" set but "o.__annotations__" is not set, it 
builds (and caches) a new dict by iterating over o.__str_annotations__, 
calling eval() on each value in "o.__str_annotations__".  It gets the 
globals() dict the same way that PEP 649 does (including, if you compile 
a class with str_annotations, it sets __globals__ on the class).  It 
does /not/ unset "o.__str_annotations__" unless someone explicitly sets 
"o.__annotations__".  This is so you can write your code assuming that 
"o.__str_annotations__" is set, and it doesn't explode if somebody 
somewhere ever looks at "o.__annotations__".  (This could lead to them 
getting out of sync, if someone modified "o.__annotations__".  But I 
suspect practicality beats purity here.)
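
Here is a toy model of that lazy conversion, written as ordinary Python 
rather than the interpreter-level code the proposal implies (the class 
and attribute names are, of course, just for illustration):

    class StrAnnotated:
        """Stand-in for an object compiled with str_annotations."""

        def __init__(self, str_annotations, globals_dict):
            self.__str_annotations__ = str_annotations
            self._globals = globals_dict
            self._cached = None

        @property
        def __annotations__(self):
            # Built (and cached) on first access, by eval()ing each string.
            if self._cached is None:
                self._cached = {
                    key: eval(value, self._globals)
                    for key, value in self.__str_annotations__.items()
                }
            return self._cached

    o = StrAnnotated({"x": "int", "y": "str"}, globals())
    print(o.__str_annotations__)   # {'x': 'int', 'y': 'str'} -- still strings
    print(o.__annotations__)       # {'x': <class 'int'>, 'y': <class 'str'>}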


This means:

 * People who only want stringized annotations can turn it on, and only
   ever examine "o.__str_annotations__".  They get the benefits of PEP
   563: annotations don't have to be valid Python values at runtime,
   just parseable.  They can continue doing the "if TYPE_CHECKING:"
   import thing.
 * Library code which wants to examine values can examine
   "o.__annotations__".  We might consider amending library functions
   that look at annotations to add a keyword-only parameter,
   "str_annotations=False", and if it's true it uses
   o.__str_annotations__ instead etc etc etc.


Also, yes, of course we can keep the optimization where stringized 
annotations are stored as a tuple containing an even number of strings.  
Similarly to PEP 649's automatic binding of an unbound code object, if 
you set "o.__str_annotations__" to a tuple containing an even number of 
strings, and you then access "o.__str_annotations__", you get back a dict.



TBD: how this interacts with PEP 649.  I don't know if it means we only 
do this, or if it would be a good idea to do both this and 649.  I just 
haven't thought about it.  (It would be a runtime error to set both 
"o.__str_annotations__" and "o.__co_annotations__", though.)



Well, whaddya think?  Any good?


I considered calling this "PEP 1212", which is 563 + 649,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/WUZGTGE43T7XV3EUGT6AN2N52OD3U7AE/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 563 in light of PEP 649

2021-04-17 Thread Larry Hastings



Obviously that's a bug.  Can you send me this test case?  Anything 
works--Github, private email, whatever is most convenient for you.  
Thank you!



//arry/

On 4/16/21 11:22 PM, Inada Naoki wrote:

## memory error on co_annotations

I modifled py_compile to add `from __future__ import co_annotations`
automatically.

```
$ ../co_annotations/python -m compileall mypy
Listing 'mypy'...
Compiling 'mypy/checker.py'...
free(): corrupted unsorted chunks
Aborted

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x77c73859 in __GI_abort () at abort.c:79
#2  0x77cde3ee in __libc_message
(action=action@entry=do_abort, fmt=fmt@entry=0x77e08285 "%s\n") at
../sysdeps/posix/libc_fatal.c:155
#3  0x77ce647c in malloc_printerr
(str=str@entry=0x77e0a718 "free(): corrupted unsorted chunks") at
malloc.c:5347
#4  0x77ce81c2 in _int_free (av=0x77e39b80 ,
p=0x55d1db30, have_lock=) at malloc.c:4356
#5  0x55603906 in PyMem_RawFree (ptr=) at
Objects/obmalloc.c:1922
#6  _PyObject_Free (ctx=, p=) at
Objects/obmalloc.c:1922
#7  _PyObject_Free (ctx=, p=) at
Objects/obmalloc.c:1913
#8  0x5567caa9 in compiler_unit_free (u=0x55ef0fd0) at
Python/compile.c:583
#9  0x5568aea5 in compiler_exit_scope (c=0x7fffc3d0) at
Python/compile.c:760
#10 compiler_function (c=0x7fffc3d0, s=,
is_async=0) at Python/compile.c:2529
#11 0x5568837d in compiler_visit_stmt (s=,
c=0x7fffc3d0) at Python/compile.c:3665
#12 compiler_body (c=c@entry=0x7fffc3d0, stmts=0x56222450) at
Python/compile.c:1977
#13 0x55688e51 in compiler_class (c=c@entry=0x7fffc3d0,
s=s@entry=0x56222a60) at Python/compile.c:2623
#14 0x55687ce3 in compiler_visit_stmt (s=,
c=0x7fffc3d0) at Python/compile.c:3667
#15 compiler_body (c=c@entry=0x7fffc3d0, stmts=0x563014c0) at
Python/compile.c:1977
#16 0x5568db00 in compiler_mod (filename=0x772e6770,
mod=0x563017b0, c=0x7fffc3d0) at Python/compile.c:2001
```



___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/CJ3DQG3NUZ6P73BTQ27NALCOGUMYSHS7/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 563 in light of PEP 649

2021-04-16 Thread Larry Hastings


On 4/16/21 5:00 PM, Guido van Rossum wrote:
(3) Ditto run with Larry's branch (PEP 649, assuming it's on by 
default there -- otherwise, modify the source by inserting the needed 
future import at the top)



The co_annotations stuff in my branch is gated with "from __future__ 
import co_annotations".  Without that import my branch has stock semantics.


Also, in case somebody does do this testing: don't use my branch for 
"from __future__ import annotations" testing.  There are neat new 
optimizations for stringized annotations but my branch is too 
out-of-date to have them.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/B3NWB54AADBCZROO4MIBDAJQP6Q6DXAW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 563 in light of PEP 649

2021-04-16 Thread Larry Hastings



Please don't confuse Inada Naoki's benchmark results with the effect PEP 
649 would have on a real-world codebase.  His artificial benchmark 
constructs a thousand empty functions that take three parameters with 
randomly-chosen annotations--the results provide some insights but are 
not directly applicable to reality.


PEP 649's effects on code size / memory / import time are contingent on 
the number of annotations and the number of objects annotated, not the 
overall code size of the module.  Expressing it that way, and suggesting 
that Python users would see the same results with real-world code, is 
highly misleading.


I too would be interested to know the effects PEP 649 had on a 
real-world codebase currently using PEP 563, but AFAIK nobody has 
reported such results.



//arry/

On 4/16/21 11:05 AM, Jukka Lehtosalo wrote:
On Fri, Apr 16, 2021 at 5:28 PM Łukasz Langa <luk...@langa.pl> wrote:


[snip] I say "compromise" because as Inada Naoki measured, there's
still a non-zero performance cost of PEP 649 versus PEP 563:

- code size: +63%
- memory: +62%
- import time: +60%


Will this hurt some current users of typing? Yes, I can name you
multiple past employers of mine where this will be the case. Is it
worth it for Pydantic? I tend to think that yes, it is, since it
is a significant community, and the operations on type annotations
it performs are in the sensible set for which
`typing.get_type_hints()` was proposed.


Just to give some more context: in my experience, both import time and 
memory use tend to be real issues in large Python codebases (code size 
less so), and I think that the relative efficiency of PEP 563 is an 
important feature. If PEP 649 can't be made more efficient, this could 
be a major regression for some users. Python server applications need 
to run multiple processes because of the GIL, and since code objects 
generally aren't shared between processes (GC and reference counting 
makes it tricky, I understand), code size increases tend to be 
amplified on large servers. Even having a lot of RAM doesn't 
necessarily help, since a lot of RAM typically implies many CPU cores, 
and thus many processes are needed as well.


I can see how both PEP 563 and PEP 649 bring significant benefits, but 
typically for different user populations. I wonder if there's a way of 
combining the benefits of both approaches. I don't like the idea of 
having toggles for different performance tradeoffs indefinitely, but I 
can see how this might be a necessary compromise if we don't want to 
make things worse for any user groups.


Jukka

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PBJ6MBQIE3DVQUUAO764PIQ3TWGLBS3X/
Code of Conduct: http://python.org/psf/codeofconduct/



___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/4OHBEX4ARPMB57MS7ICTZNS44KEORJRI/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-15 Thread Larry Hastings


On 4/15/21 9:24 PM, Inada Naoki wrote:

Unlike simple function case, PEP 649 creates function object instead
of code object for __co_annotation__ of methods.
It cause this overhead.  Can we avoid creating functions for each annotation?



As the implementation of PEP 649 currently stands, there are two reasons 
why the compiler might pre-bind the __co_annotations__ code object to a 
function, instead of simply storing the code object:


 * If the annotations refer to a closure ("freevars" is nonzero), or
 * If the annotations /possibly/ refer to a class variable (the
   annotations code object contains either LOAD_NAME or LOAD_CLASSDEREF).

If the annotations refer to a closure, then the code object also needs 
to be bound with the "closure" tuple.  If the annotations possibly refer 
to a class variable, then the code object also needs to be bound with 
the current "f_locals" dict.  (Both could be true.)


Unfortunately, when generating annotations on a method, references to 
builtins (e.g. "int", "str") seem to generate LOAD_NAME instructions 
instead of LOAD_GLOBAL.  Which means pre-binding the function happens 
pretty often for methods.  I believe in your benchmark it will happen 
every time.  There's a lot of code, and a lot of runtime data 
structures, inside compile.c and symtable.c behind the compiler's 
decision about whether something is NAME vs GLOBAL vs DEREF etc, and I 
wasn't comfortable with seeing if I could fix it.


Anyway I assume it wasn't "fixable".  The compiler would presumably 
already prefer to generate LOAD_GLOBAL vs LOAD_NAME, because LOAD_GLOBAL 
would be cheaper every time for a global or builtin.  The fact that it 
already doesn't do so implies that it can't.



At the moment I have only one idea for a possible optimization, as 
follows.  Instead of binding the function object immediately, it /might/ 
be cheaper to write the needed values into a tuple, then only actually 
bind the function object on demand (like normal).


I haven't tried this because I assumed the difference at runtime would 
be negligible.  On one hand, you're creating a function object; on the 
other you're creating a tuple.  Either way you're creating an object at 
runtime, and I assumed that bound functions weren't /that/ much more 
expensive than tuples.  Of course I could be very wrong about that.


The other thing is, it would be a lot of work to even try the 
experiment.  Also, it's an optimization, and I was more concerned with 
correctness... and getting it done and getting this discussion underway.



What follows are my specific thoughts about how to implement this 
optimization.


In this scenario, the logic in the compiler that generates the code 
object would change to something like this:


   has_closure = bool(co.co_freevars)
   has_load_name = co.co_code contains a LOAD_NAME or
   LOAD_CLASSDEREF bytecode
   if not (has_closure or has_load_name):
       co_ann = co
   elif has_closure and (not has_load_name):
       co_ann = (co, freevars)
   elif (not has_closure) and has_load_name:
       co_ann = (co, f_locals)
   else:
       co_ann = (co, freevars, f_locals)
   setattr(o, "__co_annotations__", co_ann)

(The compiler would have to generate instructions creating the tuple and 
setting its members, then storing the resulting object on the object 
with the annotations.)


Sadly, we can't pre-create this "co_ann" tuple as a constant and store 
it in the .pyc file, because the whole point of the tuple is to contain 
one or more objects only created at runtime.



The code implementing __co_annotations__ in the three objects (function, 
class, module) would examine the object it got.  If it was a code 
object, it would bind it; if it was a tuple, it would unpack the tuple 
and use the values based on their type:


   # co_ann = internal storage for __co_annotations__
   if isinstance(co_ann, FunctionType) or (co_ann is None):
       return co_ann
   co = freevars = locals = None
   if isinstance(co_ann, CodeType):
       co = co_ann
   else:
       assert isinstance(co_ann, tuple)
       assert 1 <= len(co_ann) <= 3
       for o in co_ann:
           if isinstance(o, CodeType):
               assert not co
               co = o
           elif isinstance(o, tuple):
               assert not freevars
               freevars = o
           elif isinstance(o, dict):
               assert not locals
               locals = o
           else:
               raise ValueError(
                   f"illegal value in co_annotations tuple: {o!r}")
   co_ann = make_function(co, freevars=freevars, locals=locals)
   return co_ann


If you experiment with this approach, I'd be glad to answer questions 
about it, either here or on Github, etc.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived 

[Python-Dev] Re: In support of PEP 649

2021-04-15 Thread Larry Hastings

On 4/15/21 6:09 PM, Barry Warsaw wrote:

On Apr 15, 2021, at 17:47, Oscar Benjamin  wrote:

Would it be problematic to postpone making __future__.annotations the default?

This is a good question, and I think the SC would really like to know if 
anybody has objections to  postponing this to 3.11.  We haven’t discussed it 
yet (this whole topic is on our agenda for next Monday), but it might be the 
best thing to do given where we are in the 3.10 release cycle.  It would give 
everyone a chance to breathe and come up with the right long term solution.



I don't have any objections, and I certainly see the wisdom in such a 
decision.



Best wishes,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/K5B2SPKPJHNROKQORQIFNKUESNUUSPLT/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-15 Thread Larry Hastings

On 4/15/21 2:02 PM, Sebastián Ramírez wrote:

## Questions

I'm not very familiar with the internals of Python, and I'm not sure how the new syntax for 
`Union`s using the vertical bar character ("pipe", "|") work.

But would PEP 649 still support things like this?:

def run(arg: int | str = 0): pass

And would it be inspectable at runtime?



As far as I can tell, absolutely PEP 649 would support this feature.  
Under the covers, all PEP 649 is really doing is changing the 
destination that annotation expressions get compiled to.  So anything 
that works in an annotation with "stock" semantics would work fine with 
PEP 649 semantics too, barring the exceptions specifically listed in the 
PEP (e.g. annotations defined in conditionals, walrus operator, etc).
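

To make that concrete: on Python 3.10+ with stock semantics, PEP 604 
unions are perfectly inspectable at runtime, and as far as I can tell 
PEP 649 would produce exactly the same values--it only changes *when* the 
expression gets evaluated.  An illustrative example:

   def run(arg: int | str = 0):
       pass

   print(run.__annotations__['arg'])        # int | str
   print(type(run.__annotations__['arg']))  # <class 'types.UnionType'>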



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RD4X7EGVTY6PIT76TWQCSWC6PLHMIL6Z/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: In support of PEP 649

2021-04-15 Thread Larry Hastings

On 4/15/21 2:49 PM, Paul Ganssle wrote:
I haven't followed this closely enough — if PEP 649 were accepted 
today, would it even be ready for use before the 3.10 code freeze 
(which is in a few weeks)?



Perhaps I'm a poor judge of the quality of my own code.   But I'd say I 
have a working prototype and it seems... fine.  It didn't require a lot 
of changes, it just needed the right changes in the right spots.  
Getting my interactions with symtable and compile right was the hardest 
part, and I think those are all sorted out now.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6VFTVY7CAXMYHC4GRA4NV2LTUFW6ESAW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-14 Thread Larry Hastings



Thanks for doing this!  I don't think PEP 649 is going to be accepted or 
rejected based on either performance or memory usage, but it's nice to 
see you confirmed that its performance and memory impact is acceptable.



If I run "ann_test.py 1", the annotations are already turned into 
strings.  Why do you do it that way?  It makes stock semantics look 
better, because manually stringized annotations are much faster than 
evaluating real expressions.


It seems to me that the test would be more fair if test 1 used real 
annotations.  So I added this to "lines":


   from types import SimpleNamespace
   foo = SimpleNamespace()
   foo.bar = SimpleNamespace()
   foo.bar.baz = float

I also changed quote(t) so it always returned t unchanged.  When I ran 
it that way, stock semantics "exec" time got larger.



Cheers,


//arry/

On 4/14/21 6:44 PM, Inada Naoki wrote:

I added memory usage data by tracemalloc.

```
# Python 3.9 w/ old semantics
$ python3 ann_test.py 1
code size: 121011
memory: (385200, 385200)
unmarshal: avg: 0.3341682574478909 +/- 3.700437551781949e-05
exec: avg: 0.4067857594229281 +/- 0.0006858555167675445

# Python 3.9 w/ PEP 563 semantics
$ python3 ann_test.py 2
code size: 121070
memory: (398675, 398675)
unmarshal: avg: 0.3352349083404988 +/- 7.749102039824168e-05
exec: avg: 0.24610224328935146 +/- 0.0008628035427956459

# master + optimization w/ PEP 563 semantics
$ ./python ~/ann_test.py 2
code size: 110488
memory: (193572, 193572)
unmarshal: avg: 0.31316645480692384 +/- 0.00011766086337841035
exec: avg: 0.11456295938696712 +/- 0.0017481202239372398

# co_annotations + optimization w/ PEP 649 semantics
$ ./python ~/ann_test.py 3
code size: 204963
memory: (208273, 208273)
unmarshal: avg: 0.597023528907448 +/- 0.00016614519056599577
exec: avg: 0.09546191191766411 +/- 0.00018099485135812695
```

Summary:

* Both of PEP 563 and PEP 649 has low memory consumption than Python 3.9.
* Importing time (unmarshal+exec) is about 0.7sec on old semantics and
PEP 649, 0.43sec on PEP 563.

On Thu, Apr 15, 2021 at 10:31 AM Inada Naoki  wrote:

I created simple benchmark:
https://gist.github.com/methane/abb509e5f781cc4a103cc450e1e7925d

This benchmark creates 1000 annotated functions and measure time to
load and exec.
And here is the result. All interpreters are built without --pydebug,
--enable-optimization, and --with-lto.

```
# Python 3.9 w/ stock semantics

$ python3 ~/ann_test.py 1
code size: 121011
unmarshal: avg: 0.33605549649801103 +/- 0.007382938279889738
exec: avg: 0.395090194279328 +/- 0.001004608380122509

# Python 3.9 w/ PEP 563 semantics

$ python3 ~/ann_test.py 2
code size: 121070
unmarshal: avg: 0.3407619891455397 +/- 0.0011833618746421965
exec: avg: 0.24590165729168803 +/- 0.0003123404336687428

# master branch w/ PEP 563 semantics

$ ./python ~/ann_test.py 2
code size: 149086
unmarshal: avg: 0.45410854648798704 +/- 0.00107521956753799
exec: avg: 0.11281821667216718 +/- 0.00011939747308270317

# master branch + optimization (*) w/ PEP 563 semantics
$ ./python ~/ann_test.py 2
code size: 110488
unmarshal: avg: 0.3184352931333706 +/- 0.0015278719180908732
exec: avg: 0.11042822999879717 +/- 0.00018108884723599264

# co_annotations reference implementation w/ PEP 649 semantics

$ ./python ~/ann_test.py 3
code size: 229679
unmarshal: avg: 0.6402394526172429 +/- 0.0006400500128250688
exec: avg: 0.09774857209995388 +/- 9.275466265195788e-05

# co_annotations reference implementation + optimization (*) w/ PEP 649 semantics

$ ./python ~/ann_test.py 3
code size: 204963
unmarshal: avg: 0.5824743471574039 +/- 0.007219086642131638
exec: avg: 0.09641968684736639 +/- 0.0001416784753249878
```

(*) I found constant folding creates new tuple every time even though
same tuple is in constant table.
See https://github.com/python/cpython/pull/25419
For co_annotations, I cherry-pick
https://github.com/python/cpython/pull/23056  too.


--
Inada Naoki  





___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PV3TWXK3WFITD2AGFQKQZEK3S26FQHRC/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-14 Thread Larry Hastings

On 4/14/21 1:42 PM, Baptiste Carvello wrote:
Are there specific annoyances associated with quoting always, apart 
from the 2 more characters?



Yes.  Since the quoted strings aren't parsed by Python, syntax errors in 
these strings go undetected until somebody does parse them (e.g. your 
static type analyzer).  Having the Python compiler de-compile them back 
into strings means they got successfully parsed.  Though this doesn't 
rule out other errors, e.g. NameError.
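

A tiny illustration of the point: the module below imports without 
complaint, and the typo in the hand-quoted annotation only surfaces when 
something finally parses the string:

   import typing

   def f(x: "List[int") -> None:    # note the missing ']' -- Python never parses this string
       ...

   try:
       typing.get_type_hints(f)
   except SyntaxError as e:
       print("only now do we find out:", e)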


I thought this was discussed in PEP 563, but now I can't find it, so 
unfortunately I can't steer you towards any more info on the subject.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/UKO7SMXHUK5KLTWVTMU2RL6LAZQWA67U/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-14 Thread Larry Hastings


My plan was to post it here and see what the response was first. Back in 
January, when I posted the first draft, I got some very useful feedback 
that resulted in some dramatic changes.  This time around, so far, 
nobody has suggested even minor changes.  Folks have just expressed 
their opinions about it (which is fine).


Still left to do: ping the project leads of some other static type 
analysis projects and see if they have any feedback to contribute.  Once 
the dust completely settles around the conversation here, I expect to 
formally submit the PEP, hopefully later this week.


Cheers,


//arry/

On 4/14/21 12:22 PM, Brett Cannon wrote:



On Wed, Apr 14, 2021 at 12:08 PM Guido van Rossum wrote:


Let's just wait for the SC to join the discussion. I'm sure they
will, eventually.


FYI the PEP has not been sent to us via 
https://github.com/python/steering-council/issues as ready for 
pronouncement, so we have not started officially discussing this PEP yet.


-Brett


On Wed, Apr 14, 2021 at 11:12 AM Larry Hastings wrote:

On 4/14/21 10:44 AM, Guido van Rossum wrote:

besides the cost of closing the door to relaxed annotation
syntax, there's the engineering work of undoing the work that
was done to make `from __future__ import annotations` the
default (doing this was a significant effort spread over many
commits, and undoing will be just as hard).



I'm not sure either of those statements is true.

Accepting PEP 649 as written would deprecate stringized
annotations, it's true.  But the SC can make any decision it
wants here, including only accepting the new semantics of 649
without deprecating stringized annotations.  They could remain
in the language for another release (or two? or three?) while
we "kick the can down the road". This is not without its costs
too but it might be the best approach for now.

As for undoing the effort to make stringized annotations the
default, git should do most of the heavy lifting here. 
There's a technique where you check out the revision that made
the change, generate a reverse patch, apply it, and check that
in.  This creates a new head which you then merge. That's what
I did when I created my co_annotations branch, and at the time
it was literally the work of ten minutes.  I gather the list
of changes is more substantial now, so this would have to be
done multiple times, and it may be more involved.  Still, if
PEP 649 is accepted, I would happily volunteer to undertake
this part of the workload.


Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at
https://mail.python.org/archives/list/python-dev@python.org/message/LRVFVLH4AHF7SX5MOEUBPPII7UNINAMJ/
Code of Conduct: http://python.org/psf/codeofconduct/



-- 
--Guido van Rossum (python.org/~guido)

/Pronouns: he/him //(why is my pronoun here?)/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at
https://mail.python.org/archives/list/python-dev@python.org/message/V5ASSMVVAP4RZX3DOGJIDS52OEJ6LP7C/
Code of Conduct: http://python.org/psf/codeofconduct/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MZNDSJ2Z5M6VXHBHWPOD2HYUQI72KGX2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-14 Thread Larry Hastings

On 4/14/21 10:44 AM, Guido van Rossum wrote:
besides the cost of closing the door to relaxed annotation syntax, 
there's the engineering work of undoing the work that was done to make 
`from __future__ import annotations` the default (doing this was a 
significant effort spread over many commits, and undoing will be just 
as hard).



I'm not sure either of those statements is true.

Accepting PEP 649 as written would deprecate stringized annotations, 
it's true.  But the SC can make any decision it wants here, including 
only accepting the new semantics of 649 without deprecating stringized 
annotations.  They could remain in the language for another release (or 
two? or three?) while we "kick the can down the road".  This is not 
without its costs too but it might be the best approach for now.


As for undoing the effort to make stringized annotations the default, 
git should do most of the heavy lifting here.  There's a technique where 
you check out the revision that made the change, generate a reverse 
patch, apply it, and check that in.  This creates a new head which you 
then merge.  That's what I did when I created my co_annotations branch, 
and at the time it was literally the work of ten minutes.  I gather the 
list of changes is more substantial now, so this would have to be done 
multiple times, and it may be more involved.  Still, if PEP 649 is 
accepted, I would happily volunteer to undertake this part of the workload.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/LRVFVLH4AHF7SX5MOEUBPPII7UNINAMJ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-14 Thread Larry Hastings


On 4/12/21 7:24 PM, Guido van Rossum wrote:
To be honest, the most pressing issue with annotations is the clumsy 
way that type variables have to be introduced. The current convention, 
`T = TypeVar('T')`, is both verbose (why do I have to repeat the 
name?) and widely misunderstood (many help request for mypy and 
pyright follow from users making a mistaken association between two 
type variables that are unrelated but share the same TypeVar definition).



This repeat-the-name behavior has been in Python for a long time, e.g.

   Point = namedtuple('Point', ['x', 'y'])

namedtuple() shipped with Python 2.6 in 2008.  So if that's the most 
pressing issue with annotations, annotations must be going quite well, 
because we've known about this for at least 13 years without attempting 
to solve it.


I've always assumed that this repetition was worth the minor 
inconvenience.  You only have to retype the name once, and the resulting 
code is clear and readable, with predictable behavior. A small price to 
pay to preserve Python's famous readability.



For what it's worth--and forgive me for straying slightly into 
python-ideas territory--/if/ we wanted to eliminate the need to repeat 
the name, I'd prefer a general-purpose solution rather than something 
tailored specifically for type hints.  In a recent private email 
conversation on a different topic, I proposed this syntax:


   bind id expression

This statement would be equivalent to

   id = expression('id')


Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/L5EBHWHUOQZDW33FH5VI2D6KFN75LLVW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-13 Thread Larry Hastings


On 4/13/21 1:52 PM, Guido van Rossum wrote:
On Tue, Apr 13, 2021 at 12:32 PM Larry Hastings wrote:



On 4/12/21 7:24 PM, Guido van Rossum wrote:

I've been thinking about this a bit, and I think that the way
forward is for Python to ignore the text of annotations ("relaxed
annotation syntax"), not to try and make it available as an
expression.

To be honest, the most pressing issue with annotations is the
clumsy way that type variables have to be introduced. The current
convention, `T = TypeVar('T')`, is both verbose (why do I have to
repeat the name?) and widely misunderstood (many help request for
mypy and pyright follow from users making a mistaken association
between two type variables that are unrelated but share the same
TypeVar definition). And relaxed annotation syntax alone doesn't
solve this.

Nevertheless I think that it's time to accept that annotations
are for types -- the intention of PEP 3107 was to experiment with
different syntax and semantics for types, and that experiment has
resulted in the successful adoption of a specific syntax for
types that is wildly successful.



I don't follow your reasoning.  I'm glad that type hints have
found success, but I don't see why that implies "and therefore we
should restrict the use of annotations solely for type hints". 
Annotations are a useful, general-purpose feature of Python, with
legitimate uses besides type hints.  Why would it make Python
better to restrict their use now?


Because typing is, to many folks, a Really Important Concept, and it's 
confusing to use the same syntax ("x: blah blah") for different 
purposes, in a way that makes it hard to tell whether a particular 
"blah blah" is meant as a type or as something else -- because you 
have to know what's introspecting the annotations before you can tell. 
And that introspection could be signalled by a magical decorator, but 
it could also be implicit: maybe you have a driver that calls a 
function based on a CLI entry point name, and introspects that 
function even if it's not decorated.



I'm not sure I understand your point.  Are you saying that we need to 
take away the general-purpose functionality of annotations, that's been 
in the language since 3.0, and restrict annotations to just type 
hints... because otherwise an annotation might not be used for a type 
hint, and then the programmer would have to figure out what it means?  
We need to take away the functionality from all other use cases in order 
to lend /clarity/ to one use case?


Also, if you're stating that programmers get confused reading source 
code because annotations get used for different things at different 
places--surely that confirms that annotations are /useful/ for more than 
just type hints, in real-world code, today.  I genuinely have no sense 
of how important static type analysis is in Python--personally I have no 
need for it--but I find it hard to believe that type hints are so 
overwhelmingly important that they should become the sole use case for 
annotations, and we need to take away this long-standing functionality, 
that you suggest is being successfully used side-by-side with type hints 
today, merely to make type hints clearer.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6QIYSBQPXA3IU7MJL6XQAU6U3RPWSNA7/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-13 Thread Larry Hastings


On 4/13/21 3:28 PM, Terry Reedy wrote:

On 4/13/2021 4:21 AM, Baptiste Carvello wrote:

Le 12/04/2021 à 03:55, Larry Hastings a écrit :



* in section "Interactive REPL Shell":


For the sake of simplicity, in this case we forego delayed evaluation.


The intention of the code + codeop modules is that people should be 
able to write interactive consoles that simulate the standard REPL.  
For example:


Python 3.10.0a7+ (heads/master-dirty:a9cf69df2e, Apr 12 2021, 
15:36:39) [MSC v.1900 64 bit (AMD64)] on win32

Type "help", "copyright", "credits" or "license" for more information.
>>> import code
>>> code.interact()
Python 3.10.0a7+ (heads/master-dirty:a9cf69df2e, Apr 12 2021, 
15:36:39) [MSC v.1900 64 bit (AMD64)] on win32

Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> # Call has not returned.  Prompt is from code.InteractiveConsole.
>>> def f(x:int): -> float

>>> f.__annotations__ # should match REPL result

>>> ^Z

now exiting InteractiveConsole...
>>> Now back to repl

If the REPL compiles with mode='single' and the spec is changed to "when 
mode is 'single'", then the above should work.  Larry, please test with 
your proposed implementation.



A couple things!

1. I apologize if the PEP wasn't clear, but this section was talking
   about the problem of /module/ annotations in the implicit __main__
   module when using the interactive REPL. Annotations on other objects
   (classes, functions, etc) defined in the interactive REPL work as
   expected.
2. The above example has a minor bug: when defining a return annotation
   on a function, the colon ending the function declaration goes
   /after/ the return annotation.  It should have been "def f(x:int) ->
   float:".
3. The above example works fine when run in my branch.
4. You need to "from __future__ import co_annotations" in order to
   activate delayed evaluation of annotations using code objects in my
   branch.  I added that (inside the code.interact() shell!) and it
   still works fine.

So I'm not sure what problem you're proposing to solve with this "mode 
is single" stuff.


 * If you thought there was a problem with defining annotations on
   functions and classes defined in the REPL, good news!, it was never
   a problem.
 * If you're solving the problem of defining annotations on the
   interactive module /itself,/ I don't understand what your proposed
   solution is or how it would work.  The problem is, how do you create
   a code object that defines all the annotations on a module, when the
   module never finishes being defined because it's the interactive shell?


Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IL2YCTRQ5BTKXSBRIC4CBHNLUTQONJTG/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-13 Thread Larry Hastings


On 4/12/21 7:24 PM, Guido van Rossum wrote:
I've been thinking about this a bit, and I think that the way forward 
is for Python to ignore the text of annotations ("relaxed annotation 
syntax"), not to try and make it available as an expression.


To be honest, the most pressing issue with annotations is the clumsy 
way that type variables have to be introduced. The current convention, 
`T = TypeVar('T')`, is both verbose (why do I have to repeat the 
name?) and widely misunderstood (many help request for mypy and 
pyright follow from users making a mistaken association between two 
type variables that are unrelated but share the same TypeVar 
definition). And relaxed annotation syntax alone doesn't solve this.


Nevertheless I think that it's time to accept that annotations are for 
types -- the intention of PEP 3107 was to experiment with different 
syntax and semantics for types, and that experiment has resulted in 
the successful adoption of a specific syntax for types that is wildly 
successful.



I don't follow your reasoning.  I'm glad that type hints have found 
success, but I don't see why that implies "and therefore we should 
restrict the use of annotations solely for type hints". Annotations are 
a useful, general-purpose feature of Python, with legitimate uses 
besides type hints.  Why would it make Python better to restrict their 
use now?



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/TC2ZHBWDLB2NB7DV25T3EC2TIXHNBJDM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-12 Thread Larry Hastings


On 4/12/21 4:50 PM, Inada Naoki wrote:

PEP 563 solves all problems relating to types not accessible in runtime.
There are many reasons users can not get types used in annotations at runtime:

* To avoid circular import
* Types defined only in pyi files
* Optional dependency that is slow to import or hard to install


It only "solves" these problems if you leave the annotation as a 
string.  If PEP 563 is active, but you then use typing.get_type_hints() 
to examine the actual Python value of the annotation, all of these 
examples will fail with a NameError.  So, in this case, "solves the 
problem" is a positive way of saying "hides a runtime error".


I don't know what the use cases are for examining type hints at runtime, 
so I can't speak as to how convenient or inconvenient it is to deal with 
them strictly as strings.  But it seems to me that examining annotations 
as their actual Python values would be preferable.
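

For concreteness, here's roughly the failure mode I mean, using the same 
"fakemod" placeholder as the example quoted below.  Under PEP 563 the 
module imports fine and the annotation is just a string; the error 
resurfaces the moment you ask for the real value:

   from __future__ import annotations   # PEP 563 semantics
   import typing

   if typing.TYPE_CHECKING:
       from fakemod import FakeType     # never imported at runtime

   def f(x: FakeType = 0): ...

   print(f.__annotations__)             # {'x': 'FakeType'} -- just a string
   typing.get_type_hints(f)             # NameError: name 'FakeType' is not defined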




This is the most clear point where PEP 563 is better for some users.
See this example:

```
from dataclasses import dataclass

if 0:
 from fakemod import FakeType

@dataclass
class C:
 a : FakeType = 0
```

This works on PEP 563 semantics (Python 3.10a7). User can get
stringified annotation.

With stock semantics, it cause NameError when importing so author can
notice they need to quote "FakeType".

With PEP 649 semantics, author may not notice this annotation cause
error. User can not get any type hints at runtime.


Again, by "works on PEP 563 semantics", you mean "doesn't raise an 
error".  But the code /has/ an error.  It's just that it has been hidden 
by PEP 563 semantics.


I don't agree that changing Python to automatically hide errors is an 
improvement.  As the Zen says: "Errors should never pass silently."


This is really the heart of the debate over PEP 649 vs PEP 563. If you 
examine an annotation, and it references an undefined symbol, should 
that throw an error?  There is definitely a contingent of people who say 
"no, that's inconvenient for us".  I think it should raise an error.  
Again from the Zen: "Special cases aren't special enough to break the 
rules."  Annotations are expressions, and if evaluating an expression 
fails because of an undefined name, it should raise a NameError.




### Type alias

Another PEP 563 benefit is user can see simple type alias.
Consider this example.

```
from typing import *

AliasType = Union[List[Dict[Tuple[int, str], Set[int]]], Tuple[str, List[str]]]

def f() -> AliasType:
 pass

help(f)
```

Currently, help() calls `typing.get_type_hints()`. So it shows:

```
f() -> Union[List[Dict[Tuple[int, str], Set[int]]], Tuple[str, List[str]]]
```

But with PEP 563 semantics, we can stop evaluating annotations and
user can see more readable alias type.

```
f() -> AliasType
```


It's a matter of personal opinion whether "AliasType" or the full 
definition is better here.  And it could lead to ambiguity, if the 
programmer assigns to "AliasType" more than once.


If the programmer has a strong opinion that "AliasType" is better, they 
could use an annotation of 'AliasType'--in quotes. Although I haven't 
seen the topic discussed specifically, I assume that the static typing 
analysis tools will continue to support manually stringized annotations 
even if PEP 649 is accepted.
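

Sketching what I mean with a simplified alias: a manually quoted 
annotation stays exactly as written at runtime, under stock semantics 
and--as far as I can tell--under PEP 649 as well:

   from typing import List, Tuple, Union

   AliasType = Union[List[int], Tuple[str, List[str]]]

   def f() -> "AliasType":     # quoted by hand
       ...

   print(f.__annotations__)    # {'return': 'AliasType'} -- the readable name survives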


Either way, this hypothetical feature might be "nice-to-have", but I 
don't think it's very important.  I would certainly forego this behavior 
in favor of accepting PEP 649.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/OOS3VTHDIL2MVBQQ5BG2XNCHSURSZO6X/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-12 Thread Larry Hastings

On 4/12/21 4:50 PM, Inada Naoki wrote:

As PEP 597 says, eval() is slow. But it can avoidable in many cases
with PEP 563 semantics.


PEP 597 is "Add optional EncodingWarning".  You said PEP 597 in one 
other place too.  Did you mean PEP 649 in both places?



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IPRE2ZGP7VXN6XBWATWLEWUUQN6NFDLJ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-12 Thread Larry Hastings



On 4/11/21 7:55 PM, Paul Bryan wrote:

PEP 563 also requires using ``eval()`` or ``typing.get_type_hints()``
to examine annotations. Code updated to work with PEP 563 that calls
``eval()`` directly would have to be updated simply to remove the
``eval()`` call. Code using ``typing.get_type_hints()`` would
continue to work unchanged, though future use of that function
would become optional in most cases.


I think it is worth noting somewhere that string annotations are still 
valid, and should still be evaluated if so.



That's not up to me, it's up to the static type checkers who created 
that idiom.  But I assume they'll continue to support stringized 
annotations, whether manually or automatically created.




Because this PEP makes semantic changes to how annotations are
evaluated, this PEP will be initially gated with a per-module
``from __future__ import co_annotations`` before it eventually
becomes the default behavior.


Is it safe to assume that a module that does not import 
co_annotations, but imports a module that does, will exhibit PEP 649 
behavior when the former accesses an annotation defined in the latter?


Yes.



* *Code that sets annotations on module or class attributes
from inside any kind of flow control statement.* It's
currently possible to set module and class attributes with
annotations inside an ``if`` or ``try`` statement, and it works
as one would expect. It's untenable to support this behavior
when this PEP is active.


Is the following an example of the above?

@dataclass
class Foo:
    if some_condition:
        x: int
    else:
        x: float

If so, would the following still be valid?

if some_condition:
    type_ = int
else:
    type_ = float

@dataclass
class Foo:
    x: type_


Your example was valid, and I think your workaround should be fine.  Do 
you have a use case for this, or is this question motivated purely by 
curiosity?




* *Code in module or class scope that references or modifies the
local* ``__annotations__`` *dict directly.* Currently, when
setting annotations on module or class attributes, the generated
code simply creates a local ``__annotations__`` dict, then sets
mappings in it as needed. It's also possible for user code
to directly modify this dict, though this doesn't seem like it's
an intentional feature. Although it would be possible to support
this after a fashion when this PEP was active, the semantics
would likely be surprising and wouldn't make anyone happy.


I recognize the point you make later about its impact on static type 
checkers. Setting that aside, I'm wondering about cases where 
annotations can be dynamically generated, such as 
dataclasses.make_dataclass(...). And, I could see reasons for 
overwriting values in __annotations__, especially in the case where it 
may be stored as a string and one wants to later affix its evaluated 
value. These are considerations specific to runtime (dynamic) type 
checking.
It's fine to modify the __annotations__ dict after the creation of the 
class or module.  It's code that modifies "__annotations__" from within 
the class or module that is disallowed here.  Similarly for dataclasses; 
once it creates a class object, it can explicitly set and / or modify 
the annotations dict on that class.
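

A short sketch of the distinction--under stock semantics both snippets 
run; the second is the pattern this PEP can't support:

   class Allowed:
       x: int

   # Fine under PEP 649: modify the dict *after* the class object exists.
   Allowed.__annotations__["y"] = str

   class NotSupported:
       x: int
       # Works under stock semantics, but it touches __annotations__ from
       # *inside* the class body, which PEP 649 can't support.
       __annotations__["y"] = str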



I wonder if it would make sense for each item in __annotations__ to be 
evaluated separately on first access /of each key/, rather than all 
__annotations__ on first access to the dict. Basically the dict would 
act as a LazyDict. It could also provide the benefit of lessening the 
expense of evaluating complex but otherwise unused annotations.


This would cause an immense proliferation of code objects (with some 
pre-bound to function objects).  Rather than one code object per 
annotation dict, it would create one code object per annotation key.  
Also, we don't have a "lazy dict" object built in to Python, so we'd 
have to create one.


I don't have any problems that this would solve, so I'm not super 
interested in it.  Personally I'd want to see a real compelling use case 
for this feature before I'd consider adding it to Python.  Of course, 
I'm not on the steering committee, so my opinion is only worth so much.



//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/WTPNW5YWODQGR66GMB2OJYYMUQDDSPHZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-11 Thread Larry Hastings


Attached is my second draft of PEP 649.  The PEP and the prototype have 
both seen a marked improvement since round 1 in January; PEP 649 now 
allows annotations to refer to any variable they could see under stock 
semantics:


 * Local variables in the current function scope or in enclosing
   function scopes become closures and use LOAD_DEFER.
 * Class variables in the current class scope are made available using
   a new mechanism, in which the class dict is attached to the bound
   annotation function, then loaded into f_locals when the annotation
   function is run.  Thus permitting LOAD_NAME opcodes to function
   normally.


I look forward to your comments,


//arry/

PEP: 649
Title: Deferred Evaluation Of Annotations Using Descriptors
Author: Larry Hastings 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 11-Jan-2021
Post-History: 11-Jan-2021, 11-Apr-2021


Abstract


As of Python 3.9, Python supports two different behaviors
for annotations:

* original or "stock" Python semantics, in which annotations
  are evaluated at the time they are bound, and
* PEP 563 semantics, currently enabled per-module by
  ``from __future__ import annotations``, in which annotations
  are converted back into strings and must be reparsed and
  executed by ``eval()`` to be used.

Original Python semantics created a circular references problem
for static typing analysis.  PEP 563 solved that problem--but
its novel semantics introduced new problems, including its
restriction that annotations can only reference names at
module-level scope.

This PEP proposes a third way that embodies the best of both
previous approaches.  It solves the same circular reference
problems solved by PEP 563, while otherwise preserving Python's
original annotation semantics, including allowing annotations
to refer to local and class variables.

In this new approach, the code to generate the annotations
dict is written to its own function which computes and returns
the annotations dict.  Then, ``__annotations__`` is a "data
descriptor" which calls this annotation function once and
retains the result.  This delays the evaluation of annotations
expressions until the annotations are examined, at which point
all circular references have likely been resolved.  And if
the annotations are never examined, the function is never
called and the annotations are never computed.

Annotations defined using this PEP's semantics have the same
visibility into the symbol table as annotations under "stock"
semantics--any name visible to an annotation in Python 3.9
is visible to an annotation under this PEP.  In addition,
annotations under this PEP can refer to names defined *after*
the annotation is defined, as long as the name is defined in
a scope visible to the annotation. Specifically, when this PEP
is active:

* An annotation can refer to a local variable defined in the
  current function scope.
* An annotation can refer to a local variable defined in an
  enclosing function scope.
* An annotation can refer to a class variable defined in the
  current class scope.
* An annotation can refer to a global variable.

And in all four of these cases, the variable referenced by
the annotation needn't be defined at the time the annotation
is defined--it can be defined afterwards.  The only restriction
is that the name or variable be defined before the annotation
is *evaluated.*

If accepted, these new semantics for annotations would initially
be gated behind ``from __future__ import co_annotations``.
However, these semantics would eventually be promoted to be
Python's default behavior.  Thus this PEP would *supersede*
PEP 563, and PEP 563's behavior would be deprecated and
eventually removed.

Overview


.. note:: The code presented in this section is simplified
   for clarity.  The intention is to communicate the high-level
   concepts involved without getting lost in the details.
   The actual details are often quite different.  See the
   Implementation_ section later in this PEP for a much more
   accurate description of how this PEP works.

Consider this example code::

    def foo(x: int = 3, y: MyType = None) -> float:
        ...
    class MyType:
        ...
    foo_y_type = foo.__annotations__['y']

As we see here, annotations are available at runtime through an
``__annotations__`` attribute on functions, classes, and modules.
When annotations are specified on one of these objects,
``__annotations__`` is a dictionary mapping the names of the
fields to the value specified as that field's annotation.

The default behavior in Python 3.9 is to evaluate the expressions
for the annotations, and build the annotations dict, at the time
the function, class, or module is bound.  At runtime the above
code actually works something like this::

    annotations = {'x': int, 'y': MyType, 'return': float}
    def foo(x = 3, y = "abc"):
        ...
    foo.__annotations__ = annotations
 

[Python-Dev] Re: Status of PEP 649 -- Deferred Evaluation Of Annotations Using Descriptors

2021-03-29 Thread Larry Hastings


On 3/27/21 4:23 PM, Jelle Zijlstra wrote:
Now, PEP 649 doesn't actually fix this at the moment, since it still 
resolves all annotations in the global scope, but that's easily fixed 
by removing the special case in 
https://github.com/larryhastings/co_annotations/blob/co_annotations/Python/compile.c#L3815 
.



Maybe not "easily fixed", because in my experience dealing with the 
compiler and symtable modules in CPython, nothing is easy. That special 
case was instrumental in getting the first revision working.


Nevertheless, you're right in that it shouldn't be necessary, and "round 
2" of 649 will remove it.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5JVILUDOKTJ6LRURARLQXUZHFOZQKWUB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 654 -- Exception Groups and except* : request for feedback for SC submission

2021-02-27 Thread Larry Hastings


On 2/27/21 2:37 AM, Paul Moore wrote:

On Fri, 26 Feb 2021 at 23:36, Jim J. Jewett  wrote:

Whenever I've used except Exception or stronger, it was a sanitary barrier 
around code that might well do unpredictable or even stupid things.  Adding a 
new kind of exception that I hadn't predicted -- including ExceptionGroup -- 
would certainly fit this description, and I want my driver loop to do what I 
told it.  (Probably log an unexpected exception, and continue with the next 
record.  I honestly don't even want a page, let alone a crash, because data 
from outside that barrier ... is often bad, and almost never in ways the 
on-call person can safely fix.  And if they don't have time to find it in the 
logs, then it isn't a priority that week.)

This is my biggest concern. Disclaimer: I've yet to read the PEP,
because async makes my head hurt, but I am aware of some of the
background with Trio. Please take this as the perspective of someone
thinking "I don't use async/await in my code, can I assume this
doesn't affect me?"


I haven't read the PEP either.  But I assume it could (should?) affect 
anyone managing multiple simultaneous /things/ in Python:


 * async code, "fibers", "greenlets", Stackless "microthreads",
   "cooperative multitasking", or any other userspace mechanism where
   you manage multiple "threads" of execution with multiple stacks
 * code managing multiple OS-level threads
 * code managing multiple processes

It seems to me that any of those could raise multiple heterogeneous 
exceptions, and Python doesn't currently provide a mechanism to manage 
this situation.  My dim understanding is that ExceptionGroup proposes a 
mechanism to handle exactly this thing.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/2EDUBSWHB6QP5ITFSDV6OPHUAJ2NXMKC/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Happy 30th Birthday, Python!

2021-02-21 Thread Larry Hastings


I guess we forgot to observe it yesterday, but: February 19, 1991, was 
the day Guido first posted Python 0.9.1 to alt.sources:


   https://groups.google.com/g/alt.sources/c/O2ZSq7DiOwM/m/gcJTvCA27lMJ

Happy 30th birthday, Python!


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RAPFBOPRDICVP5F2GVRHICMAH5NDHXJS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Python 0.9.1

2021-02-18 Thread Larry Hastings


On 2/17/21 4:45 PM, Brett Cannon wrote:
If we can get a clean copy of the original sources I think we should 
put them up under the Python org on GitHub for posterity.



Call me crazy, but... shouldn't they be checked in?  I thought we 
literally had every revision going back to day zero.  It /should/ be 
duck soup to recreate the original sources--all you need is the correct 
revision number.


CVS to SVN to HG to GIT, oh my,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/NKJXEDZQE7ZEKEK5ARBQWLWY3JDQXKEV/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Deferred Evaluation Of Annotations Using Descriptors

2021-02-15 Thread Larry Hastings


I don't work on these sorts of codebases, and I don't use type hints or 
static type checking.  So I'm not really qualified to judge how bad / 
widespread a problem this is.  It's my hope that the greater Python core 
dev / user community can ascertain how serious this is.


My main observation is that, for users facing this problem, they still 
have options.  Off the top of my head, they could:


 * maintain a lightweight "mock" version of expensive_module, or
 * stringize their type hints by hand, or
 * perhaps use some hypothetical stringizing support library that
   makes it less painful to maintain stringized annotations.

(I assume that static type checkers could continue to support stringized 
type hints even if PEP 649 was accepted.)


I admit I'd be very surprised if PEP 649 was judged to be unworkable, 
given how similar it is to stock Python semantics for annotations at 
runtime.



Cheers,


//arry/

On 2/15/21 8:14 PM, Guido van Rossum wrote:
On Sun, Feb 14, 2021 at 7:17 PM Inada Naoki wrote:


On Mon, Feb 15, 2021 at 10:20 AM Joseph Perez wrote:
>
> > How about having a pseudo-module called __typing__ that is
> > ignored by the compiler:
> >
> > from __typing__ import ...
> >
> > would be compiled to a no-op, but recognised by type checkers.
>
> If you want to do run-time typing stuff, you would use
> There is already a way of doing that: `if typing.TYPE_CHECKING:
...`
https://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING

> But yes, the issue with it is that this constant is defined in
the `typing` module …
>
> However, I think this is a part of the solution. Indeed, the
language could define another builtin constants, let's name it
`__static__`, which would simply be always false (at runtime),
while linters/type checkers would use it the same way
`typing.TYPE_CHECKING` is used:
> ```python
> if __static__:
>     import typing
>     import expensive_module
> ```


Please note that this is a thread about PEP 649.

If PEP 649 accepted and PEP 563 dies, all such idioms breaks
annotation completely.

Users need to import all heavy modules and circular references used
only type hints, or user can not get even string form annotation which
is very useful for REPLs.


Hm, that's a rather serious problem with Larry's PEP 649 compared to 
`from __future__ import annotations`, actually.


Larry, what do you think?

--
--Guido van Rossum (python.org/~guido )
/Pronouns: he/him //(why is my pronoun here?)/ 

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FLMZYC2USYBTJABAQLVCNQEZUVVU26WD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 647 (type guards) -- final call for comments

2021-02-14 Thread Larry Hastings


On 2/14/21 2:34 PM, Guido van Rossum wrote:
On Sun, Feb 14, 2021 at 12:51 PM David Mertz wrote:


On Sun, Feb 14, 2021, 2:53 PM Gregory P. Smith wrote:

*TL;DR of my TL;DR* - Not conveying bool-ness directly in the
return annotation is my only complaint.  A BoolTypeGuard
spelling would alleviate that.


This is exactly my feeling as well. In fact, I do not understand
why it cannot simply be a parameterized Bool. That would avoid all
confusion. Yes, it's not the technical jargon type system
designers use... But the existing proposal moves all the mental
effort to non-experts who may never use type checking tools.


But note that 'bool' in Python is not subclassable.



No, but this hypothetical 'Bool'--presumably added to typing.py--might 
well be.



Cheers,


//arry/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5OVN7MIXJZZE6C6PSA3UQSLQJRM26NQ4/
Code of Conduct: http://python.org/psf/codeofconduct/


  1   2   3   4   5   6   7   8   9   >