[Python-Dev] Re: Proto-PEP part 4: The wonderful third option

2022-04-26 Thread Ronald Oussoren via Python-Dev


> On 26 Apr 2022, at 20:52, Larry Hastings  wrote:
> 
> 
> 
> On 4/25/22 23:56, Ronald Oussoren wrote:
>> A problem with this trick is that you don’t know how large a class object 
>> can get because a subclass of type might add new slots. This is currently 
>> not possible to do in Python code (non-empty ``__slots__`` in a type 
>> subclass is rejected at runtime), but you can do this in C code. 
> Dang it!  __slots__!  Always there to ruin your best-laid plans.  *shakes 
> fist at heavens*
> 
> I admit I don't know how __slots__ is currently implemented, so I wasn't 
> aware of this.  However!  The first part of my proto-PEP already proposes 
> changing the implementation of __slots__, to allow adding __slots__ after the 
> class is created but before it's instantiated.  Since this is so 
> late-binding, it means the slots wouldn't be allocated at the same time as 
> the type, so happily we'd sidestep this problem.  On the other hand, this 
> raises the concern that we may need to change the C interface for creating 
> __slots__, which might break C extensions that use it.  (Maybe we can find a 
> way to support the old API while permitting the new late-binding behavior, 
> though from your description of the problem I'm kind of doubtful.)
> 
I used the term slots in a very loose way. In PyObjC I’m basically doing:

    typedef struct {
        PyHeapTypeObject base;
        /* Extra C fields go here */
    } PyObjCClassObject;

Those extra C fields don’t get exposed to Python, but could well be, by using 
getset definitions. This has worked without problems since early in the 2.x 
release cycle (at least, that’s when I started doing this in PyObjC), and is 
how one subclasses other types as well.

“Real” __slots__ don’t work when subclassing type() because type is a var 
object. That’s “just” an implementation limitation; it should be possible to 
add slots after the variable-length part (he says while wildly waving his 
hands).
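For reference, the restriction mentioned above is easy to demonstrate from Python (a quick sketch of current CPython behavior):

```python
# A non-empty __slots__ on a subclass of type is rejected at runtime,
# because type is a variable-length ("var") object.
try:
    class Meta(type):
        __slots__ = ("extra",)
except TypeError as exc:
    print(exc)  # e.g. "nonempty __slots__ not supported for subtype of 'type'"

# An empty __slots__ is accepted, since it adds no storage:
class EmptyMeta(type):
    __slots__ = ()
```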

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/XYK7NO4CJGP5N6CLDOI7IONOBJAZYNHK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Proto-PEP part 4: The wonderful third option

2022-04-26 Thread Ronald Oussoren via Python-Dev


> On 26 Apr 2022, at 07:32, Larry Hastings  wrote:
> 
> 
[…]
> What could go wrong?  My biggest question so far: is there such a thing as a 
> metaclass written in C, besides type itself?  Are there metaclasses with a 
> __new__ that doesn't call super().__new__ or three-argument type?  If there 
> are are metaclasses that allocate their own class objects out of raw bytes, 
> they'd likely sidestep this entire process.  I suspect this is rare, if 
> indeed it has ever been done.  Anyway, that'd break this mechanism, so exotic 
> metaclasses like these wouldn't work with "forward-declared classes".  But at 
> least they needn't fail silently.  We just need to add a guard after the call 
> to metaclass.__new__: if we passed in "__forward__=C" into metaclass.__new__, 
> and metaclass.__new__ didn't return C, we raise an exception.
> 
There are third-party metaclasses written in C. One example is PyObjC, whose 
metaclasses, written in C, create types with additional entries in the C struct 
for the type. I haven’t yet tried to think about the impact of this proposal, 
other than the size of the type (as mentioned earlier).  The PyObjC metaclass 
constructs both the Python class and a corresponding Objective-C class in lock 
step. At first glance this forward class proposal should not cause any problems 
here other than the size of the type object.

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/33PUXWIV6OAVRFJAQWYIV7CMLDKBI2ER/


[Python-Dev] Re: Proto-PEP part 4: The wonderful third option

2022-04-26 Thread Ronald Oussoren via Python-Dev


> On 26 Apr 2022, at 07:32, Larry Hastings  wrote:
> 
> 

[… snip …]
> Next we have the "continue" class statement.  I'm going to spell it like this:
> 
> continue class C(BaseClass, ..., metaclass=MyMetaclass):
> # class body goes here
> ...
> 
> I'll mention other possible spellings later.  The first change I'll point out 
> here: we've moved the base classes and the metaclass from the "forward" 
> statement to the "continue" statement.  Technically we could put them either 
> place if we really cared to.  But moving them here seems better, for reasons 
> you'll see in a minute.
> 
> Other than that, this "continue class" statement is similar to what I (we) 
> proposed before.  For example, here C is an expression, not a name.
> 
> Now comes the one thing that we might call a "trick".  The trick: when we 
> allocate the ForwardClass instance C, we make it as big as a class object can 
> ever get.  (Mark Shannon assures me this is simply "heap type", and he knows 
> far more about CPython internals than I ever will.)  Then, when we get to the 
> "continue class" statement, we convince metaclass.__new__ to reuse this 
> memory, and preserve the reference count, but to change the type of the 
> object to "type" (or what-have-you).  C has now been changed from a 
> "ForwardClass" object into a real type.  (Which almost certainly means C is 
> now mutable.)
> 
A problem with this trick is that you don’t know how large a class object can 
get because a subclass of type might add new slots. This is currently not 
possible to do in Python code (non-empty ``__slots__`` in a type subclass is 
rejected at runtime), but you can do this in C code.

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/JMO4V4S6OHOFFSKQ3XGU2AEEQVYUAY6J/


[Python-Dev] Re: Need help on security vulnerability zlib 1.2.11

2022-04-20 Thread Ronald Oussoren via Python-Dev


> On 19 Apr 2022, at 23:07, Prasad, PCRaghavendra 
>  wrote:
> 
> Hi All,
>  
> We are facing some issue with the zlib package 1.2.11. Recently there was a 
> vulnerability in zlib and we had to upgrade to 1.2.12 on all supported 
> platforms
> We did that in all platforms including windows, python39.dll is now showing 
> 1.2.12 but the problem is we use pyinstaller to generate application exe.
> This exe is still referring to 1.2.11 we tried lot of things to find how it 
> is linking to 1.2.11, there is no line of sight on this.
>  
> Can any one please provide some input on this 

Please ask the pyinstaller developers about this.

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PB6VU7RQNBRDT4GDZEFKNTH7N6D74ERZ/


[Python-Dev] Re: Are "Batteries Included" still a Good Thing? [was: It's now time to deprecate the stdlib urllib module]

2022-03-31 Thread Ronald Oussoren via Python-Dev


> On 29 Mar 2022, at 19:51, Brett Cannon  wrote:
> 
> 
> 
> On Tue, Mar 29, 2022 at 8:58 AM Ronald Oussoren wrote:
> 
> 
>> On 29 Mar 2022, at 00:34, Brett Cannon wrote:
>> 
>> 
>> 
>> Once 
>> https://mail.python.org/archives/list/python-committ...@python.org/thread/5EUZLT5PNA4HT42NGB5WVN5YWW5ASTT5/
>>  is considered resolved, the next part of my "what is the stdlib" plan is to 
>> finally try to suss all of this out and more-or-less write a stdlib policy 
>> PEP so we stop talking about this. My guess it will be more of guidance 
>> about what we want the stdlib to be and thus guide what things belong in it. 
>> No ETA on that work since I also have four other big Python projects on the 
>> go right now whose work I am constantly alternating between.
> 
> Having such a policy is a good thing and helps in evolving the stdlib, but I 
> wonder if the lack of such a document is the real problem.  IMHO the main 
> problem is that the CPython team is very small and therefore has little 
> bandwidth for maintaining, let alone evolving, large parts of the stdlib.  It 
> doesn’t help that some parts of the stdlib have APIs that make it hard to 
> modify them (such as distutils, where effectively everything is part of the 
> public API).  Shrinking the stdlib helps with the maintenance burden, but 
> feels like a partial solution. 
> 
> You're right that is the fundamental problem. But for me this somewhat stems 
> from the fact that we don't have a shared understanding of what the stdlib 
> is,  and so the stdlib is a bit unbounded in its size and scope. That leads 
> to a stdlib which is hard to maintain.

That (the hard-to-maintain part) is not necessarily true if we had enough 
resources. In theory the stdlib could be split into logical parts with teams 
that feel responsible for those parts (similar to having maintainers sign up 
for various platforms).  That doesn’t work because of the small team, and 
partially also due to necessarily having very strict rules w.r.t. backward 
compatibility.


> It's just like dealing with any scarce resource: you try to cut back on your 
> overall use as best as you can and then become more efficient with what you 
> must still consume; I personally think we don't have an answer to the "must 
> consume" part of that sentence that leads us to "cut back" to a size we can 
> actually keep maintained so we don't have 1.6K open PRs 
> (https://github.com/python/cpython/pulls).

I agree, but this is something to state explicitly when describing what should 
and should not be in scope for the stdlib.  I’m a fan of a batteries-included 
stdlib, but with our current resources we cannot afford some bits that would 
“obviously” be candidates for a modern batteries-included stdlib, such as a 
decent HTTP stack with support for HTTP/1, /2 and /3.

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/JS23QUVXW3HZHQFMZBOXUCKT332KGRYS/


[Python-Dev] Re: Are "Batteries Included" still a Good Thing? [was: It's now time to deprecate the stdlib urllib module]

2022-03-29 Thread Ronald Oussoren via Python-Dev


> On 29 Mar 2022, at 00:34, Brett Cannon  wrote:
> 
> 
> 
> On Mon, Mar 28, 2022 at 11:52 AM Christopher Barker wrote:
> On Mon, Mar 28, 2022 at 11:29 AM Paul Moore wrote:
> To be honest, I feel like I'm just reiterating stuff I've said before
> here, and I think the same is true of the points I'm responding to
>  ...
>  (I'm not *against* going over the debate again,
> it helps make sure people haven't changed their minds, but it's
> important to be clear that none of the practical facts have changed,
> if that is the case).
> 
> Maybe there's a way to make this discussion (it feels more like a discussion 
> than debate at the moment) more productive by writing some things down. I'm 
> not sure it's a PEP, but some kind of:
> 
> "policy for the stdlib" document in which we could capture the primary points 
> of view, places where there's consensus, etc. would be helpful to keep us from 
> retreading this over and over again.
> 
> I suggest this without the bandwidth to actually shepherd the project, but if 
> someone wants to, I think it would be a great idea.
> 
> Once 
> https://mail.python.org/archives/list/python-committ...@python.org/thread/5EUZLT5PNA4HT42NGB5WVN5YWW5ASTT5/
>  
> 
>  is considered resolved, the next part of my "what is the stdlib" plan is to 
> finally try to suss all of this out and more-or-less write a stdlib policy 
> PEP so we stop talking about this. My guess it will be more of guidance about 
> what we want the stdlib to be and thus guide what things belong in it. No ETA 
> on that work since I also have four other big Python projects on the go right 
> now whose work I am constantly alternating between.

Having such a policy is a good thing and helps in evolving the stdlib, but I 
wonder if the lack of such a document is the real problem.  IMHO the main 
problem is that the CPython team is very small and therefore has little 
bandwidth for maintaining, let alone evolving, large parts of the stdlib.  It 
doesn’t help that some parts of the stdlib have APIs that make it hard to 
modify them (such as distutils, where effectively everything is part of the 
public API).  Shrinking the stdlib helps with the maintenance burden, but feels 
like a partial solution. 

That said, I have no idea what a better stdlib development process would look 
like, let alone how to get there.

Ronald

> 
> -Brett
>  
> 
> -CHB
> 
> -- 
> Christopher Barker, PhD (Chris)
> 
> Python Language Consulting
>   - Teaching
>   - Scientific Software Development
>   - Desktop GUI and Web Development
>   - wxPython, numpy, scipy, Cython
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/XHETPUVT6UYWQ5URQNF6IQBFBZRPGMN6/


[Python-Dev] Re: Restrict the type of __slots__

2022-03-18 Thread Ronald Oussoren via Python-Dev


> On 18 Mar 2022, at 14:37, Joao S. O. Bueno  wrote:

Please don’t top-post when responding to a normally threaded message. That makes 
it unnecessarily hard to follow the conversation.

> 
> IMO this is a purism that has little, if any, place in language restrictions.
> I see that not allowing "run once" iterables could indeed void attempts to 
> write "deliberately non-cooperative code" - but can it even be reliably 
> detected? 
> 
> The other changes seem just to break backwards compatibility for little or no 
> gain at all. 

It may not be worth the trouble to fix this, but Serhiy’s proposal does try to 
fix a wart.

It may be better to rely on linters here, but one way to do this with few 
backward-compatibility concerns:

- If __slots__ is a dict, keep it as is
- Otherwise, use tuple(__slots__) while constructing the class and store that 
value in the __slots__ attribute of the class

That way the value of the attribute reflects the slots that were created while 
not breaking code that uses __slots__ and doesn’t change the value after class 
creation.
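A rough Python-level sketch of that idea (the metaclass and names here are purely illustrative, not a proposed API or implementation):

```python
class SlotsNormalizingMeta(type):
    """Illustrative metaclass: materialize an iterable __slots__ into a
    tuple before class creation, so the class attribute reflects the
    slots that were actually created."""
    def __new__(mcls, name, bases, namespace):
        slots = namespace.get("__slots__")
        # Keep dicts (used e.g. by pydoc for slot documentation) and the
        # single-string form as-is; normalize everything else to a tuple.
        if slots is not None and not isinstance(slots, (str, dict)):
            namespace["__slots__"] = tuple(slots)
        return super().__new__(mcls, name, bases, namespace)

class Point(metaclass=SlotsNormalizingMeta):
    __slots__ = iter(["x", "y"])  # a one-shot iterable

# The stored value is now re-iterable, so pickling/copying machinery
# that walks __slots__ sees the real slot names.
print(Point.__slots__)  # ('x', 'y')
```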

Ronald

> 
> 
> 
> On Fri, Mar 18, 2022 at 6:57 AM Ronald Oussoren via Python-Dev wrote:
> 
> 
>> On 18 Mar 2022, at 10:29, Serhiy Storchaka wrote:
>> 
>> Currently __slots__ can be either string or an iterable of strings.
>> 
>> 1. If it is a string, it is a name of a single slot. Third-party code which 
>> iterates __slots__ will be confused.
>> 
>> 2. If it is an iterable, it should emit names of slots. Note that 
>> non-reiterable iterators are accepted too, but it causes weird bugs if 
>> __slots__ is iterated more than once. For example it breaks default pickling 
>> and copying.
>> 
>> I propose to restrict the type of __slots__: require that it always be a 
>> tuple of strings. Most __slots__ in real code are tuples. It is rare that we 
>> need only a single slot and set __slots__ to a string.
>> 
>> It will break some code (there are 2 occurrences in the stdlib and 1 in 
>> scripts), but that code can be easily fixed.
> 
> Pydoc supports a __slots__ that is a dict, and will use the values in the dict 
> as documentation for the slots.  I’ve also seen code using ``__slots__ = 
> “field1 field2”.split()``. I don’t particularly like this code pattern, but 
> your proposal would break it.
> 
> Also note that __slots__ only has an effect during class definition; changing 
> it afterwards is possible but has no effect (“class Foo: pass; 
> Foo.__slots__ = 42”). This surprised me recently and I have no idea if this 
> feature is ever used.
> 
> Ronald
> 
>> 
> 
> —
> 
> Twitter / micro.blog: @ronaldoussoren
> Blog: https://blog.ronaldoussoren.net/ <https://blog.ronaldoussoren.net/>

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FZFRSHSJ3HQU37V6RFZNHMFGJXUPJ32X/


[Python-Dev] Re: Restrict the type of __slots__

2022-03-18 Thread Ronald Oussoren via Python-Dev


> On 18 Mar 2022, at 10:29, Serhiy Storchaka  wrote:
> 
> Currently __slots__ can be either string or an iterable of strings.
> 
> 1. If it is a string, it is a name of a single slot. Third-party code which 
> iterates __slots__ will be confused.
> 
> 2. If it is an iterable, it should emit names of slots. Note that 
> non-reiterable iterators are accepted too, but it causes weird bugs if 
> __slots__ is iterated more than once. For example it breaks default pickling 
> and copying.
> 
> I propose to restrict the type of __slots__: require that it always be a tuple 
> of strings. Most __slots__ in real code are tuples. It is rare that we need 
> only a single slot and set __slots__ to a string.
> 
> It will break some code (there are 2 occurrences in the stdlib and 1 in 
> scripts), but that code can be easily fixed.

Pydoc supports a __slots__ that is a dict, and will use the values in the dict 
as documentation for the slots.  I’ve also seen code using ``__slots__ = 
“field1 field2”.split()``. I don’t particularly like this code pattern, but 
your proposal would break it.

Also note that __slots__ only has an effect during class definition; changing 
it afterwards is possible but has no effect (“class Foo: pass; 
Foo.__slots__ = 42”). This surprised me recently and I have no idea if this 
feature is ever used.
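Both behaviors are easy to check (a quick sketch of current CPython semantics):

```python
class Single:
    __slots__ = "spam"   # one slot named "spam", not four slots s/p/a/m

s = Single()
s.spam = 1               # works; no instance __dict__ was created

class Foo:
    pass

Foo.__slots__ = 42       # accepted, but has no effect after class creation
f = Foo()
f.anything = "still fine"   # instances still have a regular __dict__
```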

Ronald

> 

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/YQUWR7CYKNM65HR5FZQ3BANR5SNNK6N6/


[Python-Dev] Re: Defining tiered platform support

2022-03-04 Thread Ronald Oussoren via Python-Dev


> On 4 Mar 2022, at 00:30, Brett Cannon  wrote:
> 
> Do we officially support NetBSD? Do you know how to find out if we do? You 
> might think to look at 
> https://www.python.org/dev/peps/pep-0011/#supporting-platforms, but that 
> just loosely defines the criteria and it still doesn't list the actual 
> platforms we support. (BTW I don't know if we do officially support NetBSD, 
> hence this email.)
> 
> I think we should clarify this sort of thing and be a bit more upfront with 
> the level of support we expect/provide for a platform. As such, I want to 
> restructure PEP 11 to list the platforms we support, not just list the 
> platforms we stopped supporting. To do this I want define 3 different tiers 
> that outline what our support requirements and promises are for platforms.
> 
> Tier 1 is the stuff we run CI against: latest Windows, latest macOS, Linux w/ 
> the latest glibc (I don't know of a better way to define Linux support as I 
> don't know if a per-distro list is the right abstraction). These are 
> platforms we won't even let code be committed for if they would break; they 
> block releases if they don't work. These platforms we all implicitly promise 
> to support.
> 
> Tier 2 is the platforms we would revert a change within 24 hours if they 
> broke: latest FeeBSD, older Windows, older macOS, Linux w/ older glibc.This 
> is historically the "stable buildbot plus a core dev" group of platforms. The 
> change I would like to see is two core devs (in case one is on vacation), and 
> a policy as to how a platform ends up here (e.g. SC must okay it based on 
> consensus of everyone). The stable buildbot would still be needed to know if 
> a release is blocked as we would hold a release up if they were red. The 
> platform and the core devs supporting these platforms would be listed in PEP 
> 11.

What’s the difference between Tier 1 and 2, other than that PRs are checked 
against tier 1 platforms before committing and against tier 2 afterwards?

In particular, tier 1 contains Windows Server and not desktop Windows (even 
though I expect that those are compatible as far as our use is concerned), and 
does not contain the macOS versions that we actually support in the installers 
on python.org (macOS 10.9 or later, both x86_64 and arm64).  AFAIK support for 
macOS 10.9 in the python.org installers is now primarily, if not only, tested 
by Ned. That could, and probably should, be automated, but that’s a significant 
amount of work.

[…]

> 
> 
> I don't know if we want to bother listing CPU architectures since we are a 
> pure C project which makes CPU architecture less of a thing, but I'm 
> personally open to the idea of CPU architectures being a part of the platform 
> definition.

ctypes is hardware-specific, although through libffi. There are also 
intermittent discussions about support for ancient hardware platforms. Would we 
block a release when (for example) support for Linux on sparc32 is broken?

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/VDCMUXNSAMJGSHD6A235WBDHI7YETLVQ/


[Python-Dev] Re: Move the pythoncapi_compat project under the GitHub Python or PSF organization?

2022-02-14 Thread Ronald Oussoren via Python-Dev


> On 14 Feb 2022, at 14:07, Petr Viktorin  wrote:
> 
> 
> 
> On 14. 02. 22 13:37, Antoine Pitrou wrote:
>> On Mon, 14 Feb 2022 13:19:00 +0100
>> Petr Viktorin  wrote:
>>> 
>>> If we don't have much sympathy for projects that use private API where
>>> does that leave pythoncapi_compat?
>> If you look at pythoncapi_compat.h, it provides backports for
>> recently-introduced public APIs such as PyObject_CallOneArg().
> 
> Yes.
> On older Python versions, where the public API wasn't yet available, those 
> backports use private API. If we change the private API in a point release, 
> the backport will break.

Do you have an example of this? At first glance the pythoncapi_compat.h header 
only uses public APIs, other than (maybe) accessing fields of the thread state 
directly.

BTW. I’m +1 on providing this header, it makes it easier for projects to 
maintain compatibility with older Python versions. That said, we should 
continue to be careful and considerate when evolving the public API as 
migrating a project to a newer API is still work.

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/B7TKVUPXADTHYANTLIKCAVE4ZCNUJ64M/


[Python-Dev] Re: Moving away from _Py_IDENTIFIER().

2022-02-03 Thread Ronald Oussoren via Python-Dev


> On 2 Feb 2022, at 23:41, Eric Snow  wrote:
> 
> I
[…]
> 
> Cons:
> 
> * a little less convenient: adding a global string requires modifying
> a separate file from the one where you actually want to use the string
> * strings can get "orphaned" (I'm planning on checking in CI)
> * some strings may never get used for any given ./python invocation
> (not that big a difference though)

The first two cons can probably be fixed by adding some indirection, with some 
markers at the place of use and a script that uses those to generate the 
C definitions.

Although my gut feeling is that adding the CI check you mention is good 
enough, and adding the tooling for generating code isn’t worth the additional 
complexity.

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6EJSL7LBAZM4HL5THZDZGTYFS5HRAIPY/


[Python-Dev] Re: Please update Cython *before* introcuding C API incompatible changes in Python

2022-02-02 Thread Ronald Oussoren via Python-Dev


> On 2 Feb 2022, at 11:50, Stefan Behnel  wrote:
> 
> Petr Viktorin schrieb am 02.02.22 um 10:22:
>> Moving off the internal (unstable) API would be great, but I don't think 
>> Cython needs to move all the way to the limited API.
>> There are three "levels" in the C API:
>> - limited API, with long-term ABI compatibility guarantees
> 
> That's what "-DCYTHON_LIMITED_API -DPy_LIMITED_API=..." is supposed to do, 
> which currently fails for much if not most code.
> 
> 
>> - "normal" public API, covered by the backwards compatibility policy (users 
>> need to recompile for every minor release, and watch for deprecation 
>> warnings)
> 
> That's probably close to what "-DCYTHON_LIMITED_API" does by itself as it 
> stands. I can see that being a nice feature that just deserves a more 
> suitable name. (The name was chosen because it was meant to also internally 
> define "Py_LIMITED_API" at some point. Not sure if it will ever do that.)
> 
> 
>> - internal API (underscore-prefixed names, `internal` headers, things 
>> documented as private)
>> AFAIK, only the last one is causing trouble here.
> 
> Yeah, and that's the current default mode on CPython.

Is it possible to automatically pick a different default version when building 
with a too-new CPython version?  That way projects can at least be used and 
tested with pre-releases of CPython, although possibly with less performance.

Ronald

> 
> Maybe we should advertise the two modes more. And make sure that both work. 
> There are certainly issues with the current state of the "limited API" 
> implementation, but that just needs work and testing.
> 
> Stefan
> 

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/DDIQ6RYX6ECQ5YSSB5PUDNN2OLZE725R/


[Python-Dev] Re: Should isinstance call __getattribute__?

2021-12-10 Thread Ronald Oussoren via Python-Dev


> On 10 Dec 2021, at 14:40, Steven D'Aprano  wrote:
> 
> On Thu, Dec 09, 2021 at 05:19:00PM +0100, Ronald Oussoren wrote:
> 
>> https://mail.python.org/pipermail/python-dev/2015-October/141953.html 
>> is an old thread about the difference between type(x)/Py_TYPE(x) and 
>> x.__class__ that contains some insight about this.
> 
> Thanks for the link Ronald, I remember that thread. It didn't really 
> clarify things to me at the time, and re-reading it, it still doesn't.

That’s because the difference between the two is not clear, and it doesn’t help 
that C extensions (and CPython itself) use an API that always looks at the 
object’s type slot and not at the ``__class__`` attribute (with APIs such as 
``PyObject_TypeCheck`` and ``PyDict_Check``). 
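A minimal sketch of that divergence, using a hypothetical proxy class (illustrative only, not any particular library's implementation):

```python
class Proxy:
    """Wrap an object and lie about its class, as proxy/mock types often do."""

    def __init__(self, wrapped):
        self._wrapped = wrapped

    @property
    def __class__(self):
        # Advertise the wrapped object's class instead of Proxy.
        return type(self._wrapped)


p = Proxy(42)
print(type(p).__name__)      # Proxy -- the real type slot
print(p.__class__.__name__)  # int   -- the advertised class
print(isinstance(p, int))    # True  -- isinstance consults __class__
```

C-level checks in the ``PyDict_Check`` style only look at the type slot, so they would still see ``Proxy`` here.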

> 
>> Proxy types are one use case, although with some sharp edges.
> 
> I'm not looking for use cases. I'm looking for a better understanding of 
> how type() and isinstance() (and presumably issubclass) work. The best I 
> can see is that type() sometimes believes __class__ but not always, that 
> you can sometimes change __class__ but not always, but the rules that 
> control when and why (or why not) are not clear or documented, as far 
> as I can see.
> 
> Is there a reference for how type(obj) and isinstance(obj, T) are 
> intended to work, or is the implementation the only reference?

I’m not sure how much of this is documented, to be honest. If the documentation 
is lacking, there’s a chance for someone to dig deep and help write the 
documentation ;-)

Changing the type of an instance by assigning to ``__class__`` is basically 
allowed when the C layouts for instances of the two classes are compatible; the 
implementation contains the details about this.  IIRC this requires that both 
classes inherit from a shared base class and that none of the intermediate 
classes introduce new slots (or the C equivalent of this). 
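A quick illustration of both halves of that rule (a sketch; the exact error message varies by version):

```python
class A:
    pass

class B(A):
    pass

obj = A()
obj.__class__ = B          # compatible instance layouts: allowed
assert isinstance(obj, B)

try:
    obj.__class__ = int    # incompatible C layout: rejected
except TypeError as exc:
    # CPython complains that the object layouts differ.
    print(exc)
```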

Ronald

> 
> Thanks in advance,
> 
> 
> -- 
> Steve

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MIBOFO7CZM3TBKA5C3IRIHGB62MPCZFS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Should isinstance call __getattribute__?

2021-12-09 Thread Ronald Oussoren via Python-Dev


> On 9 Dec 2021, at 16:41, Steven D'Aprano  wrote:
> 
> I'm looking for some guidance on a bug report involving isinstance and 
> `__getattribute__` please.
> 
> The issue is that if your class overloads `__getattribute__`, calling 
> isinstance on an instance will call the overloaded `__getattribute__` 
> method when looking up `__class__`, potentially causing isinstance to 
> fail, or return the wrong result.
> 
> See b.p.o. #32683 
> 
> https://bugs.python.org/issue32683
> 
> I see no reason why this isn't working as designed, __getattribute__ is 
> intended to overload attribute access, and that could include the 
> `__class__` attribute. Am I wrong?
> 
> It has been suggested that isinstance should call `object.__getattribute__` 
> and bypass the class' overloaded method, but I expect that would 
> probably break objects which intentionally lie about their class. 
> (Mocks? Stubs? Proxies?)

https://mail.python.org/pipermail/python-dev/2015-October/141953.html is an 
old thread about the difference between type(x)/Py_TYPE(x) and x.__class__ that 
contains some insight about this.

Proxy types are one use case, although with some sharp edges.

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/UTFAVD5CW6DWESWXEBIIF2RHSKO6FVCC/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Should python-config be an stdlib module?

2021-10-25 Thread Ronald Oussoren via Python-Dev


> On 23 Oct 2021, at 17:48, Filipe Laíns  wrote:
> 
> On Sat, 2021-10-23 at 12:25 -0300, Joannah Nanjekye wrote:
>> I remembered this issue on bpo with contrasting opinions from when I first
>> looked in 2019.
>> 
>> See https://bugs.python.org/issue33439
> 
> Hi,
> 
> This script is currently not written in Python and hardcodes paths that are
> incorrect on virtual environments and relocated installs, so it would make 
> sense
> to rewrite it in Python, using sysconfig.
> 
> That does not require it to be exposed as a module, but would be a very cheap
> improvement.
> 
> As the bulk of the work would be to actually rewriting it in Python, and as
> there is reasoning to do this other than just exposing it as a module, I would
> be +1 FWIW.

Note that the original python implementation is still in the repository and is 
used for framework builds on macOS because of the hardcoded values in the shell 
script variant. On macOS the sysconfig values for compiler related settings 
(CC, CFLAGS, …) are adjusted dynamically to account for different compilers and 
system versions.
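For reference, a rewrite in Python would presumably pull these values from ``sysconfig`` instead of hardcoding them at build time (a sketch, not the actual script):

```python
import sysconfig

# The kind of values python-config reports, taken from the running
# interpreter's build configuration rather than baked-in paths.
print("include dir:", sysconfig.get_path("include"))
print("CFLAGS:     ", sysconfig.get_config_var("CFLAGS"))
print("LIBS:       ", sysconfig.get_config_var("LIBS"))
```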

IIRC the shell script variant was introduced to support some cross-compilation 
scenarios (see https://bugs.python.org/issue16235).

Ronald

> 
> Cheers,
> Filipe Laíns

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/T2MRJMT4ID7HSZ3XHES6IXOM3GTFPT6T/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Python multithreading without the GIL

2021-10-11 Thread Ronald Oussoren via Python-Dev


> On 11 Oct 2021, at 18:58, Thomas Grainger  wrote:
> 
> Is D1.update(D2) still atomic with this implementation?  
> https://docs.python.org/3.11/faq/library.html#what-kinds-of-global-value-mutation-are-thread-safe
AFAIK this is already only atomic in specific circumstances, that are more 
limited than the FAQ appears to claim.

For dict.update to be atomic I’d expect that with two threads performing an 
update on the same keys you’d end up with the update of either thread, but not a mix.

That is:

Thread 1:  d.update({"a": 1, "b": 1})
Thread 2:  d.update({"a": 2, "b": 2})

The result should have d["a"] == d["b"].

This can already end up with a mix of the two when “d” has keys that are 
objects that implement __eq__ in Python, because the interpreter could switch 
threads while interpreting __eq__. 

A pathological example:

# — start of script —

import threading
import time

stop = False
trigger = False

def runfunc():
    while not stop:
        if trigger:
            d.update({"a": 2, "b": 2})
            print(d)
            break

t = threading.Thread(target=runfunc)
t.start()


class X(str):
    def __eq__(self, other):
        if threading.current_thread() is t:
            return str.__eq__(self, other)

        global trigger
        trigger = True
        t.join()
        return str.__eq__(self, other)

    def __hash__(self):
        return str.__hash__(self)


d = {X("b"): 0}
print("before", d)
d.update({"a": 1, "b": 1})
print("after", d)

stop = True
t.join()

# — end of script — 

This prints "after {'b': 1, 'a': 2}" on my machine.

Ronald


> 
> On Mon, 11 Oct 2021, 17:54 Sam Gross wrote:
> On Fri, Oct 8, 2021 at 12:04 PM Nathaniel Smith wrote:
> I notice the fb.com address -- is this a personal project or something
> facebook is working on? what's the relationship to Cinder, if any?
> 
> It is a Facebook project, at least in the important sense that I work on it
> as an employee at Facebook. (I'm currently the only person working on it.)
> I keep in touch with some of the Cinder devs regularly and they've advised
> on the project, but otherwise the two projects are unrelated.
>  
> Regarding the tricky lock-free dict/list reads: I guess the more
> straightforward approach would be to use a plain ol' mutex that's
> optimized for this kind of fine-grained per-object lock with short
> critical sections and minimal contention, like WTF::Lock. Did you try
> alternatives like that? If so, I assume they didn't work well -- can
> you give more details?
> 
> I'm using WTF::Lock style locks for dict/list mutations. I did an experiment
> early on where I included locking around reads as well. I think it slowed down
> the pyperformance benchmarks by ~10% on average, but I can't find my notes
> so I plan to re-run the experiment.
> 
> Additionally, because dicts are used for things like global variables, I'd 
> expect
> that locks around reads prevent efficient scaling, but I haven't measured 
> this.
> 

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/4ECNMYHYOOPNL4XHE4GBB5AQN6NPX7QX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: The Default for python -X frozen_modules.

2021-09-28 Thread Ronald Oussoren via Python-Dev


> On 28 Sep 2021, at 10:54, Antoine Pitrou  wrote:
> 
> On Tue, 28 Sep 2021 10:51:53 +0200
> Ronald Oussoren via Python-Dev  wrote:
>>> On 28 Sep 2021, at 10:05, Antoine Pitrou  wrote:
>>> 
>>> On Mon, 27 Sep 2021 10:51:43 -0600
>>> Eric Snow wrote:
>>>> We've frozen most of the stdlib modules imported during "python -c
>>>> pass" [1][2], to make startup a bit faster.  Import of those modules
>>>> is controlled by "-X frozen_modules=[on|off]".  Currently it defaults
>>>> to "off" but we'd like to default to "on".  The blocker is the impact
>>>> on contributors.  I expect many will make changes to a stdlib module
>>>> and then puzzle over why those changes aren't getting used.  That's an
>>>> annoyance we can avoid, which is the point of this thread.
>>>> 
>>>> Possible solutions:
>>>> 
>>>> 1. always default to "on" (the annoyance for contributors isn't big 
>>>> enough?)
>>>> 2. default to "on" if it's a PGO build (and "off" otherwise)
>>>> 3. default to "on" unless running from the source tree
>>>> 
>>>> Thoughts?  
>>> 
>>> My vote is on #3 to minimize contributor annoyance and
>>> eventual puzzlement.  
>> 
>> I agree, but… Most CPython tests are run while running from the source tree, 
>> that means that there will have to be testrunner configurations that run 
>> with “-X frozen_modules=on”. 
> 
> Well, multiplying CI configurations is the price of adding options in
> general.


Of course. I mentioned it because the proposal is to add a new option that’s 
enabled after installation, and basically not when the testsuite is run.  
That’s not a problem, we could just enable the option in most CI jobs.
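A CI job could assert that the expected mode is actually in effect, since ``-X`` options are visible at runtime (a sketch; assumes an interpreter that understands ``-X frozen_modules``):

```python
import sys

# -X options end up in sys._xoptions, e.g. when started as:
#   python -X frozen_modules=on -m test ...
mode = sys._xoptions.get("frozen_modules", "<default>")
print("frozen_modules:", mode)
```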

Ronald

> 
> Regards
> 
> Antoine.
> 
> 

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6YXGAXLK76MQECMBP32ZBZ5GFDQKCHPW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: The Default for python -X frozen_modules.

2021-09-28 Thread Ronald Oussoren via Python-Dev


> On 28 Sep 2021, at 10:05, Antoine Pitrou  wrote:
> 
> On Mon, 27 Sep 2021 10:51:43 -0600
> Eric Snow wrote:
>> We've frozen most of the stdlib modules imported during "python -c
>> pass" [1][2], to make startup a bit faster.  Import of those modules
>> is controlled by "-X frozen_modules=[on|off]".  Currently it defaults
>> to "off" but we'd like to default to "on".  The blocker is the impact
>> on contributors.  I expect many will make changes to a stdlib module
>> and then puzzle over why those changes aren't getting used.  That's an
>> annoyance we can avoid, which is the point of this thread.
>> 
>> Possible solutions:
>> 
>> 1. always default to "on" (the annoyance for contributors isn't big enough?)
>> 2. default to "on" if it's a PGO build (and "off" otherwise)
>> 3. default to "on" unless running from the source tree
>> 
>> Thoughts?
> 
> My vote is on #3 to minimize contributor annoyance and
> eventual puzzlement.

I agree, but… Most CPython tests are run while running from the source tree, 
that means that there will have to be testrunner configurations that run with 
“-X frozen_modules=on”. 

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/NPID5HAZAYOZ6J4OYHN7O4UNEWXMR7WS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Worried about Python release schedule and lack of stable C-API

2021-09-27 Thread Ronald Oussoren via Python-Dev


> On 26 Sep 2021, at 19:03, Christian Heimes  wrote:
> 
> On 26/09/2021 13.07, jack.jan...@cwi.nl wrote:
>> The problem with the stable ABI is that very few developers are targeting 
>> it. I’m not sure why not, whether it has to do with incompleteness of the 
>> ABI, or with issues targeting it easily and your builds and then having 
>> pip/PyPI do the right things with wheels and all that. I’ve been on the 
>> capi-sig mailing list since its inception in 2007, but the discussions are 
>> really going over my head. I don’t understand what the problems are that 
>> keep people from targeting the stable ABI (or the various other attempts at 
>> standardising extensions over Python versions).
> 
> It takes some effort to port old extensions to stable ABI. Several old APIs 
> are not supported in stable ABI extensions. For example developers have to 
> port static type definitions to heap types. It's not complicated, but it 
> takes some effort.

The stable ABI is also not complete, although it should be complete enough for 
a lot of projects.  A fairly esoteric issue I ran into is that it is currently 
not possible to define a class with a non-default metaclass using the type-spec 
API (AFAIK), see #15870.

And as you write, “it takes some effort”; that alone likely reduces the number 
of projects that migrate to the stable ABI, esp. for projects that already have 
a CI/CD setup that creates binary wheels for them (for example using 
cibuildwheel). 

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/LFLOCVRO2UP2PKWTFUTAQ3EKO6NLMG4E/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Making PY_SSIZE_T_CLEAN not mandatory.

2021-06-09 Thread Ronald Oussoren via Python-Dev


> On 9 Jun 2021, at 12:28, Inada Naoki  wrote:
> 
> I think stable ABI keeps symbols, signatures, and memory layouts.
> I don't think stable ABI keeps all behaviors.

As often “it depends”.   Behaviour is IMHO part of the API/ABI contract.  That 
said, that does not necessarily mean that we cannot change behaviour at all, 
but that the cost to users for such changes should be taken into account. 

> 
> For example, Py_CompileString() is stable ABI.
> When we add `async` keyword, Py_CompileString() starts raising an
> Error for source code using `async` name.
> Is it ABI change? I don't think so.

I agree. But it’s not as easy as “it is not an ABI change because it only 
changes the functionality of a function”.  The interface contract of 
Py_CompileString is that it compiles Python code. If the rules for what valid 
Python code is change (such as the introduction of ‘async’ as a hard keyword), 
the function can start to reject input that was accepted earlier.  That’s IMHO 
different from a change to Py_CompileString that starts raising an error 
unconditionally because we no longer want to expose the API.
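The same behaviour change is visible from Python through ``compile()``: source that was valid before ``async`` became a hard keyword in 3.7 is now rejected:

```python
# "async = 1" was valid Python 3.5 source; on a current interpreter
# the compiler rejects it because async is a hard keyword.
try:
    compile("async = 1", "<example>", "exec")
except SyntaxError as exc:
    print("rejected:", exc)
```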
 
> 
> I want to drop Py_UNICODE support in Python 3.12. It is another
> incompatible change in PyArg_Parse*() *API*.
> Users can not use "u" format after it.  It is an incompatible *API*
> change, but not an *ABI* change.

It is an ABI change: an extension targeting the stable ABI no longer works 
due to a change in implementation.  That doesn’t necessarily mean the change 
cannot be made, especially when a deprecation warning is emitted before the 
feature is removed.  

> 
> I suspect we had made many incompatible *API* changes in stable ABIs already.
> 
> If I am wrong, can we stop keeping stable ABI at Python 3.12?
> Python 4.0 won't come in foreseeable future. Stable ABI blocks Python 
> evolution.

The annoying part of the stable ABI is that it still exposes some 
implementation details and behaviour that make it harder to write correct code 
(such as borrowed references, these can be very convenient but are also easy to 
misuse).  That’s one reason why HPy is an interesting project, even when only 
targeting CPython.

And to be clear: I’m not opposed to the change for the “#” format character and 
the removal of the “u” format you mention earlier.  

Ronald
> 
> Regards,
> -- 
> Inada Naoki  

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/AGTHK7XIS6OPS5H6Z2ZA3XWHILJ4R4OZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Making PY_SSIZE_T_CLEAN not mandatory.

2021-06-09 Thread Ronald Oussoren via Python-Dev


> On 9 Jun 2021, at 11:13, Victor Stinner  wrote:
> 
> On Wed, Jun 9, 2021 at 10:32 AM Ronald Oussoren via Python-Dev
>  wrote:
>> Its a bit late to complain (and I’m not affected by this myself), but those 
>> functions are part of the stable ABI. The change in 3.10 will break any 
>> extensions that use the stable ABI, use these functions and don’t use 
>> PY_SSIZE_T_CLEAN.  I have no idea how many of those exist, especially given 
>> that the stable ABI doesn’t seem to be used a lot.
> 
> Requiring the PY_SSIZE_T_CLEAN macro to be defined is an incompatible *API*
> change. At the ABI level, what changed is that C extensions built
> (with Python 3.9 and older) without PY_SSIZE_T_CLEAN now raise an
> error on Python 3.10 (for a few specific argument formats using "#").
> Ah you are right, it's an incompatible ABI change.

:-(

>  
> 
> It might be possible to keep the backward compatibility at the ABI
> level by adding a 3rd flavor of "parse" functions:
> 
> * parse with size_t: no change
> * parse without size_t: stable ABI
> * parse without size_t which raises an exception on "#" formats: new
> Python 3.10 functions
> 
> It's already painful to have 2 flavors of each functions. Adding a 3rd
> flavor would make the maintenance burden even worse, whereas Inada-san
> wants to opposite (remove the 2nd flavor to only have one!).

I don’t think it is necessary to introduce a 3rd variant; for 3.11+ we could do 
something like this:

* [3.11] Add deprecation warnings in the C headers to the few functions with a 
“PY_SSIZE_T_CLEAN” variant when “PY_SSIZE_T_CLEAN” is not defined
* [3.12+] Change the headers to behave as if “PY_SSIZE_T_CLEAN” is defined, and 
only keep the non-PY_SSIZE_T_CLEAN variants for the stable ABI (which would 
include dropping non-PY_SSIZE_T_CLEAN variants for private functions).  The 
PY_SSIZE_T_CLEAN variants would keep their “_SizeT” suffix in the (stable or 
unstable) ABI. 

This wouldn’t allow dropping the non-PY_SSIZE_T_CLEAN variants entirely, at 
least not until we’re fine with breaking the stable ABI.  Another disadvantage 
is that this might require changes in code that doesn’t even use “#” in format 
strings in 3.11. 

> 
> A more general question is: do we still want to keep backward
> compatibility with Python 3.2 released 10 years ago, or is it ok to
> start with a new stable ABI which drops backward compatibility with
> Python 3.5 (for example)?
> 
> It's maybe time to replace "abi3" with "abi4" in Python 3.10?

Personally I’d say it is too soon for that, especially when the CPython speedup 
project (Guido, Mark, et al.) has just started and HPy is far from finished.  
Either project might teach us what changes are needed for a long-term stable 
ABI.

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MF52ZHEGWEXPZ77FYQ32XLFFPJ66LSPS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Making PY_SSIZE_T_CLEAN not mandatory.

2021-06-09 Thread Ronald Oussoren via Python-Dev


> On 7 Jun 2021, at 05:05, Inada Naoki  wrote:
> 
> Hi, folks,
> 
> Since Python 3.8, PyArg_Parse*() APIs and Py_BuildValue() APIs emitted
> DeprecationWarning when
> '#' format is used without PY_SSIZE_T_CLEAN defined.
> In Python 3.10, they raise a RuntimeError, not a warning. Extension
> modules can not use '#' format with int.
> 
> So how about making PY_SSIZE_T_CLEAN not mandatory in Python 3.11?
> Extension modules can use '#' format with ssize_t, without
> PY_SSIZE_T_CLEAN defined.
> 
> Or should we wait one more version?

Its a bit late to complain (and I’m not affected by this myself), but those 
functions are part of the stable ABI. The change in 3.10 will break any 
extensions that use the stable ABI, use these functions and don’t use 
PY_SSIZE_T_CLEAN.  I have no idea how many of those exist, especially given 
that the stable ABI doesn’t seem to be used a lot. 

I guess this depends a little on what promises the stable ABI makes, the 
functions are still there but behave differently than before. 

P.S. I’d be in favour of just dropping PY_SSIZE_T_CLEAN completely (that is, use 
Py_ssize_t unconditionally) to simplify the code base, apart from being 
slightly worried about the impact on the stable ABI. AFAIK the define was meant 
as a temporary transition mechanism when Py_ssize_t was introduced in the, by 
now, ancient past.

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/QZAHDJFZ5AMMPJE363BRK4XE57QPJO4P/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: The repr of a sentinel

2021-05-20 Thread Ronald Oussoren via Python-Dev


> On 20 May 2021, at 19:10, Luciano Ramalho  wrote:
> 
> I'd like to learn about use cases where `...` (a.k.a. `Ellipsis`) is
> not a good sentinel. It's a pickable singleton testable with `is`,
> readily available, and extremely unlikely to appear in a data stream.
> Its repr is "Ellipsis".
> 
> If you don't like the name for this purpose, you can always define a
> constant (that won't fix the `repr`, obviously, but helps with source
> code readability).
> 
> SENTINEL = ...
> 
> I can't think of any case where I'd rather have my own custom
> sentinel, or need a special API for sentinels. Probably my fault, of
> course. Please enlighten me!

One use case for a sentinel that is not a predefined (builtin) singleton is 
APIs where an arbitrary user-specified value can be used.

One example of this is the definition of dataclasses.field:

dataclasses.field(*, default=MISSING, default_factory=MISSING, 
repr=True, hash=None, init=True, compare=True, metadata=None)

Here the “default” and “default_factory” can be an arbitrary value, and any 
builtin singleton could be used. Hence the use of a custom module-private 
sentinel that cannot clash with values used by users of the module (unless 
those users poke at private details of the module, but then all bets are off 
anyway).

That’s why I don’t particularly like the proposal of using Ellipsis as the 
sanctioned sentinel value. It would be weird at best that the default for a 
dataclass field can be any value, except for the builtin Ellipsis value.
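The dataclasses pattern in miniature: a module-private sentinel can never collide with a user-supplied default, not even with ``...`` (names here are illustrative, not the actual dataclasses internals):

```python
class _MissingType:
    def __repr__(self):
        return "MISSING"

MISSING = _MissingType()  # module-private; callers can't pass it by accident

def field(default=MISSING):
    """Return the given default, or a marker when none was supplied."""
    if default is MISSING:
        return "<no default>"
    return default

print(field())      # <no default>
print(field(None))  # None and ... are both perfectly valid defaults here
print(field(...))
```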

Ronald

> 
> Cheers,
> 
> Luciano
> 
> On Thu, May 20, 2021 at 8:35 AM Victor Stinner  wrote:
>> 
>> IMO you should consider writing a PEP to enhance sentinels in Python,
>> and maybe even provide a public API for sentinels in general.
>> 
>> Victor
> 
> 
> 
> -- 
> Luciano Ramalho
> |  Author of Fluent Python (O'Reilly, 2015)
> | http://shop.oreilly.com/product/0636920032519.do
> |  Technical Principal at ThoughtWorks
> |  Twitter: @ramalhoorg

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/U5KUT6F6SETJZFCPJISNAZNYJZMJD5BP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: gzip.py: allow deterministic compression (without time stamp)

2021-04-15 Thread Ronald Oussoren via Python-Dev


> On 15 Apr 2021, at 14:48, Antoine Pitrou  wrote:
> 
> On Thu, 15 Apr 2021 14:32:05 +0200
> Victor Stinner  wrote:
>> SOURCE_DATE_EPOCH is not a random variable, but is a *standardised*
>> environment variable:
>> https://reproducible-builds.org/docs/source-date-epoch/
> 
> Standardized by whom? This is not a POSIX nor Windows standard at
> least. Just because a Web page claims it is standardized doesn't mean
> that it is.
> 
>> More and more projects adopted it. As I wrote, the Python stdlib
>> already uses it in compileall and py_compile modules.
> 
> Those are higher-level modules.  Doing it in the gzip module directly
> sounds like the wrong place.

I agree. According to the documentation this variable is meant to be used for 
build tools to accomplish reproducible builds. This should IMHO not affect 
lower level APIs and libraries that aren’t build related. 
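For what it’s worth, deterministic gzip output is already possible without any environment variable by passing ``mtime`` explicitly (supported by ``gzip.compress`` since Python 3.8):

```python
import gzip

# A fixed mtime removes the timestamp variation from the header, so the
# same input always yields byte-identical compressed output.
payload = b"hello world"
first = gzip.compress(payload, mtime=0)
second = gzip.compress(payload, mtime=0)
assert first == second
print(gzip.decompress(first))  # b'hello world'
```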

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZVMT7A46CAR7CAS5MJNO5IZRZGOP6XN6/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: NamedTemporaryFile and context managers

2021-04-08 Thread Ronald Oussoren via Python-Dev


> On 8 Apr 2021, at 22:31, Ethan Furman  wrote:
> 
> In issue14243 [1] there are two issues being tracked:
> 
> - the difference in opening shared files between posix and Windows
> - the behavior of closing the underlying file in the middle of
>  NamedTemporaryFile's context management
> 
> I'd like to address and get feedback on the context management issue.
> 
> ```python
> from tempfile import NamedTemporaryFile
> 
> with NamedTemporaryFile() as fp:
>     fp.write(b'some data')
      fp.flush()  # needed to ensure data is actually in the file ;-)
>     fp = open(fp.name)
>     data = fp.read()
> 
> assert data == 'some data'
> ```

I generally use a slightly different pattern for this:

with NamedTemporaryFile() as fp:
    fp.write(b'some data')
    fp.flush()
    fp.seek(0)
    data = fp.read()

That is, reuse the same “fp” and just reset the stream.  An advantage of this 
approach is that you don’t need a named temporary file for this (and could even 
use a spooled one).  That said, I at times use this pattern with a named 
temporary file, with a quick self-test of the file contents before handing off 
the file name to an external process.
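The same pattern works with a spooled file, which never needs an on-disk name at all (a sketch):

```python
from tempfile import SpooledTemporaryFile

with SpooledTemporaryFile(max_size=1024) as fp:
    fp.write(b"some data")
    fp.flush()
    fp.seek(0)
    data = fp.read()

assert data == b"some data"
```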


> 
> Occasionally, it is desirable to close and reopen the temporary file in order 
> to read the contents (there are OSes that cannot open a temp file for reading 
> while it is still open for writing).  This would look like:
> 
> ```python
> from tempfile import NamedTemporaryFile
> 
> with NamedTemporaryFile() as fp:
>     fp.write(b'some data')
>     fp.close()  # Windows workaround
>     fp.open()
>     data = fp.read()
> 
> assert data == 'some data'
> ```
> 
> The problem is that, even though `fp.open()` is still inside the context 
> manager, the `close()` call deletes the file [2].  To handle this scenario, 
> my proposal is two-fold:
> 
> 1) stop using the TEMPFILE OS attribute so the OS doesn't delete the file on 
> close
> 2) add `.open()` to NamedTemporaryFile

I’ve never had the need for such an API, but I must say that I barely use 
Windows and hence have not run into the “cannot open a file for reading while it 
is still open for writing” issue.

> 
> A possible side effect of (1) is that temp files may accumulate if the 
> interpreter crashes, but given the file-management abilities in today's 
> software that seems like a minor annoyance at most.
> 
> The backwards compatibility issue of (1) is that the file is no longer 
> deleted after a manual `close()` -- but why one would call close() and then 
> stay inside the CM, outside of testing, I cannot fathom.  [3]
> 
> So, opinions on modifying NamedTemporaryFile to not delete on close() if 
> inside a CM, and add open() ?
> 
> --
> ~Ethan~
> 

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/LOE7BKIHV66IG3J6LP3UAJSZL65AXWCK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 651 -- Robust Overflow Handling

2021-01-20 Thread Ronald Oussoren via Python-Dev


> On 19 Jan 2021, at 17:15, Antoine Pitrou  wrote:
> 
> On Tue, 19 Jan 2021 15:54:39 +
> Mark Shannon mailto:m...@hotpy.org>> wrote:
>> On 19/01/2021 3:40 pm, Antoine Pitrou wrote:
>>> On Tue, 19 Jan 2021 13:31:45 +
>>> Mark Shannon  wrote:  
 Hi everyone,
 
 It's time for yet another PEP :)
 
 Fortunately, this one is a small one that doesn't change much.
 Its aim is to make the VM more robust.  
>>> 
>>> On the principle, no objection.
>>> 
>>> In practice, can you show how an implementation of Py_CheckStackDepth()
>>> would look like?  
>> 
>> It would depend on the platform, but a portable-ish implementation is here:
>> 
>> https://github.com/markshannon/cpython/blob/pep-overflow-implementation/Include/internal/pycore_ceval.h#L71
> 
> This doesn't tell me how `stack_limit_pointer` is computed or estimated
> :-)

There already is an implementation of this for Windows (``PyOS_CheckStack``). 
For other platforms there will have to be a different implementation of this 
function.  I’ve looked into this in the past for macOS, and that platform has 
an API to retrieve the size of the stack as well as a pointer to the start of 
the stack (AFAIK stacks aren’t auto-growing on macOS).
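Not the C-level check under discussion, but related and queryable from Python today: on POSIX systems the ``resource`` module exposes the OS stack limit, which is one of the inputs such an implementation would need (a hedged sketch; ``resource`` is unavailable on Windows):

```python
import resource

# RLIMIT_STACK is the OS limit on the main thread's stack size, in bytes.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
for name, limit in (("soft", soft), ("hard", hard)):
    label = "unlimited" if limit == resource.RLIM_INFINITY else f"{limit} bytes"
    print(f"{name} stack limit: {label}")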

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/F62XDAI3NF5YXMQXWEIIIDKY3M6KZ64O/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Unification of the Mac builds?

2021-01-15 Thread Ronald Oussoren via Python-Dev


> On 14 Jan 2021, at 23:03, Chris Barker via Python-Dev  
> wrote:
> 
> Ned,
> 
> Thanks -- I'll take further discussion to the python-mac list.
> 
> Ronald:
> 
> That’s a feature of the framework build. The unix build is exactly the same 
> as a unix build on other platforms.  Adding the same feature to the unix build 
> should be possible, but would complicate the build.  I have no interest in 
> working on this, but would be willing to review PRs,
> 
> fair enough.
>  
> as long as those aim for feature parity with the framework build. That is, 
> both pythonw(1) and python(1) should redirect through an embedded app bundle.
> 
> yes, that's what I'm thinking.
> 
> Done, just configure “—enable-universalsdk —with-universal-archs=universal2” 
> without specifying a framework build.
> 
> Excellent, thanks!
> 
> Something I would like to work on, but don’t have enough free time for, is an 
> alternative installation with an application bundle on the top level instead 
> of a framework.  Installation would then entail dropping “Python X.Y.app” 
> into your application folder, and uninstallation would be to drop the same 
> bundle into the bin.
> 
> That would be nice -- and would, I think, actually buy us something useful 
> from this bundling :-)

To be honest I’d expect that this will also lead to complaints about how this 
build is not a unix build ;-) 

> 
> Just don’t use conda ;-).  To be blunt, doing this properly is trivial.
> 
> which "this" are you referring to -- and if it's trivial, then why hasn't 
> anyone figured out how in multiple conversations over years?
“This” is having a pythonw that’s the same as in framework builds.

I don’t know. I do know that nobody asked questions about this before.

> 
> But if it is -- tell us how and maybe we will do it (if it's really trivial, 
> maybe even I, with my limited build skills could do it)


For anyone reading along that isn’t familiar with macOS: on macOS a number of 
system libraries, and in particular GUI libraries, require that they are used 
from an application bundle (“.app”).  For Python we want to be able to just 
use the command line for simple scripts (and for testing of more complicated 
applications), instead of having to build an app bundle every time.

In framework builds this is solved by two changes relative to a unix build:
1. The regular python binary is in 
“{sys.prefix}/Resources/Python.app/Contents/MacOS/Python”
2. “{sys.executable}” (or “{sys.prefix}/bin/python3”) is a stub executable that 
does nothing but execv[1] the regular python binary. The source of this binary 
is in Mac/Tools/pythonw.c


Note: All of this requires using a shared library build (—enable-shared) 
because of the way the stub finds sys.prefix.
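For readers who want to check which kind of build they are running, the ``PYTHONFRAMEWORK`` config variable is set to the framework name (normally "Python") in framework builds and is empty in unix builds (a small sketch; other config vars may also differ per build):

```python
import sysconfig

# Empty string (or None) on unix builds, the framework name otherwise.
is_framework = bool(sysconfig.get_config_var("PYTHONFRAMEWORK"))
print("framework build" if is_framework else "unix build")
```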

To add this to the unix build as well requires:
1. Make up a location for the Python.app
2. Change the Makefile(.pre.in) to build the stub executable and install the 
files in their new location, but only when installing on macOS with an 
—enable-shared build
3. Tweak the code in pythonw.c to calculate the correct path to Python.app 
4. Possibly tweak the calculation of sys.prefix for unix builds, IIRC the 
special macOS sauce is enabled for framework builds only
5. Do some debugging because this list is likely incomplete, I typed it from 
memory


Ronald

[1] for the pedantically correct: it actually uses posix_spawn in exec mode to 
ensure that the correct CPU architecture is used. That’s why “arch -x86_64 
python3.9” works when you use the universal2 installer for python 3.9, if we’d 
use execv the stub would launch the actual interpreter in native mode.

Ronald

> 
> Thanks,
> 
> -CHB
> 
> -- 
> 
> Christopher Barker, Ph.D.
> Oceanographer
> 
> Emergency Response Division
> NOAA/NOS/OR(206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
> 
> chris.bar...@noaa.gov 
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/7YZ4JB5PTL2LKGRCNZF7X4TP7ZYTBW3H/
> Code of Conduct: http://python.org/psf/codeofconduct/

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/AYVB5ILUE4B4XNVNDSZ2LUJ2SHT3IS6W/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Unification of the Mac builds?

2021-01-09 Thread Ronald Oussoren via Python-Dev


> On 8 Jan 2021, at 20:38, Chris Barker via Python-Dev  
> wrote:
> 
> Sorry if I'm out of the loop here, but with Apple's new chip coming out, we 
> need new a build configuration (which I think has already been started, if 
> not done).
> 
> Perhaps we could take this opportunity to better modularize / unify the build 
> setup?
> 
> As it was last I checked, you really had only two options:
> 
> 1) Ignore anything mac specific, and get a "unix" build.
> 
> 2) Get a full "Framework" build, optionally with Universal support.
> 
> It would be nice to keep the Framework structure independent of the 
> Mac-specific features, if possible.
> 
> In particular, I'd love to get be able to get the "pythonw" executable 
> wrapper in an otherwise standard unix build [*].

That’s a feature of the framework build. The unix build is exactly the same as 
a unix build on other platforms.  Adding the same feature to the unix build 
should be possible, but would complicate the build.  I have no interest in 
working on this, but would be willing to review PRs, as long as those aim for 
feature parity with the framework build. That is, both pythonw(1) and python(1) 
should redirect through an embedded app bundle.

> 
> It would also be nice if it were possible to get universal binaries in a 
> "unix style" build.

Let me sneak away in Guido’s time machine for a while.  

…

Done, just configure “—enable-universalsdk —with-universal-archs=universal2” 
without specifying a framework build.

> 
> (option 3 would be to simply abandon the Framework Build altogether -- it's 
> still not clear to me what this really buys mac users)

In some ways this is historic, but frameworks are still the way to build a 
self-contained library on macOS. A major thing this buys us is having 
side-by-side installs of Python. 

> 
> Any chance of this happening? I'm afraid I know nothing of autoconf, so can't 
> be much help, but I'd be willing to help out with testing, or documenting, or 
> anything else that I have the skills to do.

That’s not really something I intend to work on.  

Something I would like to work on, but don’t have enough free time for, is an 
alternative installation with an application bundle on the top level instead of 
a framework.  Installation would then entail dropping “Python X.Y.app” into 
your application folder, and uninstallation would be to drop the same bundle 
into the bin.   This might also make it possible to distribute Python through 
the Mac App Store, although I haven’t checked recently if sandboxing 
requirements have been relaxed enough to make that worthwhile. 
 
> 
> Thanks,
> 
> -Chris
> 
> [*] The pythonw issue has been a thorn in the side of conda for years. conda 
> uses a standard unix build on the Mac, for consistency with other unix 
> systems. But there is no configuration set up to build the pythonw wrapper 
> outside of a "Framework" build. So instead, conda creates its own 
> "pythonw" wrapper -- but that is a bash script that re-directs to a different 
> executable. This works fine on the command line (or #! line), but it does not 
> work with setuptools' entry_points. And the setuptools community hasn't shown 
> any interest in hacking around it. Anyway, few enough people develop Desktop 
> apps (particularly with conda) so this has lingered, but it would be nice to 
> fix.

Just don’t use conda ;-).  To be blunt, doing this properly is trivial.

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/P62REKQYYJ2I66PL7AC5MHH4OHXQ5JMM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Enhancement request for PyUnicode proxies

2020-12-28 Thread Ronald Oussoren via Python-Dev


> On 28 Dec 2020, at 14:00, Inada Naoki  wrote:
> 
> On Mon, Dec 28, 2020 at 8:52 PM Phil Thompson
>  wrote:
>> 
>> 
>> I would have thought that an object was defined by its behaviour rather
>> than by any particular implementation detail.
>> 
> 
> As my understanding, the policy "an object was defined by its
> behavior..." doesn't mean "put unlimited amount of implementation
> behind one concrete type."
> The policy means APIs shouldn't limit input to one concrete type
> without a reason. In other words, duck typing and structural subtyping
> are good.
> 
> For example, we can try making io.TextIOWrapper accepts not only
> Unicode objects (including subclass) but any objects implementing some
> protocol.
> We already have __index__ for integers and buffer protocol for
> byts-like objects. That is examples of the policy.

I agree that that would be the cleanest approach, although I worry about how 
long it will take until third-party code is converted to the new protocol. 
That’s why I wrote earlier that adding this feature to PyUnicode_Type is the 
most pragmatic solution ;-)

There are two clear options for a new protocol:

1. Add something similar to __index__ or __fspath__, but for “string-like” 
objects

2. Add an extension to the buffer protocol

In either case an ABC for string-like objects would also be nice, to be able
to opt in to the fairly common pattern of excluding strings from the types 
that can be iterated over, that is:

    if isinstance(value, collections.abc.Iterable) and not isinstance(value, str):
        for item in value:
            process_item(item)
    else:
        process_item(value)
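Such an ABC could be as small as a registration-only marker class. The sketch below uses the hypothetical name ``StringLike`` — nothing with that name exists in ``collections.abc`` today:

```python
from abc import ABC
from collections.abc import Iterable

class StringLike(ABC):
    """Hypothetical marker ABC for objects that behave like str."""

StringLike.register(str)  # real strings count as string-like

def process(value, process_item):
    # Iterate over containers, but treat string-like objects as scalars.
    if isinstance(value, Iterable) and not isinstance(value, StringLike):
        for item in value:
            process_item(item)
    else:
        process_item(value)

seen = []
process(["a", "b"], seen.append)   # iterated element by element
process("ab", seen.append)         # treated as a single item
print(seen)  # ['a', 'b', 'ab']
```

A foreign string proxy type would simply register itself (or inherit from the ABC) to get the same scalar treatment without subclassing str.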

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/ 


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/BCN2WSLQ6YKEF6OO4E75EGYOGB6CFKXA/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Enhancement request for PyUnicode proxies

2020-12-28 Thread Ronald Oussoren via Python-Dev


> On 28 Dec 2020, at 03:58, Greg Ewing  wrote:
> 
> Rather than a full-blown buffer-protocol-like thing, could we
> get by with something simpler? How about just having a flag
> in the unicode object indicating that it doesn't own the
> memory that it points to?

I don’t know about the OP, but for me that wouldn’t be good enough as I’d still
have to copy the string value because of the semantics of ObjC strings.

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/ 

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/D4LABINWQ6ZFPLDFRP6AL2PWCBA343DI/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Enhancement request for PyUnicode proxies

2020-12-27 Thread Ronald Oussoren via Python-Dev


> On 26 Dec 2020, at 18:43, Guido van Rossum  wrote:
> 
> On Sat, Dec 26, 2020 at 3:54 AM Phil Thompson via Python-Dev 
> mailto:python-dev@python.org>> wrote:
> It's worth comparing the situation with byte arrays. There is no problem 
> of translating different representations of an element, but there is 
> still the issue of who owns the memory. The Python buffer protocol 
> usually solves this problem, so something similar for unicode "arrays" 
> might suffice.
> 
> Exactly my thought on the matter. I have no doubt that between all of us we 
> could design a decent protocol.
> 
> The practical problem would be to convince enough people that this is worth 
> doing to actually get the code changed (str being one of the most popular 
> data types traveling across C API boundaries), in the CPython core (which 
> surely has a lot of places to modify) as well as in the vast collection of 
> affected 3rd party modules. Like many migrations it's an endless slog for the 
> developers involved, and in open source it's hard to assign resources for 
> such a project.

That’s a problem indeed.  An 80% solution could be reached by teaching 
PyArg_Parse* about the new protocol: it already uses the buffer protocol for 
bytes-like objects and could be taught about a variant of the protocol for 
strings.  That would require that the implementation of that new variant 
returns a pointer in the Py_buffer view that can be used after the view is 
released, but that’s already a restriction for the use of new-style buffers in 
the PyArg_Parse* APIs.

That wouldn’t be a solution for code using the PyUnicode_* APIs of course, nor 
Python code explicitly checking for the str type.

In the end a new string “kind” (next to the 1, 2 and 4 byte variants) where 
callbacks are used to provide data might be the most pragmatic.  That will 
still break code peeking directly into the PyUnicodeObject struct, but anyone 
doing that should know that that is not a stable API.

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/
>  
> -- 
> --Guido van Rossum (python.org/~guido )
> Pronouns: he/him (why is my pronoun here?) 
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/2FO5LQIO7UV4HKLROHUTPFKCBT2MH6DJ/
> Code of Conduct: http://python.org/psf/codeofconduct/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PB6U65EPFM7CP55QP7USUEPHJXHON4SZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Enhancement request for PyUnicode proxies

2020-12-26 Thread Ronald Oussoren via Python-Dev

> On 25 Dec 2020, at 23:03, Nelson, Karl E. via Python-Dev 
>  wrote:
> 
> I was directed to post this request to the general Python development 
> community so hopefully this is on topic.
>  
> One of the weaknesses of the PyUnicode implementation is that the type is 
> concrete and there is no option for an abstract proxy string to a foreign 
> source.  This is an issue for an API like JPype in which java.lang.Strings 
> are passed back from Java.   Ideally these would be a type derived from the 
> Unicode type str, but that requires transferring the memory immediately from 
> Java to Python even when that handle is large and will never be accessed from 
> within Python.  For certain operations like XML parsing this can be 
> prohibitive, so instead of returning a str we return a JString.   (There is 
> a separate issue that Java method names and Python method names conflict so 
> direct inheritance creates some problems.)
>  
> The JString type can of course be transferred to Python space at any time as 
> both Python Unicode and Java string objects are immutable.  However the 
> CPython API which takes strings only accepts the Unicode type objects which 
> have a concrete implementation.  It is possible to extend strings, but those 
> extensions do not allow for proxing as far as I can tell.  Thus there is no 
> option currently to proxy to a string representation in another language.  
> The concept of using the duck-typed ``__str__`` method is insufficient, as 
> this indicates that an object can become a string, rather than “this object is 
> effectively a string” for the purposes of the CPython API.
>  
> One way to address this is to use currently outdated copy of READY to extend 
> Unicode objects to other languages.  A class like JString would be an unready 
> Unicode object which when READY is called transfers the memory from Java, 
> sets up the flags and sets up a pointer to the code point representation.  
> Unfortunately the READY concept is scheduled for removal and thus the chance 
> to address the needs for proxying a Unicode to another languages 
> representation may be limited. There may be other methods to accomplish this 
> without using the concept of READY.  So long as access to the code points go 
> through the Unicode API and the Unicode object can be extended such that the 
> actual code points may be located outside of the Unicode object then a proxy 
> can still be achieved if there are hooks in it to decided when a transfer 
> should be performed.   Generally the transfer request only needs to happen 
> once  but the key issue being that the number of code points (nor the kind of 
> points) will not be known until the memory is transferred.
>  
> Java has much the same problem.   Although they defined an interface class 
> “java.lang.CharacterArray” the actually “java.lang.String” class is concrete 
> and almost all API methods take a String rather than the base interface even 
> when the base interface would have been adequate.  Thus just like Python has 
> difficulty treating a foreign string class as it would a native one, Java 
> cannot treat a Python string as native one as well.  So Python strings get 
> represented as CharacterArray type which effectively limits it use greatly.
>  
> Summary:
>  
> A String proxy would need the address of the memory in the “wstr” slot though 
> the code points may be char[], wchar[] or int[] depending the representation 
> in the proxy.
> API calls to interpret the data would need to check to see if the data is 
> transferred first, if not it would call the proxy dependent transfer method 
> which is responsible for creating a block of code points and set up flags 
> (kind, ascii, ready, and compact). 
> The memory block allocated would need to call the proxy dependent destructor 
> to clean up with the string is done.
> It is not clear if this would have impact on performance.   Python already 
> has the concept of a string which needs actions before it can be accessed, 
> but this is scheduled for removal.
>  
> Are there any plans currently to address the concept of a proxy string in 
> PyUnicode API?  

I have a similar problem in PyObjC, which proxies Objective-C classes to Python 
(and the other way around). For interop with Python code I proxy Objective-C 
strings using a subclass of str() that is eagerly populated, even if, as you 
mention as well, a lot of these proxy objects are never used in a context where 
the str() representation is important.  A complicating factor for me is that 
Objective-C strings are, in general, mutable, which can lead to interesting 
behaviour.  Another disadvantage of subclassing str() for foreign string 
types is that this removes the proxy class from its logical location in the 
class hierarchy (in my case the proxy type is not a subclass of the proxy type 
for NSObject, even though all Objective-C classes inherit from NSObject).

I primarily chose to subclass the str type because that 

[Python-Dev] Re: macOS issues with 3.8.7rc1

2020-12-10 Thread Ronald Oussoren via Python-Dev


> On 10 Dec 2020, at 06:38, Greg Ewing  wrote:
> 
> On 10/12/20 10:28 am, Guido van Rossum wrote:
>> In my experience Apple hardware is very reliable and way outlives the OS 
>> updates. Even if an OS updates is still available, the newer OS often needs 
>> more memory, which can be a problem
> 
> Another problem, for me at least, is that OS updates often seem
> to remove functionality that I like and rely on. I stuck with
> 10.6 for a very long time, because it did everything I wanted,
> and there were some third party extensions I used that stopped
> working in 10.7 and there were no good replacements available.

Luckily there are no plans to remove support for macOS versions :-)

For master and 3.9 we can build on macOS 11 and deploy to macOS 10.9 and that  
won’t change. For 3.8 that doesn’t work yet, but backporting is just boring 
work. For 3.7 and earlier this will never work, those branches are closed for 
development.

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/
> 
> -- 
> Greg
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/57DWVZDDJNXBNT6UKGSFH5STJHYREAHO/
> Code of Conduct: http://python.org/psf/codeofconduct/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MQWRHCXJ57GBT3FV53MGWWWO7XPMNCTO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: macOS issues with 3.8.7rc1

2020-12-09 Thread Ronald Oussoren via Python-Dev


> On 9 Dec 2020, at 19:10, Gregory P. Smith  wrote:
> 
> 
> 
> As a meta question: Is there a good reason to support binaries running on 
> macOS earlier than ~ $latest_version-1?
> 
> Aren't systems running those old releases rather than upgrading unsupported 
> by Apple, never to be patched, and thus not wise to even have on a network?

There’s no documented policy about which versions are supported, but N-2 would 
be a good guess (even if Apple mostly just publishes patches for N-1).  What 
worries me a little about dropping older macOS versions is that I haven’t found 
good information about upgrade rates, but I have seen anecdotes about users not 
upgrading or upgrading late. 

That said, we support older versions because it’s fairly easy to do and it 
keeps supporting users stuck on older versions for other reasons.  
> 
> Yes, that means some very old hardware becomes useless as Apple drops 
> support. But that is what people signed up for when they bought it. Why 
> should that be our problem?
> 
> (It sounds like y'all will make it work, that's great! I'm really just 
> wondering where the motivation comes from)

The basic work is needed for N-1 (build on macOS 11, deploy to 10.15), doing 
the same for macOS 10.9 was fairly straightforward. This is work that needs to 
be done once, and is clearly labelled after that (making it easy to identify 
when we decide to move the minimal version forward). 

One reason to support 10.9 for the universal2 installers is that this allows us 
to move all users over: the universal2 installer can be used on all systems 
where the current x86_64 installers can be used, and should be faster because 
we also (finally) enabled LTO/PGO for the macOS installer.

BTW. Removing support for old versions is also work. The universal2 patches 
finally dropped support for weak linking a couple of APIs that were introduced 
in macOS 10.4, which we haven’t supported in a very long while.

Ronald

> 
> -gps
> 
> On Wed, Dec 9, 2020, 9:25 AM Gregory Szorc  <mailto:gregory.sz...@gmail.com>> wrote:
> On Wed, Dec 9, 2020 at 4:13 AM Ronald Oussoren  <mailto:ronaldousso...@mac.com>> wrote:
> 
> 
>> On 8 Dec 2020, at 19:59, Gregory Szorc > <mailto:gregory.sz...@gmail.com>> wrote:
>> 
>> Regarding the 3.8.7rc1 release, I wanted to raise some issues regarding 
>> macOS.
>> 
>> Without the changes from https://github.com/python/cpython/pull/22855 
>> <https://github.com/python/cpython/pull/22855> backported, attempting to 
>> build a portable binary on macOS 11 (e.g. by setting 
>> `MACOSX_DEPLOYMENT_TARGET=10.9`) results in a myriad of `warning: 'XXX' is 
>> only available on macOS 10.13 or newer [-Wunguarded-availability-new]` 
>> warnings during the build. This warning could be innocuous if there is 
>> run-time probing in place (the symbols in question are weakly linked, which 
>> is good). But if I'm reading the code correctly, run-time probing was 
>> introduced by commits like eee543722 and isn't present in 3.8.7rc1.
>> 
>> I don't have a machine with older macOS sitting around to test, but I'm 
>> fairly certain the lack of these patches means binaries built on macOS 11 
>> will blow up at run-time when run on older macOS versions.
>> 
>> These same patches also taught CPython to build and run properly on Apple 
>> ARM hardware. I suspect some people will care about these being backported 
>> to 3.8.
>> 
> We know. Backporting the relevant changes to 3.8 is taking more time than I 
> had hoped. It doesn’t help that I’ve been busy at work and don’t have as much 
> energy during the weekend as I’d like.
> 
> The backport to 3.9 was fairly easy because there were few changes between 
> master and the 3.9 branch at the time. Sadly there have been conflicting 
> changes since 3.8 was forked (in particular in posixmodule.c).
> 
> The current best practice for building binaries that work on macOS 10.9 is to 
> build on that release (or rather, with that SDK).  That doesn’t help if you 
> want to build Universal 2 binaries though.
> 
> Thank you for your hard work devising the patches and working to backport 
> them.
> 
> I personally care a lot about these patches and I have the technical 
> competency to perform the backport. If you need help, I could potentially 
> find time to hack on it. Just email me privately (or ping @indygreg on 
> GitHub) and let me know. Even if they don't get into 3.8.7, I'll likely 
> cherry pick the patches for 
> https://github.com/indygreg/python-build-standalone 
> <https://github.com/indygreg/python-build-standalone>. And I'm sure other 
> downstream packagers will want them as well. So having them in an unreleased 
> 3.8 branch is

[Python-Dev] Re: macOS issues with 3.8.7rc1

2020-12-09 Thread Ronald Oussoren via Python-Dev


> On 8 Dec 2020, at 19:59, Gregory Szorc  wrote:
> 
> Regarding the 3.8.7rc1 release, I wanted to raise some issues regarding macOS.
> 
> Without the changes from https://github.com/python/cpython/pull/22855 
>  backported, attempting to 
> build a portable binary on macOS 11 (e.g. by setting 
> `MACOSX_DEPLOYMENT_TARGET=10.9`) results in a myriad of `warning: 'XXX' is 
> only available on macOS 10.13 or newer [-Wunguarded-availability-new]` 
> warnings during the build. This warning could be innocuous if there is 
> run-time probing in place (the symbols in question are weakly linked, which 
> is good). But if I'm reading the code correctly, run-time probing was 
> introduced by commits like eee543722 and isn't present in 3.8.7rc1.
> 
> I don't have a machine with older macOS sitting around to test, but I'm 
> fairly certain the lack of these patches means binaries built on macOS 11 
> will blow up at run-time when run on older macOS versions.
> 
> These same patches also taught CPython to build and run properly on Apple ARM 
> hardware. I suspect some people will care about these being backported to 3.8.
> 
We know. Backporting the relevant changes to 3.8 is taking more time than I had 
hoped. It doesn’t help that I’ve been busy at work and don’t have as much 
energy during the weekend as I’d like.

The backport to 3.9 was fairly easy because there were few changes between 
master and the 3.9 branch at the time. Sadly there have been conflicting 
changes since 3.8 was forked (in particular in posixmodule.c).

The current best practice for building binaries that work on macOS 10.9 is to 
build on that release (or rather, with that SDK).  That doesn’t help if you 
want to build Universal 2 binaries though.


> I suspect people in the Python application packaging/distribution space will 
> be significantly affected by this (I know I am with PyOxidizer). Is it worth 
> making the backport of these patches a 3.8.7 release blocker or a trigger for 
> a special 3.8.8 release shortly thereafter?


Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6R2OIQAFFJ2E2IYH3U5PPGLHQ5CYBCUC/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 642: Constraint Pattern Syntax for Structural Pattern Matching

2020-11-05 Thread Ronald Oussoren via Python-Dev



> On 3 Nov 2020, at 16:36, Paul Svensson  wrote:
> 
> On Tue, 3 Nov 2020, Greg Ewing wrote:
> 
>> On 3/11/20 11:01 am, Ethan Furman wrote:
>> 
>>> I believe supporting
>>> 
>>> case x, x   # look ma!  no guard!
>>> is a possible future enhancement.
>> 
>> In which case there will be a need for *some* kind of true
>> "don't care" placeholder. If it's not "_" then it will have
>> to be something else like "?". And we need to decide about
>> it now, because once people start using "_" as a wildcard
>> in patterns, it will be too late to go back.
>> 
> 
> But will it, really ?
> It seems to me, that if we leave the "_" magic out,
> and leave "case x, x" to the linters,
> that leaves a clear path forward
> for whatever can be decided whenever it can be decided.

Leaving this to linters makes it harder to change the behaviour of “case x, x” 
later.  Also: not everyone uses a linter.

The particular example of “case x, x” also seems to be a bit of a red herring, 
because that scenario is close to regular tuple unpacking. If I read the PEP 
correctly, binding the same name multiple times is also forbidden in more 
complex scenarios where the multiple binding is not so easily recognised, such 
as "case Rect(Point(x, y), Size(x, w))”.

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HFZAGUKJUYO5D4BTMNDCGPNA74ZHTUCN/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Site Packages under LIB folder

2020-10-30 Thread Ronald Oussoren via Python-Dev


> On 30 Oct 2020, at 08:37, rajesh.narasim...@gmail.com wrote:
> 
> I have installed new python version 3.9, I wanted to move all the 
> site-packages that I have used in 3.8 to 3.9 lib.  Is it possible?  

This is not a support list for Python, but a list about the development of the 
Python language. It would be better to ask on the python-list mailing list.

That said: in general it is not possible to move site-packages from one Python 
version to another. The primary reason is that C extensions are in general not 
compatible between versions.
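As an illustration of that version dependence: the interpreter's cache tag is 
baked into the filenames of compiled extension modules, which you can inspect 
from Python itself.

```python
import importlib.machinery
import sys

# The cache tag (e.g. "cpython-39") encodes interpreter and version;
# it appears in extension-module filenames, so a module built for one
# version is simply not found by another.
print(sys.implementation.cache_tag)
print(importlib.machinery.EXTENSION_SUFFIXES)
```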

> 
> I also wanted to know why we need to have lib under every specific version, 
> it would be nice if we have common lib in which I can configure those based 
> on the version I use.  Is that available already if not is that something we 
> can implement in future version this will  help most of the developers in not 
> moving the site-packages from one version to another every time we upgrade 
> the version.
There is a stable ABI that is currently used by a small subset of C extensions. 
A number of contributors are working on enhancing the APIs for the stable ABI 
to make it usable by more C extensions. That may lead to a future where it will 
be easier to share extensions between Python versions, but even then it may not 
be possible to just copy libraries to the new version, because some libraries 
need adjustments for behavioural changes in the new version.

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/
> 
> Regards
> 
> Rajesh Narasimhan
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/LVM5GZMAXIVKDZ2BM7SJXPCMZFED4KOO/
> Code of Conduct: http://python.org/psf/codeofconduct/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/UB2NCXF24OZ2KIB4DIWDIJPZMGDNWHTR/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Drop Solaris, OpenSolaris, Illumos and OpenIndiana support in Python

2020-10-30 Thread Ronald Oussoren via Python-Dev


> On 30 Oct 2020, at 13:54, Victor Stinner  wrote:
> 
> Hi Ronald,
> 
> Le ven. 30 oct. 2020 à 12:59, Ronald Oussoren  a 
> écrit :
>> I agree. That’s what I tried to write, its not just providing a buildbot but 
>> also making sure that it keeps working and stays green.
> 
> This is really great!

Whoa, not so fast. I’m not volunteering to work on Solaris support ;-).  There is 
another user on BPO who works on Solaris stuff, see 
https://bugs.python.org/issue41839.

I have worked with Solaris (and other “classic” UNIX systems) in the past, but 
haven’t done so in at least 15 years.  For me “cross-platform” means macOS, RHEL 
and Windows these days. 

That said, I’m willing to review Solaris PRs (time permitting) but cannot test 
if changes actually work.

[…]
> 
> By the way, thanks Jakub Kulík (CC-ed to this email) who fixed
> multiple Solaris issues in the last 2 years ;-)

That’s the BPO user I mentioned earlier :-)

P.S. Another platform to consider dropping is IRIX. I noticed in your PR that 
there’s still some support for that platform in the code base, and according to 
Wikipedia IRIX has been out of development for over 14 years 
(https://en.wikipedia.org/wiki/IRIX).

Ronald
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RUII2QWF64LUVURBZBJW7XGWHSNYBFBT/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Drop Solaris, OpenSolaris, Illumos and OpenIndiana support in Python

2020-10-30 Thread Ronald Oussoren via Python-Dev


> On 30 Oct 2020, at 10:34, Pablo Galindo Salgado  wrote:
> 
> Regarding having a Solaris buildbot: if someone provides a Solaris buildbot 
> then the deal is that that someone or some other party must look after that 
> buildbot and fix problems that appear in it in a timely manner. Broken 
> buildbots stop releases and I don't want to be in a situation in which I need 
> to halt a release because the Solaris buildbot is broken and there is no-one 
> to fix it in time.
> 
> In the past, we had to dealt with similar situations and not only is 
> stressful but normally ends in me or Victor having to login in a buildbot for 
> a platform that we are not experts on to try to fix it in time.

I agree. That’s what I tried to write: it’s not just providing a buildbot but 
also making sure that it keeps working and stays green.

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

> 
> On Fri, 30 Oct 2020, 09:29 Ronald Oussoren via Python-Dev, 
> mailto:python-dev@python.org>> wrote:
> 
> > On 29 Oct 2020, at 22:43, Victor Stinner  > <mailto:vstin...@python.org>> wrote:
> > 
> > Hi,
> > 
> > I propose to drop the Solaris support in Python to reduce the Python
> > maintenance burden:
> > 
> >   https://bugs.python.org/issue42173 <https://bugs.python.org/issue42173>
> > 
> > I wrote a draft PR to show how much code could be removed (around 700
> > lines in 65 files):
> > 
> >   https://github.com/python/cpython/pull/23002/files 
> > <https://github.com/python/cpython/pull/23002/files>
> 
> A significant fraction of that is in comments and documentation. A number of 
> the changes in documentation would be good to go in regardless of the 
> resolution of this proposal.
> 
> > 
> > In 2016, I asked if we still wanted to maintain the Solaris support in
> > Python, because Solaris buildbots were failing for longer than 6
> > months and nobody was able to fix them. It was requested to find a
> > core developer volunteer to fix Solaris issues and to set up a Solaris
> > buildbot.
> > 
> > https://mail.python.org/archives/list/python-dev@python.org/thread/NOT2RORSNX72ZLUHK2UUGBD4GTPNKBUS/#NOT2RORSNX72ZLUHK2UUGBD4GTPNKBUS
> >  
> > <https://mail.python.org/archives/list/python-dev@python.org/thread/NOT2RORSNX72ZLUHK2UUGBD4GTPNKBUS/#NOT2RORSNX72ZLUHK2UUGBD4GTPNKBUS>
> > 
> > Four years later, nothing has happened. Moreover, in 2018, Oracle laid
> > off the Solaris development engineering staff. There are around 25
> > open Python bugs specific to Solaris.
> 
> As another data point: There is someone on BPO that files issues about 
> Solaris on BPO, including PRs. It might be worthwhile to ask that person if 
> they can provide a buildbot (while making clear that this results in the 
> assumption that they’d look after Solaris port).
> 
> If Solaris would get dropped I’d prefer option 2
> 
> Ronald
> —
> 
> Twitter / micro.blog: @ronaldoussoren
> Blog: https://blog.ronaldoussoren.net/ <https://blog.ronaldoussoren.net/>
> 
> 
> ___
> Python-Dev mailing list -- python-dev@python.org 
> <mailto:python-dev@python.org>
> To unsubscribe send an email to python-dev-le...@python.org 
> <mailto:python-dev-le...@python.org>
> https://mail.python.org/mailman3/lists/python-dev.python.org/ 
> <https://mail.python.org/mailman3/lists/python-dev.python.org/>
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/XFRQ2VEBK6NKUWMC6HXOJDLAIOQHORCP/
>  
> <https://mail.python.org/archives/list/python-dev@python.org/message/XFRQ2VEBK6NKUWMC6HXOJDLAIOQHORCP/>
> Code of Conduct: http://python.org/psf/codeofconduct/ 
> <http://python.org/psf/codeofconduct/>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/A7Q7GW7ZI2EZDQSVYRCOI3S45MCS4WRZ/
> Code of Conduct: http://python.org/psf/codeofconduct/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/DUKMN44S6GCC3N3XHNAQRD4GXZV3PSUP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Drop Solaris, OpenSolaris, Illumos and OpenIndiana support in Python

2020-10-30 Thread Ronald Oussoren via Python-Dev

> On 29 Oct 2020, at 22:43, Victor Stinner  wrote:
> 
> Hi,
> 
> I propose to drop the Solaris support in Python to reduce the Python
> maintenance burden:
> 
>   https://bugs.python.org/issue42173
> 
> I wrote a draft PR to show how much code could be removed (around 700
> lines in 65 files):
> 
>   https://github.com/python/cpython/pull/23002/files

A significant fraction of that is in comments and documentation. A number of 
the changes in documentation would be good to go in regardless of the 
resolution of this proposal.

> 
> In 2016, I asked if we still wanted to maintain the Solaris support in
> Python, because Solaris buildbots were failing for longer than 6
> months and nobody was able to fix them. It was requested to find a
> core developer volunteer to fix Solaris issues and to set up a Solaris
> buildbot.
> 
> https://mail.python.org/archives/list/python-dev@python.org/thread/NOT2RORSNX72ZLUHK2UUGBD4GTPNKBUS/#NOT2RORSNX72ZLUHK2UUGBD4GTPNKBUS
> 
> Four years later, nothing has happened. Moreover, in 2018, Oracle laid
> off the Solaris development engineering staff. There are around 25
> open Python bugs specific to Solaris.

As another data point: there is someone who files Solaris issues on BPO, 
including PRs. It might be worthwhile to ask that person if they can provide a 
buildbot (while making clear that this comes with the expectation that they’d 
look after the Solaris port).

If Solaris were dropped, I’d prefer option 2.

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/XFRQ2VEBK6NKUWMC6HXOJDLAIOQHORCP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Speeding up CPython

2020-10-22 Thread Ronald Oussoren via Python-Dev

> On 20 Oct 2020, at 14:53, Mark Shannon  wrote:
> 
> Hi everyone,
> 
> CPython is slow. We all know that, yet little is done to fix it.
> 
> I'd like to change that.
> I have a plan to speed up CPython by a factor of five over the next few 
> years. But it needs funding.
> 
> I am aware that there have been several promised speed ups in the past that 
> have failed. You might wonder why this is different.
> 
> Here are three reasons:
> 1. I already have working code for the first stage.
> 2. I'm not promising a silver bullet. I recognize that this is a substantial 
> amount of work and needs funding.
> 3. I have extensive experience in VM implementation, not to mention a PhD in 
> the subject.
> 
> My ideas for possible funding, as well as the actual plan of development, can 
> be found here:
> 
> https://github.com/markshannon/faster-cpython
> 
> I'd love to hear your thoughts on this.

I don’t have anything useful to add to the discussion, other than to say that 
I’m happy to see that someone is willing to spend a significant amount of 
effort on making CPython faster.  Especially when that someone has worked on a 
faster Python implementation before (look for a HotPy talk at EuroPython).

I’m not too worried about the technical part and have no expertise in funding 
at all. I am worried that merging this work will take a significant amount of 
effort. It is likely to result in fairly significant changes to the core 
interpreter, and it might be hard to find enough core devs who are willing and 
able to review the changes in a timely manner.

Ronald
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6KXRTAH7FGF2SIS7ZJ3SG54LT4ELBWGI/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Speeding up CPython

2020-10-21 Thread Ronald Oussoren via Python-Dev


> On 21 Oct 2020, at 14:39, Larry Hastings  wrote:
> 
> On 10/21/20 4:04 AM, Antoine Pitrou wrote:
>> (apart from small fixes relating to borrowed references, and
>> that's mostly to make PyPy's life easier).
> 
> Speaking as the Gilectomy guy: borrowed references are evil.  The definition 
> of the valid lifetime of a borrowed reference doesn't exist, because they are 
> a hack (baked into the API!) that we mostly "get away with" just because of 
> the GIL.  If I still had wishes left on my monkey's paw I'd wish them away*.
> 
> 
Even with the GIL, borrowed references are problematic. In many cases calling 
an API that might run Python code can invalidate a borrowed reference. In 
general the only safe thing to do with a borrowed reference is to turn it into 
a strong reference as soon as possible.
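For example, a sketch of that pattern (`first_item_str` is an illustrative 
name, not an existing API): PyList_GetItem returns a borrowed reference, and 
the subsequent call can run arbitrary Python code that drops the list's own 
reference to the item.

```c
#include <Python.h>

/* Take a strong reference to a borrowed one as soon as possible,
 * before calling anything that might run Python code. */
static PyObject *
first_item_str(PyObject *list)
{
    PyObject *item = PyList_GetItem(list, 0);   /* borrowed reference */
    if (item == NULL) {
        return NULL;
    }
    Py_INCREF(item);                            /* make it strong ASAP */

    PyObject *text = PyObject_Str(item);        /* may run Python code */
    Py_DECREF(item);                            /* done with our reference */
    return text;                                /* new reference or NULL */
}
```

Without the Py_INCREF, a `__str__` implementation that mutates the list could 
leave `item` pointing at freed memory, GIL or no GIL.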

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/
> 
> /arry
> 
> * Unfortunately, I used my last wish back in February, wishing I could spend 
> more time at home.
> 
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/PRIVTI2RFGEGVNQRGUCHRRY5WBJNZKJS/
> Code of Conduct: http://python.org/psf/codeofconduct/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PV3DSRZVS4JLYTGU5Z6JPBKDZT4X4QY5/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: My take on multiple interpreters (Was: Should we be making so many changes in pursuit of PEP 554?)

2020-06-10 Thread Ronald Oussoren via Python-Dev


> On 10 Jun 2020, at 14:33, Mark Shannon  wrote:
> 
> Hi Petr,
> 
> On 09/06/2020 2:24 pm, Petr Viktorin wrote:
>> On 2020-06-05 16:32, Mark Shannon wrote:
>>> Hi,
>>> 
>>> There have been a lot of changes both to the C API and to internal 
>>> implementations to allow multiple interpreters in a single O/S process.
>>> 
>>> These changes cause backwards compatibility changes, have a negative 
>>> performance impact, and cause a lot of churn.
>>> 
>>> While I'm in favour of PEP 554, or some similar model for parallelism in 
>>> Python, I am opposed to the changes we are currently making to support it.
>>> 
>>> 
>>> What are sub-interpreters?
>>> --
>>> 
>>> A sub-interpreter is a logically independent Python process which supports 
>>> inter-interpreter communication built on shared memory and channels. 
>>> Passing of Python objects is supported, but only by copying, not by 
>>> reference. Data can be shared via buffers.
>> Here's my biased take on the subject:
>> Interpreters are contexts in which Python runs. They contain configuration 
>> (e.g. the import path) and runtime state (e.g. the set of imported modules). 
>> An interpreter is created at Python startup (Py_InitializeEx), and you can 
>> create/destroy additional ones with Py_NewInterpreter/Py_EndInterpreter.
>> This is long-standing API that is used, most notably by mod_wsgi.
>> Many extension modules and some stdlib modules don't play well with the 
>> existence of multiple interpreters in a process, mainly because they use 
>> process-global state (C static variables) rather than some more granular 
>> scope.
>> This tends to result in nasty bugs (C-level crashes) when multiple 
>> interpreters are started in parallel (Py_NewInterpreter) or in sequence 
>> (several Py_InitializeEx/Py_FinalizeEx cycles). The bugs are similar in both 
>> cases.
>> Whether Python interpreters run sequentially or in parallel, having them 
>> work will enable a use case I would like to see: allowing me to call Python 
>> code from wherever I want, without thinking about global state. Think 
>> calling Python from an utility library that doesn't care about the rest of 
>> the application it's used in. I personally call this "the Lua use case", 
>> because light-weight, worry-free embedding is an area where Python loses to 
>> Lua. (And JS as well—that's a relatively recent development, but much more 
>> worrying.)
> 
> This seems like  a worthwhile goal. However I don't see why this requires 
> having multiple Python interpreters in a single O/S process.

The mod_wsgi use case seems to require this (he writes without having looked at 
its source code). I have another possible use case: independent plugins written 
in Python inside native applications written in other languages.  That doesn’t 
mean it is worthwhile to complicate the CPython code base for these. I have no 
opinion on that, both because I haven’t been active for a while and because I 
haven’t looked at the impact the current work has had. 

Ronald


—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/GLKVB4JNZCZCXHNCF4F6VUBF7V6NKN5F/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-29 Thread Ronald Oussoren via Python-Dev


> On 29 Apr 2020, at 03:50, Eric Snow  wrote:
> 
> On Wed, Apr 22, 2020 at 2:43 AM Ronald Oussoren  
> wrote:
>> My mail left out some important information, sorry about that.
> 
> No worries. :)
> 
>> PyObjC is a two-way bridge between Python and Objective-C. One half of this 
>> is that is bridging Objective-C classes (and instances) to Python. This is 
>> fairly straightforward, although the proxy objects are not static and can 
>> have methods defined in Python (helper methods that make the Objective-C 
>> classes nicer to use from Python, for example to define methods that make it 
>> possible to use an NSDictionary as if it were a regular Python dict).
> 
> Cool.  (also fairly straightforward!)

Well… except that the proxy classes are created dynamically, and the list of 
methods is updated dynamically as well, both for performance reasons and 
because Objective-C classes can be changed at runtime (similar to how you can 
add methods to Python classes).  But in the end this part is fairly 
straightforward and comparable to something like GObject introspection in the 
GLib/GTK bindings.

And every Cocoa class is proxied with a regular class and a metaclass (with 
parallel class hierarchies). That’s needed to mirror Objective-C behaviour, 
where class- and instance methods have separate namespaces and some classes 
have class and instance methods with the same name.  

> 
>> The other half is that it is possible to implement Objective-C classes in 
>> Python:
>> 
>>   class MyClass (Cocoa.NSObject):
>>   def anAction_(self, sender): …
>> 
>> This defines a Python classes named “MyClass”, but also an Objective-C class 
>> of the same name that forwards Objective-C calls to Python.
> 
> Even cooler! :)
> 
>> The implementation for this uses PyGILState_Ensure, which AFAIK is not yet 
>> useable with sub-interpreters.
> 
> That is correct.  It is one of the few major subinterpreter
> bugs/"bugs" remaining to be addressed in the CPython code.  IIRC,
> there were several proposed solutions (between 2 BPO issues) that
> would fix it but we got distracted before the matter was settled.

This is not a hard technical problem, although designing a future-proof API 
might be harder.

> 
>> PyObjC also has Objective-C proxy classes for generic Python objects, making 
>> it possible to pass a normal Python dictionary to an Objective-C API that 
>> expects an NSDictionary instance.
> 
> Also super cool.  How similar is this to Jython and IronPython?

I don’t know, I guess this is similar to how those projects proxy between their 
respective host languages and Python. 

> 
>> Things get interesting when combining the two with sub-interpreters: With 
>> the current implementation the Objective-C world would be a channel for 
>> passing “live” Python objects between sub-interpreters.
> 
> +1
> 
>> The translation tables for looking up existing proxies (mapping from Python 
>> to Objective-C and vice versa) are currently singletons.
>> 
>> This is probably fixable with another level of administration, by keeping 
>> track of the sub-interpreter that owns a Python object I could ensure that 
>> Python objects owned by a different sub-interpreter are proxied like any 
>> other Objective-C object which would close this loophole.  That would 
>> require significant changes to a code base that’s already fairly complex, 
>> but should be fairly straightforward.
> 
> Do you think there are any additions we could make to the C-API (more
> than have been done recently, e.g. PEP 573) that would make this
> easier.  From what I understand, this pattern of a cache/table of
> global Python objects is a relatively common one.  So anything we can
> do to help transition these to per-interpreter would be broadly
> beneficial.  Ideally it would be done in the least intrusive way
> possible, reducing churn and touch points.  (e.g. a macro to convert
> existing tables,etc. + an init func to call during module init.)

I can probably fix this entirely on my end:
- Use PEP 573 to move the translation tables to per-interpreter storage
- Store the sub-interpreter in the ObjC proxy object for Python objects, both 
to call back into the right subinterpreter in upcalls and to tweak the way 
“foreign” objects are proxied into a different subinterpreter. 
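A hedged sketch of what the first item could look like (illustrative names, not 
PyObjC's actual code): with PEP 489 multi-phase init plus PEP 573, the 
translation tables live in module state, so each (sub-)interpreter gets its own 
copy.

```c
#include <Python.h>

/* Per-interpreter state replacing a C-level static translation table. */
typedef struct {
    PyObject *py_to_objc;   /* proxy cache: Python object -> ObjC proxy */
    PyObject *objc_to_py;   /* proxy cache: ObjC object -> Python proxy */
} module_state;

static PyObject *
lookup_proxy(PyObject *module, PyObject *key)
{
    module_state *state = (module_state *)PyModule_GetState(module);
    if (state == NULL) {
        return NULL;
    }
    /* PyDict_GetItemWithError returns a borrowed reference;
     * convert it to a strong one before returning. */
    PyObject *proxy = PyDict_GetItemWithError(state->py_to_objc, key);
    Py_XINCREF(proxy);
    return proxy;
}
```

PEP 573 (PyType_GetModuleState and friends) is what makes the module state 
reachable from method implementations without a global.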

But once again, that’s without trying to actually do the work. As usual the 
devil’s in the details.

> 
> Also, FWIW, I've been thinking about possible approaches where the
> first/main interpreter uses the existing static types, etc. and
> further subinterpreters use a heap type (etc.) derived mostly
> automatically from the static one.  It's been on my mind because this
> is on

[Python-Dev] Re: killing static types (for sub-interpreters?)

2020-04-28 Thread Ronald Oussoren via Python-Dev

> On 28 Apr 2020, at 20:38, Jim J. Jewett  wrote:
> 
> Why do sub-interpreters require (separate and) heap-allocated types?  
> 
> It seems types that are statically allocated are a pretty good use for 
> immortal objects, where you never change the refcount ... and then I don't 
> see why you need more than one copy.

I guess it depends…  One reason is type.__subclasses__(), which returns a list 
of all subclasses; when a type is shared between sub-interpreters the return 
value might refer to objects in another interpreter. That could be fixed by 
another level of indirection, I guess.  But extension types could contain other 
references to Python objects, and it is a lot easier to keep track of which 
sub-interpreter those belong to when every sub-interpreter has its own copy of 
the type.  
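A small example of the behaviour in question:

```python
class Base:
    pass

class Child(Base):
    pass

# type.__subclasses__() exposes every live direct subclass.  If Base
# were a single object shared between sub-interpreters, this list could
# hand out class objects owned by a different interpreter.
subs = Base.__subclasses__()
assert subs == [Child]
```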

If subinterpreters get their own GIL, maintaining the refcount is another 
reason for not sharing types between subinterpreters.  “Never changing the 
refcount” could be expensive in its own right; it adds a branch to every 
invocation of Py_INCREF and Py_DECREF.  See also the benchmark data in 
> 
(which contains a patch that disables refcount updates for arbitrary objects).
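A self-contained sketch of the branch in question (illustrative names and 
layout, not CPython's actual macros): an "immortal" sentinel refcount is 
checked on every increment and decrement.

```c
#include <assert.h>
#include <stdint.h>

/* Sentinel refcount marking an object as immortal. */
#define IMMORTAL_REFCNT UINT32_MAX

typedef struct {
    uint32_t refcnt;
} obj;

static inline void obj_incref(obj *o) {
    if (o->refcnt != IMMORTAL_REFCNT) {  /* the extra branch */
        o->refcnt++;
    }
}

static inline void obj_decref(obj *o) {
    if (o->refcnt != IMMORTAL_REFCNT) {  /* and again on decref */
        o->refcnt--;
    }
}
```

The branch itself is cheap, but it sits on one of the hottest paths in the 
interpreter, which is why the linked benchmarks matter.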

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/DAKZI7352EWIMJ7Y2YLHPCHJST7DIZWB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-22 Thread Ronald Oussoren via Python-Dev


> On 21 Apr 2020, at 16:58, Eric Snow  wrote:
> 
> Thanks for explaining that, Ronald.  It sounds like a lot of the
> effort would relate to making classes work.  I have some comments
> in-line below.
> 
> -eric
> 
> On Tue, Apr 21, 2020 at 2:34 AM Ronald Oussoren  
> wrote:
>>> On 21 Apr 2020, at 03:21, Eric Snow  wrote:
>>> Honest question: how many C extensions have process-global state that
>>> will cause problems under subinterpreters?  In other words, how many
>>> already break in mod_wsgi?
>> 
>> Fully supporting sub-interpreters in PyObjC will likely be a lot of work, 
>> mostly
>> due to being able to subclass Objective-C classes from Python.  With sub-
>> interpreters a Python script in an interpreter could see an Objective-C 
>> class in
>> a different sub-interpreter.   The current PyObjC architecture assumes that
>> there’s exactly one (sub-)interpreter, that’s probably fixable but is far 
>> from trivial.
> 
> Are the Objective-C classes immutable?  Are the wrappers stateful at
> all?  
> Without context I'm not clear on how you would be impacted by
> operation under subinterpreters (i.e. PEP 554), but it sounds like the
> classes do have global state that is in fact interpreter-specific.  I
> expect you would also be impacted by subinterpreters not sharing the
> GIL but that is a separate matter (see below).

My mail left out some important information, sorry about that. 

PyObjC is a two-way bridge between Python and Objective-C. One half of this is 
bridging Objective-C classes (and instances) to Python. This is fairly 
straightforward, although the proxy objects are not static and can have methods 
defined in Python (helper methods that make the Objective-C classes nicer to 
use from Python, for example methods that make it possible to use an 
NSDictionary as if it were a regular Python dict).

The other half is that it is possible to implement Objective-C classes in 
Python:

   class MyClass (Cocoa.NSObject):
   def anAction_(self, sender): …

This defines a Python class named “MyClass”, but also an Objective-C class of 
the same name that forwards Objective-C calls to Python.  The implementation 
for this uses PyGILState_Ensure, which AFAIK is not yet usable with 
sub-interpreters.
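The pattern in question looks roughly like this on the C side (a hedged sketch, 
not PyObjC's actual code; `callback_from_objc` is an illustrative name). An 
Objective-C method implemented in Python may be invoked from any thread, so the 
bridge must acquire the GIL first, and PyGILState_Ensure always binds the 
thread to the main interpreter.

```c
#include <Python.h>

/* Called by the Objective-C runtime, possibly from a thread that has
 * never touched Python.  PyGILState_Ensure attaches the thread to the
 * *main* interpreter, which is exactly why this pattern breaks under
 * sub-interpreters. */
static void
callback_from_objc(PyObject *callable)
{
    PyGILState_STATE gstate = PyGILState_Ensure();

    PyObject *result = PyObject_CallNoArgs(callable);
    if (result == NULL) {
        PyErr_Print();      /* no Python caller to propagate to */
    }
    Py_XDECREF(result);

    PyGILState_Release(gstate);
}
```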

PyObjC also has Objective-C proxy classes for generic Python objects, making it 
possible to pass a normal Python dictionary to an Objective-C API that expects 
an NSDictionary instance.

Things get interesting when combining the two with sub-interpreters: With the 
current implementation the Objective-C world would be a channel for passing 
“live” Python objects between sub-interpreters.

The translation tables for looking up existing proxies (mapping from Python to 
Objective-C and vice versa) are currently singletons.

This is probably fixable with another level of administration: by keeping track 
of the sub-interpreter that owns a Python object I could ensure that Python 
objects owned by a different sub-interpreter are proxied like any other 
Objective-C object, which would close this loophole.  That would require 
significant changes to a code base that’s already fairly complex, but should be 
fairly straightforward.

> 
> Regardless, I expect there are others in a similar situation.  It
> would be good to understand your use case and help with a solution.
> Is there a specific example you can point to of code that would be
> problematic under subinterpreters?
> 
>> With the current API it might not even be possible to add sub-interpreter 
>> support
> 
> What additional API would be needed?

See above, the main problem is PyGILState_Ensure.  I haven’t spent a lot of 
time thinking about this though; I might find other issues when I try to 
support sub-interpreters.

> 
>> (although I write this without having read the PEP).
> 
> Currently PEP 554 does not talk about how to make extension modules
> compatible with subinterpreters.  That may be worth doing, though it
> would definitely have to happen in the docs and (to an extent) the 3.9
> "What's New" page.  There is already some discussion on what should be
> in those docs (see
> https://github.com/ericsnowcurrently/multi-core-python/issues/53).
> 
> Note that, until the GIL becomes per-interpreter, sharing objects
> isn't a problem.  We were not planning on having a PEP for the
> stop-sharing-the-GIL effort, but I'm starting to think that it may be
> worth it, to cover the impact on extension modules (e.g. mitigations).
> 
> So if you leave out the complications due to not sharing the GIL, the
> main problem extension authors face with subinterpreters (exposed by
> PEP 554) is when their module has process-global state that breaks
> under subinterpreters.  From your descript

[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-21 Thread Ronald Oussoren via Python-Dev

> On 21 Apr 2020, at 03:21, Eric Snow  wrote:
> 
[…]
> On Mon, Apr 20, 2020 at 4:30 PM Nathaniel Smith  wrote:
> 
[…]

>> 
>> But notice that this means that no-one can use subinterpreters at all,
>> until all of their C extensions have had significant reworks to use
>> the new API, which will take years and tons of work -- it's similar to
>> the Python 3 transition. Many libraries will never make the jump.
> 
> Again, that is a grand statement that makes things sound much worse
> than they really are.  I expect very very few extensions will need
> "significant reworks".  Adding PEP 489 support will not take much
> effort, on the order of minutes.  Dealing with process-global state
> will depend on how much, if any.
> 
> Honest question: how many C extensions have process-global state that
> will cause problems under subinterpreters?  In other words, how many
> already break in mod_wsgi?

Fully supporting sub-interpreters in PyObjC will likely be a lot of work, 
mostly due to being able to subclass Objective-C classes from Python.  With 
sub-interpreters a Python script in an interpreter could see an Objective-C 
class defined in a different sub-interpreter. The current PyObjC architecture assumes 
that there’s exactly one (sub-)interpreter; that’s probably fixable, but far 
from trivial.

With the current API it might not even be possible to add sub-interpreter 
support (although I write this without having read the PEP). As far as I 
understand, proper support for subinterpreters also requires moving away from 
static type definitions to avoid sharing objects between interpreters (that is, 
use PyType_FromSpec to build types). At first glance this API does not 
support everything I do in PyObjC (fun with metaclasses, in C code).

BTW. I don’t have an opinion on the PEP itself at this time, mostly because 
it doesn’t match my current use cases.

Ronald

—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IK3SSUVHZ3B6G77X6PNYDXMA42TFD23B/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-15 Thread Ronald Oussoren via Python-Dev


> On 15 Apr 2020, at 03:39, Victor Stinner  wrote:
> 
> Hi Ronald,
> 
> Le mar. 14 avr. 2020 à 18:25, Ronald Oussoren  a 
> écrit :
>> Making “PyObject” opaque will also affect the stable ABI because even types 
>> defined using the PyTypeSpec API embed a “PyObject” value in the structure 
>> defining the instance layout. It is easy enough to change this in a way that 
>> preserves source-code compatibility, but I’m  not sure it is possible to 
>> avoid breaking the stable ABI.
> 
> Oh, that's a good point. I tracked this issue at:
> https://bugs.python.org/issue39573#msg366473
> 
>> BTW. This will require growing the PyTypeSpec ABI a little, there are 
>> features you cannot implement using that API for example the buffer protocol.
> 
> I tracked this feature request at:
> https://bugs.python.org/issue40170#msg366474

Another issue with making structures opaque is that this makes it at best 
harder to subclass builtin types in an extension while adding additional data 
fields to the subclass. This is a similar issue as the fragile base class issue 
that was fixed in Objective-C 2.0 by adding a level of indirection, and could 
probably be fixed in a similar way in Python.
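As a Python-level illustration of the layout issue (a sketch only; the C-level problem concerns extending the base type’s struct and ``tp_basicsize`` directly, and ``TaggedList`` is a made-up name): adding a field to a subclass of a builtin grows the per-instance layout, which is exactly the kind of size assumption that opaque structures would hide from extensions.

```python
# Sketch: subclassing a builtin with an extra per-instance field grows
# the instance layout, mirroring the C pattern of embedding the base
# struct and appending additional fields after it.
class TaggedList(list):
    __slots__ = ("tag",)  # one extra pointer-sized field

# The subclass reserves more room per instance than plain list does.
assert TaggedList.__basicsize__ > list.__basicsize__

t = TaggedList([1, 2, 3])
t.tag = "example"  # the extra field lives alongside the list data
```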

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/




[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-14 Thread Ronald Oussoren via Python-Dev


> On 10 Apr 2020, at 19:20, Victor Stinner  wrote:
> 
[…]

> 
> 
> PEP xxx: Modify the C API to hide implementation details
> 
> 
> Abstract
> 
> 
> * Hide implementation details from the C API to be able to `optimize
>  CPython`_ and make PyPy more efficient.
> * The expectation is that `most C extensions don't rely directly on
>  CPython internals`_ and so will remain compatible.
> * Continue to support old unmodified C extensions by continuing to
>  provide the fully compatible "regular" CPython runtime.
> * Provide a `new optimized CPython runtime`_ using the same CPython code
>  base: faster but can only import C extensions which don't use
>  implementation details. Since both CPython runtimes share the same
>  code base, features implemented in CPython will be available in both
>  runtimes.
> * `Stable ABI`_: Only build a C extension once and use it on multiple
>  Python runtimes and different versions of the same runtime.
> * Better advertise alternative Python runtimes and better communicate on
>  the differences between the Python language and the Python
>  implementation (especially CPython).
> 
> Note: Cython and cffi should be preferred to write new C extensions.

I’m too old… I still prefer the CPython ABI over the other two, mostly because 
that’s what I know best, but also to reduce dependencies. 

> This PEP is about existing C extensions which cannot be rewritten with
> Cython.

I’m not sure what this PEP proposes beyond “let’s make the stable ABI the 
default API” and providing a mechanism to get access to the current API. I guess 
the proposal also expands the scope of the stable ABI: some internals that are 
currently exposed in the stable ABI would no longer be. 

I’m not opposed to this as long as it is still possible to use the current 
API, possibly with clean-ups and correctness fixes. As you write, the CPython 
API has some features that make writing correct code harder, in particular the 
concept of borrowed references. There are still good reasons to want to be as 
close to the metal as possible, both to get maximal performance and to 
accomplish things that aren’t possible using the stable ABI.

[…]
> 
> API and ABI incompatible changes
> 
> 
> * Make structures opaque: move them to the internal C API.
> * Remove functions from the public C API which are tied to CPython
>  internals. Maybe begin by marking these functions as private (rename
>  ``PyXXX`` to ``_PyXXX``) or move them to the internal C API.
> * Ban statically allocated types (by making ``PyTypeObject`` opaque):
>  enforce usage of ``PyType_FromSpec()``.
> 
> Examples of issues to make structures opaque:
> 
> * ``PyGC_Head``: https://bugs.python.org/issue40241
> * ``PyObject``: https://bugs.python.org/issue39573
> * ``PyTypeObject``: https://bugs.python.org/issue40170
> * ``PyThreadState``: https://bugs.python.org/issue39573
> 
> Another example are the ``Py_REFCNT()`` and ``Py_TYPE()`` macros, which can
> currently be used as l-values to modify an object's reference count or type.
> Python 3.9 has new ``Py_SET_REFCNT()`` and ``Py_SET_TYPE()`` macros
> which should be used instead. ``Py_REFCNT()`` and ``Py_TYPE()`` macros
> should be converted to static inline functions to prevent their usage as
> l-value.
> 
> **Backward compatibility:** backward incompatible on purpose. Break the
> limited C API and the stable ABI, with the assumption that `Most C
> extensions don't rely directly on CPython internals`_ and so will remain
> compatible.

This is definitely backward incompatible in a way that affects all extensions 
defining types without using PyTypeSpec, due to having PyObject and PyTypeObject 
in the list. I wonder how large a percentage of existing extensions is affected 
by this.

Making “PyObject” opaque will also affect the stable ABI because even types 
defined using the PyTypeSpec API embed a “PyObject” value in the structure 
defining the instance layout. It is easy enough to change this in a way that 
preserves source-code compatibility, but I’m not sure it is possible to avoid 
breaking the stable ABI. 

BTW. This will require growing the PyTypeSpec ABI a little; there are features 
you cannot implement using that API, for example the buffer protocol. 

[…]
> 
> 
> CPython specific behavior
> =
> 
> Some C functions and some Python functions have a behavior which is
> closely tied to the current CPython implementation.
> 
> is operator
> ---
> 
> The "x is y" operator is closed tied to how CPython allocates objects
> and to ``PyObject*``.
> 
> For example, CPython uses singletons for numbers in [-5; 256] range::
>
>     >>> x=1; (x + 1) is 2
>     True
>     >>> x=1000; (x + 1) is 1001
>     False
> 
> Python 3.8 compiler now emits a ``SyntaxWarning`` when the right operand
> of the ``is`` and ``is not`` operators is a literal (ex: integer or
> 

[Python-Dev] Re: How official binaries are built?

2019-10-17 Thread Ronald Oussoren via Python-Dev


> On 17 Oct 2019, at 08:13, Inada Naoki  wrote:
> 
> Thank you for your response.
> And I'm sorry about ignoring this. Gmail marked it as spam.
> 
> On Tue, Oct 15, 2019 at 6:20 PM Ned Deily  wrote:
>> 
>> We currently do not use those options to build the binaries for the 
>> python.org macOS installers.  The main reason is that the Pythons we provide 
>> are built to support a wide-range of macOS releases and to do so safely we 
>> build the binaries on the oldest version of macOS supported by that 
>> installer.  So, for example, the 10.9+ installer variant is built on a 10.9 
>> system.  Some of the optimization features either aren't available or are 
>> less robust on older build tools.
> 
> It makes sense.

Somewhat. It would be better to build with the latest compiler, because that’s 
generally better than older compilers, but that requires some source changes 
and testing to ensure that the build still works on older versions of macOS. At 
this point in time nobody seems to have the available time and need to work on 
this, including myself. 

Ronald



[Python-Dev] Re: PEP 587 (Python Initialization Configuration) updated to be future proof again

2019-10-06 Thread Ronald Oussoren via Python-Dev


> On 1 Oct 2019, at 10:55, Thomas Wouters  wrote:
> 
> 
> 
> On Mon, Sep 30, 2019 at 11:00 PM Brett Cannon  > wrote:
> Victor Stinner wrote:
> > Hi Nick,
> > Le dim. 29 sept. 2019 à 08:47, Nick Coghlan ncogh...@gmail.com 
> >  a écrit :
> > > I don't quite understand the purpose of this change,
> > > as there's no
> > > stable ABI for applications embedding CPython.
> > > Well, I would like to prepare Python to provide a stable ABI for
> > embedded Python. While it's not a design goal yet
> > (Include/cpython/initconfig.h is currently excluded from
> > Py_LIMITED_API), this change is a step towards that.
> 
> So then isn't this a very last-minute, premature optimization if it isn't a 
> design goal yet? If that's the case then I would vote not to make the change 
> and wait until there's been feedback on 3.8 and then look at stabilizing the 
> embedding API such that it can be considered stable.
> 
> I just want to chime in here and confirm, as a Professional CPython 
> Embedder(tm) that embedders cannot currently rely on a stable ABI, or the 
> limited API, and often not even the public API. It's not just about exposed 
> symbols and struct sizes, it's often also about the semantics of internals 
> (e.g. how importlib handles custom module finders and loaders) that subtly 
> changes. For anything but the simplest PyRun_SimpleString-based embedding 
> this is more the case when embedding than extending, and even than a regular 
> (complex) Python program. (I already showed Victor and a few others some of 
> the hoops we have to jump through at Google to embed CPython correctly, and 
> only half of those things are necessary just because of Google's environment.)

On the other hand, py2app uses a single executable that loads a python shared 
library, initialises the interpreter and runs a script with minimal changes 
over the years. I’ve basically only needed to recompile to support new macOS 
architectures and to add support for Python 3. 

That said, py2app’s stub executable is basically just “simple 
PyRun_SimpleString-based embedding” with ugly code to make it possible to use 
dlopen to load its dependencies. 

Ronald



[Python-Dev] Re: [Webmaster] Python for Mac OS X Catalina

2019-08-16 Thread Ronald Oussoren via Python-Dev
Hi,

As a workaround it is possible to install Python by choosing “Open” from the 
context menu of the installer in the Finder instead of double-clicking on it. 
This is currently necessary because macOS Catalina has stricter requirements 
for signing installers, and the Python.org installers do not yet comply with 
those stricter requirements.

Ronald
—

Twitter: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

> On 16 Aug 2019, at 13:24, Steve Holden  wrote:
> 
> Hi Ana,
> 
> This is something that the development team are best placed to answer, and 
> I'm sure that they'd be happy to know if there are bumps in the road coming 
> up with Catalina.
> 
> I am copying this reply to the core developer list to alert them. It would be 
> helpful if you could make as full a report as possible via bugs.python.org 
> . A reply-all to this email can be used to let the 
> development team know the report reference should you do so, and the bug 
> tracker is a better medium for such focused communication than the list.
> 
> I've moved the webmaster address to Bcc to relieve it of further traffic. 
> Good luck!
> 
> Steve Holden
> 
> 
> On Fri, Aug 16, 2019 at 11:04 AM Ana Simion via Webmaster 
> mailto:webmas...@python.org>> wrote:
> Hello, 
> 
> Can you advise when you’re going to update Python to work with Mac OS X 
> Catalina? I am running the beta of  Mac OS X Catalina but am unable to 
> install the python software - I get a message advising the Developer hasn’t 
> updated the software to work with the new OS yet. I’d be most grateful if you 
> could get back to me as soon as possible.
> 
> Many thanks
> 
> Ana Simion
> 
> 



[Python-Dev] Re: typing: how to use names in result-tuples?

2019-08-09 Thread Ronald Oussoren via Python-Dev


> On 8 Aug 2019, at 17:42, Christian Tismer  wrote:
> 
> On 08.08.19 17:20, Ronald Oussoren via Python-Dev wrote:
>> 
>> 
>>> On 8 Aug 2019, at 17:12, Christian Tismer >> <mailto:tis...@stackless.com>> wrote:
>>> 
>>> Hi Ronald,
>>> 
>>> sure, the tuple is usually not very interesting; people look it up
>>> once and use that info in the code.
>>> 
>>> But I think things can be made quite efficient and pretty at the
>>> same time. Such a return tuple could be hidden like the stat_result
>>> example that Guido mentioned:
>>> 
>>> https://github.com/python/typeshed/blob/master/stdlib/3/os/__init__.pyi
>>> 
>>>def stat(self, *, follow_symlinks: bool = ...) -> stat_result: ...
>>> 
>>> The stat_result is a huge structure where you don't want to see much
>>> unless you are working with a stat_result.
>>> 
>>> Other with common, repeating patterns like (x, y, width, height)
>>> or your examples:
>>> 
>>>def getpoint(v: pointless) -> (int, int)
>>>def getvalue(v: someclass) -> (bool, int)
>>> 
>>> would be at first sight
>>> 
>>>def getpoint(v: pointless) -> getpoint_result
>>>def getvalue(v: someclass) -> getvalue_result
>>> 
>>> But actually, a much nicer, speaking outcome would be written as
>>> the single function StructSequence(...) with arguments, producing:
>>> 
>>>def getpoint(v: pointless) -> StructSequence(x=int, y=int)
>>>def getvalue(v: someclass) -> StructSequence(result=bool, val=int)
>>> 
>>> That would have the nice effect of a very visible structure in
>>> the .pyi file. When you actually get such an object and look at it,
>>> then you have
>> 
>> But will you ever look at these objects, other than when exploring APIs
>> in the REPL? As I wrote earlier the normal usage for a similar pattern
>> in PyObjC is to always immediately deconstruct the tuple into its
>> separate values. 
> 
> 
> Yes, it is for exploring the interface. In Qt5, you have *very* many
> functions, and they are pretty unpythonic as well.

You haven’t seen the interfaces generated by PyObjC; compared to those, the 
PyQt interfaces are pretty pythonic :-). 

> That was the reason at all for me to write that __signature__ module for
> PySide, that does everything with introspection.
> 
> When I'm programming with it, then half as a user who wants to see
> "yes, it really returns such a value" and as a developer "shit, we claim
> that interface, but lied".
> 
> In a sense, it was also a way to test this huge library automatically,
> and I enjoy it when things can explain themselves.
> That is absolutely not necessary.
> 
> As the whole typing idea and typeshed is not necessary, but for me
> it was a huge win to have a typed interface, and IDE users seem to
> love it when PyCharm suddenly talks to them ;-)

I like the idea, even if it is just used for introspection and interactive use. 
I’m definitely adding this to my list of things to explore for PyObjC.

Ronald

—

Twitter: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/
> 


[Python-Dev] Re: typing: how to use names in result-tuples?

2019-08-08 Thread Ronald Oussoren via Python-Dev


> On 8 Aug 2019, at 17:12, Christian Tismer  wrote:
> 
> Hi Ronald,
> 
> sure, the tuple is usually not very interesting; people look it up
> once and use that info in the code.
> 
> But I think things can be made quite efficient and pretty at the
> same time. Such a return tuple could be hidden like the stat_result
> example that Guido mentioned:
> 
> https://github.com/python/typeshed/blob/master/stdlib/3/os/__init__.pyi 
> 
> 
>def stat(self, *, follow_symlinks: bool = ...) -> stat_result: ...
> 
> The stat_result is a huge structure where you don't want to see much
> unless you are working with a stat_result.
> 
> Other with common, repeating patterns like (x, y, width, height)
> or your examples:
> 
>def getpoint(v: pointless) -> (int, int)
>def getvalue(v: someclass) -> (bool, int)
> 
> would be at first sight
> 
>def getpoint(v: pointless) -> getpoint_result
>def getvalue(v: someclass) -> getvalue_result
> 
> But actually, a much nicer, speaking outcome would be written as
> the single function StructSequence(...) with arguments, producing:
> 
>def getpoint(v: pointless) -> StructSequence(x=int, y=int)
>def getvalue(v: someclass) -> StructSequence(result=bool, val=int)
> 
> That would have the nice effect of a very visible structure in
> the .pyi file. When you actually get such an object and look at it,
> then you have

But will you ever look at these objects, other than when exploring APIs in the 
REPL? As I wrote earlier, the normal usage for a similar pattern in PyObjC is to 
always immediately deconstruct the tuple into its separate values. 
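The ``stat_result`` example mentioned earlier shows both styles side by side: the object still behaves like a tuple for backwards compatibility, but in practice the named attributes are what get read. A quick sketch:

```python
import os

st = os.stat(".")

# stat_result supports positional access for backwards compatibility;
# st_size is the 7th field (index 6) of the underlying 10-tuple...
assert st.st_size == st[6]

# ...but code almost always uses the names, or unpacks just what it needs
mode, size = st.st_mode, st.st_size
```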

BTW. I’m primarily trying to understand your use case because it is so similar 
to what I’m doing in PyObjC, and such understanding can lead to an improvement 
in PyObjC ;-).

Ronald




[Python-Dev] Re: typing: how to use names in result-tuples?

2019-08-08 Thread Ronald Oussoren via Python-Dev
Christian,

How would these namedtuple/structseq values be used? I have a similar design 
with PyObjC: pass-by-reference “return” values are returned in a tuple, e.g.:

    void getpoint(pointclass* v, int* x, int *y)  =>  def getpoint(v: pointless) -> (int, int)
    BOOL getvalue(someclass* v, int* val)         =>  def getvalue(v: someclass) -> (bool, int)

I rarely, if ever, see code that actually stores the return tuple as-is. The 
return tuple is just deconstructed immediately, like “x, y = 
getpoint(mypoint)”. 
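For what a named result type for the first example could look like, here is a sketch using ``typing.NamedTuple`` (the ``getpoint_result`` name and the constant values are made up for illustration):

```python
from typing import NamedTuple

class getpoint_result(NamedTuple):
    x: int
    y: int

def getpoint(v) -> getpoint_result:
    # stand-in for the wrapped C function
    return getpoint_result(3, 4)

# callers can still deconstruct the result immediately, as usual...
x, y = getpoint(None)

# ...while the named type helps in the REPL and in .pyi stubs
p = getpoint(None)
assert (x, y) == (3, 4) and p.x == 3 and p.y == 4
```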

Ronald
—

Twitter: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

> On 8 Aug 2019, at 10:42, Christian Tismer  wrote:
> 
> Hi Guido,
> 
> If a C++ function already has a return value, plus some primitive
> pointer variables that need to be moved into the result in Python,
> then we have the case with a first, single unnamed field.
> Only one such field can exist.
> 
> I'm not sure if that case exists in the ~25000 Qt5 functions, but in
> that case, I think to give that single field the name "unnamed"
> or maybe better "result".
> 
> Thank you very much for pointing me to that example!
> 
> Cheers -- Chris
> 
> 
> On 08.08.19 06:41, Guido van Rossum wrote:
>> Alas, we didn't think of struct sequences when we designed PEP 484. It
>> seems they are a hybrid of Tuple and NamedTuple; both of these are
>> currently special-cased in mypy in ways that cannot easily be combined.
>> 
>> Do you really need anonymous fields?
>> 
>> I see an example in typeshed/stdlib/3/os/__init__.pyi (in
>> github.com/python/typeshed ), for
>> stat_result. It defines names for all the fields, plus a __getitem__()
>> method that indicates that indexing returns an int. This doesn't help if
>> anonymous fields could have different types, not does it teach the type
>> checker about the number of anonymous fields.
>> 
>> --Guido
>> 
>> On Wed, Aug 7, 2019 at 1:51 AM Christian Tismer > > wrote:
>> 
>>Hi all,
>> 
>>Ok, I am about to implement generation of such structures
>>automatically using the struct sequence concept.
>> 
>> 
>>One more question:
>>--
>> 
>>Struct sequences are not yet members of the typing types.
>>I would like to add that, because a major use case is also to
>>show nice .pyi files with all the functions and types.
>> 
>>* namedtuple has made the transition to NamedTuple
>> 
>>* What would I need to do that for StructSequence as well?
>> 
>>Things get also a bit more complicated since struct sequence
>>objects can contain unnamed fields.
>> 
>>Any advice would be appreciated, I am no typing expert (yet :-)
>> 
>>cheers -- Chris
>> 
>> 
>>On 30.07.19 17:10, Guido van Rossum wrote:
>>> I think I have to agree with Petr. Define explicit type names.
>>> 
>>> On Tue, Jul 30, 2019 at 2:45 AM Paul Moore >
>>> >> wrote:
>>> 
>>>  On Tue, 30 Jul 2019 at 09:33, Christian Tismer
>>mailto:tis...@stackless.com>
>>>  >>
>>wrote:
>>>  > >>> typing.NamedTuple("__f", x=int, y=int)
>>>  > 
>>>  > >>> typing.NamedTuple("__f", x=int, y=int) is typing.NamedTuple("__f", x=int, y=int)
>>>  > False
>>> 
>>>  This appears to go right back to collections.namedtuple:
>>> 
>>>  >>> from collections import namedtuple
>>>  >>> n1 = namedtuple('f', ['a', 'b', 'c'])
>>>  >>> n2 = namedtuple('f', ['a', 'b', 'c'])
>>>  >>> n1 is n2
>>>  False
>>> 
>>>  I found that surprising, as I expected the named tuple type to be
>>>  cached based on the declared name 'f'. But it's been that way
>>forever
>>>  so obviously my intuition here is wrong. But maybe it would be
>>useful
>>>  for this case if there *was* a way to base named tuple
>>identity off
>>>  the name/fields? It could be as simple as caching the results:
>>> 
>>>  >>> from functools import lru_cache
>>>  >>> cached_namedtuple = lru_cache(None)(namedtuple)
>>>  >>> n1 = cached_namedtuple('f', ('a', 'b', 'c')) # A tuple rather
>>>  than a list of field names, as lists aren't hashable
>>>  >>> n2 = cached_namedtuple('f', ('a', 'b', 'c'))
>>>  >>> n1 is n2
>>>  True
>>> 
>>>  Paul
>>> 
>>> 
>>> 
>>> --
>>> --Guido van Rossum (python.org/~guido 
>>)
>>> /Pronouns: he/him/his //(why is my pronoun here?)/
>>> 
>>
>> 
>>> 

[Python-Dev] Re: Comparing dict.values()

2019-07-24 Thread Ronald Oussoren via Python-Dev


On 24 Jul 2019 at 02:27, Steven D'Aprano wrote:

> But I can suggest at least one useful invariant. If a, b are two dicts:
> 
>a.items() == b.items()
> 
> ought to be equivalent to:
> 
>(a.keys() == b.keys()) and (a.values() == b.values)

I don’t think this invariant holds unless comparison is order dependent.  {1:2, 
3:4} and {1:4, 3:2} have the same keys and values, but not the same items.  
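A quick check (note that ``dict.values()`` views don’t define value-based equality, so the values are compared as sorted lists here):

```python
a = {1: 2, 3: 4}
b = {1: 4, 3: 2}

# same keys and the same multiset of values...
assert a.keys() == b.keys()
assert sorted(a.values()) == sorted(b.values())

# ...but different items, so the proposed invariant cannot hold
# for any order-independent comparison of keys and values
assert a.items() != b.items()
```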

Ronald


Re: [Python-Dev] Adding test.support.safe_rmpath()

2019-02-14 Thread Ronald Oussoren via Python-Dev

—

Twitter: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

> On 13 Feb 2019, at 16:10, Giampaolo Rodola'  wrote:
> 
> 
> 
> On Wed, Feb 13, 2019 at 2:27 PM Ronald Oussoren  <mailto:ronaldousso...@mac.com>> wrote:
> 
> 
>> On 13 Feb 2019, at 13:24, Giampaolo Rodola' > <mailto:g.rod...@gmail.com>> wrote:
>> 
>> 
>> Hello,
>> after discovering os.makedirs() has no unit-tests 
>> (https://bugs.python.org/issue35982 <https://bugs.python.org/issue35982>) I 
>> was thinking about working on a PR to increase the test coverage of 
>> fs-related os.* functions. In order to do so I think it would be useful to 
>> add a convenience function to "just delete something if it exists", 
>> regardless if it's a file, directory, directory tree, etc., and include it 
>> into test.support module.
> 
> Something like shutil.rmtree() with ignore_errors=True?
> 
> shutil.rmtree() is about directories and can't be used against files. 
> support.rmpath() would take a path (meaning anything) and try to remove it.

You’re right.  

I usually use shutil.rmtree for tests that need to create temporary files, and 
create a temporary directory for those files (that is, use tempfile.mkdtemp in 
setUp() and use shutil.rmtree in tearDown()). That way I don’t have to adjust 
house-keeping code when I make changes to test code.
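That pattern could look like this (a sketch; the test body itself is made up):

```python
import os
import shutil
import tempfile
import unittest

class MakedirsTest(unittest.TestCase):
    def setUp(self):
        # fresh scratch directory for every test
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        # removes the directory and anything the test created inside it,
        # regardless of whether those are files, links, or subtrees
        shutil.rmtree(self.workdir, ignore_errors=True)

    def test_makedirs(self):
        target = os.path.join(self.workdir, "a", "b", "c")
        os.makedirs(target)
        self.assertTrue(os.path.isdir(target))
```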

Ronald



Re: [Python-Dev] Adding test.support.safe_rmpath()

2019-02-13 Thread Ronald Oussoren via Python-Dev


> On 13 Feb 2019, at 13:24, Giampaolo Rodola'  wrote:
> 
> 
> Hello,
> after discovering os.makedirs() has no unit-tests 
> (https://bugs.python.org/issue35982 ) I 
> was thinking about working on a PR to increase the test coverage of 
> fs-related os.* functions. In order to do so I think it would be useful to 
> add a convenience function to "just delete something if it exists", 
> regardless if it's a file, directory, directory tree, etc., and include it 
> into test.support module.

Something like shutil.rmtree() with ignore_errors=True?

Ronald


Re: [Python-Dev] Asking for reversion

2019-02-04 Thread Ronald Oussoren via Python-Dev



> On 4 Feb 2019, at 04:25, Davin Potts  
> wrote:
> 
> On 2/3/2019 7:55 PM, Guido van Rossum wrote:
> > Also, did anyone ask Davin directly to roll it back?
> 
> Simply put:  no.  There have been a number of reactionary comments in the 
> last 16 hours but no attempt to reach out to me directly during that time.
> 

I asked a question about the commit yesterday night in the tracker and was 
waiting for a response (which I fully expected to take some time due to 
timezone differences and this being a volunteer driven project). 

Ronald


Re: [Python-Dev] Asking for reversion

2019-02-03 Thread Ronald Oussoren via Python-Dev

> On 4 Feb 2019, at 03:10, Raymond Hettinger  
> wrote:
> 
> 
>> On Feb 3, 2019, at 5:40 PM, Terry Reedy  wrote:
>> 
>> On 2/3/2019 7:55 PM, Guido van Rossum wrote:
>>> Also, did anyone ask Davin directly to roll it back?
>> 
>> Antoine posted on the issue, along with Robert O.  Robert reviewed and make 
>> several suggestions.

@Terry: Robert is usually called Ronald :-)

> 
> I think the PR sat in a stable state for many months, and it looks like RO's 
> review comments came *after* the commit.  

That’s because I only noticed the PR after commit: The PR was merged within an 
hour of creating the BPO issue. 

> 
> FWIW, with dataclasses we decided to get the PR committed early, long before 
> most of the tests and all of the docs. The principle was that bigger changes 
> needed to go in as early as possible in the release cycle so that we could 
> thoroughly exercise it (something that almost never happens while something 
> is in the PR stage).  It would be great if the same came happen here.  IIRC, 
> shared memory has long been the holy grail for multiprocessing, helping to 
> mitigate its principle disadvantage (the cost of moving data between 
> processes).  It's something we really want.

But with dataclasses there was public discussion on the API.  This is a new API 
with no documentation in a part of the library that is known to be complex in 
nature.

Ronald
--

Twitter: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/


Re: [Python-Dev] Python startup time

2018-10-10 Thread Ronald Oussoren via Python-Dev


> On 9 Oct 2018, at 23:02, Gregory Szorc  wrote:
> 
> 
> 
> While we're here, CPython might want to look into getdirentriesattr() as
> a replacement for readdir(). We switched to it in Mercurial several
> years ago to make `hg status` operations significantly faster [2]. I'm
> not sure if it will yield a speedup on APFS though. But it's worth a
> try. (If it does, you could probably make
> os.listdir()/os.scandir()/os.walk() significantly faster on macOS.)

Note that getdirentriesattr is deprecated as of macOS 10.10, getattrlistbulk
is the non-deprecated replacement (introduced in 10.10). 

Ronald


Re: [Python-Dev] A Subtle Bug in Class Initializations

2018-08-29 Thread Ronald Oussoren via Python-Dev


> On 28 Aug 2018, at 22:09, Eddie Elizondo  wrote:
> 
> Hi everyone, 
> 
> Sorry for the delay - I finally sent out a patch: 
> https://bugs.python.org/issue34522.

As I mentioned on the tracker this patch is incorrect because it redefines 
PyVarObject_HEAD_INIT in a way that changes the semantics of existing code. 
This will break C extensions (I’m the author of one, and there may be others). 

> 
> Also, I'm +1 for modifying all extension modules to use PyType_FromSpec! I 
> might lend a hand to start moving them :)

Somewhat off-topic, but I personally don’t like PyType_FromSpec and prefer the 
older PyTypeObject construction, but using C99 initialisers. That gives short 
and readable code where the compiler can (and does) warn when initialising 
slots to values that don’t match the declared type of those slots.  The only 
advantage of using PyType_FromSpec is that this can result in C extensions that 
are binary compatible between Python versions, which IMHO is less useful than 
it used to be, thanks to CI/CD systems. 

PyType_FromSpec also cannot deal with some esoteric use cases, such as defining 
types with a custom metatype in C code. 
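For comparison, the "custom metatype" case that PyType_FromSpec cannot express is trivial at the Python level: calling the metatype directly is the Python-level equivalent of what such C code does. This sketch only illustrates the concept, not the C API itself:

```python
class Meta(type):
    """A custom metatype that records every class created with it."""
    created = []

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        Meta.created.append(name)
        return cls

# Calling the metatype directly mirrors what C code does when it
# constructs a type object with a custom metatype.
Spam = Meta("Spam", (), {"value": 42})

assert type(Spam) is Meta
assert Spam.created == ["Spam"]
assert Spam().value == 42
```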

Ronald


Re: [Python-Dev] Use of Cython

2018-08-06 Thread Ronald Oussoren via Python-Dev


> On 6 Aug 2018, at 17:13, Stefan Behnel  wrote:
> 
> Ronald Oussoren via Python-Dev schrieb am 06.08.2018 um 15:25:
>>> On 5 Aug 2018, at 18:14, Nick Coghlan wrote:
>>> On 5 August 2018 at 18:06, Ronald Oussoren wrote:
>>>> I’m not sure if I understand this, ctypes and cffi are used to access C 
>>>> APIs
>>>> without writing C code including the CPython API (see for example
>>>> <https://github.com/abarnert/superhackyinternals/blob/master/internals.py>).
>>>> 
>>>> The code code below should be mostly equivalent to the Cython example 
>>>> posted
>>>> earlier:
>>>> 
>>>> import unittest
>>>> import ctypes
>>>> from ctypes import pythonapi
>>>> 
>>>> class PyObject(ctypes.Structure):
>>>>   _fields_ = (
>>>>   ('ob_refcnt', ctypes.c_ssize_t),
>>>>   )
>>>> 
>>>> pythonapi.PyList_Append.argtypes = [ctypes.py_object, ctypes.py_object]
>>>> 
>>>> def refcount(v):
>>>>   return PyObject.from_address(id(v)).ob_refcnt
>>> 
>>> The quoted code is what I was referring to in:
>>> 
>>> ctypes & cffi likely wouldn't help as much in the case, since they
>>> don't eliminate the need to come up with custom code for parts 3 & 4,
>>> they just let you write that logic in Python rather than C.
>>> 
>> 
>> And earlier Nick wrote:
>>> 1. The test case itself (what action to take, which assertions to make 
>>> about it)
>>> 2. The C code to make the API call you want to test
>>> 3. The Python->C interface for the test case from 1 to pass test
>>> values in to the code from 2
>>> 4. The C->Python interface to get state of interest from 2 back to the
>>> test case from 1
>> 
>> For all of Cython, ctypes and cffi you almost never have to write (2), and 
>> hence (3) and (4), but can write that code in Python.
> 
> Which then means that you have a mix of Python and C in many cases.

Not really, for many cases the only C code required is the actual CPython API. 

> I guess
> that's what you meant with your next sentence:
> 
>> This is at the cost of making it harder to know which bits of the CPython 
>> API are used in step (2), which makes it harder to review a test case. 

Not really. What I was trying to say is that when you use C code to make the 
API call and collect information you have complete control over which APIs are 
used (in the PyList_Append test case you can be 100% sure that the test code 
only calls that API), while with ctypes/cffi/Cython you have less control over 
the exact API calls made, which could affect the test. Making assertions about 
refcounts is one example: my test case with ctypes works, but you have to think 
about what the code does before you understand why the test case works and 
tests what you want to test.  In general that’s not a good quality for test 
code, which should be as obviously correct as possible. 
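A pure-Python illustration of the same pitfall: even sys.getrefcount() requires the reader to know a non-obvious detail — the call itself creates a temporary reference — before the assertion reads as correct.

```python
import sys

value = object()

# getrefcount() reports one more than the "true" count because the
# argument tuple of the call itself temporarily holds a reference.
before = sys.getrefcount(value)

container = [value]          # the list now also holds a reference
after = sys.getrefcount(value)

# Correct, but only obviously so once you know about the temporary
# reference created by the getrefcount() call itself.
assert after == before + 1
```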

Cython is probably the best option in this regard (from what I’ve seen so far) 
because you can in the end basically write C code with Python syntax where 
needed.

> 
> Not sure I understand this correctly, but I think we're on the same page
> here: writing test code in C is cumbersome, writing test code in a mix of C
> and Python across different files is aweful. And making it difficult to
> write or even just review test code simply means that people will either
> waste their precious contribution time on it, or try to get around it.

Awful is a strong word. The separation of the test case in two different 
languages in two different source files is definitely less than ideal. 

> 
> 
>> BTW. In other projects I use tests where almost all of the test code is in 
>> C, the unittest runner only calls a C function and uses the result of that 
>> function to deduce if the test passed or failed. This only works nicely for 
>> fairly simple tests (such as the example test in this thread), not for more 
>> complicated and interesting tests due to having to write more C code.
> 
> I totally agree with that. For less trivial tests, people will often want
> to stear the test case at the C level, because some things are really
> difficult to do from Python. Good luck making assertions about reference
> counts when you're orchestrating the C-API through ctypes.

I agree, see above.


> And this is
> where Cython shines – your code *always* ends up running in C, regardless
> of how much of it is plain Python. But at any point, you can do pretty
> arbitrary C things, all in the same function. And unittest can execute that

Re: [Python-Dev] Use of Cython

2018-08-06 Thread Ronald Oussoren via Python-Dev


> On 5 Aug 2018, at 18:14, Nick Coghlan  wrote:
> 
> On 5 August 2018 at 18:06, Ronald Oussoren  wrote:
>> I’m not sure if I understand this, ctypes and cffi are used to access C APIs
>> without writing C code including the CPython API (see for example
>> <https://github.com/abarnert/superhackyinternals/blob/master/internals.py>).
>> 
>> The code code below should be mostly equivalent to the Cython example posted
>> earlier:
>> 
>> import unittest
>> import ctypes
>> from ctypes import pythonapi
>> 
>> class PyObject(ctypes.Structure):
>>_fields_ = (
>>('ob_refcnt', ctypes.c_ssize_t),
>>)
>> 
>> pythonapi.PyList_Append.argtypes = [ctypes.py_object, ctypes.py_object]
>> 
>> def refcount(v):
>>return PyObject.from_address(id(v)).ob_refcnt
> 
> The quoted code is what I was referring to in:
> 
> ctypes & cffi likely wouldn't help as much in the case, since they
> don't eliminate the need to come up with custom code for parts 3 & 4,
> they just let you write that logic in Python rather than C.
> 

And earlier Nick wrote:
> 1. The test case itself (what action to take, which assertions to make about 
> it)
> 2. The C code to make the API call you want to test
> 3. The Python->C interface for the test case from 1 to pass test
> values in to the code from 2
> 4. The C->Python interface to get state of interest from 2 back to the
> test case from 1

For all of Cython, ctypes and cffi you almost never have to write (2), and 
hence (3) and (4), but can write that code in Python. This is at the code of 
making it harder to know which bits of the CPython API are used in step (2), 
which makes it harder to review a testcase. 

BTW. In other projects I use tests where almost all of the test code is in C, 
the unittest runner only calls a C function and uses the result of that 
function to deduce if the test passed or failed. This only works nicely for 
fairly simple tests (such as the example test in this thread), not for more 
complicated and interesting tests due to having to write more C code.

Ronald



Re: [Python-Dev] Use of Cython

2018-08-05 Thread Ronald Oussoren via Python-Dev


> On 5 Aug 2018, at 03:15, Nick Coghlan  wrote:
> 
> On 5 August 2018 at 00:46, Stefan Behnel  wrote:
>> Antoine Pitrou schrieb am 04.08.2018 um 15:57:
>>> Actually, I think testing the C API is precisely the kind of area where
>>> you don't want to involve a third-party, especially not a moving target
>>> (Cython is actively maintained and generated code will vary after each
>>> new Cython release).  Besides, Cython itself calls the C API, which
>>> means you might end up involuntarily testing the C API against itself.
>>> 
>>> If anything, testing the C API using ctypes or cffi would probably be
>>> more reasonable... assuming we get ctypes / cffi to compile everywhere,
>>> which currently isn't the case.
>> 
>> I agree that you would rather not want to let Cython (or another tool)
>> generate the specific code that tests a specific C-API call, but you could
>> still use Cython to get around writing the setup, validation and unittest
>> boilerplate code in C. Basically, a test could then look something like
>> this (probably works, although I didn't test it):
>> 
>>from cpython.object cimport PyObject
>>from cpython.list cimport PyList_Append
>> 
>>def test_PyList_Append_on_empty_list():
>># setup code
>>l = []
>>assert len(l) == 0
>>value = "abc"
>>pyobj_value = <PyObject*> value
>>refcount_before = pyobj_value.ob_refcnt
>> 
>># conservative test call, translates to the expected C code,
>># although with exception propagation if it returns -1:
>>errcode = PyList_Append(l, value)
>> 
>># validation
>>assert errcode == 0
>>assert len(l) == 1
>>assert l[0] is value
>>assert pyobj_value.ob_refcnt == refcount_before + 1
>> 
>> 
>> If you don't want the exception handling, you can define your own
>> declaration of PyList_Append() that does not have it. But personally, I'd
>> rather use try-except in my test code than manually taking care of cleaning
>> up (unexpected) exceptions.
> 
> Exactly, that's the kind of thing I had in mind. At the moment,
> writing a new dedicated C API test requires designing 4 things:
> 
> 1. The test case itself (what action to take, which assertions to make about 
> it)
> 2. The C code to make the API call you want to test
> 3. The Python->C interface for the test case from 1 to pass test
> values in to the code from 2
> 4. The C->Python interface to get state of interest from 2 back to the
> test case from 1
> 
> If we were able to use Cython to handle 3 & 4 rather than having to
> hand craft it for every test, then I believe it would significantly
> lower the barrier to testing the C API directly rather than only
> testing it indirectly through the CPython implementation.
> 
> Having such a test suite available would then hopefully make it easier
> for other implementations to provide robust emulations of the public C
> API.
> 
> ctypes & cffi likely wouldn't help as much in the case, since they
> don't eliminate the need to come up with custom code for parts 3 & 4,
> they just let you write that logic in Python rather than C.

I’m not sure if I understand this, ctypes and cffi are used to access C APIs 
without writing C code, including the CPython API (see for example 
<https://github.com/abarnert/superhackyinternals/blob/master/internals.py>). 

The code below should be mostly equivalent to the Cython example posted 
earlier:

import unittest
import ctypes
from ctypes import pythonapi

class PyObject(ctypes.Structure):
    _fields_ = (
        ('ob_refcnt', ctypes.c_ssize_t),
    )

pythonapi.PyList_Append.argtypes = [ctypes.py_object, ctypes.py_object]

def refcount(v):
    return PyObject.from_address(id(v)).ob_refcnt


def test_PyList_Append_on_empty_list():
    # setup code
    l = []
    assert len(l) == 0
    value = "abc"

    refcount_before = refcount(value)

    errcode = pythonapi.PyList_Append(l, value)

    assert errcode == 0
    assert len(l) == 1
    assert l[0] is value
    assert refcount(value) == refcount_before + 1

I write “mostly” because I rarely use ctypes and am not 100% sure that I use 
the API correctly.

A problem with using ctypes is that this tests the ABI and not the API, which 
for example means you cannot test C macros this way.
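The macro limitation is easy to demonstrate: a C macro such as PyList_GET_ITEM leaves no symbol in the library for ctypes to bind to, while the function variant resolves fine. (A sketch; symbol availability can vary by platform and build.)

```python
import ctypes
from ctypes import pythonapi

# PyList_GetItem is a real exported function: the lookup succeeds.
func = pythonapi.PyList_GetItem
print(func is not None)

# PyList_GET_ITEM is only a C macro; there is no symbol for ctypes
# to find, so attribute lookup on the library raises AttributeError.
try:
    pythonapi.PyList_GET_ITEM
except AttributeError:
    print("no such symbol: PyList_GET_ITEM")
```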

Ronald



Re: [Python-Dev] Let's change to C API!

2018-07-30 Thread Ronald Oussoren via Python-Dev


> On 30 Jul 2018, at 10:20, Victor Stinner  wrote:
> 
> The API leaking all implementation details will remain available as an opt-in 
> option for Cython, cffi and debug tools. But this API will only be usable on 
> the "slow" Python runtime, the one which keeps maximum backward 
> compatibility. To get new optimizations, you have to use Py_INCREF() and 
> avoid accessing C structure fields, which may or may not need to modify your 
> code.
> 
> Hum, please, join the capi-sig mailing list, since I already explained that 
> in my long reply to Stefan on capi-sig ;-)

Interesting. I didn’t know that list exists. I’ll respond to your message on 
that list.

Ronald




[Python-Dev] USE_STACKCHECK and running out of stack

2018-07-28 Thread Ronald Oussoren via Python-Dev
Hi,

I’m looking at PyOS_CheckStack because this feature might be useful on macOS 
(and when I created bpo-33955 for this someone ran with it and created a patch).

Does anyone remember why the interpreter raises MemoryError and not 
RecursionError when PyOS_CheckStack detects that we’re about to run out of 
stack space? 

The reason I’m looking into this is that the default stack size on macOS is 
fairly small and I’d like to avoid crashing the interpreter when running out of 
stackspace on threads created by the system (this is less of a risk on threads 
created by Python itself because we can arrange for a large enough stack in 
that case).
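Arranging for a larger stack on Python-created threads is also possible from Python itself. A minimal sketch using threading.stack_size() — the size actually needed is platform dependent, and 4 MiB here is just an illustrative value:

```python
import threading

def deep(n):
    """Recurse n frames to exercise the thread's stack."""
    return 0 if n == 0 else deep(n - 1) + 1

result = []

# Request a 4 MiB stack for threads created after this call.
threading.stack_size(4 * 1024 * 1024)

t = threading.Thread(target=lambda: result.append(deep(500)))
t.start()
t.join()

assert result == [500]
```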

Ronald



Re: [Python-Dev] Const access to CPython objects outside of GIL?

2018-07-18 Thread Ronald Oussoren via Python-Dev
On 17 Jul 2018, at 09:40, Radim Řehůřek wrote:

> 
> 
> To be honest, I did do some CPython source code staring already. And at least 
> for the functions we care about, it wasn't all that painful (PyDict_GetItem 
> being the trickiest). Doing this wholesale might be a superhuman task, but 
> I'm thinking a practical subset could be relatively straightforward while 
> still useful, 80/20.

This would likely lead to a fairly vague document.  Using PyDict_GetItem as 
an example: even if it might sometimes be possible to call it without holding 
the GIL, there are definitely use cases where that is not safe, for example 
when there are keys with a type implemented in Python, or when the dict is 
modified in another thread. 
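The Python-level reason PyDict_GetItem cannot in general run without the GIL: a dict lookup may execute arbitrary Python code through __hash__ and __eq__. A small demonstration:

```python
calls = []

class Key:
    """A key type whose hashing and comparison run Python code."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        calls.append("hash")
        return hash(self.n)
    def __eq__(self, other):
        calls.append("eq")
        return isinstance(other, Key) and self.n == other.n

d = {Key(1): "one"}

# The lookup below invokes our Python-level __hash__ and __eq__,
# which is exactly what PyDict_GetItem may do under the hood.
assert d[Key(1)] == "one"
assert "hash" in calls and "eq" in calls
```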

Ronald 


Re: [Python-Dev] PEP 575 (Unifying function/method classes) update

2018-06-17 Thread Ronald Oussoren


> On 17 Jun 2018, at 16:31, Stefan Behnel  wrote:
> 
> Ronald Oussoren schrieb am 17.06.2018 um 14:50:
>> Why did you add a tp_ccalloffset slot to the type with the actual 
>> information in instances instead of storing the information in a slot? 
> 
> If the configuration of the callable was in the type, you would need a
> separate type for each kind of callable. That would quickly explode. Think
> of this as a generalised PyCFunction interface to arbitrary callables.
> There is a function pointer and some meta data, and both are specific to an
> instance.

That’s true for PyCFunction, but not necessarily for a general replacement of 
the tp_call slot.  In my code I’d basically use the same function pointer and 
metadata for all instances (that is, more like PyFunction than PyCFunction). 

> 
> Also, there are usually only a limited number of callables around, so
> memory doesn't matter. (And memory usage would be a striking reason to have
> something in a type rather than an instance.)

I was mostly surprised that something that seems to be a replacement for 
tp_call stores the interesting information in instances instead of the type 
itself. 

Ronald



Re: [Python-Dev] Some data points for the "annual release cadence" concept

2018-06-17 Thread Ronald Oussoren


> On 15 Jun 2018, at 13:00, Nick Coghlan  wrote:
> 
> On 14 June 2018 at 06:30, Ronald Oussoren wrote:
>> On 13 Jun 2018, at 15:42, Nick Coghlan wrote:
>> 
>> Yeah, pretty much - once we can get to the point where it's routine for 
>> folks to be building "abiX" or "abiXY" wheels (with the latter not actually 
>> being a defined compatibility tag yet, but having the meaning of "targets 
>> the stable ABI as first defined in CPython X.Y"), rather than feature 
>> release specific "cpXYm" ones, then a *lot* of the extension module 
>> maintenance pain otherwise arising from more frequent CPython releases 
>> should be avoided.
>> 
>> There'd still be a lot of other details to work out to turn the proposed 
>> release cadence change into a practical reality, but this is the key piece 
>> that I think is a primarily technical hurdle: simplifying the current 
>> "wheel-per-python-version-per-target-platform" community project build 
>> matrices to instead be "wheel-per-target-platform”.
> 
> This requires getting people to mostly stop using the non-stable ABI, and 
> that could be a lot of work for projects that have existing C extensions that 
> don’t use the stable ABI or cython/cffi/… 
> 
> That said, the CPython API tends to be fairly stable over releases and even 
> without using the stable ABI supporting faster CPython feature releases 
> shouldn’t be too onerous, especially for projects with some kind of 
> automation for creating release artefacts (such as a CI system).
> 
> Right, there would still be a non-zero impact on projects that ship binary 
> artifacts.
> 
> Having a viable stable ABI as a target just allows third party projects to 
> make the trade-off between the upfront cost of migrating to the stable ABI 
> (but then only needing to rebuild binaries when their own code changes), and 
> the ongoing cost of maintaining an extra few sets of binary wheel archives. I 
> think asking folks to make that trade-off on a case by case basis is 
> reasonable, whereas back in the previous discussion I considered *only* 
> offering the second option to be unreasonable.

I agree.  I haven’t seriously looked at the stable ABI yet, so I don’t know if 
there are reasons for not migrating to it beyond Py2 support and the effort 
required.  For my own projects (both public and not) I have some that could 
possibly migrate to the stable ABI, and some that cannot because they access 
information that isn’t public in the stable ABI. 

I generally still use the non-stable C API when I write extensions, basically 
because I already know how to do so. 

Ronald



Re: [Python-Dev] PEP 575 (Unifying function/method classes) update

2018-06-17 Thread Ronald Oussoren



> On 17 Jun 2018, at 11:00, Jeroen Demeyer  wrote:
> 
> Hello,
> 
> I have been working on a slightly different PEP to use a new type slot 
> tp_ccalloffset instead the base_function base class. You can see the work in 
> progress here:
> 
> https://github.com/jdemeyer/PEP-ccall
> 
> By creating a new protocol that each class can implement, there is a full 
> decoupling between the features of a class and between the class hierarchy 
> (such coupling was complained about during the PEP 575 discussion). So I got 
> convinced that this is a better approach.
> 
> It also has the advantage that changes can be made more gradually: this PEP 
> changes nothing at all on the Python side, it only changes the CPython 
> implementation. I still think that it would be a good idea to refactor the 
> class hierarchy, but that's now an independent issue.
> 
> Another advantage is that it's more general and easier for existing classes 
> to use the protocol (PEP 575 on the other hand requires subclassing from 
> base_function which may not be compatible with an existing class hierarchy).

This looks interesting. Why did you add a tp_ccalloffset slot to the type with 
the actual information in instances instead of storing the information in a 
slot? 

Ronald



Re: [Python-Dev] Some data points for the "annual release cadence" concept

2018-06-13 Thread Ronald Oussoren


> On 13 Jun 2018, at 15:42, Nick Coghlan  wrote:
> 
> On 13 June 2018 at 02:23, Guido van Rossum wrote:
> So, to summarize, we need something like six for C?
> 
> Yeah, pretty much - once we can get to the point where it's routine for folks 
> to be building "abiX" or "abiXY" wheels (with the latter not actually being a 
> defined compatibility tag yet, but having the meaning of "targets the stable 
> ABI as first defined in CPython X.Y"), rather than feature release specific 
> "cpXYm" ones, then a *lot* of the extension module maintenance pain otherwise 
> arising from more frequent CPython releases should be avoided.
> 
> There'd still be a lot of other details to work out to turn the proposed 
> release cadence change into a practical reality, but this is the key piece 
> that I think is a primarily technical hurdle: simplifying the current 
> "wheel-per-python-version-per-target-platform" community project build 
> matrices to instead be "wheel-per-target-platform”.

This requires getting people to mostly stop using the non-stable ABI, and that 
could be a lot of work for projects that have existing C extensions that don’t 
use the stable ABI or cython/cffi/… 

That said, the CPython API tends to be fairly stable over releases and even 
without using the stable ABI supporting faster CPython feature releases 
shouldn’t be too onerous, especially for projects with some kind of automation 
for creating release artefacts (such as a CI system).

Ronald



Re: [Python-Dev] Idea: reduce GC threshold in development mode (-X dev)

2018-06-08 Thread Ronald Oussoren


> On 8 Jun 2018, at 12:36, Serhiy Storchaka  wrote:
> 
> 08.06.18 11:31, Victor Stinner пише:
>> Do you suggest to trigger a fake "GC collection" which would just
>> visit all objects with a no-op visit callback? I like the idea!
>> 
>> Yeah, that would help to detect objects in an inconsistent state and
>> reuse the existing implemented visit methods of all types.
>> 
>> Would you be interested to try to implement this new debug feature?
> 
> It is simple:
> 
> #ifdef Py_DEBUG
> void
> _PyGC_CheckConsistency(void)
> {
> int i;
> if (_PyRuntime.gc.collecting) {
> return;
> }
> _PyRuntime.gc.collecting = 1;
> for (i = 0; i < NUM_GENERATIONS; ++i) {
> update_refs(GEN_HEAD(i));
> }
> for (i = 0; i < NUM_GENERATIONS; ++i) {
> subtract_refs(GEN_HEAD(i));
> }
> for (i = 0; i < NUM_GENERATIONS; ++i) {
> revive_garbage(GEN_HEAD(i));
> }
> _PyRuntime.gc.collecting = 0;
> }
> #endif

Wouldn’t it be enough to visit just the newly tracked object in 
PyObject_GC_Track with a visitor function that does something minimal to verify 
that the object value is sane, for example by checking 
PyType_Ready(Py_TYPE(op)).

That would find issues where objects are tracked before they are initialised 
far enough to be safe to visit, without changing GC behavior. I have no idea 
what the performance impact of this is, though.
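Whether an object is currently tracked by the collector can be observed from Python via gc.is_tracked(), which helps when reasoning about what a check in PyObject_GC_Track would cover:

```python
import gc

# Container types are tracked; atomic immutable objects never are.
assert gc.is_tracked([]) is True          # lists are always tracked
assert gc.is_tracked({}) is False         # empty dicts start untracked
assert gc.is_tracked({"a": []}) is True   # a dict holding a container is tracked
assert gc.is_tracked(42) is False         # ints are never tracked
```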

Ronald



Re: [Python-Dev] Stable ABI

2018-06-04 Thread Ronald Oussoren


> On 3 Jun 2018, at 17:04, Eric V. Smith  wrote:
> 
> On 6/3/2018 10:55 AM, Christian Tismer wrote:
>> On 03.06.18 13:18, Ronald Oussoren wrote:
>>> 
>>> 
>>>> On 3 Jun 2018, at 12:03, Christian Tismer  wrote:
>> ...
>>>> 
>>>> I have written a script that scans all relevant header files
>>>> and analyses all sections which are reachable in the limited API
>>>> context.
>>>> All macros that don't begin with an underscore which contain
>>>> a "->tp_" string are the locations which will break.
>>>> 
>>>> I found exactly 7 locations where this is the case.
>>>> 
>>>> My PR will contain the 7 fixes plus the analysis script
>>>> to go into tools. Preparind that in the evening.
>>> 
>>> Having tests would still be nice to detect changes to the stable ABI when 
>>> they are made.
>>> 
>>> Writing those tests is quite some work though, especially if those at least 
>>> smoke test the limited ABI by compiling snippets the use all symbols that 
>>> should be exposed by the limited ABI. Writing those tests should be fairly 
>>> simple for someone that knows how to write C extensions, but is some work.
>>> 
>>> Writing a tests that complain when the headers expose symbols that 
>>> shouldn’t be exposed is harder, due to the need to parse headers (either by 
>>> hacking something together using regular expressions, or by using tools 
>>> like gccxml or clang’s C API).
>> What do you mean?
>> My script does that with all "tp_*" type fields.
>> What else would you want to check?
> 
> I think Ronald is saying we're trying to answer a few questions:
> 
> 1. Did we accidentally drop anything from the stable ABI?
> 
> 2. Did we add anything to the stable ABI that we didn't mean to?
> 
> 3. (and one of mine): Does the stable ABI already contain things that we 
> don't expect it to?

That’s correct.  There have been instances of the second item over the years, 
and not all of them have been caught before releases.  What doesn’t help for 
all of these is that the stable ABI documentation says that every documented 
symbol is part of the stable ABI unless there’s explicit documentation to the 
contrary. This makes researching if functions are intended to be part of the 
stable ABI harder.

And also:

4. Does the stable ABI actually work?

Christian’s script finds cases where exposed names don’t actually work when you 
try to use them.

Ronald



Re: [Python-Dev] Stable ABI

2018-06-04 Thread Ronald Oussoren


> On 4 Jun 2018, at 08:35, Ronald Oussoren  wrote:
> 
> 
> 
>> On 3 Jun 2018, at 17:04, Eric V. Smith > <mailto:e...@trueblade.com>> wrote:
>> 
>> On 6/3/2018 10:55 AM, Christian Tismer wrote:
>>> On 03.06.18 13:18, Ronald Oussoren wrote:
>>>> 
>>>> 
>>>>> On 3 Jun 2018, at 12:03, Christian Tismer >>>> <mailto:tis...@stackless.com>> wrote:
>>> ...
>>>>> 
>>>>> I have written a script that scans all relevant header files
>>>>> and analyses all sections which are reachable in the limited API
>>>>> context.
>>>>> All macros that don't begin with an underscore which contain
>>>>> a "->tp_" string are the locations which will break.
>>>>> 
>>>>> I found exactly 7 locations where this is the case.
>>>>> 
>>>>> My PR will contain the 7 fixes plus the analysis script
>>>>> to go into tools. Preparind that in the evening.
>>>> 
>>>> Having tests would still be nice to detect changes to the stable ABI when 
>>>> they are made.
>>>> 
>>>> Writing those tests is quite some work though, especially if those at 
>>>> least smoke test the limited ABI by compiling snippets the use all symbols 
>>>> that should be exposed by the limited ABI. Writing those tests should be 
>>>> fairly simple for someone that knows how to write C extensions, but is 
>>>> some work.
>>>> 
>>>> Writing a tests that complain when the headers expose symbols that 
>>>> shouldn’t be exposed is harder, due to the need to parse headers (either 
>>>> by hacking something together using regular expressions, or by using tools 
>>>> like gccxml or clang’s C API).
>>> What do you mean?
>>> My script does that with all "tp_*" type fields.
>>> What else would you want to check?
>> 
>> I think Ronald is saying we're trying to answer a few questions:
>> 
>> 1. Did we accidentally drop anything from the stable ABI?
>> 
>> 2. Did we add anything to the stable ABI that we didn't mean to?
>> 
>> 3. (and one of mine): Does the stable ABI already contain things that we 
>> don't expect it to?
> 
> That’s correct.  There have been instances of the second item over the years, 
> and not all of them have been caught before releases.  What doesn’t help for 
> all of these is that the stable ABI documentation says that every documented 
> symbol is part of the stable ABI unless there’s explicit documentation to the 
> contrary. This makes researching if functions are intended to be part of the 
> stable ABI harder.
> 
> And also:
> 
> 4. Does the stable ABI actually work?
> 
> Christian’s script finds cases where exposed names don’t actually work when 
> you try to use them.

To reply to myself, the gist below is a very crude version of what I was trying 
to suggest:

https://gist.github.com/ronaldoussoren/fe4f80351a7ee72c245025df7b2ef1ed#file-gistfile1-txt

The gist is far from usable, but shows some tests that check that symbols in 
the stable ABI can be used, and tests that everything exported in the stable 
ABI is actually tested. 

Again, the code in the gist is a crude hack and I have currently no plans to 
turn this into something that could be added to the testsuite.
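A very small Python-level sketch of the "can the symbols be resolved" half of such a test, using ctypes against the running interpreter. The list of names is an illustrative selection, not an authoritative inventory of the stable ABI; the real gist compiles C snippets, which exercises far more:

```python
import ctypes
from ctypes import pythonapi

# A handful of functions documented as part of the stable ABI.
# (Illustrative selection only.)
STABLE_ABI_SAMPLE = [
    "PyLong_FromLong",
    "PyList_New",
    "PyErr_Occurred",
    "PyObject_GetAttrString",
]

missing = []
for name in STABLE_ABI_SAMPLE:
    try:
        getattr(pythonapi, name)   # symbol lookup in the interpreter
    except AttributeError:
        missing.append(name)

assert missing == [], f"unresolved stable-ABI symbols: {missing}"
```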

Ronald


Re: [Python-Dev] Stable ABI

2018-06-03 Thread Ronald Oussoren


> On 3 Jun 2018, at 12:03, Christian Tismer  wrote:
> 
> On 02.06.18 05:47, Nick Coghlan wrote:
>> On 2 June 2018 at 03:45, Jeroen Demeyer wrote:
>> 
>>On 2018-06-01 17:18, Nathaniel Smith wrote:
>> 
>>Unfortunately, very few people use the stable ABI currently, so it's
>>easy for things like this to get missed.
>> 
>> 
>>So there are no tests for the stable ABI in Python?
>> 
>> 
>> Unfortunately not.
>> 
>> https://bugs.python.org/issue21142 is an old issue suggesting automating
>> those checks (so we don't inadvertently add or remove symbols for
>> previously published stable ABI definitions), but it's not yet made it
>> to the state of being sufficiently well automated that it can be a
>> release checklist item in PEP 101.
>> 
>> Cheers,
>> Nick.
> 
> Actually, I think we don't need such a test any more, or we
> could use this one as a heuristic test:
> 
> I have written a script that scans all relevant header files
> and analyses all sections which are reachable in the limited API
> context.
> All macros that don't begin with an underscore which contain
> a "->tp_" string are the locations which will break.
> 
> I found exactly 7 locations where this is the case.
> 
> My PR will contain the 7 fixes plus the analysis script
> to go into tools. Preparind that in the evening.

Having tests would still be nice to detect changes to the stable ABI when they 
are made. 

Writing those tests is quite some work though, especially if they at least 
smoke-test the limited ABI by compiling snippets that use all symbols that 
should be exposed by the limited ABI. Writing those tests should be fairly 
simple for someone who knows how to write C extensions, but it is some work.

Writing tests that complain when the headers expose symbols that shouldn’t be 
exposed is harder, due to the need to parse headers (either by hacking 
something together using regular expressions, or by using tools like gccxml or 
clang’s C API).  
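The regular-expression hack mentioned above can at least be sketched in a few 
lines. The header snippet and the PyAPI_FUNC/PyAPI_DATA pattern below are 
illustrative only, not an exhaustive parser:

```python
import re

# A tiny stand-in for a CPython header; real headers are far messier.
header = """
PyAPI_FUNC(PyObject *) PyLong_FromLong(long);
PyAPI_FUNC(int) _PyPrivate_Helper(void);   /* underscore: not limited API */
PyAPI_DATA(PyObject *) PyExc_ValueError;
"""

# Crude scan: collect PyAPI_FUNC/PyAPI_DATA names, drop leading-underscore
# (private) symbols. A real checker would have to handle macros, #ifdefs, etc.
names = re.findall(r"PyAPI_(?:FUNC|DATA)\([^)]*\)\s*\*?\s*(\w+)", header)
public = [n for n in names if not n.startswith("_")]
assert public == ["PyLong_FromLong", "PyExc_ValueError"]
```

This kind of scan is exactly the sort of heuristic that breaks on edge cases, 
which is why a clang-based parser would be the more robust option.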

BTW. The problem with the tool in issue 21142 is that it looks at the symbols 
exposed by libpython on Linux, and that exposes more symbols than just the 
limited ABI. 
 
Ronald



Re: [Python-Dev] Drop/deprecate Tkinter?

2018-05-02 Thread Ronald Oussoren


> On 2 May 2018, at 22:51, Ivan Pozdeev via Python-Dev wrote:
> 
> As https://bugs.python.org/issue33257 and https://bugs.python.org/issue33316 
> showed, Tkinter is broken, for both Py2 and Py3, with both threaded and 
> non-threaded Tcl, since 2002 at least, and no-one gives a damn.

The second issue number doesn’t refer to a Tkinter issue; the first is about a 
month old and has reactions from a core developer. That’s not “nobody cares”. 

> 
> This seems to be a testament that very few people are actually interested in 
> or are using it.

Not necessarily, it primarily reflects that CPython is a volunteer-driven 
project.  This appears to be related to the interaction of Tkinter and threads, 
and requires hacking on C code.  That seriously shrinks the pool of people who 
feel qualified to work on this.

> 
> If that's so, there's no use keeping it in the standard library -- if 
> anything, because there's not enough incentive and/or resources to support 
> it. And to avoid screwing people (=me) up when they have the foolishness to 
> think they can rely on it in their projects -- nowhere in the docs it is said 
> that the module is only partly functional.

Tkinter is used fairly often as an easily available GUI library and is not as 
little used as you imply. 

I don’t know how safe calling GUI code from multiple threads is in general 
(separate from this Tkinter issue), but I do know that this is definitely not 
safe across platforms: at least on macOS, calling GUI methods in Apple’s 
libraries from secondary threads is unsafe unless those methods are explicitly 
documented as thread-safe.
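A common way to stay on the safe side is to confine all GUI calls to one 
thread and let workers hand over requests through a queue. The sketch below 
models that pattern without any real toolkit; the call name "set_label" is 
made up, and in a real Tk application process_pending() would be scheduled on 
the GUI thread with widget.after():

```python
import queue
import threading

# Workers never touch GUI objects directly; they only enqueue descriptions
# of the calls they want made on the GUI thread.
gui_calls = queue.Queue()

def worker():
    gui_calls.put(("set_label", "done"))

applied = []

def process_pending():
    # Runs on the (single) GUI thread; drains the queue and applies each call.
    while True:
        try:
            call = gui_calls.get_nowait()
        except queue.Empty:
            break
        applied.append(call)

t = threading.Thread(target=worker)
t.start()
t.join()
process_pending()
assert applied == [("set_label", "done")]
```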

Ronald

> 
> -- 
> 
> Regards,
> Ivan
> 



Re: [Python-Dev] How can we use 48bit pointer safely?

2018-03-30 Thread Ronald Oussoren



On Mar 30, 2018, at 01:40 PM, Antoine Pitrou  wrote:


A safer alternative is to use the *lower* bits of pointers. The bottom
3 bits are always available for storing ancillary information, since
typically all heap-allocated data will be at least 8-bytes aligned
(probably 16-bytes aligned on 64-bit processes). However, you also get
less bits :-)

The lower bits are more interesting to use. I'm still hoping to find some time 
to experiment with tagged pointers some day; that could be interesting w.r.t. 
performance and memory use (at the cost of being ABI incompatible). 
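The low-bit tagging Antoine describes can be illustrated with plain integer 
arithmetic. The address and tag value below are purely hypothetical; the point 
is that an 8-byte-aligned pointer always has three zero bits at the bottom:

```python
# Heap allocations are at least 8-byte aligned, so the bottom 3 bits of a
# pointer are always zero and can carry a small tag.
addr = 0x7F3AB2C4D010        # hypothetical 16-byte-aligned heap address
TAG_SMALL_INT = 0b001        # hypothetical tag value

tagged = addr | TAG_SMALL_INT        # pack: the tag goes into the low bits
tag = tagged & 0b111                 # unpack the tag
untagged = tagged & ~0b111           # recover the original pointer

assert tag == TAG_SMALL_INT
assert untagged == addr
```

The untagging mask is the reason this scheme is ABI-incompatible: every piece 
of code that dereferences an object pointer has to strip the tag first.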

Ronald


Re: [Python-Dev] How can we use 48bit pointer safely?

2018-03-30 Thread Ronald Oussoren



On Mar 30, 2018, at 03:11 PM, "Joao S. O. Bueno"  wrote:

Not only that, but afaik Linux could simply raise that 57bit virtual
to 64bit virtual without previous
warning on any version change.

The change from 48-bit to 57-bit virtual addresses was not done without any 
warning because that would have broken too much code (IIRC due to at least some 
JS environments assuming 48bit pointers).

Ronald


Re: [Python-Dev] How can we use 48bit pointer safely?

2018-03-30 Thread Ronald Oussoren



On Mar 30, 2018, at 08:31 AM, INADA Naoki  wrote:

Hi,

As far as I know, most amd64 and arm64 systems use only 48bit address spaces.
(except [1])

[1] 
https://software.intel.com/sites/default/files/managed/2b/80/5-level_paging_white_paper.pdf

It means there are some chance to compact some data structures.
I point two examples below.

My question is; can we use 48bit pointer safely?

Not really, at least some CPUs can also address more memory than that. See 
 which talks about Linux support for 57-bit 
virtual addresses and 52-bit physical addresses. 

Ronald


Re: [Python-Dev] [python-committers] [RELEASE] Python 3.7.0b1 is now available for testing

2018-02-04 Thread Ronald Oussoren


> On 1 Feb 2018, at 02:34, Ned Deily  wrote:
> 
> […]
> 
> Attention macOS users: with 3.7.0b1, we are providing a choice of
> two binary installers.  The new variant provides a 64-bit-only
> version for macOS 10.9 and later systems; this variant also now
> includes its own built-in version of Tcl/Tk 8.6.  We welcome your
> feedback.
> 

Why macOS 10.9 or later?  macOS 10.10 introduced a number of useful APIs, in 
particular openat(2) and the like which are exposed using the “dir_fd” 
parameter of functions in the posix module.

That said, macOS 10.9 seems to be a fairly common minimal platform requirement 
these days for developers not tracking Apple’s releases closely.

Ronald



Re: [Python-Dev] OS-X builds for 3.7.0

2018-02-04 Thread Ronald Oussoren


> On 30 Jan 2018, at 18:42, Chris Barker  wrote:
> 
> Ned,
> 
> It looks like you're still building OS-X the same way as in the past:
> 
> Intel 32+64 bit, 10.6 compatibility
> 
> Is that right?
> 
> Might it be time for an update?
> 
> Do we still need to support 32 bit?  From:
> 
> https://apple.stackexchange.com/questions/99640/how-old-are-macs-that-cannot-run-64-bit-applications
>  
> 
> 
> There has not been a 32 bit-only Mac sold since 2006, and a out-of the box 32 
> bit OS since 2006 or 2007
> 
> I can't find out what the older OS version Apple supports, but I know my IT 
> dept has been making me upgrade, so I"m going to guess 10.8 or newer…

A binary with a newer deployment target than 10.6 would be nice because AFAIK 
the installers are still built on a system running that old version of OS X. 
This results in binaries that cannot access newer system APIs like openat (and 
hence don’t support the “dir_fd” parameter in a number of functions in the os 
module).

> 
> And maybe we could even get rid of the "Framework" builds……

Why?  IMHO framework builds are a nice way to get isolated side-by-side 
installations. Furthermore, a number of Apple APIs (including the GUI 
libraries) don’t work unless you’re running from an application bundle, which 
the framework builds arrange for and normal unix builds don’t. 

Ronald



Re: [Python-Dev] [ssl] The weird case of IDNA

2018-01-02 Thread Ronald Oussoren


> On 31 Dec 2017, at 18:07, Nathaniel Smith  wrote:
> 
> On Dec 31, 2017 7:37 AM, "Stephen J. Turnbull" wrote:
> Nathaniel Smith writes:
> 
>  > Issue 1: Python's built-in IDNA implementation is wrong (implements
>  > IDNA 2003, not IDNA 2008).
> 
> Is "wrong" the right word here?  I'll grant you that 2008 is *better*,
> but typically in practice versions coexist for years.  Ie, is there no
> backward compatibility issue with registries that specified IDNA 2003?
> 
> Well, yeah, I was simplifying, but at the least we can say that always and 
> only using IDNA 2003 certainly isn't right :-). I think in most cases the 
> preferred way to deal with these kinds of issues is not to carry around an 
> IDNA 2003 implementation, but instead to use an IDNA 2008 implementation with 
> the "transitional compatibility" flag enabled in the UTS46 preprocessor? But 
> this is rapidly exceeding my knowledge.
> 
> This is another reason why we ought to let users do their own IDNA handling 
> if they want…

Do you know what the major browsers do w.r.t. IDNA support? If those 
unconditionally use IDNA 2008 it should be fairly safe to move to that in 
Python as well, because that would mean we’re less likely to run into backward 
compatibility issues.
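The difference between the two standards is easy to demonstrate with the 
stdlib codec. The IDNA 2008 form in the comment is the well-known straße 
example; the third-party package name is mentioned only as an illustration:

```python
# Python's built-in "idna" codec implements IDNA 2003. Its nameprep step
# case-folds U+00DF (ß) to "ss", so the label stays plain ASCII:
label = "straße".encode("idna")
assert label == b"strasse"

# An IDNA 2008 encoder (e.g. the third-party "idna" package) keeps the ß
# and produces the Punycode form b"xn--strae-oqa" instead: a *different*
# domain name, which is exactly the compatibility hazard discussed here.
```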

Ronald



Re: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff

2017-11-02 Thread Ronald Oussoren

> On 1 Nov 2017, at 22:47, Ned Deily  wrote:
> 
> Happy belated Halloween to those who celebrate it; I hope it wasn't too 
> scary!  Also possibly scary: we have just a little over 12 weeks remaining 
> until Python 3.7's feature code cutoff, 2018-01-29.  Those 12 weeks include a 
> number of traditional holidays around the world so, if you are planning on 
> writing another PEP for 3.7 or working on getting an existing one approved or 
> getting feature code reviewed, please plan accordingly.If you have 
> something in the pipeline, please either let me know or, when implemented, 
> add the feature to PEP 537, the 3.7 Release Schedule PEP.

I’d still like to finish PEP 447, but don’t know if I can manage to find enough 
free time to do so.

Ronald


Re: [Python-Dev] What is the design purpose of metaclasses vs code generating decorators? (was Re: PEP 557: Data Classes)

2017-10-20 Thread Ronald Oussoren

> On 14 Oct 2017, at 16:37, Martin Teichmann  wrote:
> 
>> Things that will not work if Enum does not have a metaclass:
>> 
>> list(EnumClass) -> list of enum members
>> dir(EnumClass)  -> custom list of "interesting" items
>> len(EnumClass)  -> number of members
>> member in EnumClass -> True or False
>> 
>> - protection from adding, deleting, and changing members
>> - guards against reusing the same name twice
>> - possible to have properties and members with the same name (i.e. "value"
>> and "name")
> 
> In current Python this is true. But if we would go down the route of
> PEP 560 (which I just found, I wasn't involved in its discussion),
> then we could just add all the needed functionality to classes.
> 
> I would do it slightly different than proposed in PEP 560:
> classmethods are very similar to methods on a metaclass. They are just
> not called by the special method machinery. I propose that the
> following is possible:
> 
 class Spam:
> ...   @classmethod
> ...   def __getitem__(self, item):
> ...   return "Ham"
> 
 Spam[3]
>Ham
> 
> this should solve most of your usecases.

Except when you want to implement __getitem__ for instances as well :-). An 
important difference between @classmethod and methods on the metaclass is that 
@classmethod methods live in the same namespace as instance methods, while 
methods on the metaclass don’t.

I ran into similar problems in PyObjC: Apple’s Cocoa libraries use instance and 
class methods with the same name. That works when using methods on a metaclass, 
but not when using something similar to @classmethod.  Because of this PyObjC 
is a heavy user of metaclasses (generated from C for additional fun). A major 
disadvantage of this is that it tends to confuse smart editors. 
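A small sketch of the namespace point: the Objective-C pattern of an instance 
method and a class method sharing one name (as with NSObject’s description) 
only maps onto Python when the class method lives on a metaclass, in its own 
namespace. The class and method names below are illustrative:

```python
class Meta(type):
    def description(cls):              # plays the role of ObjC's +description
        return "class description"

class A(metaclass=Meta):
    def description(self):             # plays the role of ObjC's -description
        return "instance description"

# The two methods live in different namespaces:
assert "description" in A.__dict__ and "description" in Meta.__dict__
assert A().description() == "instance description"
assert Meta.description(A) == "class description"

# With @classmethod both definitions would share A.__dict__, so the second
# definition would simply overwrite the first.
```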

Ronald


Re: [Python-Dev] Investigating time for `import requests`

2017-10-20 Thread Ronald Oussoren
On 10 Oct 2017 at 01:48, Brett Cannon <br...@python.org> wrote:

> 
> 
> On Mon, Oct 2, 2017, 17:49 Ronald Oussoren <ronaldousso...@mac.com> wrote:
> On 3 Oct 2017 at 04:29, Barry Warsaw <ba...@python.org> wrote:
> 
> > On Oct 2, 2017, at 14:56, Brett Cannon <br...@python.org> wrote:
> >
> >> So Mercurial specifically is an odd duck because they already do lazy 
> >> importing (in fact they are using the lazy loading support from 
> >> importlib). In terms of all of this discussion of tweaking import to be 
> >> lazy, I think the best approach would be providing an opt-in solution that 
> >> CLI tools can turn on ASAP while the default stays eager. That way 
> >> everyone gets what they want while the stdlib provides a shared solution 
> >> that's maintained alongside import itself to make sure it functions 
> >> appropriately.
> >
> > The problem I think is that to get full benefit of lazy loading, it has to 
> > be turned on globally for bare ‘import’ statements.  A typical application 
> > has tons of dependencies and all those libraries are also doing module 
> > global imports, so unless lazy loading somehow covers them, it’ll be an 
> > incomplete gain.  But of course it’ll take forever for all your 
> > dependencies to use whatever new API we come up with, and if it’s not as 
> > convenient to write as ‘import foo’ then I suspect it won’t much catch on 
> > anyway.
> >
> 
> One thing to keep in mind is that imports can have important side-effects. 
> Turning every import statement into a lazy import will not be backward 
> compatible.
> 
> Yep, and that's a lesson Mercurial shared with me at PyCon US this year. My 
> planned approach has a blacklist for modules to only load eagerly.

I’m not sure if I understand. Do you want to turn on lazy loading for the 
stdlib only (with a blacklist for modules that won’t work that way), or 
generally? In the latter case this would still not be backward compatible. 

Ronald


Re: [Python-Dev] Investigating time for `import requests`

2017-10-02 Thread Ronald Oussoren
On 3 Oct 2017 at 04:29, Barry Warsaw wrote:

> On Oct 2, 2017, at 14:56, Brett Cannon  wrote:
> 
>> So Mercurial specifically is an odd duck because they already do lazy 
>> importing (in fact they are using the lazy loading support from importlib). 
>> In terms of all of this discussion of tweaking import to be lazy, I think 
>> the best approach would be providing an opt-in solution that CLI tools can 
>> turn on ASAP while the default stays eager. That way everyone gets what they 
>> want while the stdlib provides a shared solution that's maintained alongside 
>> import itself to make sure it functions appropriately.
> 
> The problem I think is that to get full benefit of lazy loading, it has to be 
> turned on globally for bare ‘import’ statements.  A typical application has 
> tons of dependencies and all those libraries are also doing module global 
> imports, so unless lazy loading somehow covers them, it’ll be an incomplete 
> gain.  But of course it’ll take forever for all your dependencies to use 
> whatever new API we come up with, and if it’s not as convenient to write as 
> ‘import foo’ then I suspect it won’t much catch on anyway.
> 

One thing to keep in mind is that imports can have important side-effects. 
Turning every import statement into a lazy import will not be backward 
compatible. 
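For the opt-in route Brett describes, importlib already ships the needed 
pieces. A minimal sketch using the documented importlib.util.LazyLoader 
recipe, which defers running a module body until first attribute access:

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module whose body only runs on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)       # execution is deferred, not performed
    return module

json = lazy_import("json")              # import-time side effects have NOT run yet
assert json.dumps([1, 2]) == "[1, 2]"   # first attribute access triggers them
```

This also shows why a global switch is risky: any module that relies on its 
body running at import time (registering codecs, patching, etc.) would silently 
misbehave until something happens to touch one of its attributes.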

Ronald


Re: [Python-Dev] PEP 549: Instance Properties (aka: module properties)

2017-09-06 Thread Ronald Oussoren

> On 6 Sep 2017, at 00:03, Larry Hastings  wrote:
> 
> 
> 
> I've written a PEP proposing a language change:
> https://www.python.org/dev/peps/pep-0549/ 
> 
> The TL;DR summary: add support for property objects to modules.  I've already 
> posted a prototype.
> 
> 
> How's that sound?

To be honest this sounds like a fairly crude hack. Updating the __class__ of a 
module object feels dirty, but at least you get normal behavior w.r.t. 
properties.

Why is there no mechanism to add new descriptors that can work in this context? 

BTW. The interaction with import is interesting… Module properties only work as 
naive users expect when accessing them as attributes of the module object, in 
particular importing the name using “from module import prop” would only call 
the property getter once and that may not be the intended behavior.
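For reference, the __class__-updating trick referred to above is short enough 
to sketch. Module __class__ assignment is supported since Python 3.5; the 
module name and property below are made up:

```python
import types

# Assigning a ModuleType subclass to a module's __class__ gives the module
# normal descriptor behavior, properties included.
class _PropertyModule(types.ModuleType):
    @property
    def answer(self):
        return 42

mod = types.ModuleType("demo")       # stand-in for sys.modules[__name__]
mod.__class__ = _PropertyModule
assert mod.answer == 42
```

In real code the assignment is usually `sys.modules[__name__].__class__ = ...` 
at the bottom of the module itself, which is exactly the part that feels dirty.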

Ronald



Re: [Python-Dev] Remove embedded expat library?

2017-06-11 Thread Ronald Oussoren

> On 11 Jun 2017, at 12:10, Victor Stinner <victor.stin...@gmail.com> wrote:
> 
> On 11 June 2017 at 09:38, "Ronald Oussoren" <ronaldousso...@mac.com> wrote:
> I don’t think it would be a good idea to rely on the system-provided libexpat 
> on macOS, as Apple is not exactly fast w.r.t. upgrading their external 
> dependencies and could easily stop updating libraries when they no longer 
> need them (see for example the mess w.r.t. OpenSSL).
> 
> 
> Ok, but can't we download expat instead of keeping an old copy in our 
> repository?

Sure. The script that creates the installer already downloads a number of 
libraries, adding one more shouldn’t be a problem. 

> 
> Having a copy is useful when we modify it. I don't think that is the case here.

I don’t know why expat was included in the CPython tree and whether those 
reasons are still valid. I therefore have no opinion on keeping it, other than 
that expat shouldn’t be kept in the CPython tree unless there’s a good reason 
for doing so. 

BTW. Removing third-party libraries from the source tree doesn’t fully isolate 
us from security issues in those libraries, because we still ship the libraries 
in binary installers on Windows and macOS.  The removal does make maintenance 
easier (no need to guess whether or not there are local patches).

Ronald

> 
> Victor



Re: [Python-Dev] Remove embedded expat library?

2017-06-11 Thread Ronald Oussoren

> On 9 Jun 2017, at 18:03, Ned Deily  wrote:
> 
> On Jun 9, 2017, at 08:43, Victor Stinner  wrote:
> 
>> I expect that all Linux distributions build Python using
>> --with-system-expat. It may become the default? What about macOS and
>> other operating systems?
> 
> The current default is --with-system-expat=no so, unless builders of Python 
> take explicit action, the bundled version of expat is used.  Using the 
> bundled version is also currently the case for the python.org macOS 
> installer, no idea what other distributors do.  Apple supplies a version of 
> expat with macOS so we presumably we could use the system version for the 
> installer.  Presumably (Zach?) we would need to continue to supply a version 
> of expat for Windows builds.  But do we need to for others?  If it were only 
> Windows, *then* perhaps it might make sense to make all the changes to move 
> expat out of cpython into the common repo for third-party Windows libs.

I don’t think it would be a good idea to rely on the system-provided libexpat 
on macOS, as Apple is not exactly fast w.r.t. upgrading their external 
dependencies and could easily stop updating libraries when they no longer need 
them (see for example the mess w.r.t. OpenSSL).

Ronald


Re: [Python-Dev] Can we use "designated initializer" widely in core modules?

2017-01-18 Thread Ronald Oussoren

> On 18 Jan 2017, at 02:16, Victor Stinner  wrote:
> 
> 2017-01-18 1:59 GMT+01:00 INADA Naoki :
>> I think mixing two forms is OK only if new form is used only at bottom.
>> (like keyword arguments are allowed after all positional arguments in
>> Python function calling)
>> 
>> Complete rewriting makes diff huge.  And there is PyVarObject_HEAD_INIT 
>> already.
> 
> I'm in favor of replacing all long list of fields with the /* tp_xxx
> */ comments to use designated initializers. It would allow to remove a
> lot of "0, /* tp_xxx */" lines and make the code much more
> readable! It should help to prevent bugs when the code is modified.

I agree. I’ve done this in my own projects and that made the code a lot easier 
to read.

Ronald



Re: [Python-Dev] Document C API that is not part of the limited API

2016-12-31 Thread Ronald Oussoren

> On 30 Dec 2016, at 15:33, Nick Coghlan <ncogh...@gmail.com> wrote:
> 
> On 29 December 2016 at 06:41, Steve Dower <steve.do...@python.org> wrote:
> On 28Dec2016 1145, Brett Cannon wrote:
> On Tue, 27 Dec 2016 at 12:15 Ronald Oussoren <ronaldousso...@mac.com> wrote:
> A directive would make it easier to ensure that the text about the
> stable API is consistent.  I’d also consider adding that directive
> to all API’s that *are* part of the stable API instead of the other
> way around (that would also require changes to …/stable.html). That
> would have two advantages: firstly it makes it easier to document
> from which version an API is part of the stable ABI, and secondly
> forgetting the annotation would imply that an API is not part of the
> stable ABI instead of accidentally claiming to increase the stable ABI.
> 
> 
> I like Ronald's suggestion of both using a directive and making it for
> the stable ABI since it should be an opt-in thing for the API to be
> stable instead of opt-out.
> 
> The directive is a good idea, but I'm a little concerned about the stable API 
> being opt-out in the headers and opt-in in the documentation.
> 
> Perhaps we should also figure out the preprocessor gymnastics we need to make 
> it opt-in in the headers too? Though once we get the build validation to 
> detect when the stable API changes accidentally it'll be less of an issue.
> 
> Making it opt-in in the documentation could make the build validation easier: 
> check the list from the *docs* against the actual symbols being exported by 
> the headers.
> 
> That would have a few benefits:
> 
> - if you accidentally add a new function to the stable ABI, you get a test 
> failure
> - if you deliberately add a new function to the stable ABI, but forget to 
> document it as such, you get a test failure
> - if the stable ABI version added directive in the docs doesn't match the 
> stable ABI version used in the headers, you get a test failure
> 
> (That suggests the tests would need to check the headers with all stable ABI 
> versions from 3.2 up to the current release and ensure they match the current 
> C API documentation)

This is probably a lot easier than trying to coax the headers into defining 
symbols as not part of the stable API by default (“opt-in”), especially when 
trying to do so in a portable way.  With GCC and clang it seems to be possible 
to use function attributes to force compile-time errors when using functions 
not in the limited API, but that wouldn’t work for other declarations (struct 
definitions, …) and is still error-prone.

Ronald


> 
> Cheers,
> Nick.
> 
> -- 
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Document C API that is not part of the limited API

2016-12-27 Thread Ronald Oussoren

> On 27 Dec 2016, at 20:04, Serhiy Storchaka  wrote:
> 
> From the documentation:
> 
> https://docs.python.org/3/c-api/stable.html
> 
>In the C API documentation, API elements that are not part of the limited 
> API are marked as "Not part of the limited API."
> 
> But they don't.
> 
> I prepared a sample patch that adds the notes to Unicode Objects and Codecs C 
> API (http://bugs.python.org/issue29086). I'm going to add them to all C API.
> 
> What are your though about formatting the note? Should it be before the 
> description, after the description, but before the "deprecated" directive (as 
> in the patch), or after the first paragraph of the description? Should it be 
> on the separate line or be appended at the end of the previous paragraph, or 
> starts the first paragraph (if before the description)? May be introduce a 
> special directive for it?

A directive would make it easier to ensure that the text about the stable API 
is consistent.  I’d also consider adding that directive to all APIs that *are* 
part of the stable API instead of the other way around (that would also require 
changes to …/stable.html). That would have two advantages: firstly it makes it 
easier to document from which version an API is part of the stable ABI, and 
secondly forgetting the annotation would imply that an API is not part of the 
stable ABI instead of accidentally claiming to increase the stable ABI.

Ronald



Re: [Python-Dev] Making sure dictionary adds/deletes during iteration always raise exception

2016-12-14 Thread Ronald Oussoren

> On 13 Dec 2016, at 17:52, Eric V. Smith  wrote:
> 
> 
>> On Dec 13, 2016, at 11:42 AM, Raymond Hettinger 
>>  wrote:
>> 
>> 
>>> On Dec 13, 2016, at 1:51 AM, Max Moroz  wrote:
>>> 
>>> Would it be worth ensuring that an exception is ALWAYS raised if a key
>>> is added to or deleted from a dictionary during iteration?
>>> 
>>> I suspect the cost of a more comprehensive error reporting is not
>>> worth the benefit, but I thought I'd ask anyway.
>> 
>> I think what we have has proven itself to be good enough to detect the 
>> common cases, and it isn't worth it to have dicts grow an extra field which 
>> has to be checked or updated on every operation.
>> 
> 
> I agree that we shouldn't complicate things, but wouldn't PEP 509 be a cheap 
> way to check this?

Doesn’t that update the version with every change to a dict instance? That is, 
not just when the set of keys for the dict changes.
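For context, the check that exists today only fires when the key set changes 
size; value-only updates pass silently, which is why a version tag that bumps 
on every mutation would over-report for this purpose. A short demonstration:

```python
d = {"a": 1, "b": 2}
err = None
try:
    for key in d:
        d["c"] = 3               # adding a key while iterating
except RuntimeError as exc:
    err = str(exc)
assert err is not None and "changed size during iteration" in err

# Updating values without changing the key set is currently allowed:
for key in d:
    d[key] = 0
assert set(d) == {"a", "b", "c"}
```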

Ronald



Re: [Python-Dev] PEP 447: Add __getdescriptor__ to metaclasses

2016-07-24 Thread Ronald Oussoren

> On 24 Jul 2016, at 13:06, Ronald Oussoren <ronaldousso...@mac.com> wrote:
> 
…

> But on the other hand, that’s why I wanted to use PyObjC to validate
> the PEP in the first place.

I’ve hit a fairly significant issue with this, PyObjC’s super contains more 
magic than just this magic that would be fixed by PEP 447. I don’t think I’ll 
be able to finish work on PEP 447 this week because of that, and in the worst 
case will have to retire the PEP.

The problem is as follows: to be able to map all of Cocoa’s methods to Python, 
PyObjC creates two proxy classes for every Cocoa class: the regular class and 
its metaclass. The latter is used to store class methods. This is needed 
because Objective-C classes can have instance and class methods with the same 
name, as an example:

@interface NSObject
-(NSString*)description;
+(NSString*)description;
@end

The first declaration for “description” is an instance method, the second is a 
class method.  The Python metaclass is mostly a hidden detail, users don’t 
explicitly interact with these classes and use the normal Python convention for 
defining class methods.

This works fine; problems start when you want to subclass in Python and 
override the class method:

class MyClass (NSObject):
    @classmethod
    def description(cls):
        return “hello there from %r” % (super(MyClass, cls).description())

If you’re used to normal Python code there’s nothing wrong here, but getting 
this to work required some magic in objc.super to ensure that its 
__getattribute__ looks in the metaclass in this case and not the regular class. 
 The current PEP447-ised version of PyObjC has a number of test failures 
because builtin.super obviously doesn’t contain this hack (and shouldn’t). 

I think I can fix this for modern code that uses an argumentless call to super 
by replacing the cell containing the __class__ reference when moving the method 
from the regular class to the instance class. That would obviously not work for 
the code I showed earlier, but that at least won’t fail silently and the error 
message is specific enough that I can include it in PyObjC’s documentation.

Ronald




> 
> Back to wrangling C code,
> 
>   Ronald
> 
> 
>> 
>> Cheers,
>> Nick.
>> 
>> -- 
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
> 


Re: [Python-Dev] PEP 447: Add __getdescriptor__ to metaclasses

2016-07-24 Thread Ronald Oussoren
> On 24 Jul 2016, at 13:06, Ronald Oussoren <ronaldousso...@mac.com> wrote:
> […] But on the other hand, that’s why I wanted to use PyObjC to validate
> the PEP in the first place.

I’ve hit a fairly significant issue with this: PyObjC’s super contains more
magic than just the magic that would be fixed by PEP 447. I don’t think I’ll
be able to finish work on PEP 447 this week because of that, and in the worst
case will have to retire the PEP.

The problem is as follows: to be able to map all of Cocoa’s methods to Python,
PyObjC creates two proxy classes for every Cocoa class: the regular class and
its metaclass. The latter is used to store class methods. This is needed
because Objective-C classes can have instance and class methods with the same
name, as an example:

   @interface NSObject
   -(NSString*)description;
   +(NSString*)description;
   @end

The first declaration of “description” is an instance method, the second is a
class method. The Python metaclass is mostly a hidden detail; users don’t
explicitly interact with these classes and use the normal Python convention
for defining class methods.

This works fine; problems start when you want to subclass in Python and
override the class method:

   class MyClass (NSObject):
       @classmethod
       def description(cls):
           return "hello there from %r" % (super(MyClass, cls).description())

If you’re used to normal Python code there’s nothing wrong here, but getting
this to work required some magic in objc.super to ensure that its
__getattribute__ looks in the metaclass in this case and not the regular
class. The current PEP447-ised version of PyObjC has a number of test
failures because builtin.super obviously doesn’t contain this hack (and
shouldn’t).

I think I can fix this for modern code that uses an argumentless call to
super by replacing the cell containing the __class__ reference when moving
the method from the regular class to the metaclass. That would obviously not
work for the code I showed earlier, but that at least won’t fail silently,
and the error message is specific enough that I can include it in PyObjC’s
documentation.

The next steps for me are to fix the remaining issues in PyObjC’s support for
PEP 447, and to try implementing the replacement of __class__. If that works
out I’ll finish the PEP and ask for a pronouncement. If it doesn’t, I’ll at
least report on that.

Ronald
___
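The name clash described above can be reproduced in plain Python. A minimal
sketch (hypothetical, not PyObjC's actual proxy machinery): a method in the
class __dict__ shadows a same-named method on the metaclass, which is why
objc.super needs extra magic to find the metaclass version.

```python
class Meta(type):
    def description(cls):                  # plays the role of +description
        return "class method"

class NSObjectLike(metaclass=Meta):
    def description(self):                 # plays the role of -description
        return "instance method"

print(NSObjectLike().description())        # instance method
# Looking the name up on the class finds the plain function from the class
# __dict__, shadowing Meta.description:
print(NSObjectLike.description is NSObjectLike.__dict__["description"])  # True
# The metaclass method is still reachable, but only explicitly:
print(type(NSObjectLike).description(NSObjectLike))                      # class method
```

With identical names, normal attribute lookup on the class never falls back
to the metaclass, so a plain `super()` cannot reach the "class method" side.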
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 447: Add __getdescriptor__ to metaclasses

2016-07-24 Thread Ronald Oussoren

> On 24 Jul 2016, at 12:37, Nick Coghlan <ncogh...@gmail.com> wrote:
> 
> On 23 July 2016 at 22:26, Ronald Oussoren <ronaldousso...@mac.com> wrote:
>> I’m currently working on getting the patch in 18181 up-to-date w.r.t. the
>> current trunk, the patch in the issue no longer applies cleanly. After that
>> I’ll try to think up some tests that seriously try to break the new
>> behaviour, and I want to update a patch I have for PyObjC to make use of the
>> new functionality to make sure that the PEP actually fixes the issues I had
>> w.r.t. builtin.super’s behavior.
> 
> You may also want to check compatibility with Martin's patch for PEP
> 487 (__init_subclass__ and __set_name__) at
> http://bugs.python.org/issue27366
> 
> I don't *think* it will conflict, but "try it and see what happens" is
> generally a better idea for the descriptor machinery than assuming
> changes are going to be non-conflicting :)

I also don’t think the two will conflict, but that’s based on a superficial 
read 
of that PEP the last time it was posted on python-dev. PEP 487 and 447 affect
different parts of the object model, in particular PEP 487 doesn’t affect 
attribute lookup.

> 
>> What is the best way forward after that? As before this is a change in
>> behavior that, unsurprisingly, few core devs appear to be comfortable with
>> evaluating, combined with new functionality that will likely see little use
>> beyond PyObjC.
> 
> You may want to explicitly ping the
> https://github.com/ipython/traitlets developers to see if this change
> would let them do anything they currently find impractical or
> impossible.

I’ll ask them.

> 
> As far as Mark's concern about a non-terminating method definition
> goes, I do think you need to double check how the semantics of
> object.__getattribute__ are formally defined.
> 
> >>> class Meta(type):
> ...     def __getattribute__(self, attr):
> ...         print("Via metaclass!")
> ...         return super().__getattribute__(attr)
> ...
> >>> class Example(metaclass=Meta): pass
> ...
> >>> Example.mro()
> Via metaclass!
> [<class '__main__.Example'>, <class 'object'>]
> 
> Where the current PEP risks falling into unbounded recursion is that
> it appears to propose that the default type.__getdescriptor__
> implementation be defined in terms of accessing cls.__dict__, but a
> normal Python level access to "cls.__dict__" would go through the
> descriptor machinery, triggering an infinite regress.
> 
> The PEP needs to be explicit that where "cls.__dict__" is written in
> the definitions of both the old and new lookup semantics, it is *not*
> referring to a normal class attribute lookup, but rather to the
> interpreter's privileged access to the class namespace (e.g. direct
> 'tp_dict' access in CPython).

At first glance the same is true for all access to dunder attributes in the sample
code for the PEP; a similar example could be written for __get__ or __set__. 
I have to think a bit more about how to clearly describe this.
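To make the recursion hazard concrete (a small illustrative example, not from
the patch): even "__dict__" itself is served by a descriptor stored on `type`,
so a default __getdescriptor__ written in terms of ordinary attribute access
to cls.__dict__ would re-enter the descriptor machinery.

```python
# The class __dict__ attribute is provided by a getset descriptor on `type`,
# so a normal read of cls.__dict__ goes through descriptor lookup.
descr = type.__dict__["__dict__"]
print(type(descr).__name__)               # getset_descriptor

class Example:
    pass

# Invoking the descriptor by hand yields the same mapping proxy that the
# interpreter's privileged tp_dict access ultimately exposes:
print(descr.__get__(Example, type) == Example.__dict__)   # True
```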

I’m currently coaxing PyObjC into using PEP 447 when it’s available.
That involves several layers of metaclasses in C, which is annoyingly 
hard to debug when the code doesn’t do what I want, as is the case right now.

But on the other hand, that’s why I wanted to use PyObjC to validate
the PEP in the first place.

Back to wrangling C code,

   Ronald


> 
> Cheers,
> Nick.
> 
> -- 
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia



[Python-Dev] PEP 447: Add __getdescriptor__ to metaclasses

2016-07-23 Thread Ronald Oussoren
Hi,

It’s getting a tradition for me to work on PEP 447 during the EuroPython 
sprints and disappear afterwards. Hopefully I can manage to avoid the latter 
step this year…

Last year the conclusion appeared to be that this is an acceptable PEP, but 
Mark Shannon had a concern about a default implementation for __getdescriptor__ 
on type in  
(follow the link for more context):
 
> "__getdescriptor__" is fundamentally different from "__getattribute__" in that
> is defined in terms of itself.
> 
> object.__getattribute__ is defined in terms of type.__getattribute__, but
> type.__getattribute__ just does 
> dictionary lookups. However defining type.__getattribute__ in terms of
> __descriptor__ causes a circularity as
> __descriptor__ has to be looked up on a type.
> 
> So, not only must the cycle be broken by special casing "type", but that
> "__getdescriptor__" can be defined
> not only by a subclass, but also a metaclass that uses "__getdescriptor__" to
> define  "__getdescriptor__" on the class.
> (and so on for meta-meta classes, etc.)
My reaction that year is in 
. As I 
wrote there I did not fully understand the concerns Mark has, probably because 
I’m focussed too much on the implementation in CPython.  If removing 
type.__getdescriptor__ and leaving this special method as an optional hook for 
subclasses of type fixes the conceptual concerns then that’s fine by me. I used 
type.__getdescriptor__ as the default implementation both because it appears to 
be cleaner to me and because this gives subclasses an easy way to access the 
default implementation.

The implementation of the PEP in issue 18181 
 does special-case type.__getdescriptor__ 
but only as an optimisation; the code would work just as well without that 
special casing because the normal attribute lookup machinery is not used when 
accessing special methods written in C. That is, the implementation of 
object.__getattribute__ directly accesses fields of the type struct at the C 
level.  Some magic behavior appears to be necessary even without the addition 
of __getdescriptor__ (type is a subclass of itself, object.__getattribute__ has 
direct access to dict.__getitem__, …).
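The lookup machinery being discussed can be sketched in Python. This is a
simplified model of object.__getattribute__, not CPython's actual C code; the
marked line is where PEP 447 would consult type(klass).__getdescriptor__
instead of peeking into the class __dict__ directly.

```python
def lookup(obj, name):
    """Simplified model of object.__getattribute__ (illustrative only)."""
    cls = type(obj)
    descr = None
    for klass in cls.__mro__:
        if name in vars(klass):       # PEP 447: type(klass).__getdescriptor__(klass, name)
            descr = vars(klass)[name]
            break
    if descr is not None and hasattr(type(descr), "__set__"):
        return type(descr).__get__(descr, obj, cls)    # data descriptor wins
    inst = getattr(obj, "__dict__", {})
    if name in inst:
        return inst[name]                               # instance attribute
    if descr is not None and hasattr(type(descr), "__get__"):
        return type(descr).__get__(descr, obj, cls)    # non-data descriptor
    if descr is not None:
        return descr                                    # plain class attribute
    raise AttributeError(name)

class A:
    x = 10
    @property
    def p(self):
        return self.x * 2

a = A()
a.y = 5
print(lookup(a, "x"))   # 10 (plain class attribute)
print(lookup(a, "y"))   # 5  (instance attribute)
print(lookup(a, "p"))   # 20 (property: data descriptor)
```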

I’m currently working on getting the patch in 18181 up-to-date w.r.t. the 
current trunk, the patch in the issue no longer applies cleanly. After that 
I’ll try to think up some tests that seriously try to break the new behaviour, 
and I want to update a patch I have for PyObjC to make use of the new 
functionality to make sure that the PEP actually fixes the issues I had w.r.t. 
builtin.super’s behavior.

What is the best way forward after that? As before this is a change in behavior 
that, unsurprisingly, few core devs appear to be comfortable with evaluating, 
combined with new functionality that will likely see little use beyond PyObjC 
(although my opinions of that shouldn’t carry much weight, I thought that 
decorators would have limited appeal when those where introduced and couldn’t 
have been more wrong about that).

Ronald

P.S. The PEP itself: 


Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-08-05 Thread Ronald Oussoren

 On 26 Jul 2015, at 14:18, Mark Shannon m...@hotpy.org wrote:
 
 On 26 July 2015 at 10:41 Ronald Oussoren ronaldousso...@mac.com wrote:
 
 
 
 On 26 Jul 2015, at 09:14, Ronald Oussoren ronaldousso...@mac.com wrote:
 
 
 On 25 Jul 2015, at 17:39, Mark Shannon m...@hotpy.org
 mailto:m...@hotpy.org wrote:
 
 Hi,
 
 On 22/07/15 09:25, Ronald Oussoren wrote: Hi,
 
 Another summer with another EuroPython, which means its time again to 
 try to revive PEP 447…
 
 
 IMO, there are two main issues with the PEP and implementation.
 
 1. The implementation as outlined in the PEP is infinitely recursive, since
 the
 lookup of __getdescriptor__ on type must necessarily call
 type.__getdescriptor__.
 The implementation (in C) special cases classes that inherit
 __getdescriptor__
 from type. This special casing should be mentioned in the PEP.
 
 Sure.  An alternative is to slightly change the PEP: use
 __getdescriptor__ when
 present and directly peek into __dict__ when it is not, and then remove the
 default
 __getdescriptor__. 
 
 The reason I didn’t do this in the PEP is that I prefer a programming model
 where
 I can explicitly call the default behaviour. 
 
 I’m not sure there is a problem after all (but am willing to use the
 alternative I describe above),
 although that might be because I’m too much focussed on CPython semantics.
 
 The __getdescriptor__ method is a slot in the type object and because of that
 the
 normal attribute lookup mechanism is side-stepped for methods implemented in
 C. A
 __getdescriptor__ that is implemented on Python is looked up the normal way 
 by
 the 
 C function that gets added to the type struct for such methods, but that’s 
 not
 a problem for
 type itself.
 
 That’s not new for __getdescriptor__ but happens for most other special
 methods as well,
 as I noted in my previous mail, and also happens for the __dict__ lookup
 that’s currently
 used (t.__dict__ is an attribute and should be looked up using
 __getattribute__, …)
 
 
 __getdescriptor__ is fundamentally different from __getattribute__ in that
 is defined in terms of itself.
 
 object.__getattribute__ is defined in terms of type.__getattribute__, but
 type.__getattribute__ just does 
 dictionary lookups.

object.__getattribute__ is actually defined in terms of type.__dict__ and 
object.__dict__. Type.__getattribute__ is at best used to find type.__dict__.

 However defining type.__getattribute__ in terms of
 __descriptor__ causes a circularity as
 __descriptor__ has to be looked up on a type.
 
 So, not only must the cycle be broken by special casing type, but that
 __getdescriptor__ can be defined
 not only by a subclass, but also a metaclass that uses __getdescriptor__ to
 define  __getdescriptor__ on the class.
 (and so on for meta-meta classes, etc.)

Are the semantics of special methods backed by a member in PyTypeObject part of 
Python’s semantics, or are those CPython implementation details/warts? In 
particular that such methods are accessed directly without using __getattribute__ 
at all (or at least only indirectly when the method is implemented in Python).  
That is:

 >>> class Dict (dict):
 ...     def __getattribute__(self, nm):
 ...         print("Get", nm)
 ...         return dict.__getattribute__(self, nm)
 ...
 >>> d = Dict(a=4)
 >>> d.__getitem__('a')
 Get __getitem__
 4
 >>> d['a']
 4

(And likewise for other special methods, which amongst others means that 
neither __getattribute__ nor __getdescriptor__ can be used to dynamically add 
such methods to a class)
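As a hypothetical illustration of that last point (not code from the thread):
even when a metaclass __getattribute__ manufactures a special method on
demand, the interpreter's slot-based dispatch never consults it.

```python
class Meta(type):
    def __getattribute__(cls, name):
        if name == "__len__":
            return lambda: 99              # manufactured on demand
        return super().__getattribute__(name)

class C(metaclass=Meta):
    pass

print(C.__len__())           # 99: explicit attribute access sees the hook
try:
    len(C())                 # implicit lookup uses the type's slot, which
except TypeError:            # was never filled, so the hook is bypassed
    print("len() bypasses __getattribute__")
```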

In my proposed patch I do special case “type”, but that’s only intended as a 
(for now unbenchmarked) speed hack.  The code would work just as well without 
the hack because the metatype’s  __getdescriptor__ is looked up directly in the 
PyTypeObject on the C level, without using __getattribute__ and hence without 
having to use recursion.

BTW. I wouldn’t mind dropping the default “type.__getdescriptor__” completely 
and reword my proposal to state that __getdescriptor__ is used when present, 
and otherwise __dict__ is accessed directly.  That would remove the infinite 
recursion, as all metaclass chains should at some point end up at “type”, which 
then wouldn’t have a “__getdescriptor__”.   

The reason I added “type.__getdescriptor__” is that IMHO it gives a nicer 
programming model, where you can call the superclass implementation in the 
implementation of __getdescriptor__ in a subclass.  Given the minimal semantics 
of “type.__getdescriptor__”, losing it wouldn’t be too bad in exchange for a 
better object model.
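Under that programming model, a subclass of type could extend the lookup and
still delegate to the default explicitly. Pseudocode for the proposed hook
(hypothetical: type.__getdescriptor__ never shipped in CPython, and the
AttributeError-on-miss convention is an assumption here):

```
class CaseInsensitiveType(type):
    def __getdescriptor__(cls, name):
        try:
            # explicit call to the default (superclass) lookup
            return super().__getdescriptor__(name)
        except AttributeError:
            return super().__getdescriptor__(name.lower())
```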

Ronald

 
 Cheers,
 Mark


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-29 Thread Ronald Oussoren

 On 28 Jul 2015, at 03:13, Tres Seaver tsea...@palladion.com wrote:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On 07/27/2015 06:11 PM, Ronald Oussoren wrote:
 
 Treating time as UTC with conversions at the application edge might
 be cleaner in some sense, but can make code harder to read for 
 application domain experts.
 
 It might be nice to have time zone aware datetime objects with the 
 right(TM) semantics, but those can and should not replace the naive 
 objects we know and love.
 
 Interesting.  My experience is exactly the opposite:  the datetimes which
 application domain experts cared about *always* needed to be non-naive
 (zone captured explicitly or from the user's machine and converted to
 UTC/GMT for storage).  As with encoded bytes, allowing a naive instance
 inside the borders the system was always a time-bomb bug (stuff would
 blow up at a point far removed from which it was introduced).
 
 The instances which could have safely been naive were all
 logging-related, where the zone was implied by the system's timezone
 (nearly always UTC).  I guess the difference is that I'm usually writing
 apps whose users can't be presumed to be in any one timezone.  Even in
 those cases, having the logged datetimes be incomparable to user-facing
 ones would make them less useful.

I usually write application used by local users where the timezone is completely
irrelevant, including DST.  Stuff needs to be done at (say) 8PM, and that’s
8PM local time. Switching to and from UTC just adds complications. 

I’m lucky enough that most datetime calculations happen within one work week
and therefore never have to cross DST transitions.  For longer periods I usually
only care about dates, and almost never about the number of seconds between
two datetime instances.   That makes the naive datetime from the stdlib a 
very convenient programming model.
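A hypothetical scheduling snippet illustrating why the naive model is
convenient here:

```python
from datetime import datetime, timedelta

# "Stuff needs to be done at 8PM local time": a naive datetime expresses the
# wall-clock time directly, with no UTC round-trip or tzinfo bookkeeping.
run_at = datetime(2015, 7, 29, 20, 0)     # 8 PM, local wall-clock time
next_run = run_at + timedelta(days=1)     # still 8 PM the next day
print(next_run)                           # 2015-07-30 20:00:00
```

With timezone-aware datetimes the same "next day at 8PM" intent would require
explicit local-time conversions around any DST transition.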

And I’m in a country that’s small enough to have only one timezone.

IMHO Unicode is different in that regard: there the application logic can clearly
be expressed as text, and the encoding to/from bytes can safely be hidden in
the I/O layer. Often the users I deal with can follow the application logic
w.r.t. text handling, but have no idea about encodings (but do care about accented
characters). With some luck they can provide a sample file that allows me to 
deduce the encoding that should be used, and most applications are moving 
to UTF-8.

BTW. Note that I’m not saying that a timezone aware datetime is bad, just
that they are not always necessary.

Ronald

