[Python-Dev] Re: [SPAM] Re: Switching to Discourse

2022-07-15 Thread Phil Thompson via Python-Dev

On 15/07/2022 16:09, Rob Boehne via Python-Dev wrote:

100% agree – dealing with 5 or more platforms for discussion groups is
a nightmare, and I tend not to follow any of them as closely for that
reason.


I agree. I don't mind having to use Discourse if I want to take part in 
a discussion but 99% of the time I just want to keep up to date with 
what is going on. In that case I want the information to come to me - I 
don't want to have to hunt for it. Can there be an RSS feed for 
everything, not just PEPs?


Phil


From: Skip Montanaro 
Date: Friday, July 15, 2022 at 9:26 AM
To: Petr Viktorin 
Cc: python-dev@python.org 
Subject: [SPAM] [Python-Dev] Re: Switching to Discourse
The discuss.python.org experiment has been going on for quite a while, 
and while the platform is not without its issues, we consider it a 
success. The Core Development category is busier than python-dev. 
According to staff, discuss.python.org is much easier to moderate. If 
you're following python-dev but not discuss.python.org, you're missing 
out.

Personally, I think you are focused too narrowly and aren't seeing the
forest for the trees. Email protocols were long ago standardized. As a
result, people can use any of a large number of applications to read
and organize their email. To my knowledge, there is no standardization
amongst the various forum tools out there. I'm not suggesting discuss
is necessarily better or worse than other (often not open source)
forum tools, but each one implements its own walled garden. I'm
referring more broadly than just Python, or even Python development,
though even within the Python community it's now difficult to
manage/monitor all the various discussion sources (email, discuss,
GitHub, Stack Overflow, ...).

Get off my lawn! ;-)

Skip, kinda glad he's retired now...


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at
https://mail.python.org/archives/list/python-dev@python.org/message/5R376DBMGYMJCJTXCZPNRUBNUPV5OSAJ/
Code of Conduct: http://python.org/psf/codeofconduct/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PZ246BKJSWB3AQZSYMWUTX35RMWCPPQ6/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Worried about Python release schedule and lack of stable C-API

2021-10-05 Thread Phil Thompson via Python-Dev

On 05/10/2021 07:59, Nick Coghlan wrote:

On Tue, 28 Sep 2021, 6:55 am Brett Cannon,  wrote:




On Sun, Sep 26, 2021 at 3:51 AM Phil Thompson via Python-Dev <
python-dev@python.org> wrote:



However the stable ABI is still a second class citizen as it is still 
not possible (AFAIK) to specify a wheel name that doesn't need to 
explicitly include each supported Python version (rather than a minimum 
stable ABI version).


Actually you can do this. The list of compatible wheels for a platform 
starts at CPython 3.2 when the stable ABI was introduced and goes 
forward to the version of Python you are running. So you can build a 
wheel file that targets the oldest version of CPython that you are 
targeting and its version of the stable ABI and it is considered 
forward compatible. See `python -m pip debug --verbose` for the 
complete list of wheel tags that are supported for an interpreter.



I think Phil's point is a build side one: as far as I know, the 
process for getting one of those more generic file names is still to 
build a wheel with an overly precise name for the stable ABI 
declarations used, and then rename it.

The correspondence between "I used these stable ABI declarations in my 
module build" and "I can use this more broadly accepted wheel name" is 
currently obscure enough that I couldn't tell you off the top of my 
head how to do it, and I contributed to the design of both sides of 
the equation.


Actually improving the build ergonomics would be hard (and outside 
CPython's own scope), but offering a table in the stable ABI docs 
giving suggested wheel tags for different stable ABI declarations 
should be feasible, and would be useful to both folks renaming already 
built wheels and anyone working on improving the build automation 
tools.


Actually I was able to do what I wanted without renaming wheels...

Specify 'py_limited_api=True' as an argument to Extension() (using 
setuptools v57.0.0).


Specify...

[bdist_wheel]
py_limited_api = cp36

...in setup.cfg (using wheel v0.34.2).

The resulting wheel has a Python tag of 'cp36' and an ABI tag of 'abi3' 
for all platforms, which is interpreted by the current version of pip 
exactly as I want.
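Put together, the two settings above amount to a build sketch like the 
following (project and file names are invented for illustration; the 
Py_LIMITED_API macro is what actually restricts the C code to the 
stable ABI, while py_limited_api=True selects the .abi3 extension 
suffix):

```python
# setup.py -- sketch of a stable-ABI (abi3) wheel build; names are assumed
from setuptools import Extension, setup

setup(
    name="example",
    version="1.0",
    ext_modules=[
        Extension(
            "example",
            ["example.c"],
            # Restrict the C code to the CPython 3.6 stable ABI...
            define_macros=[("Py_LIMITED_API", "0x03060000")],
            # ...and tell setuptools to use the .abi3 extension suffix.
            py_limited_api=True,
        ),
    ],
)
```

With `py_limited_api = cp36` also set under `[bdist_wheel]` in 
setup.cfg, `python setup.py bdist_wheel` should then produce a single 
`example-1.0-cp36-abi3-<platform>.whl`.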


I'm not sure if this is documented anywhere.

Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/XVBN3OWN5TAYAKTUYI6MEXATX3I62ZEZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Worried about Python release schedule and lack of stable C-API

2021-09-28 Thread Phil Thompson via Python-Dev

On 27/09/2021 21:53, Brett Cannon wrote:

On Sun, Sep 26, 2021 at 3:51 AM Phil Thompson via Python-Dev <
python-dev@python.org> wrote:


On 26/09/2021 05:21, Steven D'Aprano wrote:

[snip]





> These are not rhetorical questions, I genuinely do not know. I *think*
> that there was an attempt to make a stable C API back in 3.2 days:
>
> https://www.python.org/dev/peps/pep-0384/
>
> but I don't know what difference it has made to extension writers in
> practice. From your description, it sounds like perhaps not as big a
> difference as we would have liked.
>
> Maybe extension writers are not using the stable C API? Is that even
> possible? Please excuse my ignorance.

PyQt has used the stable ABI for many years. The main reason for using
it is to reduce the number of wheels. The PyQt ecosystem currently
contains 15 PyPI projects across 4 platforms supporting 5 Python
versions (including v3.10). Without the stable ABI a new release would
require 300 wheels. With the stable ABI it is a more manageable 60
wheels.

However the stable ABI is still a second class citizen as it is still 
not possible (AFAIK) to specify a wheel name that doesn't need to 
explicitly include each supported Python version (rather than a minimum 
stable ABI version).



Actually you can do this. The list of compatible wheels for a platform 
starts at CPython 3.2 when the stable ABI was introduced and goes 
forward to the version of Python you are running. So you can build a 
wheel file that targets the oldest version of CPython that you are 
targeting and its version of the stable ABI and it is considered 
forward compatible. See `python -m pip debug --verbose` for the 
complete list of wheel tags that are supported for an interpreter.
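The forward-compatibility rule described here can be sketched as a 
small predicate (a simplification of pip's real tag matching; the 
function name is invented):

```python
def abi3_wheel_installable(wheel_python_tag, interpreter):
    """True if a cpXY-abi3 wheel is accepted by CPython `interpreter`.

    An abi3 wheel built against the stable ABI of CPython X.Y is
    considered forward compatible with every CPython X.Z where Z >= Y.
    """
    # "cp36" -> (3, 6); "cp310" -> (3, 10)
    major, minor = int(wheel_python_tag[2]), int(wheel_python_tag[3:])
    return interpreter[0] == major and interpreter >= (major, minor)

# A cp36-abi3 wheel installs on 3.6 through 3.10, but not on 3.5:
print(abi3_wheel_installable("cp36", (3, 10)))  # True
print(abi3_wheel_installable("cp36", (3, 5)))   # False
```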


Logical and it works.

Many thanks,
Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/22MU4QKR46SMFQWQFPWUIWM76JYJMJ3L/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Worried about Python release schedule and lack of stable C-API

2021-09-26 Thread Phil Thompson via Python-Dev

On 26/09/2021 05:21, Steven D'Aprano wrote:

[snip]


As for the C-API... Python is 30 years old. Has it ever had a stable
C-API before now? Hasn't it *always* been the case that C packages have
targeted a single version and need to be rebuilt from source on every
release?


No.


These are not rhetorical questions, I genuinely do not know. I *think*
that there was an attempt to make a stable C API back in 3.2 days:

https://www.python.org/dev/peps/pep-0384/

but I don't know what difference it has made to extension writers in
practice. From your description, it sounds like perhaps not as big a
difference as we would have liked.

Maybe extension writers are not using the stable C API? Is that even
possible? Please excuse my ignorance.


PyQt has used the stable ABI for many years. The main reason for using 
it is to reduce the number of wheels. The PyQt ecosystem currently 
contains 15 PyPI projects across 4 platforms supporting 5 Python 
versions (including v3.10). Without the stable ABI a new release would 
require 300 wheels. With the stable ABI it is a more manageable 60 
wheels.
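The arithmetic behind those wheel counts, using the figures from the 
paragraph above:

```python
projects, platforms, python_versions = 15, 4, 5

# One wheel per project, platform and Python version without the stable ABI...
without_abi3 = projects * platforms * python_versions
# ...versus one abi3 wheel per project and platform with it.
with_abi3 = projects * platforms

print(without_abi3, with_abi3)  # 300 60
```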


However the stable ABI is still a second class citizen as it is still 
not possible (AFAIK) to specify a wheel name that doesn't need to 
explicitly include each supported Python version (rather than a minimum 
stable ABI version). In other words it doesn't solve the OP's concern 
about unmaintained older packages being able to be installed in newer 
versions of Python (even though those packages had been explicitly 
designed to do so).


Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RPDUNMG6RS4FBG6GODZDZ4DCB252N4VP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Understanding "is not safe" in typeobject.c

2021-02-03 Thread Phil Thompson via Python-Dev

On 02/02/2021 23:08, Greg Ewing wrote:

On 3/02/21 4:52 am, Phil Thompson wrote:

Thanks - that's fairly definitive, although I don't really understand 
why __new__ has this particular requirement.


The job of tp_new is to initialise the C struct. To do this,
it first has to initialise the fields of the struct it
inherits from, then initialise any fields of its own that
it adds, in that order.


Understood.


Initialising the inherited fields must be done by calling
the tp_new for the struct that it inherits from. You don't
want to call the tp_new of some other class that might have
got inserted into the MRO, because you have no idea what
kind of C struct it expects to get.


I had assumed that some other magic in typeobject.c (e.g. conflicting 
metaclasses) would have raised an exception before getting to this 
stage if there was a conflict.



Cooperative calling is a nice idea, but it requires rather
special conditions to make it work. All the methods must
have exactly the same signature, and it mustn't matter what
order they're called in. Those conditions don't apply to
__new__, especially at the C level where everything is much
more strict type-wise.


Thanks for the explanation.

Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/S5KRTD7M73SMBDADMMP5XM5CPT3BLGLD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Understanding "is not safe" in typeobject.c

2021-02-02 Thread Phil Thompson via Python-Dev

On 02/02/2021 14:18, Greg Ewing wrote:

On 3/02/21 12:07 am, Phil Thompson wrote:

On 01/02/2021 23:50, Greg Ewing wrote:

At the C level, there is always a *single* inheritance hierarchy.


Why?


Because a C struct can only extend one other C struct.


Yes - I misunderstood what you meant by "at the C level".

I want my C-implemented class's __new__ to support cooperative 
multi-inheritance


I don't think this is possible. Here is what the C API docs have to
say about the matter:

---

Note

If you are creating a co-operative tp_new (one that calls a base
type’s tp_new or __new__()), you must not try to determine what method
to call using method resolution order at runtime. Always statically
determine what type you are going to call, and call its tp_new
directly, or via type->tp_base->tp_new. If you do not do this, Python
subclasses of your type that also inherit from other Python-defined
classes may not work correctly. (Specifically, you may not be able to
create instances of such subclasses without getting a TypeError.)

---

(Source: https://docs.python.org/3.5/extending/newtypes.html)

This doesn't mean that your type can't be used in multiple inheritance,
just that __new__ methods in particular can't be cooperative.
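A pure-Python analogue of that advice, with invented class names: the 
base's __new__ delegates to a statically chosen type (object here, 
standing in for type->tp_base) instead of consulting the MRO at 
runtime:

```python
class A:
    def __new__(cls, *args, **kwargs):
        # Statically determined: always delegate to object.__new__,
        # mirroring a C tp_new that calls type->tp_base->tp_new directly.
        return object.__new__(cls)

class Mixin:
    pass

class B(A, Mixin):
    pass

b = B()  # instance creation still works when a Python mixin is involved
print(isinstance(b, A), isinstance(b, Mixin))  # True True
```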


Thanks - that's fairly definitive, although I don't really understand 
why __new__ has this particular requirement.


Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FWSIZUAGD4QRZQ2ZDKLE7MP4P76EIMKL/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Understanding "is not safe" in typeobject.c

2021-02-02 Thread Phil Thompson via Python-Dev

On 01/02/2021 19:06, Guido van Rossum wrote:

That code is quite old. This comment tries to explain it:
```
/* Check that the use doesn't do something silly and unsafe like
   object.__new__(dict). To do this, we check that the
most derived base that's not a heap type is this type. */
```


I understand what it is checking, but I don't understand why it is 
"silly and unsafe".
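For reference, the same guard fires from pure Python, which shows the 
check in action (the exact message text may vary between CPython 
versions):

```python
# dict overrides tp_new, so object.__new__ refuses to build a dict:
# the most derived non-heap base of dict is dict itself, not object.
try:
    object.__new__(dict)
except TypeError as exc:
    print(exc)
```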


I think you may have to special-case this and arrange for B.__new__() 
to be

called, like it or not.


But it's already been called. The check fails when trying to 
subsequently call object.__new__().


(If you want us to change the code, please file a bpo bug report. I 
know

that's no fun, but it's the way to get the right people involved.)


Happy to do that but I first wanted to check if I was doing something 
"silly" - I'm still not sure.


Phil


On Mon, Feb 1, 2021 at 3:27 AM Phil Thompson via Python-Dev <
python-dev@python.org> wrote:


Hi,

I'm trying to understand the purpose of the check in tp_new_wrapper() 
of typeobject.c that results in the "is not safe" exception.

I have the following class hierarchy...

B -> A -> object

...where B and A are implemented in C. Class A has an implementation 
of tp_new which does a few context-specific checks before calling 
PyBaseObject_Type.tp_new() directly to actually create the object. 
This works fine.

However I want to allow class B to be used with a Python mixin. A's 
tp_new() then has to do something similar to super().__new__(). I have 
tried to implement this by locating the type object after A in B's 
MRO, getting its '__new__' attribute and calling it (using 
PyObject_Call()) with B passed as the only argument. However I then 
get the "is not safe" exception, specifically...

TypeError: object.__new__(B) is not safe, use B.__new__()

I take the same approach for __init__() and that works fine.

If I comment out the check in tp_new_wrapper() then everything works 
fine.

So, am I doing something unsafe? If so, what?

Or, is the check at fault in not allowing the case of a C extension 
type with its own tp_new?

Thanks,
Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at
https://mail.python.org/archives/list/python-dev@python.org/message/HRGDEMURCJ5DSNEPMQPQR3R7VVDFA4ZX/
Code of Conduct: http://python.org/psf/codeofconduct/


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZNJK6BJLXCMOOZNEDGNZZKT2YG4XUV57/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Understanding "is not safe" in typeobject.c

2021-02-02 Thread Phil Thompson via Python-Dev

On 01/02/2021 23:50, Greg Ewing wrote:

On 2/02/21 12:13 am, Phil Thompson via Python-Dev wrote:

TypeError: object.__new__(B) is not safe, use B.__new__()


It's not safe because object.__new__ doesn't know about any
C-level initialisation that A or B need.


But A.__new__ is calling object.__new__ and so can take care of its own 
needs after the latter returns.



At the C level, there is always a *single* inheritance hierarchy.


Why?


The right thing is for B's tp_new to directly call A's tp_new,
which calls object's tp_new.


I want my C-implemented class's __new__ to support cooperative 
multi-inheritance so my A class cannot assume that object.__new__ is the 
next in the MRO.


I did try to call the next-in-MRO's tp_new directly (rather than 
calling its __new__ attribute) but that gave me recursion errors.



Don't worry about Python-level multiple inheritance; the
interpreter won't let you create an inheritance structure
that would mess this up.


Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/GZ2RF7TJ6MXDODPWCJB3PDC2Z3VDSQIQ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Understanding "is not safe" in typeobject.c

2021-02-01 Thread Phil Thompson via Python-Dev

Hi,

I'm trying to understand the purpose of the check in tp_new_wrapper() of 
typeobject.c that results in the "is not safe" exception.


I have the following class hierarchy...

B -> A -> object

...where B and A are implemented in C. Class A has an implementation of 
tp_new which does a few context-specific checks before calling 
PyBaseObject_Type.tp_new() directly to actually create the object. This 
works fine.


However I want to allow class B to be used with a Python mixin. A's 
tp_new() then has to do something similar to super().__new__(). I have 
tried to implement this by locating the type object after A in B's MRO, 
getting its '__new__' attribute and calling it (using PyObject_Call()) 
with B passed as the only argument. However I then get the "is not safe" 
exception, specifically...


TypeError: object.__new__(B) is not safe, use B.__new__()

I take the same approach for __init__() and that works fine.

If I comment out the check in tp_new_wrapper() then everything works 
fine.


So, am I doing something unsafe? If so, what?

Or, is the check at fault in not allowing the case of a C extension type 
with its own tp_new?


Thanks,
Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HRGDEMURCJ5DSNEPMQPQR3R7VVDFA4ZX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Enhancement request for PyUnicode proxies

2020-12-28 Thread Phil Thompson via Python-Dev

On 28/12/2020 11:27, Inada Naoki wrote:

On Mon, Dec 28, 2020 at 7:22 PM Phil Thompson
 wrote:


> So I'm +1 to make Unicode simple by removing PyUnicode_READY(), and -1
> to make Unicode complicated by adding customizable callback for lazy
> population.
>
> Anyway, I am OK to un-deprecate PyUnicode_READY() and make it no-op
> macro since Python 3.12.
> But I don't know how many third-parties use it properly, because
> legacy Unicode objects are very rare already.

For me lazy population might not be enough (as I'm not sure precisely 
what you mean by it). I would like my foreign unicode thing to be used 
as the storage.

For example (where text() returns a unicode object with a foreign
kind)...

some_text = an_editor.text()
more_text = another_editor.text()

if some_text == more_text:
    print("The text is the same")

...would not involve any conversions at all.


So you mean a custom internal representation of the exact Unicode 
object?

Then I am an even stronger -1, sorry. I do not believe its merits 
outweigh the costs of its complexity.

If a 3rd party wants to use a completely different internal 
representation, it must not be a unicode object at all.


I would have thought that an object was defined by its behaviour rather 
than by any particular implementation detail. However I completely 
understand the desire to avoid additional complexity of the 
implementation.
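At the Python level, behaviour-based proxying is usually approximated 
by subclassing str and paying the conversion cost once, up front; a 
minimal sketch (JStringProxy and fetch are invented names):

```python
class JStringProxy(str):
    """An eagerly converted foreign string; behaves exactly like str."""

    def __new__(cls, fetch):
        # fetch() crosses the language boundary once, at construction;
        # every later comparison is then a plain str comparison.
        return super().__new__(cls, fetch())

s = JStringProxy(lambda: "hello")
print(s == "hello", isinstance(s, str))  # True True
```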


Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/D4U7TWKNP347HG37H56EPVJHUNRET7QX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Enhancement request for PyUnicode proxies

2020-12-28 Thread Phil Thompson via Python-Dev

On 28/12/2020 02:07, Inada Naoki wrote:

On Sun, Dec 27, 2020 at 8:20 PM Ronald Oussoren via Python-Dev
 wrote:


On 26 Dec 2020, at 18:43, Guido van Rossum  wrote:

On Sat, Dec 26, 2020 at 3:54 AM Phil Thompson via Python-Dev 
 wrote:




That wouldn’t be a solution for code using the PyUnicode_* APIs of 
course, nor Python code explicitly checking for the str type.


In the end a new string “kind” (next to the 1, 2 and 4 byte variants) 
where callbacks are used to provide data might be the most pragmatic. 
That will still break code peeking directly into the PyUnicodeObject 
struct, but anyone doing that should know that that is not a stable 
API.




I had a similar idea for lazy loading or lazy decoding of Unicode 
objects.

But I have rejected the idea and proposed to deprecate
PyUnicode_READY() because of the balance between merits and
complexity:

* Simplifying the Unicode object may introduce more room for
optimization because Unicode is the essential type for Python. Since
Python is a dynamic language, a huge amount of str comparison happens 
at runtime compared with static languages like Java and Rust.
* Third parties may forget to check PyErr_Occurred() after APIs like 
PyUnicode_Contains or PyUnicode_Compare when the author knows all 
operands are of exact Unicode type.

Additionally, if we introduce the customizable lazy str object, it's
very easy to release GIL during basic Unicode operations. Many third
parties may assume PyUnicode_Compare doesn't release GIL if both
operands are Unicode objects. It will produce bugs hard to find and
reproduce.


I would have no problem with the protocol stating that the GIL must not 
be released by "foreign" unicode implementations.



So I'm +1 to make Unicode simple by removing PyUnicode_READY(), and -1
to make Unicode complicated by adding customizable callback for lazy
population.

Anyway, I am OK to un-deprecate PyUnicode_READY() and make it no-op
macro since Python 3.12.
But I don't know how many third-parties use it properly, because
legacy Unicode objects are very rare already.


For me lazy population might not be enough (as I'm not sure precisely 
what you mean by it). I would like my foreign unicode thing to be used 
as the storage.


For example (where text() returns a unicode object with a foreign 
kind)...


some_text = an_editor.text()
more_text = another_editor.text()

if some_text == more_text:
    print("The text is the same")

...would not involve any conversions at all. The following would require 
a conversion...


if some_text == "literal text":

Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZSPNNLM25FRIEK2KYN5JORIR76PZH22N/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Enhancement request for PyUnicode proxies

2020-12-26 Thread Phil Thompson via Python-Dev

On 26/12/2020 10:52, Ronald Oussoren via Python-Dev wrote:
On 25 Dec 2020, at 23:03, Nelson, Karl E. via Python-Dev 
 wrote:


I was directed to post this request to the general Python development 
community so hopefully this is on topic.


One of the weaknesses of the PyUnicode implementation is that the type 
is concrete and there is no option for an abstract proxy string to a 
foreign source.  This is an issue for an API like JPype in which 
java.lang.Strings are passed back from Java.   Ideally these would be 
a type derived from the Unicode type str, but that requires 
transferring the memory immediately from Java to Python even when that 
handle is large and will never be accessed from within Python.  For 
certain operations like XML parsing this can be prohibitive, so 
instead of returning a str we return a JString.   (There is a separate 
issue that Java method names and Python method names conflict so 
direct inheritance creates some problems.)


The JString type can of course be transferred to Python space at any 
time as both Python Unicode and Java string objects are immutable.  
However the CPython API which takes strings only accepts the Unicode 
type objects which have a concrete implementation.  It is possible to 
extend strings, but those extensions do not allow for proxying as far 
as I can tell.  Thus there is no option currently to proxy to a string 
representation in another language.  The concept of using the 
duck-typed ``__str__`` method is insufficient, as it indicates that an 
object can become a string, rather than “this object is effectively a 
string”, for the purposes of the CPython API.


One way to address this is to use the currently outdated READY 
mechanism to extend Unicode objects to other languages.  A class like 
JString would be an unready Unicode object which, when READY is 
called, transfers the 
memory from Java, sets up the flags and sets up a pointer to the code 
point representation.  Unfortunately the READY concept is scheduled 
for removal, and thus the chance to address the needs for proxying a 
Unicode to another language's representation may be limited. There may 
be other methods to accomplish this without using the concept of 
READY.  So long as access to the code points goes through the Unicode 
API, and the Unicode object can be extended such that the actual code 
points may be located outside of the Unicode object, then a proxy can 
still be achieved if there are hooks in it to decide when a transfer 
should be performed.  Generally the transfer request only needs to 
happen once, but the key issue is that the number of code points (and 
the kind of the points) will not be known until the memory is 
transferred.
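The one-time transfer described above can be sketched in Python terms 
(class and method names are invented; a real implementation would live 
behind the C API):

```python
class LazyForeignString:
    """Sketch: defer the cross-language copy until the data is first needed."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable that performs the foreign transfer
        self._value = None    # code points, unknown until transferred

    def _materialize(self):
        if self._value is None:
            self._value = self._fetch()   # the transfer happens exactly once
            self._fetch = None
        return self._value

    def __str__(self):
        return self._materialize()

    def __len__(self):
        # Neither the number nor the kind of code points is known
        # before the transfer, so __len__ must materialize too.
        return len(self._materialize())

transfers = []
s = LazyForeignString(lambda: transfers.append(1) or "abc")
print(len(s), str(s), len(transfers))  # 3 abc 1
```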


Java has much the same problem.  Although they defined an interface 
class “java.lang.CharacterArray”, the actual “java.lang.String” class 
is concrete and almost all API methods take a String rather than the 
base interface, even when the base interface would have been adequate. 
Thus, just as Python has difficulty treating a foreign string class as 
it would a native one, Java cannot treat a Python string as a native 
one.  So Python strings get represented as the CharacterArray type, 
which effectively limits their use greatly.


Summary:

- A String proxy would need the address of the memory in the “wstr” 
slot, though the code points may be char[], wchar[] or int[] depending 
on the representation in the proxy.
- API calls to interpret the data would need to check whether the data 
has been transferred first; if not, they would call the proxy-dependent 
transfer method, which is responsible for creating a block of code 
points and setting up the flags (kind, ascii, ready, and compact).
- The memory block allocated would need to call the proxy-dependent 
destructor to clean up when the string is done.
- It is not clear if this would have an impact on performance.  Python 
already has the concept of a string which needs actions before it can 
be accessed, but this is scheduled for removal.


Are there any plans currently to address the concept of a proxy string 
in the PyUnicode API?


I have a similar problem in PyObjC which proxies Objective-C classes
to Python (and the other way around). For interop with Python code I
proxy Objective-C strings using a subclass of str() that is eagerly
populated even if, as you mention as well, a lot of these proxy 
objects are never used in a context where the str() representation is
important.  A complicating factor for me is that Objective-C strings
are, in general, mutable which can lead to interesting behaviour.
Another disadvantage of subclassing str() for foreign string types is
that this removes the proxy class from their logical location in the
class hierarchy (in my case the proxy type is not a subclass of the
proxy type for NSObject, even though all Objective-C classes inherit
from NSObject).

I primarily chose to subclass the str type because that enables using
the NSString proxy type with C functions/methods that expect a string