Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-07 Thread Chris Angelico
On Thu, Mar 8, 2018 at 5:17 AM, Ooomzay  wrote:
> On Thursday, 1 March 2018 22:44:59 UTC, Rob Gaddi  wrote:
>> On 03/01/2018 02:24 PM, Lawrence D’Oliveiro wrote:
>> > On Thursday, March 1, 2018 at 6:44:39 PM UTC+13, Paul Rubin wrote:
>> >> DOM trees are a classic example (see the various DOM modules in the
>> >> Python stdlib).  Non-leaf nodes have a list of child nodes, child nodes
>> >> have pointers back upwards to their parent, and each child node has
>> >> pointers to its left and right siblings if they exist.  It's all very
>> >> circular.
>> >
>> > This is why you have weak refs.
>> >
>> But making sure that the failure to use weakrefs doesn't memory leak the
>> program into the pavement is why you have background garbage collection
>> with intelligent cyclical reference breaking.
>
> That is the worst justification for gc I have seen yet, and also the most
> truthful. gc.set_debug(gc.DEBUG_LEAK) is your friend, my friends.

Look. You obviously want to use C++, so go use it, and stop trying to
make *us* use it. We've chosen to use Python.

*plonk*

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-07 Thread Ooomzay
On Thursday, 1 March 2018 22:44:59 UTC, Rob Gaddi  wrote:
> On 03/01/2018 02:24 PM, Lawrence D’Oliveiro wrote:
> > On Thursday, March 1, 2018 at 6:44:39 PM UTC+13, Paul Rubin wrote:
> >> DOM trees are a classic example (see the various DOM modules in the
> >> Python stdlib).  Non-leaf nodes have a list of child nodes, child nodes
> >> have pointers back upwards to their parent, and each child node has
> >> pointers to its left and right siblings if they exist.  It's all very
> >> circular.
> > 
> > This is why you have weak refs.
> > 
> But making sure that the failure to use weakrefs doesn't memory leak the 
> program into the pavement is why you have background garbage collection 
> with intelligent cyclical reference breaking.

That is the worst justification for gc I have seen yet, and also the most 
truthful. gc.set_debug(gc.DEBUG_LEAK) is your friend, my friends.
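
For anyone who wants to try that flag, here is a minimal sketch (CPython
assumed; the Node class is made up for illustration). DEBUG_LEAK makes the
collector report - and retain, in gc.garbage - anything it finds in
unreachable cycles instead of silently freeing it:

import gc

class Node:
    pass

def make_cycle():
    a, b = Node(), Node()
    a.other, b.other = b, a      # a <-> b reference cycle, now orphaned

gc.disable()                     # rely on reference counting alone
gc.set_debug(gc.DEBUG_LEAK)      # report and retain collectable garbage
make_cycle()
print(gc.collect())              # non-zero: a cycle was left behind
print(gc.garbage)                # the leaked objects are kept here for inspection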
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Layers of abstraction, was Re: RFC: Proposal: Deterministic Object Destruction

2018-03-06 Thread Ooomzay
On Wednesday, 7 March 2018 06:43:10 UTC, Ooomzay  wrote:
> On Tuesday, 6 March 2018 14:12:38 UTC, Peter Otten  wrote:
> > Chris Angelico wrote:
> > 
> > > On Tue, Mar 6, 2018 at 10:04 AM, Steven D'Aprano
> > >  wrote:
> > >> # Later.
> > >> if __name__ == '__main__':
> > >> # Enter the Kingdom of Nouns.
> > > 
> > > Don't you need a NounKingdomEnterer to do that for you?
> > 
> > No, for some extra flexibility there should be a NounKingdomEntererFactory 
> 
> For example open().
> 
> > -- which of course has to implement the AbstractNounKingdomEntererFactory 
> > interface. 
> 
> For example a PEP 343 "Context Manager".
> 
> In the swamplands of the pythonistas the one-eyed man is BDFL!
> 
> Amen.

Damn. That should have read:-

In the swamplands of the pythonistas the man with a badly leaking boat is BDFL.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Layers of abstraction, was Re: RFC: Proposal: Deterministic Object Destruction

2018-03-06 Thread Ooomzay
On Tuesday, 6 March 2018 14:12:38 UTC, Peter Otten  wrote:
> Chris Angelico wrote:
> 
> > On Tue, Mar 6, 2018 at 10:04 AM, Steven D'Aprano
> >  wrote:
> >> # Later.
> >> if __name__ == '__main__':
> >> # Enter the Kingdom of Nouns.
> > 
> > Don't you need a NounKingdomEnterer to do that for you?
> 
> No, for some extra flexibility there should be a NounKingdomEntererFactory 

For example open().

> -- which of course has to implement the AbstractNounKingdomEntererFactory 
> interface. 

For example a PEP 343 "Context Manager".

In the swamplands of the pythonistas the one-eyed man is BDFL!

Amen.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-06 Thread Steven D'Aprano
On Tue, 06 Mar 2018 14:09:53 -0800, Ooomzay wrote:

> Unfortunately, despite having conquered it, without a _guarantee_ of
> this behaviour from the language, or at least one mainstream
> implementation, I will not invest in python again.

Oh well, so sad. See you later.


-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-06 Thread Chris Angelico
On Wed, Mar 7, 2018 at 9:09 AM, Ooomzay  wrote:
>> I'm not trying to dissuade you from using RAII in your own applications,
>> if it works for you, great.
>
> Unfortunately, despite having conquered it, without a _guarantee_ of this
> behaviour from the language, or at least one mainstream implementation,
> I will not invest in python again. Nor recommend any one else with a serious
> real world resource management application to do so. This was the original
> motive for my PEP.

What a pity Python will have to lose someone whose sole goal was to
write C++ code. Had you but chosen to write Python code instead, you
could have saved us all a hundred emails or so and just used the
'with' statement, same as the rest of us do. Considering that "real
world resource management" is *exactly* the purpose of the 'with'
statement, I honestly don't see what the problem is.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-06 Thread Ooomzay

On Monday, 5 March 2018 23:06:53 UTC, Steven D'Aprano  wrote:
> On Mon, 05 Mar 2018 09:22:33 -0800, Ooomzay wrote:
> [...]
> > If you would like to have a shot at coding this without RAII, but
> > preserving the OO design, you will find that it is considerably
> > simpler than the with/context manager approach.
> 
> Preserving the OO design, you say? Okay, since my application apparently 
> isn't allowed to know that it is processing two files, I'll simply 
> delegate that to the object:
> 
> class C(A, B):
> def run(self):
> with open(self.a) as a:
> with open(self.b) as b:
> process(a, b)
> 
> # Later.
> if __name__ == '__main__':
> # Enter the Kingdom of Nouns.
> c = C()
> c.run()
> 
> There you go. Encapsulation and OO design.

1. This does not execute. It is only when you actually flesh out 
these deliberately minimal classes A, B & C with PEP 343 paraphernalia 
that we will all be able to see clearly how much cleaner or messier 
things are with PEP 343. 

2. Your class C breaks A & B's encapsulation by inheriting rather than 
composing them, which increases coupling and prevents substitution. 
C contained A & B in this example to illustrate that there is 
no RAII-related burden whatever on this intermediate class, as 
it manages no external resources directly. It does not even 
need to implement __del__.

3. You have assumed a single-threaded application. Please imagine 
that A and B are classes that manage some remote valves and maintain 
their own threads as well as a serial port for comms, and that C is there to 
coordinate them towards some greater purpose.

If you would like to try to add PEP 343 support to
classes A, B & C to the point that you can create
and destroy c = C() in an exception-safe way, we may 
all learn something.
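
For concreteness, here is a sketch of what that request amounts to: every
class in the composite grows __enter__/__exit__ and forwards them. (This is
an illustration only; the names and the ExitStack approach are one possible
way to do it, not something proposed in this thread.)

from contextlib import ExitStack

class FileAccess:
    def __init__(self, fname):
        self.fname = fname
    def __enter__(self):
        print("%s Opened" % self.fname)
        return self
    def __exit__(self, *exc):
        print("%s Closed" % self.fname)

class A:
    def __enter__(self):
        self._res = FileAccess("a").__enter__()
        return self
    def __exit__(self, *exc):
        return self._res.__exit__(*exc)

class B:
    def __enter__(self):
        self._res = FileAccess("b").__enter__()
        return self
    def __exit__(self, *exc):
        return self._res.__exit__(*exc)

class C:
    def __enter__(self):
        self._stack = ExitStack()   # tracks a and b for exception-safe unwinding
        self.a = self._stack.enter_context(A())
        self.b = self._stack.enter_context(B())
        return self
    def __exit__(self, *exc):
        return self._stack.__exit__(*exc)

if __name__ == '__main__':
    with C() as c:
        pass  # c.dostuff() would go here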

> I'm not trying to dissuade you from using RAII in your own applications, 
> if it works for you, great.

Unfortunately, despite having conquered it, without a _guarantee_ of this 
behaviour from the language, or at least one mainstream implementation, 
I will not invest in python again. Nor recommend any one else with a serious 
real world resource management application to do so. This was the original 
motive for my PEP.
-- 
https://mail.python.org/mailman/listinfo/python-list


Layers of abstraction, was Re: RFC: Proposal: Deterministic Object Destruction

2018-03-06 Thread Peter Otten
Chris Angelico wrote:

> On Tue, Mar 6, 2018 at 10:04 AM, Steven D'Aprano
>  wrote:
>> # Later.
>> if __name__ == '__main__':
>> # Enter the Kingdom of Nouns.
> 
> Don't you need a NounKingdomEnterer to do that for you?

No, for some extra flexibility there should be a NounKingdomEntererFactory 
-- which of course has to implement the AbstractNounKingdomEntererFactory 
interface. Instantiating the right NounKingdomEntererFactory can easily be 
delegated to a NounKingdomEntererFactoryProducer. It's turtles^Wfactories 
all the way down...

I don't know how this will ever do anything -- my theory is that some good 
soul managed to sneak a few lines of javascript into the giant xml 
configuration file ;)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Chris Angelico
On Tue, Mar 6, 2018 at 10:04 AM, Steven D'Aprano
 wrote:
> # Later.
> if __name__ == '__main__':
> # Enter the Kingdom of Nouns.

Don't you need a NounKingdomEnterer to do that for you?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Steven D'Aprano
On Mon, 05 Mar 2018 09:22:33 -0800, Ooomzay wrote:

[...]
>> Looking at that code, my major thought is  that there is far too much
>> OO design, not enough simplicity.
> 
> If this is far too much OO for you then RAII will be of no interest to
> you.

I think this is probably the wisest thing you have said in this entire 
discussion. OO design is often over-rated and elevated to the position of 
Holy Writ, full of over-engineered design patterns needed to overcome the 
flaws of OO design. I see no reason to design my application using OO 
principles when they aren't needed.

You say you have had great success with the RAII pattern in your own 
Python code. Great! I mean that sincerely. I'm really happy for you. But 
you now want to force that on *everyone*, even onto implementers where 
this will be an enormous burden to re-write their interpreter from the 
ground up to add a reference counter.

Why?

Do you have even the tiniest interest in running your RAII application 
under IronPython or Jython? Why do you care that your code, using CPython-
only features, must be portable to other implementations?

If your PEP is accepted, will you be volunteering your time for the next 
two or four years to re-write Jython and IronPython? To say nothing of 
PyPy and possibly Stackless?

CPython does what you want. So be satisfied that if you target CPython, 
you can use the RAII pattern to your heart's desire.


[...]
>> Perhaps I'm missing something, but I have no idea what benefit there is
>> in that style of code over:
>> 
>> with open('a') as a:
>> with open('b') as b:
>> process(a, b)
> 
> Encapsulation. Your application code is now managing details that should
> be hidden in the object. This PEP and RAII are unashamedly targeted at
> OO designs.

Encapsulation for the sake of encapsulation leads to monstrously over-
engineered heights of abstraction. Without a concrete use-case, I have no 
confidence that this *should* be "hidden in the object". Even if it 
should, I have no confidence that this specific design for encapsulation 
and/or data hiding[1] (they aren't the same thing) is the best design.


> If you would like to have a shot at coding this without RAII, but
> preserving the OO design, you will find that it is considerably
> _simpler_ than the with/context manager approach.

Preserving the OO design, you say? Okay, since my application apparently 
isn't allowed to know that it is processing two files, I'll simply 
delegate that to the object:

class C(A, B):
def run(self):
with open(self.a) as a:
with open(self.b) as b:
process(a, b)

# Later.
if __name__ == '__main__':
# Enter the Kingdom of Nouns.
c = C()
c.run()

There you go. Encapsulation and OO design.

Look, I'm sure that we could go back and forth for weeks trading more and 
more complicated, convoluted, esoteric or just plain unlikely scenarios 
leading up to the conclusion you want. I'm even willing to accept for the 
sake of discussion that in *your specific application's case*, you have 
come up with the best possible solution.

I'm not trying to dissuade you from using RAII in your own applications, 
if it works for you, great.

But I think it is unjustified to try to force that on all implementers 
unless you have a real need, not just as a matter of principle, to use 
RAII while still insisting on your code being implementation independent.


[...]
> If you choose RAII you will not be cavalier with your references.

You mean, "if you choose RAII, you cannot afford to be cavalier with your 
references, because if you fail to meet the rigorous demands of this 
pattern, your code will be buggy".

CPython has a long history of people relying on RAII and ending up with 
buggy code that doesn't close resources in a timely manner. The reason 
the with statement was invented was to be a simple and effective 
alternative to the RAII pattern, which promises the world in theory and 
fails to deliver in practice when it runs up against the reality that 
most people *are* cavalier with their references.

Not for you, obviously. I'm glad that it works for you. But as a 
community, we've been there and done that.




[1] They aren't the same thing.

-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Ooomzay
On Monday, 5 March 2018 19:14:05 UTC, Paul Rubin  wrote:
> Ooomzay writes:
> > If you want to use RAII objects then you will make sure you avoid
> > adding them to orphan cycles by design. If you don't know how to do
> > that then don't write applications that manage critical resources.
> 
> My claim is managing critical resources with refcounts is bug-prone in
> the case where the refcounts can become arbitrarily large at runtime.

What case is that? Don't go there! You may be jaded because Python forgives 
bad design practices and peeps are inclined to leak resources all over 
the place for no good reason whatever, and then complain when they have 
trouble cleaning up or exiting!

> You say you wrote a program that worked that way, but it sounds
> horrendous and I'd like to know how you tested it, maintained it, kept
> it maintainable by other programmers, etc.  
> It's painful to even think about.

I too used to suffer pain with Python - until I saw the light and 
worked out how to use it for RAII. So here are the keys to much less pain:-

* Use CPython (or C++ ;)

* Use RAII for every resource-holding class: Implement __del__ to release any 
resources acquired in __init__.  (This is significantly less effort, and more 
reliable than adding __enter__ & __exit__ to every class).

* Wrap your try-except blocks in functions to prevent exceptions persisting 
outside the handler, because in Python they leak. This is typically a very 
natural factorization.

* Take care not to create cyclic exceptions in your exception handling logic.
This was the one that had me scratching my head for a couple of hours the 
first time I inadvertently created a cycle.

* If you have no application-level requirement for persistent orphan cycles, 
and few, if any, applications do, then disable gc on entry and raise an 
exception on exit, or any other convenient moment if there is any garbage 
to be collected.

* Immediately plug/bug any leaks that you discover. Do not let them build 
up or you will drown and lose faith that there can be a better way.
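
By way of illustration only, a small sketch along the lines of the list above
(the class and function names are invented for the example; CPython's
reference counting is assumed):

import gc

class SerialPort:                    # a resource-holding class, RAII style
    def __init__(self, name):
        self.name = name
        print("%s acquired" % name)  # acquire in __init__
    def __del__(self):
        print("%s released" % self.name)  # release in __del__

def do_one_job():
    port = SerialPort("COM1")
    try:
        raise RuntimeError("boom")
    except RuntimeError:
        pass                         # handled inside this function, so the
                                     # exception cannot outlive it

gc.disable()                         # no persistent orphan cycles wanted
do_one_job()                         # "COM1 released" is printed on return
if gc.collect():                     # any garbage found means a leak to plug
    raise RuntimeError("cycle leak detected")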

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Ooomzay
On Monday, 5 March 2018 17:58:40 UTC, Chris Angelico  wrote:
> On Tue, Mar 6, 2018 at 4:53 AM, Ooomzay wrote:
> > On Monday, 5 March 2018 14:21:54 UTC, Chris Angelico  wrote:
> >> On Tue, Mar 6, 2018 at 12:58 AM, Ooomzay wrote:
> >> > Here is my fixed example, if someone else could try it in CPython and 
> >> > report back that would be interesting:-
> >> >
> >> > class RAIIFileAccess():
> >> > def __init__(self, fname):
> >> > print("%s Opened" % fname)
> >> > self.fname = fname
> >> >
> >> > def __del__(self):
> >> > print("%s Closed" % self.fname)
> >> >
> >> > class A():
> >> > def __init__(self):
> >> > self.res = RAIIFileAccess("a")
> >> >
> >> > class B():
> >> > def __init__(self):
> >> > self.res = RAIIFileAccess("b")
> >> >
> >> > class C():
> >> > def __init__(self):
> >> > self.a = A()
> >> > self.b = B()
> >> >
> >> > def main():
> >> > c = C()
> >> > c.dostuff()
> >> >
> >> > main()
> >>
> >> Here's how I'd do it with context managers.
> >>
> >> from contextlib import contextmanager
> >>
> >> @contextmanager
> >> def file_access(fname):
> >> try:
> >> print("%s Opened" % fname)
> >> yield
> >> finally:
> >> print("%s Closed" % fname)
> >>
> >> @contextmanager
> >> def c():
> >> try:
> >> print("Starting c")
> >> with file_access("a") as a, file_access("b") as b:
> >> yield
> >> finally:
> >> print("Cleaning up c")
> >>
> >> def main():
> >> with c():
> >> dostuff() # NameError
> >
> >
> > Thank you for having a go...
> >
> > However you have broken the encapsulation of class A and B. These
> > are trivial for the sake of example. I should have used
> > _underscores (i.e. self._res) to make the intent of
> > this example more obvious.
> >
> > Please try again but preserving the integrity/encapsulation
> > of class A & B & C, just as the RAII example does.
> 
> What is B? Is it something that's notionally a resource to be managed?

Yes. For example, a supply of electrical power controlled via a serial protocol  
to a programmable power supply - the file is a private detail used for the 
communications. And let's imagine that this power-supply object needs to keep 
track of some state, such as the voltage, and that it has a long lifetime - not just 
created then destroyed in the scope of one function, i.e. it is a substantial object.
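
For the sake of like-for-like comparison, a sketch of such an object in the
RAII style being discussed (the class, its serial-port detail and voltage
attribute are invented here to match the description above; threads omitted
for brevity):

class RAIIFileAccess:
    def __init__(self, fname):
        print("%s Opened" % fname)
        self.fname = fname
    def __del__(self):
        print("%s Closed" % self.fname)

class PowerSupply:
    """Long-lived object; the serial port is a private detail."""
    def __init__(self, port_name):
        self._port = RAIIFileAccess(port_name)  # comms channel, hidden from users
        self.voltage = 0.0                      # state tracked over its lifetime
    def set_voltage(self, volts):
        self.voltage = volts                    # would also talk over self._port

psu = PowerSupply("ttyS0")
psu.set_voltage(12.0)
del psu    # in CPython the port closes here, when the last reference goes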

> If so, you can trivially add another level to the nesting.

Please illustrate. I really do want to be able to compare like for like.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Chris Angelico
On Tue, Mar 6, 2018 at 4:53 AM, Ooomzay  wrote:
> On Monday, 5 March 2018 14:21:54 UTC, Chris Angelico  wrote:
>> On Tue, Mar 6, 2018 at 12:58 AM, Ooomzay wrote:
>> > Here is my fixed example, if someone else could try it in CPython and 
>> > report back that would be interesting:-
>> >
>> > class RAIIFileAccess():
>> > def __init__(self, fname):
>> > print("%s Opened" % fname)
>> > self.fname = fname
>> >
>> > def __del__(self):
>> > print("%s Closed" % self.fname)
>> >
>> > class A():
>> > def __init__(self):
>> > self.res = RAIIFileAccess("a")
>> >
>> > class B():
>> > def __init__(self):
>> > self.res = RAIIFileAccess("b")
>> >
>> > class C():
>> > def __init__(self):
>> > self.a = A()
>> > self.b = B()
>> >
>> > def main():
>> > c = C()
>> > c.dostuff()
>> >
>> > main()
>>
>> Here's how I'd do it with context managers.
>>
>> from contextlib import contextmanager
>>
>> @contextmanager
>> def file_access(fname):
>> try:
>> print("%s Opened" % fname)
>> yield
>> finally:
>> print("%s Closed" % fname)
>>
>> @contextmanager
>> def c():
>> try:
>> print("Starting c")
>> with file_access("a") as a, file_access("b") as b:
>> yield
>> finally:
>> print("Cleaning up c")
>>
>> def main():
>> with c():
>> dostuff() # NameError
>
>
> Thank you for having a go...
>
> However you have broken the encapsulation of class A and B. These
> are trivial for the sake of example. I should have used
> _underscores (i.e. self._res) to make the intent of
> this example more obvious.
>
> Please try again but preserving the integrity/encapsulation
> of class A & B & C, just as the RAII example does.

What is B? Is it something that's notionally a resource to be managed?
If so, you can trivially add another level to the nesting.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Ooomzay
On Monday, 5 March 2018 14:21:54 UTC, Chris Angelico  wrote:
> On Tue, Mar 6, 2018 at 12:58 AM, Ooomzay wrote:
> > Here is my fixed example, if someone else could try it in CPython and 
> > report back that would be interesting:-
> >
> > class RAIIFileAccess():
> > def __init__(self, fname):
> > print("%s Opened" % fname)
> > self.fname = fname
> >
> > def __del__(self):
> > print("%s Closed" % self.fname)
> >
> > class A():
> > def __init__(self):
> > self.res = RAIIFileAccess("a")
> >
> > class B():
> > def __init__(self):
> > self.res = RAIIFileAccess("b")
> >
> > class C():
> > def __init__(self):
> > self.a = A()
> > self.b = B()
> >
> > def main():
> > c = C()
> > c.dostuff()
> >
> > main()
> 
> Here's how I'd do it with context managers.
> 
> from contextlib import contextmanager
> 
> @contextmanager
> def file_access(fname):
> try:
> print("%s Opened" % fname)
> yield
> finally:
> print("%s Closed" % fname)
> 
> @contextmanager
> def c():
> try:
> print("Starting c")
> with file_access("a") as a, file_access("b") as b:
> yield
> finally:
> print("Cleaning up c")
> 
> def main():
> with c():
> dostuff() # NameError
 

Thank you for having a go...

However you have broken the encapsulation of class A and B. These 
are trivial for the sake of example. I should have used
_underscores (i.e. self._res) to make the intent of
this example more obvious.

Please try again but preserving the integrity/encapsulation
of class A & B & C, just as the RAII example does.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Ooomzay
On Monday, 5 March 2018 16:47:02 UTC, Steven D'Aprano  wrote:
> On Sun, 04 Mar 2018 16:58:38 -0800, Ooomzay wrote:
> 
> > Here is an example of a composite resource using RAII:-
> > 
> > class RAIIFileAccess():
> > def __init__(self, fname):
> > print("%s Opened" % fname)
> > def __del__(self):
> > print("%s Closed" % fname)
> > 
> > class A():
> > def __init__(self):
> > self.res = RAIIFileAccess("a")
> > 
> > class B():
> > def __init__(self):
> > self.res = RAIIFileAccess("b")
> > 
> > class C():
> > def __init__(self):
> > self.a = A()
> > self.b = B()
> >  
> > def main():
> > c = C()
> 
> Looking at that code, my major thought is  that there is far too much OO 
> design, not enough simplicity.

If this is far too much OO for you then RAII will be of no 
interest to you.

This example was specifically in response to a request to
illustrate the relative simplicity of RAII in the case of a 
composite (OO) resource.

> Perhaps I'm missing something, but I have no idea what benefit there is 
> in that style of code over:
> 
> with open('a') as a:
> with open('b') as b:
> process(a, b)

Encapsulation. Your application code is now managing details 
that should be hidden in the object. This PEP and RAII are
unashamedly targeted at OO designs.

If you would like to have a shot at coding this without RAII,
but preserving the OO design, you will find that it is 
considerably _simpler_ than the with/context manager approach.

> So long as you only process a or b inside the nested block, you are 
> guaranteed that they will be open.
> 
> And unlike your RAII example, they will be closed when you exit, 
> regardless of how many references to them you have, regardless of whether 
> an exception occurs or not, regardless of whether there are cycles or 
> whether they are globals or whether the interpreter is shutting down.

If you choose RAII you will not be cavalier with your references.

> I think that at this point, you have convinced me that you want to impose 
> enormous costs on all Python interpreters *and* Python developers, in 
> order to allow you to write C++ code in Python rather than learn Pythonic 
> idioms like the with statement.

On interpreters yes. On developers no. You can carry on exactly as you are.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Steven D'Aprano
On Mon, 05 Mar 2018 07:31:57 -0800, Ooomzay wrote:

> We do not expect this to work in PyPy.

Or Jython, IronPython, possibly not Stackless either, or in the 
interactive interpreter of (probably) any implementation, or CPython if 
the object is garbage collected during interpreter shutdown or is in a 
cycle or has a global reference, or if the object simply doesn't go out 
of scope as quickly as you would like.
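
To make the cycle case concrete, a small sketch (CPython assumed; the class is
invented for illustration):

import gc

class Holder:
    def __del__(self):
        print("released")

def leaky():
    h = Holder()
    h.me = h          # self-referencing cycle: the refcount never reaches zero

leaky()
print("returned")     # printed first: __del__ did not run when leaky() returned
gc.collect()          # only now does the cycle collector reclaim it and print "released"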


-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Steven D'Aprano
On Sun, 04 Mar 2018 16:58:38 -0800, Ooomzay wrote:

> Here is an example of a composite resource using RAII:-
> 
> class RAIIFileAccess():
> def __init__(self, fname):
> print("%s Opened" % fname)
> def __del__(self):
> print("%s Closed" % fname)
> 
> class A():
> def __init__(self):
> self.res = RAIIFileAccess("a")
> 
> class B():
> def __init__(self):
> self.res = RAIIFileAccess("b")
> 
> class C():
> def __init__(self):
> self.a = A()
> self.b = B()
>  
> def main():
> c = C()

Looking at that code, my major thought is  that there is far too much OO 
design, not enough simplicity.

Perhaps I'm missing something, but I have no idea what benefit there is 
in that style of code over:

with open('a') as a:
with open('b') as b:
process(a, b)

So long as you only process a or b inside the nested block, you are 
guaranteed that they will be open.

And unlike your RAII example, they will be closed when you exit, 
regardless of how many references to them you have, regardless of whether 
an exception occurs or not, regardless of whether there are cycles or 
whether they are globals or whether the interpreter is shutting down.

I think that at this point, you have convinced me that you want to impose 
enormous costs on all Python interpreters *and* Python developers, in 
order to allow you to write C++ code in Python rather than learn Pythonic 
idioms like the with statement.


I don't think this is a good tradeoff.


-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Ooomzay
On Monday, 5 March 2018 15:17:13 UTC, bartc  wrote:
> On 05/03/2018 13:58, Ooomzay wrote:
> > On Monday, 5 March 2018 11:24:37 UTC, Chris Angelico  wrote:
> >> On Mon, Mar 5, 2018 at 10:09 PM, Ooomzay wrote:
> >>> Here is an example of a composite resource using RAII:-
> >>>
> >>> class RAIIFileAccess():
> >>>  def __init__(self, fname):
> >>>  print("%s Opened" % fname)
> >>>  def __del__(self):
> >>>  print("%s Closed" % fname)
> >>>
> >>> class A():
> >>>  def __init__(self):
> >>>  self.res = RAIIFileAccess("a")
> >>>
> >>> class B():
> >>>  def __init__(self):
> >>>  self.res = RAIIFileAccess("b")
> >>>
> >>> class C():
> >>>  def __init__(self):
> >>>  self.a = A()
> >>>  self.b = B()
> >>>
> >>> def main():
> >>>  c = C()
> >>>
> >>> Under this PEP this is all that is needed to guarantee that the files "a"
> >>> and "b" are closed on exit from main or after any exception has been 
> >>> handled.
> >>
> >> Okay. And all your PEP needs is for reference count semantics, right?
> >> Okay. I'm going to run this in CPython, with reference semantics. You
> >> guarantee that those files will be closed after an exception is
> >> handled? Right.
> >>
> >> >>> def main():
> >> ... c = C()
> >> ... c.do_stuff()
> >> ...
> >> >>> main()
> >> a Opened
> >> b Opened
> >> Traceback (most recent call last):
> >>   File "<stdin>", line 1, in <module>
> >>   File "<stdin>", line 3, in main
> >> AttributeError: 'C' object has no attribute 'do_stuff'
> >> >>>
> >>
> >> Uhh I'm not seeing any messages about the files getting closed.
> > 
> > Then that is indeed a challenge. From CPython back in 2.6 days up to 
> > Python36-32 what I see is:-
> > 
> > a Opened
> > b Opened
> > Traceback (most recent call last):
> > ...
> > AttributeError: 'C' object has no attribute 'dostuff'
> > a Closed
> > b Closed
> > 
> >> Maybe exceptions aren't as easy to handle as you think?
> > 
> > Well there is a general issue with exceptions owing to the ease
> > with which one can create cycles that may catch out newbs. But
> > that is not the case here.
> > 
> >> Or maybe you
> >> just haven't tried any of this (which is obvious from the bug in your
> >> code
> > 
> > Or maybe I just made a typo when simplifying my test case and failed to 
> > retest?
> > 
> > Here is my fixed case, if someone else could try it in CPython and report 
> > back that would be interesting:-
> > 
> > class RAIIFileAccess():
> >  def __init__(self, fname):
> >  print("%s Opened" % fname)
> >  self.fname = fname
> > 
> >  def __del__(self):
> >  print("%s Closed" % self.fname)
> > 
> > class A():
> >  def __init__(self):
> >  self.res = RAIIFileAccess("a")
> > 
> > class B():
> >  def __init__(self):
> >  self.res = RAIIFileAccess("b")
> > 
> > class C():
> >  def __init__(self):
> >  self.a = A()
> >  self.b = B()
> > 
> > def main():
> >  c = C()
> >  c.dostuff()
> > 
> > main()
> 
> I get A and B closed messages when running on CPython 2.7 and 3.6, with 
> the code run from a .py file. (I never use interactive mode.)
> 
> But not when running on a PyPy version of 2.7 (however that is not CPython).

Thanks bartc, I have made the example more complete by adding an exception 
scope - this means it works as designed - in any context. See my reply to Chris.

We do not expect this to work in PyPy.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread bartc

On 05/03/2018 13:58, Ooomzay wrote:

On Monday, 5 March 2018 11:24:37 UTC, Chris Angelico  wrote:

On Mon, Mar 5, 2018 at 10:09 PM, Ooomzay wrote:

Here is an example of a composite resource using RAII:-

class RAIIFileAccess():
 def __init__(self, fname):
 print("%s Opened" % fname)
 def __del__(self):
 print("%s Closed" % fname)

class A():
 def __init__(self):
 self.res = RAIIFileAccess("a")

class B():
 def __init__(self):
 self.res = RAIIFileAccess("b")

class C():
 def __init__(self):
 self.a = A()
 self.b = B()

def main():
 c = C()

Under this PEP this is all that is needed to guarantee that the files "a"
and "b" are closed on exit from main or after any exception has been handled.


Okay. And all your PEP needs is for reference count semantics, right?
Okay. I'm going to run this in CPython, with reference semantics. You
guarantee that those files will be closed after an exception is
handled? Right.


>>> def main():
... c = C()
... c.do_stuff()
...
>>> main()

a Opened
b Opened
Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "<stdin>", line 3, in main
AttributeError: 'C' object has no attribute 'do_stuff'


  
Uhh I'm not seeing any messages about the files getting closed.


Then that is indeed a challenge. From CPython back in 2.6 days up to 
Python36-32 what I see is:-

a Opened
b Opened
Traceback (most recent call last):
...
AttributeError: 'C' object has no attribute 'dostuff'
a Closed
b Closed


Maybe exceptions aren't as easy to handle as you think?


Well there is a general issue with exceptions owing to the ease
with which one can create cycles that may catch out newbs. But
that is not the case here.


Or maybe you
just haven't tried any of this (which is obvious from the bug in your
code


Or maybe I just made a typo when simplifying my test case and failed to retest?

Here is my fixed case, if someone else could try it in CPython and report back 
that would be interesting:-

class RAIIFileAccess():
 def __init__(self, fname):
 print("%s Opened" % fname)
 self.fname = fname

 def __del__(self):
 print("%s Closed" % self.fname)

class A():
 def __init__(self):
 self.res = RAIIFileAccess("a")

class B():
 def __init__(self):
 self.res = RAIIFileAccess("b")

class C():
 def __init__(self):
 self.a = A()
 self.b = B()

def main():
 c = C()
 c.dostuff()

main()


I get A and B closed messages when running on CPython 2.7 and 3.6, with 
the code run from a .py file. (I never use interactive mode.)


But not when running on a PyPy version of 2.7 (however that is not CPython).

--
bartc
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Ooomzay
On Monday, 5 March 2018 14:36:30 UTC, Chris Angelico  wrote:
> On Tue, Mar 6, 2018 at 1:25 AM, Ooomzay wrote:
> > Ahah... I see now you are running it from a shell so the exception is 
> > staying in scope. We just need to include normal exception handling in the 
> > example to fix this:-
> >
> > def main():
> > try:
> > c = C()
> > c.dostuff()
> > except:
> > print("Boom!")
> >
> > main()
> >
> 
> So RAII depends on absorbing every exception close to where the
> resource is being managed? You can't permit that exception to bubble
> up lest the resource get leaked??

The exception can bubble up as many layers as you like, but in Python the 
exception leaks out of the handling context and needs its scope limiting. I 
have previously pointed out that such scoping is still recommended and that a 
function is Python's scoping construct - so use one for the job.
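
A sketch of that scoping point (CPython assumed; the names are illustrative).
A handled exception's traceback references the frames it passed through, and
those frames reference their locals, so a resource can be kept alive for as
long as the exception itself is reachable; at the interactive prompt an
unhandled exception is additionally kept in sys.last_traceback, which is why
the REPL run behaved differently:

class Res:
    def __del__(self):
        print("Res released")

def risky():
    r = Res()
    raise RuntimeError("boom")   # the traceback references risky()'s frame,
                                 # and that frame still references r

def handler():
    try:
        risky()
    except RuntimeError as e:
        print("handling")        # e.__traceback__ keeps r alive at this point
    # the 'as e' name is deleted when the except block ends, so the traceback,
    # the frame and r are all released before handler() returns

handler()                        # prints: handling, then Res released
print("after handler")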
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Chris Angelico
On Tue, Mar 6, 2018 at 1:25 AM, Ooomzay  wrote:
> Ahah... I see now you are running it from a shell so the exception is staying 
> in scope. We just need to include normal exception handling in the example to 
> fix this:-
>
> def main():
> try:
> c = C()
> c.dostuff()
> except:
> print("Boom!")
>
> main()
>

So RAII depends on absorbing every exception close to where the
resource is being managed? You can't permit that exception to bubble
up lest the resource get leaked??

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Ooomzay
On Monday, 5 March 2018 13:59:35 UTC, Ooomzay  wrote:
> On Monday, 5 March 2018 11:24:37 UTC, Chris Angelico  wrote:
> > On Mon, Mar 5, 2018 at 10:09 PM, Ooomzay wrote:
> > > Here is an example of a composite resource using RAII:-
> > >
> > > class RAIIFileAccess():
> > > def __init__(self, fname):
> > > print("%s Opened" % fname)
> > > def __del__(self):
> > > print("%s Closed" % fname)
> > >
> > > class A():
> > > def __init__(self):
> > > self.res = RAIIFileAccess("a")
> > >
> > > class B():
> > > def __init__(self):
> > > self.res = RAIIFileAccess("b")
> > >
> > > class C():
> > > def __init__(self):
> > > self.a = A()
> > > self.b = B()
> > >
> > > def main():
> > > c = C()
> > >
> > > Under this PEP this is all that is needed to guarantee that the files "a"
> > > and "b" are closed on exit from main or after any exception has been 
> > > handled.
> > 
> > Okay. And all your PEP needs is for reference count semantics, right?
> > Okay. I'm going to run this in CPython, with reference semantics. You
> > guarantee that those files will be closed after an exception is
> > handled? Right.
> > 
> > >>> def main():
> > ... c = C()
> > ... c.do_stuff()
> > ...
> > >>> main()
> > a Opened
> > b Opened
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in <module>
> >   File "<stdin>", line 3, in main
> > AttributeError: 'C' object has no attribute 'do_stuff'
> > >>>
> >  
> > Uhh I'm not seeing any messages about the files getting closed.
> 
> Then that is indeed a challenge. From CPython back in 2.6 days up to 
> Python36-32 what I see is:-
> 
> a Opened
> b Opened
> Traceback (most recent call last):
> ...
> AttributeError: 'C' object has no attribute 'dostuff'
> a Closed
> b Closed
> 
> > Maybe exceptions aren't as easy to handle as you think? 
> 
> Well there is a general issue with exceptions owing to the ease
> with which one can create cycles that may catch out newbs. But
> that is not the case here.
> 
> > Or maybe you
> > just haven't tried any of this (which is obvious from the bug in your
> > code 
> 
> Or maybe I just made a typo when simplifying my test case and failed to 
> retest?
> 
> Here is my fixed case, if someone else could try it in CPython and report 
> back that would be interesting:-
> 
> class RAIIFileAccess():
> def __init__(self, fname):
> print("%s Opened" % fname)
> self.fname = fname
> 
> def __del__(self):
> print("%s Closed" % self.fname)
> 
> class A():
> def __init__(self):
> self.res = RAIIFileAccess("a")
> 
> class B():
> def __init__(self):
> self.res = RAIIFileAccess("b")
> 
> class C():
> def __init__(self):
> self.a = A()
> self.b = B()
> 
> def main():
> c = C()
> c.dostuff()
> 
> main()
> 
> > You keep insisting that this is an easy thing. We keep pointing out
> > that it isn't. Now you're proving that you haven't even attempted any
> > of this. 
> 
> Nonsense. But you have got a result I have never seen in many years 
> and I would like to get to the  bottom of it.

Ahah... I see now you are running it from a shell so the exception is staying 
in scope. We just need to include normal exception handling in the example to 
fix this:-


class RAIIFileAccess():
def __init__(self, fname):
print("%s Opened" % fname)
self.fname = fname
def __del__(self):
print("%s Closed" % self.fname)


class A():
def __init__(self):
self.res = RAIIFileAccess("A")


class B():
def __init__(self):
self.res = RAIIFileAccess("B")


class C():
def __init__(self):
self.a = A()
self.b = B()


def main():
try:
c = C()
c.dostuff()
except:
print("Boom!")

main()




-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Chris Angelico
On Tue, Mar 6, 2018 at 12:58 AM, Ooomzay  wrote:
> Then that is indeed a challenge. From CPython back in 2.6 days up to 
> Python36-32 what I see is:-
>
> a Opened
> b Opened
> Traceback (most recent call last):
> ...
> AttributeError: 'C' object has no attribute 'dostuff'
> a Closed
> b Closed
>
>> Maybe exceptions aren't as easy to handle as you think?
>
> Well there is a general issue with exceptions owing to the ease
> with which one can create cycles that may catch out newbs. But
> that is not the case here.
>
>> Or maybe you
>> just haven't tried any of this (which is obvious from the bug in your
>> code
>
> Or maybe I just made a typo when simplifying my test case and failed to 
> retest?
>
> Here is my fixed case, if someone else could try it in CPython and report 
> back that would be interesting:-
>
> class RAIIFileAccess():
> def __init__(self, fname):
> print("%s Opened" % fname)
> self.fname = fname
>
> def __del__(self):
> print("%s Closed" % self.fname)
>
> class A():
> def __init__(self):
> self.res = RAIIFileAccess("a")
>
> class B():
> def __init__(self):
> self.res = RAIIFileAccess("b")
>
> class C():
> def __init__(self):
> self.a = A()
> self.b = B()
>
> def main():
> c = C()
> c.dostuff()
>
> main()

Tried this in the interactive interpreter again.

>>> main()
a Opened
b Opened
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in main
AttributeError: 'C' object has no attribute 'dostuff'
>>>

Same problem! If you can't handle this situation, there's something
fundamentally wrong with your system.

Here's how I'd do it with context managers.

from contextlib import contextmanager

@contextmanager
def file_access(fname):
try:
print("%s Opened" % fname)
yield
finally:
print("%s Closed" % fname)

@contextmanager
def c():
try:
print("Starting c")
with file_access("a") as a, file_access("b") as b:
yield
finally:
print("Cleaning up c")

def main():
with c():
dostuff() # NameError

>>> main()
Starting c
a Opened
b Opened
b Closed
a Closed
Cleaning up c
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in main
NameError: name 'dostuff' is not defined
>>>

And if you want different semantics, you can lay out c() differently -
maybe have the try/finally not contain the 'with' but be
contained within it, or whatever else you like. Exceptions move
through the stack exactly the way you'd expect them to. Instead of
having an object to represent each resource, you simply have a
function with the code needed to allocate and deallocate it. They nest
perfectly.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Ooomzay
On Monday, 5 March 2018 11:24:37 UTC, Chris Angelico  wrote:
> On Mon, Mar 5, 2018 at 10:09 PM, Ooomzay wrote:
> > Here is an example of a composite resource using RAII:-
> >
> > class RAIIFileAccess():
> > def __init__(self, fname):
> > print("%s Opened" % fname)
> > def __del__(self):
> > print("%s Closed" % fname)
> >
> > class A():
> > def __init__(self):
> > self.res = RAIIFileAccess("a")
> >
> > class B():
> > def __init__(self):
> > self.res = RAIIFileAccess("b")
> >
> > class C():
> > def __init__(self):
> > self.a = A()
> > self.b = B()
> >
> > def main():
> > c = C()
> >
> > Under this PEP this is all that is needed to guarantee that the files "a"
> > and "b" are closed on exit from main or after any exception has been 
> > handled.
> 
> Okay. And all your PEP needs is for reference count semantics, right?
> Okay. I'm going to run this in CPython, with reference semantics. You
> guarantee that those files will be closed after an exception is
> handled? Right.
> 
> >>> def main():
> ... c = C()
> ... c.do_stuff()
> ...
> >>> main()
> a Opened
> b Opened
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "<stdin>", line 3, in main
> AttributeError: 'C' object has no attribute 'do_stuff'
> >>>
>  
> Uhh I'm not seeing any messages about the files getting closed.

Then that is indeed a challenge. From CPython back in 2.6 days up to 
Python36-32 what I see is:-

a Opened
b Opened
Traceback (most recent call last):
...
AttributeError: 'C' object has no attribute 'dostuff'
a Closed
b Closed

> Maybe exceptions aren't as easy to handle as you think? 

Well there is a general issue with exceptions owing to the ease
with which one can create cycles that may catch out newbs. But
that is not the case here.

> Or maybe you
> just haven't tried any of this (which is obvious from the bug in your
> code 

Or maybe I just made a typo when simplifying my test case and failed to retest?

Here is my fixed case, if someone else could try it in CPython and report back 
that would be interesting:-

class RAIIFileAccess():
def __init__(self, fname):
print("%s Opened" % fname)
self.fname = fname

def __del__(self):
print("%s Closed" % self.fname)

class A():
def __init__(self):
self.res = RAIIFileAccess("a")

class B():
def __init__(self):
self.res = RAIIFileAccess("b")

class C():
def __init__(self):
self.a = A()
self.b = B()

def main():
c = C()
c.dostuff()

main()

> You keep insisting that this is an easy thing. We keep pointing out
> that it isn't. Now you're proving that you haven't even attempted any
> of this. 

Nonsense. But you have got a result I have never seen in many years 
and I would like to get to the  bottom of it.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Chris Angelico
On Mon, Mar 5, 2018 at 10:09 PM, Ooomzay  wrote:
> Here is an example of a composite resource using RAII:-
>
> class RAIIFileAccess():
> def __init__(self, fname):
> print("%s Opened" % fname)
> def __del__(self):
> print("%s Closed" % fname)
>
> class A():
> def __init__(self):
> self.res = RAIIFileAccess("a")
>
> class B():
> def __init__(self):
> self.res = RAIIFileAccess("b")
>
> class C():
> def __init__(self):
> self.a = A()
> self.b = B()
>
> def main():
> c = C()
>
> Under this PEP this is all that is needed to guarantee that the files "a"
> and "b" are closed on exit from main or after any exception has been handled.

Okay. And all your PEP needs is for reference count semantics, right?
Okay. I'm going to run this in CPython, with reference semantics. You
guarantee that those files will be closed after an exception is
handled? Right.

>>> def main():
... c = C()
... c.do_stuff()
...
>>> main()
a Opened
b Opened
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in main
AttributeError: 'C' object has no attribute 'do_stuff'
>>>


Uhh I'm not seeing any messages about the files getting closed.
Maybe exceptions aren't as easy to handle as you think? Or maybe you
just haven't tried any of this (which is obvious from the bug in your
code - but even if I fix that, there's still a problem with exception
handling).

You keep insisting that this is an easy thing. We keep pointing out
that it isn't. Now you're proving that you haven't even attempted any
of this. Do you believe me now? (Probably not.)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Ooomzay
On Sunday, 4 March 2018 23:56:09 UTC, Chris Angelico  wrote:
> On Sun, Mar 4, 2018 at 10:37 PM, Ooomzay  wrote:
> > Please consider the case of a composite resource: You need to implement
> > __enter__, __exit__ and track the open/closed state at every level in
> > your component hierarchy - even if some levels hold no resources directly.
> >
> > This is burdensome, breaks encapsulation, breaks invariance and is error 
> > prone
> > ...very unpythonic.
> 
> Why do you need to? I don't understand your complaint here - can you
> give an example of a composite resource that needs this kind of
> special management?

Here is an example of a composite resource using RAII:- 

class RAIIFileAccess(): 
def __init__(self, fname): 
print("%s Opened" % fname) 
def __del__(self): 
print("%s Closed" % fname) 

class A(): 
def __init__(self): 
self.res = RAIIFileAccess("a") 

class B(): 
def __init__(self): 
self.res = RAIIFileAccess("b") 

class C(): 
def __init__(self): 
self.a = A() 
self.b = B() 
  
def main(): 
c = C() 

Under this PEP this is all that is needed to guarantee that the files "a" 
and "b" are closed on exit from main or after any exception has been handled. 

Also note that if you have a reference to these objects then they are 
guaranteed to be in a valid/useable/open state (invariant) - no danger 
or need to worry/check about enter/exit state. 

Now repeat this exercise with "with".
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-05 Thread Ooomzay
On Monday, March 5, 2018 at 6:38:49 AM UTC, Mark Lawrence wrote:
> On 05/03/18 01:01, Ooomzay wrote:
> > On Sunday, 4 March 2018 23:57:24 UTC, Mark Lawrence  wrote:
> >> On 04/03/18 02:28, Ooomzay wrote:
> >>> On Friday, 2 March 2018 15:37:25 UTC, Paul  Moore  wrote:
> >>> [snip]
>    def fn():
>        for i in range(10000):
>            with open(f"file{i}.txt", "w") as f:
>                f.write("Some text")
> 
>  How would you write this in your RAII style - without leaving 10,000
>  file descriptors open until the end of the function?
> >>>
> >>>   def fn():
> >>>       for i in range(10000):
> >>>           f = RAIIFile(f"file{i}.txt", "w")
> >>>           f.write("Some text")
> >>>
> > 
> >> Over my dead body.
> > 
> > Care to expand on that?
> > 
> 
> Sure, when you state what you intend doing about reference cycles, which 
> you've been asked about countless times.

Nothing. No change whatever. As I stated in my second post ref cycles are 
orthogonal to this PEP. 

If you want to use RAII objects then you will make sure you avoid adding them 
to orphan cycles by design. If you don't know how to do that then don't write 
applications that manage critical resources.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Mark Lawrence

On 05/03/18 01:01, Ooomzay wrote:

On Sunday, 4 March 2018 23:57:24 UTC, Mark Lawrence  wrote:

On 04/03/18 02:28, Ooomzay wrote:

On Friday, 2 March 2018 15:37:25 UTC, Paul  Moore  wrote:
[snip]

  def fn():
      for i in range(10000):
          with open(f"file{i}.txt", "w") as f:
              f.write("Some text")

How would you write this in your RAII style - without leaving 10,000
file descriptors open until the end of the function?
   
  def fn():
      for i in range(10000):
          f = RAIIFile(f"file{i}.txt", "w")
          f.write("Some text")




Over my dead body.


Care to expand on that?



Sure, when you state what you intend doing about reference cycles, which 
you've been asked about countless times.


--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ooomzay
On Monday, 5 March 2018 01:11:43 UTC, Richard Damon  wrote:
> On 3/4/18 6:55 PM, Ned Batchelder wrote:
> > On 3/4/18 5:25 PM, Ooomzay wrote:
> >> On Sunday, 4 March 2018 14:37:30 UTC, Ned Batchelder  wrote:
> >>> Are you including cyclic references in your assertion that CPython
> >>> behaves as you want?
> >> Yes. Because the only behaviour required for RAII is to detect and 
> >> debug such cycles in order to eliminate them. It is a design 
> >> error/resource leak to create an orphan cycle containing RAII objects.
> >>
> >> def main():
> >>     gc.disable()
> >>
> >>
> >>
> >
> > This isn't a reasonable position.  Cycles exist, and the gc exists for 
> > a reason.  Your proposal isn't going to go anywhere if you just 
> > naively ignore cycles.
> >
> > --Ned.
> 
> While Ooomzay seems to want to say that all cycles are bad, 

I only want to say that orphan cycles with RAII objects in them are bad. 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ben Finney
Richard Damon  writes:

> […] I can't see any good way to create the equivalent of a 'weak
> reference' (names aren't objects so don't have properties). The  best
> I can think of is to create some sort of magical object that can refer
> to another object, but that reference 'doesn't count'. This seems very
> unPythonic.

Do you mean something like the standard library ‘weakref’ module
?

The weakref module allows the Python programmer to create weak
references to objects.

[…]
A weak reference to an object is not enough to keep the object
alive: when the only remaining references to a referent are weak
references, garbage collection is free to destroy the referent and
reuse its memory for something else. However, until the object is
actually destroyed the weak reference may return the object even if
there are no strong references to it.
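
A small sketch of that, applied to the parent/child back-pointer case
mentioned earlier in the thread (the Node class is illustrative):

import weakref

class Node:
    def __init__(self, parent=None):
        self.children = []
        # strong references run one way (parent -> child); the back-pointer
        # is weak, so no reference cycle is created
        self._parent = weakref.ref(parent) if parent is not None else None

    @property
    def parent(self):
        return self._parent() if self._parent is not None else None

    def add_child(self):
        child = Node(parent=self)
        self.children.append(child)
        return child

root = Node()
child = root.add_child()
assert child.parent is root
del root               # no cycle, so reference counting alone reclaims root
print(child.parent)    # the weak reference now dereferences to None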

-- 
 \   “The cost of education is trivial compared to the cost of |
  `\ ignorance.” —Thomas Jefferson |
_o__)  |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ooomzay
On Saturday, 3 March 2018 17:10:53 UTC, Dietmar Schwertberger  wrote:
> CPython does *not* guarantee destruction when the object reference goes 
> out of scope, even if there are no other references.
> I would very much appreciate such a deterministic behaviour, at least 
> with CPython.
> 
> I recently had to debug an issue in the matplotlib wx backend (*). Under 
> certain conditions, the wx device context was not destroyed when the 
> reference went out of scope. Adding a del to the end of the method or 
> calling the Destroy method of the context did fix the issue. (There was 
> also a hidden reference, but avoiding this was not sufficient. The del 
> was still required.)

You say the reference was out of scope but that a del was still required. What 
were you delling if the reference was out of scope? Could you sketch the code?

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Richard Damon

On 3/4/18 6:55 PM, Ned Batchelder wrote:

On 3/4/18 5:25 PM, Ooomzay wrote:

On Sunday, 4 March 2018 14:37:30 UTC, Ned Batchelder  wrote:

Are you including cyclic references in your assertion that CPython
behaves as you want?
Yes. Because the only behaviour required for RAII is to detect and 
debug such cycles in order to eliminate them. It is a design 
error/resource leak to create an orphan cycle containing RAII objects.


def main():
    gc.disable()





This isn't a reasonable position.  Cycles exist, and the gc exists for 
a reason.  Your proposal isn't going to go anywhere if you just 
naively ignore cycles.


--Ned.


While Ooomzay seems to want to say that all cycles are bad, I think it 
is fair to say that in general in Python they are unavoidable, in part 
because I can't see any good way to create the equivalent of a 'weak 
reference' (names aren't objects so don't have properties). The  best I 
can think of is to create some sort of magical object that can refer to 
another object, but that reference 'doesn't count'. This seems very 
unPythonic.


What I think can be said is that it should be possible (enforced by the 
programmer, not the language) to use these RAII objects in ways that 
don't create cycles (or maybe that the program knows of the cycles and 
makes the effort to break them when it is important). So perhaps it can 
be said that cycles that involve major resource RAII objects should not exist.


--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ooomzay
On Sunday, 4 March 2018 23:55:33 UTC, Ned Batchelder  wrote:
> On 3/4/18 5:25 PM, Ooomzay wrote:
> > On Sunday, 4 March 2018 14:37:30 UTC, Ned Batchelder  wrote:
> >> Are you including cyclic references in your assertion that CPython
> >> behaves as you want?
> > Yes. Because the only behaviour required for RAII is to detect and debug 
> > such cycles in order to eliminate them. It is a design error/resource leak 
> > to create an orphan cycle containing RAII objects.
> >
> This isn't a reasonable position.  Cycles exist, and the gc exists for a 
> reason.  Your proposal isn't going to go anywhere if you just naively 
> ignore cycles.

I am not naively ignoring them. But _if_ you want to use RAII then do not leak 
them in cycles. Put anything else in there you like and gc them as before.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ooomzay
On Sunday, 4 March 2018 23:57:24 UTC, Mark Lawrence  wrote:
> On 04/03/18 02:28, Ooomzay wrote:
> > On Friday, 2 March 2018 15:37:25 UTC, Paul  Moore  wrote:
> > [snip]
> >>  def fn():
> >>      for i in range(10000):
> >>          with open(f"file{i}.txt", "w") as f:
> >>              f.write("Some text")
> >>
> >> How would you write this in your RAII style - without leaving 10,000
> >> file descriptors open until the end of the function?
> >   
> >  def fn():
> >      for i in range(10000):
> >          f = RAIIFile(f"file{i}.txt", "w")
> >          f.write("Some text")
> > 

> Over my dead body.  

Care to expand on that?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ooomzay
On Sunday, 4 March 2018 15:24:08 UTC, Steven D'Aprano  wrote:
> On Sun, 04 Mar 2018 03:37:38 -0800, Ooomzay wrote:
> 
> > Please consider the case of a composite resource: You need to implement
> > __enter__, __exit__ and track the open/closed state at every level in
> > your component hierarchy - even if some levels hold no resources
> > directly.
> > This is burdensome, breaks encapsulation, breaks invariance and is error
> > prone ...very unpythonic.
> 
> Without a more concrete example, I cannot comment on these claims.

Here is an example of a composite resource using RAII:-

class RAIIFileAccess():
def __init__(self, fname):
print("%s Opened" % fname)
def __del__(self):
print("%s Closed" % fname)

class A():
def __init__(self):
self.res = RAIIFileAccess("a") 

class B():
def __init__(self):
self.res = RAIIFileAccess("b")

class C():
def __init__(self):
self.a = A()
self.b = B()
 
def main():
c = C()

Under this PEP this is all that is needed to guarantee that the files "a" 
and "b" are closed on exit from main or after any exception has been handled.

Also note that if you have a reference to these objects then they are 
guaranteed to be in a valid/useable/open state (invariant) - no danger
or need to worry/check about enter/exit state.

Now repeat this example with "with".

> [...]
> > My PEP is about improving the linguistic integrity and fitness for
> > resource management purpose of the language.
> 
> So you claim, but *requiring* reference counting semantics does not 
> improve the integrity or fitness of the language. 

We will just have to disagree on that for now.

> And the required 
> changes to programming styles and practices (no cycles, 

No change required. But if you _choose_ to benefit from RAII you had better 
not create orphan cycles with RAII objects in them, as that 
is clearly a resource leak.

> no globals, 

No change required. But if you _choose_ to benefit from RAII you had better 
take care to delete any RAII resources you choose to hold at global scope in 
a robust way. (These are exceptional in my experience).

> put all resources inside their own scope) 

No change required. But if you _choose_ to benefit from RAII you can make use
of python's existing scopes (functions) or del to restrict resource lifetimes.
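
For example (a rough sketch only - resource lifetime bounded by a helper
function, or cut short with del, relying on CPython's reference counting):

def write_some_text(name):
    f = open(name, "w")    # closed when the function returns
    f.write("Some text")

def caller(name):
    write_some_text(name)  # lifetime bounded by the helper call
    f = open(name, "a")
    f.write("More text")
    del f                  # last reference dropped: closed here, not later
    # ... long-running work continues with both files already closed ...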

> >> In any case, you might not like with statements, but I think they're
> >> infinitely better than:
> >> 
> >> def meaningless_function_that_exists_only_to_manage_resource():
> >> x = open_resource()
> >> process(x)
> > 
> >> def function():
> >> meaningless_function_that_exists_only_to_manage_resource()
> >> sleep(1)  # simulate a long-running function
> > 
> > Why would you prefer a new construct?
> 
> I don't prefer a new construct. The "with" statement isn't "new". It goes 
> back to Python 2.5 (`from __future__ import with_statement`) which is 
> more than eleven years old now. That's about half the lifetime of the 
> language!
> 
> I prefer the with statement because it is *explicit*, simple to use, and 
> clear to read. I can read some code and instantly see that when the with 
> block ends, the resource will be closed, regardless of how many 
> references to the object still exist.
> 
> I don't have to try to predict (guess!) when the last reference will go 
> out of scope, because that's irrelevant.

If you don't care about what the other references might be then 
RAII is not for you. Fine.
 
> RAII conflates the lifetime of the object with the lifetime of the 
> resource held by the object. 

This "conflation" is called "invariance" and is usually considered a 
"very good thing" as you cant have references floating around to
half-baked resources.

> They are not the same, and the object can 
> outlive the resource.

Not with RAII it can't. Simple. Good. 

> Your position is:
> 
> "RAII makes it really elegant to close the file! All you need to do is 
> make sure that when you want to close the file, you delete all the 
> references to the file, so that it goes out of scope, and the file will 
> be closed."
> 
> My position is:
> 
> "If I want to close the file, I'll just close the file. Why should I care 
> that there are zero or one or a million references to it?"

Because if you have no idea what references there are you can not assume it 
is OK to close the file! That would be a truly terrible program design.

> >> - the with block is equivalent to a try...finally, and so it is
> >>   guaranteed to close the resource even if an exception occurs; your
> >>   solution isn't.

It is: RAII will release all resources held, transitively. Try the example
above using CPython and put an exception in one of the constructors. 
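
For instance (a quick sketch, reusing the classes from the example above and
using a RuntimeError to stand in for any failure during construction):

class B():
    def __init__(self):
        self.res = RAIIFileAccess("b")
        raise RuntimeError("simulated failure during construction")

def main():
    try:
        C()
    except RuntimeError:
        pass
    # On CPython both "a Closed" and "b Closed" have been printed by this
    # point: the partially built objects were released, transitively, as
    # soon as the exception was handled.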

> >> If process(x) creates a non-local reference to x, and then raises an
> >> exception, and that exception is caught elsewhere, x will not go out of
> >> scope and won't be closed.
> >> A regression in the reliability of the code.

Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Mark Lawrence

On 04/03/18 02:28, Ooomzay wrote:

On Friday, 2 March 2018 15:37:25 UTC, Paul  Moore  wrote:
[snip]

 def fn():
     for i in range(10000):
         with open(f"file{i}.txt", "w") as f:
             f.write("Some text")

How would you write this in your RAII style - without leaving 10,000
file descriptors open until the end of the function?
  
 def fn():
     for i in range(10000):
         f = RAIIFile(f"file{i}.txt", "w")
         f.write("Some text")



Over my dead body.  Not that it matters as I can't see this happening in 
a month of Sundays.


--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Chris Angelico
On Sun, Mar 4, 2018 at 10:37 PM, Ooomzay  wrote:
> Please consider the case of a composite resource: You need to implement
> __enter__, __exit__ and track the open/closed state at every level in
> your component hierarchy - even if some levels hold no resources directly.
>
> This is burdensome, breaks encapsulation, breaks invariance and is error prone
> ...very unpythonic.

Why do you need to? I don't understand your complaint here - can you
give an example of a composite resource that needs this kind of
special management?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Dietmar Schwertberger

On 3/4/2018 1:37 PM, Ooomzay wrote:
Not so:- CPython, the reference interpreter, already implements the 
required behaviour, as mentioned in the PEP. 


It does most of the time, but it's not guaranteed. See my previous post.

Regards,

Dietmar


--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ned Batchelder

On 3/4/18 5:25 PM, Ooomzay wrote:

On Sunday, 4 March 2018 14:37:30 UTC, Ned Batchelder  wrote:

Are you including cyclic references in your assertion that CPython
behaves as you want?

Yes. Because the only behaviour required for RAII is to detect and debug such 
cycles in order to eliminate them. It is a design error/resource leak to create 
an orphan cycle containing RAII objects.

def main():
    gc.disable()





This isn't a reasonable position.  Cycles exist, and the gc exists for a 
reason.  Your proposal isn't going to go anywhere if you just naively 
ignore cycles.


--Ned.
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Chris Angelico
On Mon, Mar 5, 2018 at 10:42 AM, Steven D'Aprano
 wrote:
> On Mon, 05 Mar 2018 09:20:24 +1100, Chris Angelico wrote:
>
>> Okay, that sounds reasonable. Let's help things out by creating a
>> special syntax for reference-counted object usage
> [...]
>> Feel free to bikeshed the exact keywords and function names, of course.
>
>
> I see what you did there.
>
> :-)
>

Yeah I'm not sure if the OP will see what I did, though...

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Steven D'Aprano
On Mon, 05 Mar 2018 09:20:24 +1100, Chris Angelico wrote:

> Okay, that sounds reasonable. Let's help things out by creating a
> special syntax for reference-counted object usage
[...]
> Feel free to bikeshed the exact keywords and function names, of course.


I see what you did there.

:-)


-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ooomzay
On Sunday, 4 March 2018 14:37:30 UTC, Ned Batchelder  wrote:
> Are you including cyclic references in your assertion that CPython 
> behaves as you want?

Yes. Because the only behaviour required for RAII is to detect and debug such 
cycles in order to eliminate them. It is a design error/resource leak to create 
an orphan cycle containing RAII objects.

def main():
    gc.disable()
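
One rough way (an illustrative sketch, not part of the proposal) to surface
any such orphan cycles during development using the stock gc module:

import gc

gc.set_debug(gc.DEBUG_SAVEALL)    # keep whatever the collector would free

def assert_no_orphan_cycles():
    gc.collect()
    if gc.garbage:
        for obj in gc.garbage:
            print("kept alive only by a cycle:", repr(obj))
        raise AssertionError("orphan reference cycles detected")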



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Chris Angelico
On Mon, Mar 5, 2018 at 9:09 AM, Richard Damon  wrote:
> My presumption of the proposal is that it does NOT expect that __del__ be
> called just because an object is no longer reachable but is in a cycle of
> isolated objects, those would still need to wait for the garbage collector
> to get them. The request is that when the direct reference count goes to 0,
> that __del__ gets called.
>
> I think that is what CPython promises (but not other versions).
>
> I am not positive if __del__ gets called for sure when the object is garbage
> collected (I thought it did, but I am not positive).
>
> I am pretty sure it does NOT get called on objects that are still in
> existence when things terminate (which would be the major difference from a
> language like C++)
>
> What the language does not promise is that in general, __del__ be called
> 'immediately' on the last reference going away in the general case, because
> CPythons reference counting is an implementation detail.
>
> My understanding of this proposal is to ask that, I would say at least for
> selected objects, that all implementations perform this reference counting,
> allowing objects that track 'critical resources' to get disposed of in a
> timely manner and not wait for garbage collection. And that this is
> triggered only by the reference count going to zero, and not if the object
> happens to be in an isolated reference cycle. This does limit what you can
> do with this sort of object, but that normally isn't a problem.

Okay, that sounds reasonable. Let's help things out by creating a
special syntax for reference-counted object usage, to allow other
implementations to use different forms of garbage collection. When you
acquire these kinds of objects, you "mark" them with this special
syntax, and Python will call a special method on that object to say
"hey, you're now in use". Then when that special syntax is done,
Python calls another special method to say "hey, you're not in use any
more". Something like this:

using some_object:
...
...
...
# at the unindent, we're automatically not using it

That would call a special method some_object.__now_using__() at the
top of the block, and some_object.__not_using__() at the bottom. Or,
if the block exits because of an exception, we could call
some_object.__not_using__(exc) with the exception details. I think this
could be a really good feature - it'd allow non-refcounted Python
implementations to have a strong guarantee of immediate disposal of a
managed resource, and it'd also strengthen the guarantee for
refcounted Pythons too.

Feel free to bikeshed the exact keywords and function names, of course.
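
For what it's worth, a rough sketch of how that protocol could be layered on
today's machinery (method names as above, purely illustrative):

from contextlib import contextmanager

@contextmanager
def using(obj):
    obj.__now_using__()
    try:
        yield obj
    except BaseException as exc:
        obj.__not_using__(exc)
        raise
    else:
        obj.__not_using__()

# with using(some_object) as obj:
#     ...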

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Richard Damon

On 3/4/18 8:46 AM, Chris Angelico wrote:

On Mon, Mar 5, 2018 at 12:26 AM, Ooomzay  wrote:

Well refcounts are definitely "doable": This is how the reference python 
implementation, CPython, currently manages to comply with this PEP and can
therefore be used for RAII.

This PEP is an attempt to _guarantee_ this behaviour and make the elegance
of RAII available to all pythonistas that want it. Without this guarantee
python is not attractive to applications that must manage non-trivial
resources reliably.

Aside: I once read somewhere that must have seemed authoritative at the
time, that CPython _guarantees_ to continue to behave like this - but now the 
subject is topical again I can find no trace of this guarantee.

That's because there is no such guarantee. In fact, you can probably
find places in the docs where it is specifically stated that you
cannot depend on __del__ for this.

You still haven't said how you're going to cope with reference cycles
- or are you expecting every single decref to do a full cycle check?

ChrisA


My presumption of the proposal is that it does NOT expect that __del__ be 
called just because an object is no longer reachable but is in a cycle 
of isolated objects, those would still need to wait for the garbage 
collector to get them. The request is that when the direct reference 
count goes to 0, that __del__ gets called.


I think that is what CPython promises (but not other versions).

I am not positive if __del__ gets called for sure when the object is 
garbage collected (I thought it did, but I am not positive).


I am pretty sure it does NOT get called on objects that are still in 
existence when things terminate (which would be the major difference 
from a language like C++)


What the language does not promise is that in general, __del__ be called 
'immediately' on the last reference going away in the general case, 
because CPythons reference counting is an implementation detail.


My understanding of this proposal is to ask that, I would say at least 
for selected objects, that all implementations perform this reference 
counting, allowing objects that track 'critical resources' to get 
disposed of in a timely manner and not wait for garbage collection. And 
that this is triggered only by the reference count going to zero, and 
not if the object happens to be in an isolated reference cycle. This 
does limit what you can do with this sort of object, but that normally 
isn't a problem.
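
A small sketch (CPython-specific, for illustration) of that distinction:

import gc

class Res:
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print(self.name, "finalized")

a = Res("plain")
del a              # refcount reaches zero: __del__ runs immediately

b = Res("cyclic")
b.self_ref = b     # put the object into a reference cycle
del b              # refcount never reaches zero: nothing happens yet
gc.collect()       # the cycle collector finds it and __del__ runs only here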


--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread bartc

On 04/03/2018 14:11, Ooomzay wrote:


Well I see a lot of posts that indicate peeps here are more comfortable with
the "with" idiom than the RAII idiom but I have not yet seen a single
linguistic problem or breakage.

As it happens I have used RAII extensively with CPython to manage a debugging 
environment with complex external resources that need managing very efficiently.


I have a bit of a problem with features normally used for the housekeeping 
of a language's data structures being roped in to control external 
resources that it knows nothing about.


Which also means that if X is a variable with the sole reference to some 
external resource, then a mere:


   X = 0

will close, destroy, or discard that resource. If the resource is 
non-trivial, then it perhaps deserves a non-trivial piece of code to 
deal with it when it's no longer needed.


It further means that when you did want to discard an expensive 
resource, then X going out of scope, calling del X or whatever, will not 
work if a copy of X still exists somewhere.
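
A tiny sketch of that last point, as it plays out on CPython:

class Res:
    def __del__(self):
        print("resource released")

X = Res()
Y = X       # a copy of the reference exists somewhere else
X = 0       # rebinding X does NOT release the resource; Y still holds it
del Y       # only now does the last reference go away and __del__ run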


--
bartc
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ned Batchelder

On 3/4/18 9:11 AM, Ooomzay wrote:

I am well aware of what it will mean for interpreters. For some interpreters it 
will have zero impact (e.g. CPython) ...


There's no point continuing this if you are just going to insist on 
falsehoods like this.  CPython doesn't currently do what you want, but 
you are choosing to ignore the cases where it doesn't.  You have to deal 
with cycles.


--Ned.
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Steven D'Aprano
On Sun, 04 Mar 2018 09:37:34 -0500, Ned Batchelder wrote:

> On 3/4/18 7:37 AM, Ooomzay wrote:
>> On Sunday, 4 March 2018 04:23:07 UTC, Steven D'Aprano  wrote:
>>> [...]
>>> [This PEP] imposes enormous burdens on the maintainers of at least
>>> five interpreters (CPython, Stackless, Jython, IronPython, PyPy) all
>>> of which will need to be re-written to have RAII semantics guaranteed;
>> Not so:-  CPython, the reference interpreter, already implements the
>> required behaviour, as mentioned in the PEP.
> 
> Except for cycles.

And during interpreter shutdown.

-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Steven D'Aprano
On Sun, 04 Mar 2018 03:37:38 -0800, Ooomzay wrote:

> Please consider the case of a composite resource: You need to implement
> __enter__, __exit__ and track the open/closed state at every level in
> your component hierarchy - even if some levels hold no resources
> directly.

I'm sorry, that description is too abstract for me to understand. Can you 
give a simple example?


> This is burdensome, breaks encapsulation, breaks invariance and is error
> prone ...very unpythonic.

Without a more concrete example, I cannot comment on these claims.


[...]
> My PEP is about improving the linguistic integrity and fitness for
> resource management purpose of the language.

So you claim, but *requiring* reference counting semantics does not 
improve the integrity or fitness of the language. And the required 
changes to programming styles and practices (no cycles, no globals, put 
all resources inside their own scope) demonstrate that this is a step 
backwards.

Right now I can write reliable code that uses external resources (such as 
a database connection or file) and put it in my application's module 
scope, and still easily manage the resource lifetime. I cannot do that by 
relying only on RAII. That's a step backwards as far as language fitness.


>> In any case, you might not like with statements, but I think they're
>> infinitely better than:
>> 
>> def meaningless_function_that_exists_only_to_manage_resource():
>> x = open_resource()
>> process(x)
> 
>> def function():
>> meaningless_function_that_exists_only_to_manage_resource()
>> sleep(1)  # simulate a long-running function
> 
> Why would you prefer a new construct?

I don't prefer a new construct. The "with" statement isn't "new". It goes 
back to Python 2.5 (`from __future__ import with_statement`) which is 
more than eleven years old now. That's about half the lifetime of the 
language!

I prefer the with statement because it is *explicit*, simple to use, and 
clear to read. I can read some code and instantly see that when the with 
block ends, the resource will be closed, regardless of how many 
references to the object still exist.

I don't have to try to predict (guess!) when the last reference will go 
out of scope, because that's irrelevant.

RAII conflates the lifetime of the object with the lifetime of the 
resource held by the object. They are not the same, and the object can 
outlive the resource.

Your position is:

"RAII makes it really elegant to close the file! All you need to do is 
make sure that when you want to close the file, you delete all the 
references to the file, so that it goes out of scope, and the file will 
be closed."

My position is:

"If I want to close the file, I'll just close the file. Why should I care 
that there are zero or one or a million references to it?"


>> - the with block is equivalent to a try...finally, and so it is
>>   guaranteed to close the resource even if an exception occurs; your
>>   solution isn't.
> 
>> If process(x) creates a non-local reference to x, and then raises an
>> exception, and that exception is caught elsewhere, x will not go out of
>> scope and won't be closed.
>> A regression in the reliability of the code.
> 
> This PEP does not affect existing code. Peeps who are familiar with RAII
> understand that creating a global reference to an RAII resource is
> explicitly saying "I want this kept open at global scope" and that is
> the behaviour that they will be guaranteed.

I'm not talking about global scope. Any persistent reference to the 
object will prevent the resource from being closed. 

Here's a proof of concept which demonstrates the problem with conflating 
object scope and resource lifetime. Notice that there are no globals used.


def main():
    from time import sleep
    values = []
    def process():
        f = open('/tmp/foo', 'w')
        values.append(f)
        f.write("Hello world!")
        f.read()  # oops!
        del values[-1]
    try:
        process()
    except IOError:
        pass
    # The file should be closed now. But it isn't.
    sleep(10)  # simulate doing a lot of work
    g = open('/tmp/foo', 'r')
    assert g.read() == "Hello world!"


The assertion fails, because the file hasn't be closed in a timely 
manner. On the other hand:

def main2():
    from time import sleep
    values = []
    def process():
        with open('/tmp/bar', 'w') as f:
            values.append(f)
            f.write("Hello world!")
            f.read()  # oops!
            del values[-1]
    try:
        process()
    except IOError:
        pass
    sleep(10)  # simulate doing a lot of work
    g = open('/tmp/bar', 'r')
    assert g.read() == "Hello world!"


The assertion here passes.

Now, these are fairly contrived examples, but in real code the resource 
owner might be passed into an iterator, or bound to a class attribute, or 
anything else that holds onto a reference to it. As soon as that happens, 
and there's another reference to the object, RAII will not close the resource 
when you expect it to.

Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ned Batchelder

On 3/4/18 7:37 AM, Ooomzay wrote:

On Sunday, 4 March 2018 04:23:07 UTC, Steven D'Aprano  wrote:

[...]
[This PEP] imposes enormous burdens on the maintainers of at least five
interpreters (CPython, Stackless, Jython, IronPython, PyPy) all of which
will need to be re-written to have RAII semantics guaranteed;

Not so:-  CPython, the reference interpreter, already implements the required 
behaviour, as mentioned in the PEP.


Except for cycles.

--Ned.
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ned Batchelder

On 3/4/18 8:26 AM, Ooomzay wrote:

On Sunday, 4 March 2018 03:16:31 UTC, Paul Rubin  wrote:

Chris Angelico  writes:

Yep, cool. Now do that with all of your smart pointers being on the
heap too. You are not allowed to use ANY stack objects.  ANY. Got it?

That's both overconstraining and not even that big a problem the way you
phrase it.

1) Python has both the "with" statement and try/finally.  Both of these
run code at the exit from a syntactically defined scope.  So they are
like stack allocation in C++, where a deallocator can run when the scope
exits.

2) Even with no scope-based de-allocation, it's common to put smart
pointers into containers like lists and vectors.  So you could have a
unique_ptr to a filestream object, and stash the unique_ptr someplace as
a vector element, where the vector itself could be part of some even
more deeply nested structure.  At some point, the big structure gets
deleted (maybe through a manually-executed delete statement).  When that
happens, if the nested structures are all standard containers full of
unique_ptrs, the top-level finalizer will end up traversing the entire
tree and freeing up the file handles and whatever else might be in
there.

It occurs to me, maybe #2 above is closer to what the OP is really after
in Python.

Yep. C++ smart pointers are a good analogue to python references for
purposes of this PEP.


I guess it's doable, but refcounts don't seem like the right
way.

Well refcounts are definitely "doable": This is how the reference python 
implementation, CPython, currently manages to comply with this PEP and can
therefore be used for RAII.




Are you including cyclic references in your assertion that CPython 
behaves as you want?


--Ned.
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Chris Angelico
On Mon, Mar 5, 2018 at 1:11 AM, Ooomzay  wrote:
> On Sunday, 4 March 2018 03:00:13 UTC, Chris Angelico  wrote:
>> This thread is dead. The OP wants to wave a magic wand and say
>> "__del__ is now guaranteed to be called immediately",
>
> No "magic" required: Just one line change in the language reference will do 
> it.

Go ahead and actually implement it. It's not just one line in the
language reference.

>> without any explanation
>
> The PEP says it all really: To make the very pythonic RAII idiom available in 
> python.
>
>> - and, from the look of things, without any understanding
>> - of what that means for the language
>
> What impact on the _language_ (c.f. interpreter) do you think I have not 
> understood?
>
> It is 100% backwards compatible with the language. It breaks nothing.
>
> It allows people who want to use the RAII idiom to do so.
>
> It allows people who want to use the "with" idiom to continue do so.
>
>> and the interpreters.
>
> I am well aware of what it will mean for interpreters. For some interpreters 
> it will have zero impact (e.g. CPython) and for some others it is unlikely 
> to be economic to make them comply.
>

I don't even understand you now. First off, you just acknowledged that
this WILL impact CPython - you're not simply mandating what CPython
already does, you're tightening up the rules significantly. For others
- what do you even mean, "unlikely be economic to make them comply"?
Are you saying that the Jython project should simply die? Or that it
should become non-compliant with the Python spec? Huh?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ooomzay
On Sunday, 4 March 2018 03:00:13 UTC, Chris Angelico  wrote:
> This thread is dead. The OP wants to wave a magic wand and say
> "__del__ is now guaranteed to be called immediately", 

No "magic" required: Just one line change in the language reference will do it.

> without any explanation

The PEP says it all really: To make the very pythonic RAII idiom available in 
python.

> - and, from the look of things, without any understanding
> - of what that means for the language 

What impact on the _language_ (c.f. interpreter) do you think I have not 
understood?

It is 100% backwards compatible with the language. It breaks nothing.

It allows people who want to use the RAII idiom to do so.

It allows people who want to use the "with" idiom to continue do so.

> and the interpreters. 

I am well aware of what it will mean for interpreters. For some interpreters it 
will have zero impact (e.g. CPython) and for some others it is unlikely to be 
economic to make them comply.

The decision here is: does python want to be more pythonic and make itself 
attractive for resource management applications, or does it want to be 
compromised by some implementations?

> Everyone else is saying "your magic wand is broken". This is not going to go
> anywhere.

Well I see a lot of posts that indicate peeps here are more comfortable with 
the "with" idiom than the RAII idiom but I have not yet seen a single 
linguistic problem or breakage.

As it happens I have used RAII extensively with CPython to manage a debugging 
environment with complex external resources that need managing very 
efficiently. 

(I would use C++ if starting from scratch because it _guarantees_ the required 
deterministic destruction whereas python does not)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Steven D'Aprano
On Sun, 04 Mar 2018 04:37:46 -0800, Ooomzay wrote:

> On Sunday, 4 March 2018 04:23:07 UTC, Steven D'Aprano  wrote:
>> [...]
>> [This PEP] imposes enormous burdens on the maintainers of at least five
>> interpreters (CPython, Stackless, Jython, IronPython, PyPy) all of
>> which will need to be re-written to have RAII semantics guaranteed;
> 
> Not so:-  CPython, the reference interpreter, already implements the
> required behaviour, as mentioned in the PEP.

Except that it doesn't. From the docs:

"It is not guaranteed that __del__() methods are called for objects that 
still exist when the interpreter exits."

https://docs.python.org/3/reference/datamodel.html#object.__del__


That limitation of CPython is much reduced now compared to older 
versions, but there are still circumstances where the __del__ method 
cannot be called.


-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Chris Angelico
On Mon, Mar 5, 2018 at 12:26 AM, Ooomzay  wrote:
> Well refcounts are definitely "doable": This is how the reference python 
> implementation, CPython, currently manages to comply with this PEP and can
> therefore be used for RAII.
>
> This PEP is an attempt to _guarantee_ this behaviour and make the elegance
> of RAII available to all pythonistas that want it. Without this guarantee
> python is not attractive to applications that must manage non-trivial
> resources reliably.
>
> Aside: I once read somewhere that must have seemed authoritative at the
> time, that CPython _guarantees_ to continue to behave like this - but now the 
> subject is topical again I can find no trace of this guarantee.

That's because there is no such guarantee. In fact, you can probably
find places in the docs where it is specifically stated that you
cannot depend on __del__ for this.

You still haven't said how you're going to cope with reference cycles
- or are you expecting every single decref to do a full cycle check?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ooomzay
On Sunday, 4 March 2018 03:16:31 UTC, Paul Rubin  wrote:
> Chris Angelico  writes:
> > Yep, cool. Now do that with all of your smart pointers being on the
> > heap too. You are not allowed to use ANY stack objects.  ANY. Got it?
> 
> That's both overconstraining and not even that big a problem the way you
> phrase it.
> 
> 1) Python has both the "with" statement and try/finally.  Both of these
> run code at the exit from a syntactically defined scope.  So they are
> like stack allocation in C++, where a deallocator can run when the scope
> exits.
> 
> 2) Even with no scope-based de-allocation, it's common to put smart
> pointers into containers like lists and vectors.  So you could have a
> unique_ptr to a filestream object, and stash the unique_ptr someplace as
> a vector element, where the vector itself could be part of some even
> more deeply nested structure.  At some point, the big structure gets
> deleted (maybe through a manually-executed delete statement).  When that
> happens, if the nested structures are all standard containers full of
> unique_ptrs, the top-level finalizer will end up traversing the entire
> tree and freeing up the file handles and whatever else might be in
> there.
> 
> It occurs to me, maybe #2 above is closer to what the OP is really after
> in Python.  

Yep. C++ smart pointers are a good analogue to python references for 
purposes of this PEP.

> I guess it's doable, but refcounts don't seem like the right
> way. 

Well refcounts are definitely "doable": This is how the reference python 
implementation, CPython, currently manages to comply with this PEP and can 
therefore be used for RAII.

This PEP is an attempt to _guarantee_ this behaviour and make the elegance 
of RAII available to all pythonistas that want it. Without this guarantee 
python is not attractive to applications that must manage non-trivial 
resources reliably.

Aside: I once read somewhere that must have seemed authoritative at the
time, that CPython _guarantees_ to continue to behave like this - but now the 
subject is topical again I can find no trace of this guarantee.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ooomzay
On Sunday, 4 March 2018 04:23:07 UTC, Steven D'Aprano  wrote:
> [...]
> [This PEP] imposes enormous burdens on the maintainers of at least five 
> interpreters (CPython, Stackless, Jython, IronPython, PyPy) all of which 
> will need to be re-written to have RAII semantics guaranteed; 

Not so:-  CPython, the reference interpreter, already implements the required 
behaviour, as mentioned in the PEP.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-04 Thread Ooomzay
On Sunday, 4 March 2018 04:23:07 UTC, Steven D'Aprano  wrote:
> On Sat, 03 Mar 2018 18:19:37 -0800, Ooomzay wrote:
> 
> >> def function():
> >> x = open_resource()
> >> process(x)
> >> # and we're done with x now, but too lazy to explicitly close it
> >> sleep(1) # Simulate some more work. Lots of work. 
> >> return
> >> # and finally x is closed (2.8 hours after you finished using it)
> >> 
> >> The answer in C++ is "well don't do that then". The answer is Python
> >> is, "don't be so lazy, just use a with statement".
> > 
> > The answer in C++ would be to say "don't be so lazy just put the x
> > manipulation in a function or sub-block". 
> 
> Right -- so you still have to think about resource management. The 
> premise of this proposal is that RAII means you don't have to think about 
> resource management, it just happens, but that's not really the case.

That is not my main premise. My main premise is that RAII is a more elegant (no 
specialised syntax at all) and robust way to manage resources.

Please consider the case of a composite resource: You need to implement
__enter__, __exit__ and track the open/closed state at every level in
your component hierarchy - even if some levels hold no resources directly.

This is burdensome, breaks encapsulation, breaks invariance and is error prone
...very unpythonic.

> > The answer with Python + this
> > PEP would be "don't be so lazy just put the x manipulation in a function
> > or explicitly del x" ...no new syntax.
> 
> Sure -- but it doesn't gain you anything we don't already have.

See above.
 
> It imposes enormous burdens on the maintainers of at least five 
> interpreters (CPython, Stackless, Jython, IronPython, PyPy) all of which 
> will need to be re-written to have RAII semantics guaranteed; 

Yes. This is a substantial issue that will almost certainly see it rejected by 
HWCNBN on political, rather than linguistic, grounds.

My PEP is about improving the linguistic integrity and fitness for resource 
management purpose of the language.

> it will 
> probably have significant performance costs; and at the end of the day, 
> the benefit over using a with statement is minor.
> 
> (Actually, I'm not convinced that there is *any* benefit. If anything, I 
> think it will be a reliability regression -- see below.)
> 
> >> If you want deterministic closing of resources, with statements are the
> >> way to do it.
> >> 
> >> def function():
> >> with open_resource() as x:
> >> process(x)
> >> # and x is guaranteed to be closed
> > 
> > What a palava!
> 
> I don't know that word, and neither do any of my dictionaries. I think 
> the word you might mean is "pelaver"?

I don't know that word, and neither do any of my dictionaries. I think 
the word you might mean is "Palaver"? 

Anyway Palava/Pelaver/Palaver/Palavra/Palabra derives from the word 
for "word", but in England it is often used idiomatically to mean a 
surfeit of words, or even more generally, a surfeit of effort.

I intend it in both senses: The unnecessary addition of 
the words "with", "as", "__enter__" & "__exit__" to the language 
and the need implement the latter two methods all 
over the place.

> In any case, you might not like with statements, but I think they're 
> infinitely better than:
> 
> def meaningless_function_that_exists_only_to_manage_resource():
> x = open_resource()
> process(x)

> def function():
> meaningless_function_that_exists_only_to_manage_resource()
> sleep(1)  # simulate a long-running function

Why would you prefer a new construct? Functions _are_ python's scoping context! 
Giving one a pejorative name does not change that. 
 
> In other words, your solutions are just as much manual resource 
> management as the with statement. The only differences are:
> 
> - instead of explicitly using a dedicated syntax designed for 
>   resource management, you're implicitly using scope behaviour;

Excellent: With the benefit of automatic, exception safe, destruction of 
resources, including composite resources.
 
> - the with block is equivalent to a try...finally, and so it is
>   guaranteed to close the resource even if an exception occurs;
>   your solution isn't.

> If process(x) creates a non-local reference to x, and then raises an 
> exception, and that exception is caught elsewhere, x will not go out of 
> scope and won't be closed. 
> A regression in the reliability of the code.

This PEP does not affect existing code. Peeps who are familiar with RAII
understand that creating a global reference to an RAII resource is 
explicitly saying "I want this kept open at global scope" and that is the
behaviour that they will be guaranteed.

> (I'm moderately confident that this failure of RAII to deliver what it 
> promises is possible in C++ too.)

(Well I bet the entire VOIP stack, AAA and accounting modules in a certain 
9 satellite network's base-stations against you)


-- 

Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Steven D'Aprano
On Sat, 03 Mar 2018 18:19:37 -0800, Ooomzay wrote:

>> def function():
>> x = open_resource()
>> process(x)
>> # and we're done with x now, but too lazy to explicitly close it
>> sleep(1) # Simulate some more work. Lots of work. 
>> return
>> # and finally x is closed (2.8 hours after you finished using it)
>> 
>> The answer in C++ is "well don't do that then". The answer is Python
>> is, "don't be so lazy, just use a with statement".
> 
> The answer in C++ would be to say "don't be so lazy just put the x
> manipulation in a function or sub-block". 

Right -- so you still have to think about resource management. The 
premise of this proposal is that RAII means you don't have to think about 
resource management, it just happens, but that's not really the case.


> The answer with Python + this
> PEP would be "don't be so lazy just put the x manipulation in a function
> or explicitly del x" ...no new syntax.

Sure -- but it doesn't gain you anything we don't already have.

It imposes enormous burdens on the maintainers of at least five 
interpreters (CPython, Stackless, Jython, IronPython, PyPy) all of which 
will need to be re-written to have RAII semantics guaranteed; it will 
probably have significant performance costs; and at the end of the day, 
the benefit over using a with statement is minor.

(Actually, I'm not convinced that there is *any* benefit. If anything, I 
think it will be a reliability regression -- see below.)


>> If you want deterministic closing of resources, with statements are the
>> way to do it.
>> 
>> def function():
>> with open_resource() as x:
>> process(x)
>> # and x is guaranteed to be closed
> 
> What a palava!

I don't know that word, and neither do any of my dictionaries. I think 
the word you might mean is "pelaver"?

Palaver \Pa*la"ver\, n. [Sp. palabra, or Pg. palavra, fr. L.
   parabola a comparison, a parable, LL., a word. See
   Parable.]
   [1913 Webster]
   1. Talk; conversation; esp., idle or beguiling talk; talk
  intended to deceive; flattery.



In any case, you might not like with statements, but I think they're 
infinitely better than:

def meaningless_function_that_exists_only_to_manage_resource():
x = open_resource()
process(x)

def function():
meaningless_function_that_exists_only_to_manage_resource()
sleep(1)  # simulate a long-running function


Or worse:

def function():
if True:
# Do this just to introduce a new scope, as in C++
# (but not Python today)
x = open_resource()
process(x)
sleep(1)  # simulate a long-running function


In other words, your solutions are just as much manual resource 
management as the with statement. The only differences are:

- instead of explicitly using a dedicated syntax designed for 
  resource management, you're implicitly using scope behaviour;

- the with block is equivalent to a try...finally, and so it is
  guaranteed to close the resource even if an exception occurs;
  your solution isn't.

If process(x) creates a non-local reference to x, and then raises an 
exception, and that exception is caught elsewhere, x will not go out of 
scope and won't be closed. A regression in the reliability of the code.

(I'm moderately confident that this failure of RAII to deliver what it 
promises is possible in C++ too.)



-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Steven D'Aprano
On Sat, 03 Mar 2018 08:02:35 -0800, ooomzay wrote:

[...]
>> > But I am not! On the contrary RAII frees the programmer from even
>> > having to remember to close the file. The poster asked what would
>> > happen if the resource was deliberately kept open by storing a
>> > reference at global scope.
>> 
>> You say that as if it were difficult to do, requiring the programmer to
>> take extraordinary steps of heroic proportion. It doesn't. It is
>> unbelievably easy to store the reference at global scope, which means
>> that the programmer needs to remember to close the file explicitly and
>> RAII doesn't help.
> 
> Can you expand on what you mean by "unbelievably easy to store the
> reference at global scope". I will assume you are claiming that it is
> easy to _inadvertently_ store a global reference.

No.

Copy and paste this line:

--- %< -

x = 1

--- %< -

into a text file, and save it as "demo_global.py". Congratulations, 
you've just created a reference in the global scope!

I told you it was easy.

Do you put your imports at the top of the module, as recommended? Every 
one of those imported modules are references in the global scope.

Do you define classes and functions at the top level of your module? 
Those too are references in the global scope.

I'll accept that, *typically*, imported modules, functions and classes 
are unlikely to involve the sorts of resources that we care about closing 
in a timely fashion. But other kinds of top-level instances may.

The point is, you want to *mandate* RAII as a *language feature* rather 
than an implementation detail which Python interpreters are free to 
choose or not choose on their own. This (probably?) will put a VERY high 
burden on Jython, IronPython and Stackless, as well as anyone else who 
tries making a Python interpreter. And for what benefit? Let's see:

- RAII doesn't make reference cycles go away, so we still need a 
  garbage collector that can break cycles;

- RAII rarely helps for objects in the module global namespace, and
  many Python objects are in the global namespace;

- to make RAII work for heap objects, every implementation needs to
  implement their own "smart pointers" or similar, otherwise the entire
  idea is a bust; this adds significant complexity to interpreters that
  aren't written in C++ (i.e. all of them apart from Nuitka);

- to make the RAII guarantees of timely resource closure work, you have
  to impose significant mental burdens on the programmer: 

  * no cycles (avoid them, or use weak references);
  * no globals; 
  * if you must use a global, then you are responsible for 
manually deleting it in order to allow RAII to operate;

- to make the RAII guarantees work *well*, you need to introduce new
  scoping rules (new to Python, not C++ ) where each indented block 
  (loops, if...else blocks etc) are new scopes; otherwise the lifetime
  of a function scope could easily be too long.



> This problem with destruction of globals on exit is not unique to
> python. There are patterns to manage this for code you control, such as
> putting them all in one global "bucket" object and explicitly deleting
> it before exit.

The problem we have is, we want the resource to be closed in a timely 
manner once we're done with it. And your solution is to put it in a "God 
Object" that collects a whole lot of unrelated resources, then wait until 
you are done with the last of the them, before closing any of them?

That's not a pattern managing the problem, that's a work-around that 
fails to come even close to fixing the problem.


[...]
>> But as you said yourself, if the resource is held open in a global
>> reference, it will stay open indefinitely. And remember, global in this
>> context doesn't just mean the main module of your application, but
>> *every* module you import.
> 
> Can you give an example of such a case where an application-managed
> (acquired/opened & released/closed) resource is referenced strongly by a
> 3rd party module at global scope? ...in my experience this pattern
> usually smells.

Regardless of whether it smells or not, it is allowed.


>> I think you have just put your finger on the difference between what
>> RAII *claims* to do and what it *actually* can do.
> 
> I can assure you that RAII does what it says on the tin and is relied on
> in many critical systems to release resources robustly ... given the
> pre-requisite deterministic destruction.

Given the execution environment and design constraints of C++, I'm sure 
RAII works fine in C++ (and any other language which may copy it).

But it is not a panacea that makes resource management go away. You 
yourself have stated at least one limitation: if you hold a global 
reference to the object then RAII won't help you.

So your proposal requires major changes to not just one but all (present 
and future) Python interpreters, AND major changes to the way people 
write Python code, in order to solve a problem which is already solved by the 
with statement.

Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Chris Angelico
On Sun, Mar 4, 2018 at 1:45 PM, Ooomzay  wrote:
> On Sunday, 4 March 2018 01:58:02 UTC, Gregory Ewing  wrote:
>> ooomzay wrote:
>> > Well he was not telling you the whole story: RAII works just as well with
>> > heap objects using smart pointers (unique_ptr and friends) which are a 
>> > closer
>> > analogy to python object references.
>>
>> By that definition, *all* resource management in Python is
>> based on RAII[1].
>
> I think not. Objects may have a close() in their __del__ method as a back-up 
> - but currently this is not guaranteed to be called in a timely fashion. 
> Hence the
> justification for "with" and my proposal to obviate it by actually 
> guaranteeing
> that __del__ is always called in a timely fashion.

This thread is dead. The OP wants to wave a magic wand and say
"__del__ is now guaranteed to be called immediately", without any
explanation - and, from the look of things, without any understanding
- of what that means for the language and the interpreters. Everyone
else is saying "your magic wand is broken". This is not going to go
anywhere.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Ooomzay
On Sunday, 4 March 2018 01:58:02 UTC, Gregory Ewing  wrote:
> ooomzay wrote:
> > Well he was not telling you the whole story: RAII works just as well with
> > heap objects using smart pointers (unique_ptr and friends) which are a 
> > closer
> > analogy to python object references.
> 
> By that definition, *all* resource management in Python is
> based on RAII[1]. 

I think not. Objects may have a close() in their __del__ method as a back-up - 
but currently this is not guaranteed to be called in a timely fashion. Hence 
the 
justification for "with" and my proposal to obviate it by actually guaranteeing 
that __del__ is always called in a timely fashion.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Richard Damon

On 3/3/18 9:10 PM, Chris Angelico wrote:

On Sun, Mar 4, 2018 at 1:01 PM, Ooomzay  wrote:

On Saturday, 3 March 2018 17:44:08 UTC, Chris Angelico  wrote:

On Sun, Mar 4, 2018 at 4:37 AM, Richard Damon 

Yes, stack allocated object in C++ have a nice lifetime to allow RAII to
work, but it doesn't just work with stack allocated objects. A lot of RAII
objects are members of a class object that may well be allocated on the
heap, and RAII makes sure that all the needed cleanup gets done when that
object gets destroyed.

How do you guarantee that the heap object is properly disposed of when
you're done with it? Your RAII object depends 100% on the destruction
of the heap object.

Smart pointers (unique_ptr and friends) are used to manage heap object 
lifecycles. These are analogous to python object references.

Yep, cool. Now do that with all of your smart pointers being on the
heap too. You are not allowed to use ANY stack objects.  ANY. Got it?

ChrisA


Now write your Python without using anything not on the heap either (so 
no binding your objects to a name).


As Chris said, the stack variables in C++ are the local names used to 
refer to the objects.


--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-03 Thread Gregory Ewing

Richard Damon wrote:
The 
idea was to have a way to mark that certain classes/objects request that 
they are reference counted so they get the __del__ called as soon as the 
last reference goes away, without needing to require that overhead for 
all objects in all implementations.


That could be done with a flag in the object or type header.
But checking the flag every time a reference is added or
removed is probably about as expensive as doing the refcount
adjustment.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Ooomzay
On Friday, 2 March 2018 15:37:25 UTC, Paul  Moore  wrote:
[snip]
> def fn():
>     for i in range(10000):
>         with open(f"file{i}.txt", "w") as f:
>             f.write("Some text")
> 
> How would you write this in your RAII style - without leaving 10,000
> file descriptors open until the end of the function?
 
 def fn():
     for i in range(10000):
         f = RAIIFile(f"file{i}.txt", "w")
         f.write("Some text")
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-03 Thread Gregory Ewing

Steven D'Aprano wrote:
Not just the class __dict__. You would have to do a full search of the 
MRO looking for any superclass which defines such methods.


That could be reduced a lot by making it a type slot. But
it would still increase the overhead of every refcount change
by at least a factor of two, as a rough estimate.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Ooomzay
On Saturday, 3 March 2018 23:52:34 UTC, Steven D'Aprano  wrote:
> I know that laziness and hubris are programmer virtues, but there is 
> still such a thing as *too much laziness*. RAII works in C++ where 
> instances are allocated in the stack, but even there, if you have an 
> especially long-lived function, your resources won't be closed promptly. 
> In Python terms:
> 
> def function():
> x = open_resource()
> process(x)
> # and we're done with x now, but too lazy to explicitly close it
> sleep(1) # Simulate some more work. Lots of work.
> return
> # and finally x is closed (2.8 hours after you finished using it)
> 
> The answer in C++ is "well don't do that then". The answer is Python is, 
> "don't be so lazy, just use a with statement".

The answer in C++ would be to say "don't be so lazy just put the x 
manipulation in a function or sub-block". The answer with Python + this PEP 
would be "don't be so lazy just put the x manipulation in a function or 
explicitly del x" ...no new syntax.

> If you want deterministic closing of resources, with statements are the 
> way to do it.
> 
> def function():
> with open_resource() as x:
> process(x)
> # and x is guaranteed to be closed

What a palava!

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-03 Thread Gregory Ewing

Paul Rubin wrote:

Richard Damon  writes:


a class to define member functions like __ref__
and __unref__ (or perhaps some other name) that if defined, would be
called every time a name was bound or unbound to an object?


That sounds horrendous and wouldn't handle the case of a list element
creating a reference without binding a name.


To be any use, they would have to be called any time the
reference count changes, which happens *very* frequently --
so massive overhead.

There's also a logical problem. Calling __ref__  would
itself involve adding at least one reference to the object
temporarily, thus requiring __ref__ to be called, which
would...

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Chris Angelico
On Sun, Mar 4, 2018 at 1:01 PM, Ooomzay  wrote:
> On Saturday, 3 March 2018 17:44:08 UTC, Chris Angelico  wrote:
>> On Sun, Mar 4, 2018 at 4:37 AM, Richard Damon 
>> > Yes, stack allocated object in C++ have a nice lifetime to allow RAII to
>> > work, but it doesn't just work with stack allocated objects. A lot of RAII
>> > objects are members of a class object that may well be allocated on the
>> > heap, and RAII makes sure that all the needed cleanup gets done when that
>> > object gets destroyed.
>>
>> How do you guarantee that the heap object is properly disposed of when
>> you're done with it? Your RAII object depends 100% on the destruction
>> of the heap object.
>
> Smart pointers (unique_ptr and friends) are used to manage heap object 
> lifecycles. These are analogous to python object references.

Yep, cool. Now do that with all of your smart pointers being on the
heap too. You are not allowed to use ANY stack objects.  ANY. Got it?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Ooomzay
On Saturday, 3 March 2018 17:44:08 UTC, Chris Angelico  wrote:
> On Sun, Mar 4, 2018 at 4:37 AM, Richard Damon  
> > Yes, stack allocated object in C++ have a nice lifetime to allow RAII to
> > work, but it doesn't just work with stack allocated objects. A lot of RAII
> > objects are members of a class object that may well be allocated on the
> > heap, and RAII makes sure that all the needed cleanup gets done when that
> > object gets destroyed.
> 
> How do you guarantee that the heap object is properly disposed of when
> you're done with it? Your RAII object depends 100% on the destruction
> of the heap object.

Smart pointers (unique_ptr and friends) are used to manage heap object 
lifecycles. These are analogous to python object references.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Ooomzay
On Saturday, 3 March 2018 17:16:14 UTC, Ned Batchelder  wrote:
> On 3/2/18 10:36 AM, Paul Moore wrote:
> > Or (real Python):
> >
> >  def fn():
> >      for i in range(10000):
> >          with open(f"file{i}.txt", "w") as f:
> >              f.write("Some text")
> >
> > How would you write this in your RAII style - without leaving 10,000
> > file descriptors open until the end of the function?
> 
> IIUC, if the OP's proposal were accepted, the __del__ method would be 
> called as soon as the *value*'s reference count went to zero. That means 
> this wouldn't leave 10,000 files open, since each open() would assign a 
> new file object to f, which would make the previous file object's ref 
> count be zero, and it would be closed.

You have understood correctly. Here is the equivalent RAII version:-

def fn():
    for i in range(10000):
        f = RAIIFile(f"file{i}.txt", "w")
        f.write("Some text")

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Gregory Ewing

ooom...@gmail.com wrote:

Well he was not telling you the whole story: RAII works just as well with
heap objects using smart pointers (unique_ptr and friends) which are a closer
analogy to python object references.


By that definition, *all* resource management in Python is
based on RAII[1]. The only difference is that in Python the
only way to destroy an object is to remove all references to
it, whereas in C++ you have some other options (explicit
delete, stack allocation, etc.)

However, when people talk about RAII in C++ they're
*usually* referring to stack-allocated objects. There are
various ways of getting the same effect in Python:

(a) Rely on CPython reference counting

(b) Use a with-statement

(c) Use a try-finally statement

Of these, (a) is the closest in spirit to stack-allocated
RAII in C++, but it's only guaranteed to work in current
CPython.

[1] Well, not quite -- you can use low-level OS calls to
open files, etc. But the high-level wrappers for files,
sockets, and so forth all use RAII.
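
For illustration, a rough sketch of the three styles applied to a small
file copy:

# (a) rely on CPython reference counting
def copy_refcount(src_name, dst_name):
    src = open(src_name)          # closed when the last reference goes away
    dst = open(dst_name, "w")
    dst.write(src.read())

# (b) use a with-statement
def copy_with(src_name, dst_name):
    with open(src_name) as src, open(dst_name, "w") as dst:
        dst.write(src.read())

# (c) use try-finally
def copy_finally(src_name, dst_name):
    src = open(src_name)
    try:
        dst = open(dst_name, "w")
        try:
            dst.write(src.read())
        finally:
            dst.close()
    finally:
        src.close()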

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread ooomzay
On Friday, March 2, 2018 at 5:29:54 AM UTC, Rustom Mody wrote:
> Please excuse if this has been addressed above and/or its too basic:
> What's the difference between RAII and python's with/context-managers?

They address the same problem but I am claiming that RAII achieves this in a 
significantly more elegant/pythonic way without involving any special keywords 
or methods. In summary, _if_ the PEP was adopted and/or you are using CPython 
today then:-

def raii_file_copy(srcname, dstname):
    src = RAIIFile(srcname, 'r')
    dst = RAIIFile(dstname, 'w')
    for line in src:
        dst.write(line)

becomes equivalent to:

def pep343_file_copy(srcname, dstname):
    with open(srcname, 'r') as src, \
         open(dstname, 'w') as dst:
        for line in src:
            dst.write(line)

RAII resource management is also simpler to implement, requiring only the 
existing __init__ and __del__ methods (e.g. to open/close the underlying file), 
and the resource objects are invariant. This means the objects/managers do not 
need to track enter/exit state, since there is no way to access them when they 
are not "open" under RAII.





-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread ooomzay

On Friday, March 2, 2018 at 3:37:25 PM UTC, Paul  Moore wrote:
> [...]
> RAII works in C++ (where it was initially invented) because it's used
> with stack-allocated variables that have clearly-defined and limited
> scope. 

RAII also works with smart pointers, which are a closer analogue 
to python references.

> In my experience writing C++, nobody uses RAII with
> heap-allocated variables - those require explicit allocation and
> deallocation and so are equivalent to having an explicit "close()"
> method in Python (or using __del__ in CPython as it currently exists).

In my experience no one has been explicitly deallocating
heap variables for many years: they have been using smart pointers 
and RAII.

> Python doesn't have stack allocation, nor does it have a deterministic
> order of deletion of objects when their last reference goes out of
> scope (which can happen simultaneously for many objects):
>
> class Tracker:
> def __init__(self, n):
> self.n = n
> def __del__(self):
> print("Deleting instance", self.n)
> 
> def f():
> a = Tracker(1)
> b = Tracker(2)
> 
> f()
> 
> The language doesn't guarantee that a is removed before b. Are you
> proposing to make that change to the language as well?

It would improve things more, but I have not included such a proposal
in the current PEP as this is not essential for the main benefit. 
 
> Also Python only has function scope, so variables local to a
> smaller-than-the-function block of code aren't possible. That's
> something that is used in C++ a lot to limit the lifetime of resources
> under RAII. How do you propose to address that (without needing
> explicit del statements)? 

Create a sub function. 

> That's why the with statement exists, to
> clearly define lifetimes smaller than "the enclosing function". 

It does that in a half-hearted and smelly way compared to a function: 
The object references are left dangling in a broken state (not invariant). 
In fact you would do well to wrap your "with"s in a function to contain 
the smell.

> Your proposal doesn't offer any equivalent (other than an extra function).

An equivalent is not needed. Using a function is idiomatic and does not
leak - unlike the "with" hack. (IMO block-scoping would enhance the language
but is not essential, so it is not in this PEP.)

> Consider C++:
> 
> void fn() {
>     for (int i = 0; i < 10000; ++i) {
>         char name[100];
>         sprintf(name, "file%d.txt", i);
>         File f(name); // I don't think std::ofstream supports RAII
>         f << "Some text";
>     }
> }
> 
> Or (real Python):
> 
> def fn():
>     for i in range(10000):
>         with open(f"file{i}.txt", "w") as f:
>             f.write("Some text")
> 
> How would you write this in your RAII style - without leaving 10,000
> file descriptors open until the end of the function?

def write_some_text_to_file(fname):
    f = RAIIFileAccess(fname, 'w')
    f.write("Some text")

def fn():
    for i in range(10000):
        write_some_text_to_file(f"file{i}.txt")

> That's both less efficient (function calls have a cost) 

Oh come now. No one is choosing to use Python for its efficiency, and 
functions that deal with real resources, such as files, are likely 
dominated by the OS open/close calls etc.

> and less maintainable than the with-statement version.

I disagree. Even for such a trivial example, as soon as you 
start to change the detail of what you do, the usual maintainability
benefits of small, well-factored functions come into play. That's why we
have functions in the language, isn't it? Or do you just write one 
long script?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-03 Thread Richard Damon

On 3/3/18 6:57 PM, Steven D'Aprano wrote:

On Sat, 03 Mar 2018 10:01:43 -0700, Ian Kelly wrote:


On Sat, Mar 3, 2018 at 9:19 AM, Richard Damon 
wrote:

One idea does come to mind though, would it be reasonable, and somewhat
Pythonic, for a class to define member functions like __ref__ and
__unref__ (or perhaps some other name) that if defined, would be called
every time a name was bound or unbound to an object? (the assignment to
a name should __ref__ the new value, then __unref__ the old to avoid
possible issues with rebinding an object to the last name it was bound
to). This would allow those (limited) objects that want to be destroyed
as soon as possible to do so, but most other objects wouldn't have any
significant overhead added (just a check for this method).

Objects with these methods would still be subject to being cleaned up
with garbage collection in the case they were kept alive via a cycle,
having the cycle just makes it so that you don't get the immediate
distruction.

This sounds like a nightmare for performance. Ref counting is enough of
a drain already when it's done in C, using macros for in-lining. Now
every time the count would be incremented or decremented, we'd also need
to check the class __dict__ to see if there's a corresponding Python
method, build a frame object, and then call it in the interpreter.

Not just the class __dict__. You would have to do a full search of the
MRO looking for any superclass which defines such methods.

Python code doesn't tend to go in for long inheritance hierarchies, but
still, you could easily be checking four or five classes on *every*
binding.

Richard's assertion that this wouldn't add significant overhead isn't
right.

[...]

It would be much more efficient to spare all that by doing the
ref-counting in the interpreter and just call a method when it hits
zero. Which is just what CPython already does.

Indeed.

I'm afraid I don't understand what Richard's proposal is actually meant
to do. It sounds to me like he's just proposed adding a second, parallel,
optional, *SLOW* reference counter to the CPython implementation, in
order to force non-ref counter implementations to do something which they
have no need to do.


If checking for a method definition is that slow, it wouldn't work. The 
idea was to have a way to mark that certain classes/objects request that 
they are reference counted, so they get __del__ called as soon as the 
last reference goes away, without requiring that overhead for 
all objects in all implementations.


--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Richard Damon

On 3/3/18 6:49 PM, Steven D'Aprano wrote:

On Sat, 03 Mar 2018 12:37:08 -0500, Richard Damon wrote:


With RAII and immediate destruction on end of scope, we can automate the
release, without it and you need a lot of explicit code to manage these
resources.

Not so much.

with resource_i_care_about() as rsrc:
 process(rsrc)


is hardly what I call "a lot of explicit code". Yes, there's some code in
the context manager, but you write that *once* (if it isn't already
written for you!) and you're done.


The with statement handles the C++ case of using a unique_ptr, where the 
resource is acquired and released within a single block scope and thus a 
single stack-based object is ultimately responsible for the lifetime of 
the resource.


I am not sure how it would handle a resource that gets shared for a 
while with something like a shared_ptr, and needs to be released when 
all the parties it has been shared with are done with their copy. 
Admittedly, this case is much less likely to be using such a critical 
resource, but it might. In Python, currently, you would either need to 
keep an explicit usage count to allow you to delete it when done, or let 
things wait for the collector to find it.
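
As a rough sketch of the "explicit usage count" route (the class and file
name here are purely illustrative), the sharing parties would have to
cooperate along these lines:

class SharedHandle:
    """Manual share counting for a precious resource (illustrative only)."""
    def __init__(self, path):
        self._f = open(path, "w")
        self._users = 0

    def acquire(self):
        self._users += 1
        return self

    def release(self):
        self._users -= 1
        if self._users == 0 and not self._f.closed:
            self._f.close()

handle = SharedHandle("shared.log")
handle.acquire()   # first user
handle.acquire()   # second user
handle.release()   # first user done; the file stays open
handle.release()   # last user done; the file is closed here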


--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-03 Thread Steven D'Aprano
On Sat, 03 Mar 2018 10:01:43 -0700, Ian Kelly wrote:

> On Sat, Mar 3, 2018 at 9:19 AM, Richard Damon 
> wrote:
>> One idea does come to mind though, would it be reasonable, and somewhat
>> Pythonic, for a class to define member functions like __ref__ and
>> __unref__ (or perhaps some other name) that if defined, would be called
>> every time a name was bound or unbound to an object? (the assignment to
>> a name should __ref__ the new value, then __unref__ the old to avoid
>> possible issues with rebinding an object to the last name it was bound
>> to). This would allow those (limited) objects that want to be destroyed
>> as soon as possible to do so, but most other objects wouldn't have any
>> significant overhead added (just a check for this method).
>>
>> Objects with these methods would still be subject to being cleaned up
>> with garbage collection in the case they were kept alive via a cycle,
>> having the cycle just makes it so that you don't get the immediate
>> distruction.
> 
> This sounds like a nightmare for performance. Ref counting is enough of
> a drain already when it's done in C, using macros for in-lining. Now
> every time the count would be incremented or decremented, we'd also need
> to check the class __dict__ to see if there's a corresponding Python
> method, build a frame object, and then call it in the interpreter.

Not just the class __dict__. You would have to do a full search of the 
MRO looking for any superclass which defines such methods.

Python code doesn't tend to go in for long inheritance hierarchies, but 
still, you could easily be checking four or five classes on *every* 
binding.

Richard's assertion that this wouldn't add significant overhead isn't 
right.

[...]
> It would be much more efficient to spare all that by doing the
> ref-counting in the interpreter and just call a method when it hits
> zero. Which is just what CPython already does.

Indeed.

I'm afraid I don't understand what Richard's proposal is actually meant 
to do. It sounds to me like he's just proposed adding a second, parallel, 
optional, *SLOW* reference counter to the CPython implementation, in 
order to force non-ref counter implementations to do something which they 
have no need to do.


-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Steven D'Aprano
On Sat, 03 Mar 2018 12:37:08 -0500, Richard Damon wrote:

> With RAII and immediate destruction on end of scope, we can automate the
> release, without it and you need a lot of explicit code to manage these
> resources.

Not so much.

with resource_i_care_about() as rsrc:
process(rsrc)


is hardly what I call "a lot of explicit code". Yes, there's some code in 
the context manager, but you write that *once* (if it isn't already 
written for you!) and you're done.

I know that laziness and hubris are programmer virtues, but there is 
still such a thing as *too much laziness*. RAII works in C++ where 
instances are allocated in the stack, but even there, if you have an 
especially long-lived function, your resources won't be closed promptly. 
In Python terms:

def function():
x = open_resource()
process(x)
# and we're done with x now, but too lazy to explicitly close it
sleep(1) # Simulate some more work. Lots of work.
return
# and finally x is closed (2.8 hours after you finished using it)


The answer in C++ is "well don't do that then". The answer is Python is, 
"don't be so lazy, just use a with statement".

If you want deterministic closing of resources, with statements are the 
way to do it.


def function():
with open_resource() as x:
process(x)
# and x is guaranteed to be closed
sleep(1) # Simulate some more work. Lots of work.
return


-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread ooomzay
On Saturday, March 3, 2018 at 4:33:59 PM UTC, Michael Torrie wrote:
> On 03/03/2018 09:02 AM, ooomzay wrote:
> > I can assure you that RAII does what it says on the tin and is relied on in 
> > many critical systems to release resources robustly ... given the 
> > pre-requisite deterministic destruction. 
> 
> Sure but did you read what Paul Moore wrote?  

Yes.

> He said RAII works in C++
> because objects are allocated on the *stack* with strict lifetimes and
> scopes. 

Well he was not telling you the whole story: RAII works just as well with heap 
objects using smart pointers (unique_ptr and friends) which are a closer 
analogy to python object references.

> In C++, Heap-allocated objects must still be managed manually, without
> the benefit of RAII, 

No one should be manually managing resources on the heap in C++. They should be 
using smart pointers.

> for much of the same reasons as people are giving
> here for why RAII is not a good fit for Python.  

...for much the same reasons I am giving here for why RAII could 
be a very good fit for python.

> There are smart pointer
> objects that try to give RAII semantics to heap-allocated objects, with
> varying degrees of success. In other words there are some limitations.

Not sure what limitations you are evoking here but I have not had to write a 
delete or suffered a resource leak in C++ for many years. (we rolled our own 
smart pointers before they were standardised).

> Python does not have stack-allocated objects, so the same issues that
> prevent RAII from automatically applying in C++ to heap objects exist here.

False premise, false conclusion.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Richard Damon

On 3/3/18 1:28 PM, Chris Angelico wrote:

On Sun, Mar 4, 2018 at 5:22 AM, Richard Damon  wrote:

On 3/3/18 12:43 PM, Chris Angelico wrote:

On Sun, Mar 4, 2018 at 4:37 AM, Richard Damon 
wrote:

On 3/3/18 11:33 AM, Michael Torrie wrote:

On 03/03/2018 09:02 AM, ooom...@gmail.com wrote:

I can assure you that RAII does what it says on the tin and is relied
on
in
many critical systems to release resources robustly ... given the
pre-requisite deterministic destruction.

Sure but did you read what Paul Moore wrote?  He said RAII works in C++
because objects are allocated on the *stack* with strict lifetimes and
scopes. They won't ever have cycles and they are guaranteed to be
destroyed no matter what as the stack is unwound.  Python has no
stack-allocated objects.

In C++, Heap-allocated objects must still be managed manually, without
the benefit of RAII, for much of the same reasons as people are giving
here for why RAII is not a good fit for Python.  There are smart pointer
objects that try to give RAII semantics to heap-allocated objects, with
varying degrees of success. In other words there are some limitations.

Python does not have stack-allocated objects, so the same issues that
prevent RAII from automatically applying in C++ to heap objects exist
here.


Yes, stack allocated object in C++ have a nice lifetime to allow RAII to
work, but it doesn't just work with stack allocated objects. A lot of
RAII
objects are members of a class object that may well be allocated on the
heap, and RAII makes sure that all the needed cleanup gets done when that
object gets destroyed.

How do you guarantee that the heap object is properly disposed of when
you're done with it? Your RAII object depends 100% on the destruction
of the heap object.

ChrisA


Yes, the heap object 'owns' a resource, and as long as that heap object
exists, the resource needs to exist, but as soon as it doesn't, you want
that resource freed.

That heap object might well be controlled by some other RAII object (maybe a
smart pointer with a ref count) or maybe that object is being manually
controlled by the program. But the key is that when it goes away, all of the
resources it controls via RAII are automatically cleaned up, and you don't
need all that cleanup made explicit in the owning class.

You're trying to handwave away the *exact point* that we are all
making: EVERY object in Python is a heap object. So how is your magic
wand going to make any of this work? You have to do it exclusively
with heap objects because there is nothing else available.

ChrisA


Because while every OBJECT in Python is on the heap, not every THING is, 
as you do have local and global names bound to those objects.


This is like C++: there is no way to directly reference an 
object on the heap without using a stack- or statically-allocated 
object or reference (which has a name) to get to it.


The fact that all objects exist on the heap actually makes some things 
simpler; in C++ you need to be careful with smart pointers that they 
only point to objects on the heap, and thus are delete-able. If you 
accidentally put the address of a static or stack-based object into one 
of them (and don't add a fake reference to the count) you will run into 
an issue when the reference count goes to zero and you try to destroy 
and then free the memory for the object. In Python, that can't happen.


--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Chris Angelico
On Sun, Mar 4, 2018 at 5:22 AM, Richard Damon  wrote:
> On 3/3/18 12:43 PM, Chris Angelico wrote:
>>
>> On Sun, Mar 4, 2018 at 4:37 AM, Richard Damon 
>> wrote:
>>>
>>> On 3/3/18 11:33 AM, Michael Torrie wrote:

 On 03/03/2018 09:02 AM, ooom...@gmail.com wrote:
>
> I can assure you that RAII does what it says on the tin and is relied
> on
> in
> many critical systems to release resources robustly ... given the
> pre-requisite deterministic destruction.

 Sure but did you read what Paul Moore wrote?  He said RAII works in C++
 because objects are allocated on the *stack* with strict lifetimes and
 scopes. They won't ever have cycles and they are guaranteed to be
 destroyed no matter what as the stack is unwound.  Python has no
 stack-allocated objects.

 In C++, Heap-allocated objects must still be managed manually, without
 the benefit of RAII, for much of the same reasons as people are giving
 here for why RAII is not a good fit for Python.  There are smart pointer
 objects that try to give RAII semantics to heap-allocated objects, with
 varying degrees of success. In other words there are some limitations.

 Python does not have stack-allocated objects, so the same issues that
 prevent RAII from automatically applying in C++ to heap objects exist
 here.
>>>
>>>
>>> Yes, stack allocated object in C++ have a nice lifetime to allow RAII to
>>> work, but it doesn't just work with stack allocated objects. A lot of
>>> RAII
>>> objects are members of a class object that may well be allocated on the
>>> heap, and RAII makes sure that all the needed cleanup gets done when that
>>> object gets destroyed.
>>
>> How do you guarantee that the heap object is properly disposed of when
>> you're done with it? Your RAII object depends 100% on the destruction
>> of the heap object.
>>
>> ChrisA
>
>
> Yes, the heap object 'owns' a resource, and as long as that heap object
> exists, the resource needs to exist, but as soon as it doesn't, you want
> that resource freed.
>
> That heap object might well be controlled by some other RAII object (maybe a
> smart pointer with a ref count) or maybe that object is being manually
> controlled by the program. But the key is that when it goes away, all of the
> resources it controls via RAII are automatically cleaned up, and you don't
> need all that cleanup made explicit in the owning class.

You're trying to handwave away the *exact point* that we are all
making: EVERY object in Python is a heap object. So how is your magic
wand going to make any of this work? You have to do it exclusively
with heap objects because there is nothing else available.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Richard Damon

On 3/3/18 12:43 PM, Chris Angelico wrote:

On Sun, Mar 4, 2018 at 4:37 AM, Richard Damon  wrote:

On 3/3/18 11:33 AM, Michael Torrie wrote:

On 03/03/2018 09:02 AM, ooom...@gmail.com wrote:

I can assure you that RAII does what it says on the tin and is relied on
in
many critical systems to release resources robustly ... given the
pre-requisite deterministic destruction.

Sure but did you read what Paul Moore wrote?  He said RAII works in C++
because objects are allocated on the *stack* with strict lifetimes and
scopes. They won't ever have cycles and they are guaranteed to be
destroyed no matter what as the stack is unwound.  Python has no
stack-allocated objects.

In C++, Heap-allocated objects must still be managed manually, without
the benefit of RAII, for much of the same reasons as people are giving
here for why RAII is not a good fit for Python.  There are smart pointer
objects that try to give RAII semantics to heap-allocated objects, with
varying degrees of success. In other words there are some limitations.

Python does not have stack-allocated objects, so the same issues that
prevent RAII from automatically applying in C++ to heap objects exist
here.


Yes, stack allocated object in C++ have a nice lifetime to allow RAII to
work, but it doesn't just work with stack allocated objects. A lot of RAII
objects are members of a class object that may well be allocated on the
heap, and RAII makes sure that all the needed cleanup gets done when that
object gets destroyed.

How do you guarantee that the heap object is properly disposed of when
you're done with it? Your RAII object depends 100% on the destruction
of the heap object.

ChrisA


Yes, the heap object 'owns' a resource, and as long as that heap object 
exists, the resource needs to exist, but as soon as it doesn't, you want 
that resource freed.


That heap object might well be controlled by some other RAII object 
(maybe a smart pointer with a ref count) or maybe that object is being 
manually controlled by the program. But the key is that when it goes 
away, all of the resources it controls via RAII are automatically 
cleaned up, and you don't need all that cleanup made explicit in the 
owning class.


If the resource is plentiful, like memory, we can simplify the life 
management by using something like garbage collection, so if some is 
used longer than needed it isn't a big issue. Other resources are much 
more valuable and limited, and it is important to release them when no 
longer needed. Without some method to make sure that happens 
'automatically' as soon as possible, it requires that the programmer 
explicitly deal with it in full detail.


Now, a lot of talk has been thrown around about things like reference 
cycles getting in the way of this, but practically, the resources that 
this would be needed for tend not to get involved in those sorts of 
issues; they tend to live in simpler structures. So the fact that 
objects in a cycle need the garbage collector to determine that 
they aren't used anymore isn't an issue (and the presence of garbage 
collection means that the more complicated structures, which only hold 
resources that are plentiful, don't need to be precisely managed with 
RAII controls).


--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Chris Angelico
On Sun, Mar 4, 2018 at 4:37 AM, Richard Damon  wrote:
> On 3/3/18 11:33 AM, Michael Torrie wrote:
>>
>> On 03/03/2018 09:02 AM, ooom...@gmail.com wrote:
>>>
>>> I can assure you that RAII does what it says on the tin and is relied on
>>> in
>>> many critical systems to release resources robustly ... given the
>>> pre-requisite deterministic destruction.
>>
>> Sure but did you read what Paul Moore wrote?  He said RAII works in C++
>> because objects are allocated on the *stack* with strict lifetimes and
>> scopes. They won't ever have cycles and they are guaranteed to be
>> destroyed no matter what as the stack is unwound.  Python has no
>> stack-allocated objects.
>>
>> In C++, Heap-allocated objects must still be managed manually, without
>> the benefit of RAII, for much of the same reasons as people are giving
>> here for why RAII is not a good fit for Python.  There are smart pointer
>> objects that try to give RAII semantics to heap-allocated objects, with
>> varying degrees of success. In other words there are some limitations.
>>
>> Python does not have stack-allocated objects, so the same issues that
>> prevent RAII from automatically applying in C++ to heap objects exist
>> here.
>
>
> Yes, stack allocated object in C++ have a nice lifetime to allow RAII to
> work, but it doesn't just work with stack allocated objects. A lot of RAII
> objects are members of a class object that may well be allocated on the
> heap, and RAII makes sure that all the needed cleanup gets done when that
> object gets destroyed.

How do you guarantee that the heap object is properly disposed of when
you're done with it? Your RAII object depends 100% on the destruction
of the heap object.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Richard Damon

On 3/3/18 11:33 AM, Michael Torrie wrote:

On 03/03/2018 09:02 AM, ooom...@gmail.com wrote:

I can assure you that RAII does what it says on the tin and is relied on in
many critical systems to release resources robustly ... given the
pre-requisite deterministic destruction.

Sure but did you read what Paul Moore wrote?  He said RAII works in C++
because objects are allocated on the *stack* with strict lifetimes and
scopes. They won't ever have cycles and they are guaranteed to be
destroyed no matter what as the stack is unwound.  Python has no
stack-allocated objects.

In C++, Heap-allocated objects must still be managed manually, without
the benefit of RAII, for much of the same reasons as people are giving
here for why RAII is not a good fit for Python.  There are smart pointer
objects that try to give RAII semantics to heap-allocated objects, with
varying degrees of success. In other words there are some limitations.

Python does not have stack-allocated objects, so the same issues that
prevent RAII from automatically applying in C++ to heap objects exist here.


Yes, stack allocated object in C++ have a nice lifetime to allow RAII to 
work, but it doesn't just work with stack allocated objects. A lot of 
RAII objects are members of a class object that may well be allocated on 
the heap, and RAII makes sure that all the needed cleanup gets done when 
that object gets destroyed.


Yes, I suspect that directly creating a RAII control object as the sole 
object on the heap is unusual, but that doesn't mean they never appear on 
the heap.


When the resource is memory, since one chunk of memory tends to be just 
as usable as another, garbage collection works great. Other resources 
can be quite precious, and delays in releasing them can have significant 
effects. With RAII and immediate destruction on end of scope, we can 
automate the release; without it you need a lot of explicit code to 
manage these resources. Maybe the issue is that most Pythonic code 
doesn't need to deal with the sorts of resources where automatic 
handling becomes important, either because they are held for the 
whole program so OS cleanup handles them, or they are so fine grained 
that the explicitness isn't an issue.


--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-03 Thread Chris Angelico
On Sun, Mar 4, 2018 at 3:19 AM, Richard Damon  wrote:
> One idea does come to mind though, would it be reasonable, and somewhat
> Pythonic, for a class to define member functions like __ref__ and __unref__
> (or perhaps some other name) that if defined, would be called every time a
> name was bound or unbound to an object?

Just a thought, here: would __ref__ be called every time a new
function-local name is bound to an object? For instance, when a method
is called, and its 'self' parameter gets set to that object?

That might be a little problematic.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Ned Batchelder

On 2/28/18 6:53 PM, ooom...@gmail.com wrote:

On Wednesday, February 28, 2018 at 11:45:24 PM UTC, ooo...@gmail.com wrote:

On Wednesday, February 28, 2018 at 11:02:17 PM UTC, Chris Angelico wrote:

On Thu, Mar 1, 2018 at 9:51 AM,  ooomzay wrote:
[snip]
Taking a really simple situation:

class Foo:
 def __init__(self):
 self.self = self
 print("Creating a Foo")
 def __del__(self):
 print("Disposing of a Foo")

foo = Foo()
foo = 1

When do you expect __del__ to be called?

At the point of (re)assignment: "foo = 1"

Oh... I now see there is a (non-weak) self reference in there.

So in this case it would be orphaned. It is a design error and should be 
recoded. I don't care how it is detected for current purposes.


There are many ways to create a cycle. You can't simply dismiss them all 
as design errors and put the burden on the developer to rewrite their code.


For example, objects have a reference to their classes.  So any time you 
write a class that references its objects, you have a cycle.  Or perhaps 
you have a tree structure where nodes have a list of their children, but 
also children have a reference to their parent?  I'm not even sure where 
in the inner workings of Python there are cycles.
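
A parent/child tree is enough to see the problem; a quick sketch (CPython
shown, where the cycle defeats plain reference counting and only the cycle
collector reclaims the nodes):

import gc

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)   # parent -> child and child -> parent

root = Node()
leaf = Node(root)
del root, leaf        # refcounts never reach zero: the two nodes reference each other
print(gc.collect())   # the collector reports the unreachable objects it freed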


There's a reason Python doesn't guarantee synchronous execution of 
__del__.  If you want your proposal to be taken seriously, you need to 
understand that reason, and propose a realistic alternative.


--Ned.
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Ned Batchelder

On 3/2/18 10:36 AM, Paul Moore wrote:

Or (real Python):

 def fn():
     for i in range(10000):
         with open(f"file{i}.txt", "w") as f:
             f.write("Some text")

How would you write this in your RAII style - without leaving 10,000
file descriptors open until the end of the function?


IIUC, if the OP's proposal were accepted, the __del__ method would be 
called as soon as the *value*'s reference count went to zero. That means 
this wouldn't leave 10,000 files open, since each open() would assign a 
new file object to f, which would make the previous file object's ref 
count be zero, and it would be closed.


--Ned.
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Dietmar Schwertberger

On 2/28/2018 11:51 PM, ooom...@gmail.com wrote:

This PEP proposes that valid python interpreters *must* synchronously destroy 
objects when the last reference to an object goes out of scope. This 
interpreter behaviour is currently permitted and exhibited by the reference 
implementation [CPython], but it is optional.


CPython does *not* guarantee destruction when the object reference goes 
out of scope, even if there are no other references.
I would very much appreciate such a deterministic behaviour, at least 
with CPython.


I recently had to debug an issue in the matplotlib wx backend (*). Under 
certain conditions, the wx device context was not destroyed when the 
reference went out of scope. Adding a del to the end of the method or 
calling the Destroy method of the context did fix the issue. (There was 
also a hidden reference, but avoiding this was not sufficient. The del 
was still required.)


(*) https://github.com/matplotlib/matplotlib/issues/10174

Regards,

Dietmar


--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-03 Thread Chris Angelico
On Sun, Mar 4, 2018 at 3:19 AM, Richard Damon  wrote:
> On 3/3/18 9:03 AM, Ian Kelly wrote:
>>
>> On Fri, Mar 2, 2018 at 9:57 PM, Gregory Ewing
>>  wrote:
>>>
>>> Paul Rubin wrote:

 So you want the programmer to put more head scratching into figuring out
 which reference should be strong and which should be weak?
>>>
>>>
>>> Also, sometimes weak references don't really solve the
>>> problem, e.g. if you have a graph where you can't identify
>>> any particular node as a "root" node, and you want the graph
>>> to stay around as long as any of its nodes are referenced.
>>
>> I offered two possible solutions for the graph problem elsewhere in this
>> thread.
>>
>> For the record I'm not in favor of the OP's proposal (although making
>> the different Python implementations more consistent in their garbage
>> collection semantics would not be a bad thing per se) but I am
>> somewhat mystified as to why people seem to think that weak refs are
>> difficult to use correctly.
>
>
> As I think about this, and coming from a background where I have found the
> ability to reliably use RAII useful in some cases, I can see some merit to
> the proposal. One big issue is that since Python doesn't have sub-function
> level lifetime scopes for locals, it is perhaps not quite as useful in some
> cases.
>
> One idea does come to mind though, would it be reasonable, and somewhat
> Pythonic, for a class to define member functions like __ref__ and __unref__
> (or perhaps some other name) that if defined, would be called every time a
> name was bound or unbound to an object? (the assignment to a name should
> __ref__ the new value, then __unref__ the old to avoid possible issues with
> rebinding an object to the last name it was bound to). This would allow
> those (limited) objects that want to be destroyed as soon as possible to do
> so, but most other objects wouldn't have any significant overhead added
> (just a check for this method).
>
> Objects with these methods would still be subject to being cleaned up with
> garbage collection in the case they were kept alive via a cycle, having the
> cycle just makes it so that you don't get the immediate distruction.

I'm not sure what __unref__ gives you that __del__ doesn't, aside from
mandating that every Python implementation from here to eternity MUST
use reference counting.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-03 Thread Ian Kelly
On Sat, Mar 3, 2018 at 9:19 AM, Richard Damon  wrote:
> One idea does come to mind though, would it be reasonable, and somewhat
> Pythonic, for a class to define member functions like __ref__ and __unref__
> (or perhaps some other name) that if defined, would be called every time a
> name was bound or unbound to an object? (the assignment to a name should
> __ref__ the new value, then __unref__ the old to avoid possible issues with
> rebinding an object to the last name it was bound to). This would allow
> those (limited) objects that want to be destroyed as soon as possible to do
> so, but most other objects wouldn't have any significant overhead added
> (just a check for this method).
>
> Objects with these methods would still be subject to being cleaned up with
> garbage collection in the case they were kept alive via a cycle, having the
> cycle just makes it so that you don't get the immediate distruction.

This sounds like a nightmare for performance. Ref counting is enough
of a drain already when it's done in C, using macros for in-lining.
Now every time the count would be incremented or decremented, we'd
also need to check the class __dict__ to see if there's a
corresponding Python method, build a frame object, and then call it in
the interpreter.

And note that it would have to be basically any time the CPython
implementation currently adjusts the ref count, not just when a name
is bound or unbound, as there are lots of references that aren't
related to name bindings, for example:

* Objects stored in collections
* Function argument default values
* Function annotations
* References temporarily held by C-API code just to make sure that the
object they're manipulating doesn't suddenly become invalid during a
function call

It would be much more efficient to spare all that by doing the
ref-counting in the interpreter and just call a method when it hits
zero. Which is just what CPython already does.
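
A quick way to see such "anonymous" references, for anyone following along
(CPython only, since getrefcount is an implementation detail):

import sys

x = object()
print(sys.getrefcount(x))   # the name x, plus getrefcount's own temporary argument

bucket = [x]                # a container reference: no name binding involved
table = {"key": x}          # likewise for dict values, default args, annotations, ...
print(sys.getrefcount(x))   # the count rose by two although no new name was bound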
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread Michael Torrie
On 03/03/2018 09:02 AM, ooom...@gmail.com wrote:
> I can assure you that RAII does what it says on the tin and is relied on in 
> many critical systems to release resources robustly ... given the 
> pre-requisite deterministic destruction. 

Sure but did you read what Paul Moore wrote?  He said RAII works in C++
because objects are allocated on the *stack* with strict lifetimes and
scopes. They won't ever have cycles and they are guaranteed to be
destroyed no matter what as the stack is unwound.  Python has no
stack-allocated objects.

In C++, Heap-allocated objects must still be managed manually, without
the benefit of RAII, for much of the same reasons as people are giving
here for why RAII is not a good fit for Python.  There are smart pointer
objects that try to give RAII semantics to heap-allocated objects, with
varying degrees of success. In other words there are some limitations.

Python does not have stack-allocated objects, so the same issues that
prevent RAII from automatically applying in C++ to heap objects exist here.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-03 Thread Richard Damon

On 3/3/18 9:03 AM, Ian Kelly wrote:

On Fri, Mar 2, 2018 at 9:57 PM, Gregory Ewing
 wrote:

Paul Rubin wrote:

So you want the programmer to put more head scratching into figuring out
which reference should be strong and which should be weak?


Also, sometimes weak references don't really solve the
problem, e.g. if you have a graph where you can't identify
any particular node as a "root" node, and you want the graph
to stay around as long as any of its nodes are referenced.

I offered two possible solutions for the graph problem elsewhere in this thread.

For the record I'm not in favor of the OP's proposal (although making
the different Python implementations more consistent in their garbage
collection semantics would not be a bad thing per se) but I am
somewhat mystified as to why people seem to think that weak refs are
difficult to use correctly.


As I think about this, and coming from a background where I have found 
the ability to reliably use RAII useful in some cases, I can see some 
merit to the proposal. One big issue is that since Python doesn't have 
sub-function level lifetime scopes for locals, it is perhaps not quite 
as useful in some cases.


One idea does come to mind though, would it be reasonable, and somewhat 
Pythonic, for a class to define member functions like __ref__ and 
__unref__ (or perhaps some other name) that if defined, would be called 
every time a name was bound or unbound to an object? (the assignment to 
a name should __ref__ the new value, then __unref__ the old to avoid 
possible issues with rebinding an object to the last name it was bound 
to). This would allow those (limited) objects that want to be destroyed 
as soon as possible to do so, but most other objects wouldn't have any 
significant overhead added (just a check for this method).


Objects with these methods would still be subject to being cleaned up 
with garbage collection in the case they were kept alive via a cycle, 
having the cycle just makes it so that you don't get the immediate 
destruction.
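
To make the idea concrete - purely illustrative, since __ref__/__unref__ are
not real hooks and the "interpreter" calls below are simulated by hand:

class ScarceResource:
    """What an opt-in class might look like under this proposal (sketch)."""
    def __init__(self, path):
        self._refs = 0
        self._f = open(path, "w")

    def __ref__(self):          # the interpreter would call this on each binding
        self._refs += 1

    def __unref__(self):        # ...and this on each unbinding
        self._refs -= 1
        if self._refs == 0 and not self._f.closed:
            self._f.close()

r = ScarceResource("scarce.txt")
r.__ref__()     # simulating the binding of the name r
r.__unref__()   # simulating its unbinding: the file closes here, deterministically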


--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-03 Thread ooomzay
On Friday, March 2, 2018 at 10:43:57 PM UTC, Steven D'Aprano wrote:
> On Fri, 02 Mar 2018 07:09:19 -0800, ooomzay wrote:
> [...]
> >> If you're going to *require* the programmer to explicitly del the
> >> reference:
> >>
> >> f = open("file")
> >> text = f.read()
> >> del f
> > 
> > But I am not! On the contrary RAII frees the programmer from even having
> > to remember to close the file. The poster asked what would happen if the
> > resource was deliberately kept open by storing a reference at global
> > scope.
> 
> You say that as if it were difficult to do, requiring the programmer to 
> take extraordinary steps of heroic proportion. It doesn't.
> It is unbelievably easy to store the reference at global scope, which 
> means that the programmer needs to remember to close the file explicitly 
> and RAII doesn't help.

Can you expand on what you mean by "unbelievably easy to store the reference 
at global scope"? I will assume you are claiming that it is easy to 
_inadvertently_ store a global reference. Can you give an example of how 
this might happen? (Apart from writing leaky exception handlers, which I 
trust no one thinks is a good idea)

> [...]
> Your justification for requiring RAII is to ensure the timely closure of 
> resources -- but to do that, you have to explicitly close or delete the 
> resource. It simply isn't true that "RAII frees the programmer from even 
> having to remember to close the file".

In the special case (keyword global) that you reference resources from global 
scope - then yes, you must remember to del the global references before you 
exit to avoid relying on the vagaries of exit processing.

This problem with destruction of globals on exit is not unique to python. 
There are patterns to manage this for code you control, such as putting them 
all in one global "bucket" object and explicitly deleting it before exit.

> >> then you might as well require them to explicitly close the file:
> >> 
> >> f = open("file")
> >> text = f.read()
> >> f.close()
> >> 
> >> which we know from many years experience is not satisfactory except for
> >> the simplest scripts that don't need to care about resource management.
> >> That's the fatal flaw in RAII:
> > 
> > We must be discussing a different RAII. That is the raison d'etre of
> > RAII: RAII directly addresses this problem in an exception-safe way that
> > does not burden the resource user at all.
> 
> But as you said yourself, if the resource is held open in a global 
> reference, it will stay open indefinitely. And remember, global in this 
> context doesn't just mean the main module of your application, but 
> *every* module you import.

Can you give an example of such a case where an application-managed 
(acquired/opened & released/closed) resource is referenced strongly by 
a 3rd party module at global scope? ...in my experience this pattern 
usually smells.

> I think you have just put your finger on the difference between what RAII 
> *claims* to do and what it *actually* can do.

I can assure you that RAII does what it says on the tin and is relied on in 
many critical systems to release resources robustly ... given the 
pre-requisite deterministic destruction. 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-03 Thread Ian Kelly
On Fri, Mar 2, 2018 at 9:57 PM, Gregory Ewing
 wrote:
> Paul Rubin wrote:
>>
>> So you want the programmer to put more head scratching into figuring out
>> which reference should be strong and which should be weak?
>
>
> Also, sometimes weak references don't really solve the
> problem, e.g. if you have a graph where you can't identify
> any particular node as a "root" node, and you want the graph
> to stay around as long as any of its nodes are referenced.

I offered two possible solutions for the graph problem elsewhere in this thread.

For the record I'm not in favor of the OP's proposal (although making
the different Python implementations more consistent in their garbage
collection semantics would not be a bad thing per se) but I am
somewhat mystified as to why people seem to think that weak refs are
difficult to use correctly.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction (Posting On Python-List Prohibited)

2018-03-02 Thread Gregory Ewing

Paul Rubin wrote:

So you want the programmer to put more head scratching into figuring out
which reference should be strong and which should be weak?


Also, sometimes weak references don't really solve the
problem, e.g. if you have a graph where you can't identify
any particular node as a "root" node, and you want the graph
to stay around as long as any of its nodes are referenced.
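
For example, with the usual weak back-reference pattern, holding a reference
to a child does not keep its parent (or the rest of the graph) alive - a
sketch, with CPython's immediate reclamation shown:

import weakref

class Node:
    def __init__(self, parent=None):
        self.children = []
        # weak back-reference, so child -> parent does not form a cycle
        self._parent = weakref.ref(parent) if parent is not None else None
        if parent is not None:
            parent.children.append(self)

    @property
    def parent(self):
        return self._parent() if self._parent is not None else None

root = Node()
child = Node(root)
print(child.parent is root)   # True while something still references root
del root
print(child.parent)           # None: the parent vanished out from under us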

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-02 Thread Chris Angelico
On Sat, Mar 3, 2018 at 11:58 AM, Michael Torrie  wrote:
> On 03/02/2018 08:36 AM, Paul Moore wrote:
>> On 2 March 2018 at 15:09,   wrote:
>>> We must be discussing a different RAII. That is the raison d'etre of RAII: 
>>> RAII directly addresses this problem in an exception-safe way that does not 
>>> burden the resource user at all.
>>
>> RAII works in C++ (where it was initially invented) because it's used
>> with stack-allocated variables that have clearly-defined and limited
>> scope. In my experience writing C++, nobody uses RAII with
>> heap-allocated variables - those require explicit allocation and
>> deallocation and so are equivalent to having an explicit "close()"
>> method in Python (or using __del__ in CPython as it currently exists).
>
> Very good point.  I'm not sure the OP considered this when proposing
> this idea.
>
> That said, In C++, there are a number of ways of making heap-allocated
> objects more RAII-like. These include smart pointers that do reference
> counting and reference borrowing.  They work pretty well when used as
> designed, but they do require some programmer understanding and
> intervention to use correctly and leaks can and do happen.  Smart
> pointers are pretty good hacks as far as they go, without a garbage
> collector.

Ref-counting smart pointers? They might save you the trouble of
calling (the equivalents of) Py_INCREF and Py_DECREF manually, but
they still don't solve reference loops, and they don't deal with the
termination cleanup problem. Smart pointers don't change heap
allocation semantics and don't remove the garbage collector; all they
do is change the way you do refcounting. (Which is not a bad thing,
btw. I've used smart pointers, and they can be very helpful. But
they're no panacea.)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-02 Thread Michael Torrie
On 03/02/2018 08:36 AM, Paul Moore wrote:
> On 2 March 2018 at 15:09,   wrote:
>> We must be discussing a different RAII. That is the raison d'etre of RAII: 
>> RAII directly addresses this problem in an exception-safe way that does not 
>> burden the resource user at all.
> 
> RAII works in C++ (where it was initially invented) because it's used
> with stack-allocated variables that have clearly-defined and limited
> scope. In my experience writing C++, nobody uses RAII with
> heap-allocated variables - those require explicit allocation and
> deallocation and so are equivalent to having an explicit "close()"
> method in Python (or using __del__ in CPython as it currently exists).

Very good point.  I'm not sure the OP considered this when proposing
this idea.

That said, In C++, there are a number of ways of making heap-allocated
objects more RAII-like. These include smart pointers that do reference
counting and reference borrowing.  They work pretty well when used as
designed, but they do require some programmer understanding and
intervention to use correctly and leaks can and do happen.  Smart
pointers are pretty good hacks as far as they go, without a garbage
collector.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-02 Thread Steven D'Aprano
On Fri, 02 Mar 2018 07:09:19 -0800, ooomzay wrote:

[...]
>> If you're going to *require* the programmer to explicitly del the
>> reference:
>>
>> f = open("file")
>> text = f.read()
>> del f
> 
> But I am not! On the contrary RAII frees the programmer from even having
> to remember to close the file. The poster asked what would happen if the
> resource was deliberately kept open by storing a reference at global
> scope.

You say that as if it were difficult to do, requiring the programmer to 
take extraordinary steps of heroic proportion. It doesn't.

It is unbelievably easy to store the reference at global scope, which 
means that the programmer needs to remember to close the file explicitly 
and RAII doesn't help.


> In practice CPython destroys it cleanly on exit - but I am not sure the
> language guarantees this

Typically the OS guarantees that any open files will be closed when the 
application exits. But that *absolutely does not* apply to other 
resources that may need closing, and Python at least doesn't guarantee to 
run __del__ during application shutdown as it may not be able to.

(It is better now than it used to be, but there are still scenarios where 
the resources needed to run __del__ are gone before __del__ is called.)


> - in any case RAII won't make things any worse
> in this respect. (Logfiles are a common example of such global
> resource.)

I didn't say RAII will make it worse, but neither will it make it better, 
nor  make "with" statements obsolete.

Your justification for requiring RAII is to ensure the timely closure of 
resources -- but to do that, you have to explicitly close or delete the 
resource. It simply isn't true that "RAII frees the programmer from even 
having to remember to close the file".


>> then you might as well require them to explicitly close the file:
>> 
>> f = open("file")
>> text = f.read()
>> f.close()
>> 
>> which we know from many years experience is not satisfactory except for
>> the simplest scripts that don't need to care about resource management.
>> That's the fatal flaw in RAII:
> 
> We must be discussing a different RAII. That is the raison d'etre of
> RAII: RAII directly addresses this problem in an exception-safe way that
> does not burden the resource user at all.

But as you said yourself, if the resource is held open in a global 
reference, it will stay open indefinitely. And remember, global in this 
context doesn't just mean the main module of your application, but 
*every* module you import.

I think you have just put your finger on the difference between what RAII 
*claims* to do and what it *actually* can do.



-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-02 Thread ooomzay
On Friday, March 2, 2018 at 2:43:09 PM UTC, Chris Angelico wrote:
> On Sat, Mar 3, 2018 at 1:18 AM,  ooomzay wrote:
> > On Friday, March 2, 2018 at 8:16:22 AM UTC, Paul Rubin wrote:[snip]
> >> controlling stuff like file handles
> >> with scopes (like with "with") is fine.
> >
> > How does with work for non-trivial/composite objects that 
> > represent/reference multiple resources or even a hierarchy of such objects 
> > where all the resources held must be released in a timely fashion when 
> > finished with?
> >
> 
> Can you give me an example that works with RAII but doesn't work in a
> 'with' statement?

My claim is about the relative elegance/pythonic nature of RAII w.r.t. 'with'. 
Probably, with enough coding and no omissions or mistakes, 'with' could 
achieve the same result that RAII achieves automatically without any 
burden on the user.

Consider the hierarchy/tree of resource-holding objects postulated above... 
Presumably __enter__, __exit__ (or a functional equivalent) would have to be 
implemented at every layer, breaking the invariance, thus complicating and 
making every object it touches less robust.
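
As a sketch of the contrast (RAIIFile here is the same kind of __init__/__del__
wrapper discussed earlier in the thread, and everything relies on CPython's
reference counting): a composite that owns such members needs no extra
plumbing, whereas a with-based version must grow __enter__/__exit__ at each
layer to forward the cleanup.

class RAIIFile:
    def __init__(self, name, mode):
        self._f = open(name, mode)
    def write(self, text):
        self._f.write(text)
    def __del__(self):
        f = getattr(self, "_f", None)
        if f is not None and not f.closed:
            f.close()

class Session:
    """Composite resource holder: no __enter__/__exit__ needed anywhere."""
    def __init__(self):
        self.log = RAIIFile("session.log", "w")
        self.data = RAIIFile("session.dat", "w")

s = Session()
s.log.write("hello\n")
del s   # under CPython both files close here, cascading through the members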

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-02 Thread Paul Moore
On 2 March 2018 at 15:09,   wrote:
> We must be discussing a different RAII. That is the raison d'etre of RAII: 
> RAII directly addresses this problem in an exception-safe way that does not 
> burden the resource user at all.

RAII works in C++ (where it was initially invented) because it's used
with stack-allocated variables that have clearly-defined and limited
scope. In my experience writing C++, nobody uses RAII with
heap-allocated variables - those require explicit allocation and
deallocation and so are equivalent to having an explicit "close()"
method in Python (or using __del__ in CPython as it currently exists).

Python doesn't have stack allocation, nor does it have a deterministic
order of deletion of objects when their last reference goes out of
scope (which can happen simultaneously for many objects):

class Tracker:
def __init__(self, n):
self.n = n
def __del__(self):
print("Deleting instance", self.n)

def f():
a = Tracker(1)
b = Tracker(2)

f()

The language doesn't guarantee that a is removed before b. Are you
proposing to make that change to the language as well?

Also Python only has function scope, so variables local to a
smaller-than-the-function block of code aren't possible. That's
something that is used in C++ a lot to limit the lifetime of resources
under RAII. How do you propose to address that (without needing
explicit del statements)? That's why the with statement exists, to
clearly define lifetimes smaller than "the enclosing function". Your
proposal doesn't offer any equivalent (other than an extra function).

Consider C++:

void fn() {
    for (int i = 0; i < 10000; ++i) {
        char name[100];
        sprintf(name, "file%d.txt", i);
        File f(name); // I don't think std::ofstream supports RAII
        f << "Some text";
    }
}

Or (real Python):

def fn():
    for i in range(10000):
        with open(f"file{i}.txt", "w") as f:
            f.write("Some text")

How would you write this in your RAII style - without leaving 10,000
file descriptors open until the end of the function?

def loop_body(i):
    f = open(f"file{i}.txt", "w")
    f.write("Some text")

def fn():
    for i in range(10000):
        loop_body(i)

That's both less efficient (function calls have a cost) and less
maintainable than the with-statement version.

Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RFC: Proposal: Deterministic Object Destruction

2018-03-02 Thread ooomzay
On Friday, March 2, 2018 at 4:35:41 AM UTC, Steven D'Aprano wrote:
> On Thu, 01 Mar 2018 16:26:47 -0800, ooomzay wrote:
> 
> >> >> When does the destination file get closed?
> >> >
> >> > When you execute:-
> >> >
> >> >del dst
> >> >
> >> > or:-
> >> >
> >> >dst = something_else
> >> 
> >> What if you don't?
> > 
> > Then the resource will remain open until your script exits at which
> > point it is probably not very well defined exactly when or even if the
> > destructor/__del__ will be called.
> > 
> > I.e. Don't do this! Did you have some realistic case in mind or are you
> > just probing the behaviour?
> 
> 
> If you're going to *require* the programmer to explicitly del the 
> reference:
>
> f = open("file")
> text = f.read()
> del f

But I am not! On the contrary RAII frees the programmer from even having to 
remember to close the file. The poster asked what would happen if the resource 
was deliberately kept open by storing a reference at global scope. 

In practice CPython destroys it cleanly on exit - but I am not sure the 
language guarantees this - in any case RAII won't make things any worse in this 
respect. (Logfiles are a common example of such global resource.)

> then you might as well require them to explicitly close the file:
> 
> f = open("file")
> text = f.read()
> f.close()
> 
> which we know from many years experience is not satisfactory except for 
> the simplest scripts that don't need to care about resource management.
> That's the fatal flaw in RAII: 

We must be discussing a different RAII. That is the raison d'etre of RAII: RAII 
directly addresses this problem in an exception-safe way that does not burden 
the resource user at all.

> the problem is that the lifespan of the resource may 
> not be the same as the lifetime of the object. 
>
> Especially for files, the 
> problem is that the lifespan of resource (the time you are actually using 
> it) may be significantly less than the lifespan of the object holding 
> onto that resource. Since there's no way for the interpreter to know 
> whether or not you have finished with the resource, you have a choice:
> 
> - close the resource yourself (either explicitly with file.close(), 
>   or implicitly with a context manager);
> 
> - or keep the resource open indefinitely, until such eventual time
>   that the object is garbage collected and the resource closed.

Hence my PEP! It enables RAII. The interpreter merely has to call __del__ as 
soon as the object is no longer referenced (as CPython already does).
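
A minimal demonstration of the behaviour being relied on (CPython today; the
language spec does not promise this timing, which is exactly what the PEP
would change):

class Probe:
    def __init__(self, label):
        self.label = label
    def __del__(self):
        print("releasing", self.label)

p = Probe("A")
p = Probe("B")    # CPython prints "releasing A" here, before the next statement
print("rebound")
del p             # and "releasing B" here
print("done")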
-- 
https://mail.python.org/mailman/listinfo/python-list


  1   2   >