Re: del behavior 2

2009-01-08 Thread Eric Snow
On Jan 7, 3:23 pm, Martin v. Löwis mar...@v.loewis.de wrote:
  Thanks for the responses.  What I mean is when a python process is
  interrupted and does not get a chance to clean everything up then what
  is a good way to do so?  For instance, I have a script that uses child
  ptys to facilitate ssh connections (I'm using pxssh).  When I ^C the
  python process I am left with the child processes running and the ssh
  connections open.  Naturally I run out of ttys if this happens too
  much, which I have had happen.  So if python does not get a chance to
  take care of those, what is a good way to do so?  Does a try/finally
  or a with statement address that?  Thanks!

 That's strange. When the parent process terminates, the tty master
 should get closed, causing the slave to be closed as well, in addition
 to sending a SIGHUP signal to the child, which ssh should interpret
 as a request to terminate.

 Perhaps the problem is that the master socket *doesn't* get closed?
 I see that pexpect closes all file descriptors in the child before
 invoking exec. Could it be that you are starting additional child
 processes which inherit the master socket?

 Regards,
 Martin

Thanks.  I'll look into that.

-eric


Re: del behavior 2

2009-01-08 Thread Eric Snow
On Jan 7, 12:42 pm, Eric Snow es...@verio.net wrote:
 I was reading in the documentation about __del__ and have a couple of
 questions.  Here is what I was looking at:

 http://docs.python.org/reference/datamodel.html#object.__del__

 My second question is about the following:

 It is not guaranteed that __del__() methods are called for objects
 that still exist when the interpreter exits.

 I understand that and have seen it too.  That's fine.  But how do any
 of you deal with things that are left open because you did not get a
 chance to close them?  How do you clean up after the fact?  Do you
 simply keep track externally the things that need to be cleaned up if
 __del__ doesn't get a chance?  Any ideas?  Thanks

 -eric

So I see a couple of options here.  Thanks for all the suggestions
everyone.  Here is what I have:

- use try/finally to clean things up
- set a handler using signal.signal to clean everything up

There is also using try/except for more specific behavior, like catching
KeyboardInterrupt, but I am not sure I need that much specificity.
Again, thanks for all the great help.  Really cleared things up for
me.
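
For the record, here is a rough sketch of what I have in mind (the
session objects and the cleanup are just placeholders, and I haven't
tested the signal handling with pxssh yet):

import signal
import sys

sessions = []  # whatever needs cleaning up (e.g. pxssh sessions)

def cleanup():
    for s in sessions:
        try:
            s.close()  # close the child pty / ssh connection
        except Exception:
            pass

def on_sigterm(signum, frame):
    # turn SIGTERM into a normal SystemExit so the finally block runs
    sys.exit(1)

signal.signal(signal.SIGTERM, on_sigterm)

try:
    pass  # main work: open sessions, append them to `sessions`, ...
finally:
    cleanup()  # runs on normal exit, on ^C (KeyboardInterrupt) and on SIGTERM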

-eric


Re: del behavior 2

2009-01-07 Thread MRAB

Eric Snow wrote:

I was reading in the documentation about __del__ and have a couple of
questions.  Here is what I was looking at:

http://docs.python.org/reference/datamodel.html#object.__del__

My second question is about the following:

It is not guaranteed that __del__() methods are called for objects
that still exist when the interpreter exits.

I understand that and have seen it too.  That's fine.  But how do any
of you deal with things that are left open because you did not get a
chance to close them?  How do you clean up after the fact?  Do you
simply keep track externally the things that need to be cleaned up if
__del__ doesn't get a chance?  Any ideas?  Thanks


There's the 'with' statement and try...finally.


Re: del behavior 2

2009-01-07 Thread Chris Rebert
On Wed, Jan 7, 2009 at 11:42 AM, Eric Snow es...@verio.net wrote:
 I was reading in the documentation about __del__ and have a couple of
 questions.  Here is what I was looking at:

 http://docs.python.org/reference/datamodel.html#object.__del__

 My second question is about the following:

 It is not guaranteed that __del__() methods are called for objects
 that still exist when the interpreter exits.

 I understand that and have seen it too.  That's fine.  But how do any
 of you deal with things that are left open because you did not get a
 chance to close them?  How do you clean up after the fact?  Do you
 simply keep track externally the things that need to be cleaned up if
 __del__ doesn't get a chance?  Any ideas?  Thanks

As you point out, __del__ is not a reliable way to free limited
resources. Instead, one generally includes logic to explicitly free
the resources. This is generally done using try-finally or the `with`
statement.

Example:

def mess_with_file(f):
    try:
        pass  # fiddle with the file
    finally:
        f.close()  # guarantee that the file gets closed

def mess_with_other_file(filename):
    with open(filename) as f:
        pass  # do stuff with file
    x = None  # the file has now been closed, and it'll be closed even
              # if an exception gets raised; the context manager
              # (see PEP 343) for the `file` type guarantees this for us
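
If the object you need to release doesn't support the `with` statement
itself but does have a close() method, contextlib.closing (2.5 and up)
gives you the same guarantee. Rough sketch:

from contextlib import closing
import urllib

# closing() wraps anything that has a close() method in a context
# manager, so close() is called even if an exception gets raised
with closing(urllib.urlopen('http://www.python.org')) as page:
    print page.read()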

Cheers,
Chris

-- 
Follow the path of the Iguana...
http://rebertia.com


Re: del behavior 2

2009-01-07 Thread Martin v. Löwis
 I understand that and have seen it too.  That's fine.  But how do any
 of you deal with things that are left open because you did not get a
 chance to close them?  How do you clean up after the fact?  Do you
 simply keep track externally the things that need to be cleaned up if
 __del__ doesn't get a chance?  Any ideas?  Thanks

You should try to write your program so that any kind of process exit
will not need any cleanup. For many kinds of things, this will work
automatically on most operating systems. For example, file handles and
network connections get automatically closed - so you don't absolutely
have to close them if your program exits abnormally. Likewise, database
connections will shut down properly, and windows will close just fine.

What kind of thing do you have that remains open even after the
process terminates?

Regards,
Martin


Re: del behavior 2

2009-01-07 Thread Eric Snow
On Jan 7, 12:57 pm, Chris Rebert c...@rebertia.com wrote:
 On Wed, Jan 7, 2009 at 11:42 AM, Eric Snow es...@verio.net wrote:
  I was reading in the documentation about __del__ and have a couple of
  questions.  Here is what I was looking at:

 http://docs.python.org/reference/datamodel.html#object.__del__

  My second question is about the following:

  It is not guaranteed that __del__() methods are called for objects
  that still exist when the interpreter exits.

  I understand that and have seen it too.  That's fine.  But how do any
  of you deal with things that are left open because you did not get a
  chance to close them?  How do you clean up after the fact?  Do you
  simply keep track externally the things that need to be cleaned up if
  __del__ doesn't get a chance?  Any ideas?  Thanks

 As you point out, __del__ is not a reliable way to free limited
 resources. Instead, one generally includes logic to explicitly free
 the resources. This is generally done using try-finally or the `with`
 statement.

 Example:

 def mess_with_file(f):
     try:
         pass  # fiddle with the file
     finally:
         f.close()  # guarantee that the file gets closed

 def mess_with_other_file(filename):
     with open(filename) as f:
         pass  # do stuff with file
     x = None  # the file has now been closed, and it'll be closed even
               # if an exception gets raised; the context manager
               # (see PEP 343) for the `file` type guarantees this for us

 Cheers,
 Chris

 --
 Follow the path of the Iguana...http://rebertia.com

Thanks for the responses.  What I mean is when a python process is
interrupted and does not get a chance to clean everything up then what
is a good way to do so?  For instance, I have a script that uses child
ptys to facilitate ssh connections (I'm using pxssh).  When I ^C the
python process I am left with the child processes running and the ssh
connections open.  Naturally I run out of ttys if this happens too
much, which I have had happen.  So if python does not get a chance to
take care of those, what is a good way to do so?  Does a try/finally
or a with statement address that?  Thanks!

-eric


Re: del behavior 2

2009-01-07 Thread Eric Snow
On Jan 7, 1:03 pm, Eric Snow es...@verio.net wrote:
 On Jan 7, 12:57 pm, Chris Rebert c...@rebertia.com wrote:



  On Wed, Jan 7, 2009 at 11:42 AM, Eric Snow es...@verio.net wrote:
   I was reading in the documentation about __del__ and have a couple of
   questions.  Here is what I was looking at:

  http://docs.python.org/reference/datamodel.html#object.__del__

   My second question is about the following:

   It is not guaranteed that __del__() methods are called for objects
   that still exist when the interpreter exits.

   I understand that and have seen it too.  That's fine.  But how do any
   of you deal with things that are left open because you did not get a
   chance to close them?  How do you clean up after the fact?  Do you
   simply keep track externally the things that need to be cleaned up if
   __del__ doesn't get a chance?  Any ideas?  Thanks

  As you point out, __del__ is not a reliable way to free limited
  resources. Instead, one generally includes logic to explicitly free
  the resources. This is generally done using try-finally or the `with`
  statement.

  Example:

  def mess_with_file(f):
      try:
          pass  # fiddle with the file
      finally:
          f.close()  # guarantee that the file gets closed

  def mess_with_other_file(filename):
      with open(filename) as f:
          pass  # do stuff with file
      x = None  # the file has now been closed, and it'll be closed even
                # if an exception gets raised; the context manager
                # (see PEP 343) for the `file` type guarantees this for us

  Cheers,
  Chris

  --
  Follow the path of the Iguana...http://rebertia.com

 Thanks for the responses.  What I mean is when a python process is
 interrupted and does not get a chance to clean everything up then what
 is a good way to do so?  For instance, I have a script that uses child
 ptys to facilitate ssh connections (I'm using pxssh).  When I ^C the
 python process I am left with the child processes running and the ssh
 connections open.  Naturally I run out of ttys if this happens too
 much, which I have had happen.  So if python does not get a chance to
 take care of those, what is a good way to do so?  Does a try/finally
 or a with statement address that?  Thanks!

 -eric

pxssh uses pexpect, which uses pty.fork.


Re: del behavior 2

2009-01-07 Thread Marc 'BlackJack' Rintsch
On Wed, 07 Jan 2009 12:03:36 -0800, Eric Snow wrote:

 Thanks for the responses.  What I mean is when a python process is
 interrupted and does not get a chance to clean everything up then what
 is a good way to do so?

Well, if it doesn't get a chance then it doesn't get a chance.  ;-)

 For instance, I have a script that uses child
 ptys to facilitate ssh connections (I'm using pxssh).  When I ^C the
 python process I am left with the child processes running and the ssh
 connections open.  Naturally I run out of ttys if this happens too much,
 which I have had happen.  So if python does not get a chance to take
 care of those, what is a good way to do so?  Does a try/finally or a
 with statement address that?  Thanks!

If you clean up the mess in the ``finally`` branch: yes.  Ctrl+C 
raises a `KeyboardInterrupt`.
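
Untested sketch (the pxssh calls are from memory, and host, user and
password are placeholders):

import pxssh

session = pxssh.pxssh()
try:
    session.login('somehost', 'someuser', 'secret')
    session.sendline('uptime')
    session.prompt()
    print session.before
finally:
    session.close()  # runs on normal exit *and* on Ctrl+C, so the
                     # child pty / ssh connection isn't left behind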

Ciao,
Marc 'BlackJack' Rintsch


Re: del behavior 2

2009-01-07 Thread Martin v. Löwis
 Thanks for the responses.  What I mean is when a python process is
 interrupted and does not get a chance to clean everything up then what
 is a good way to do so?  For instance, I have a script that uses child
 ptys to facilitate ssh connections (I'm using pxssh).  When I ^C the
 python process I am left with the child processes running and the ssh
 connections open.  Naturally I run out of ttys if this happens too
 much, which I have had happen.  So if python does not get a chance to
 take care of those, what is a good way to do so?  Does a try/finally
 or a with statement address that?  Thanks!

That's strange. When the parent process terminates, the tty master
should get closed, causing the slave to be closed as well, in addition
to sending a SIGHUP signal to the child, which ssh should interpret
as a request to terminate.

Perhaps the problem is that the master socket *doesn't* get closed?
I see that pexpect closes all file descriptors in the child before
invoking exec. Could it be that you are starting additional child
processes which inherit the master socket?
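
If so, spawning those extra children with close_fds=True should prevent
it (untested sketch; the command is just a placeholder):

import subprocess

# close_fds=True makes the child close every descriptor except
# stdin/stdout/stderr, so it cannot keep the pty master open after
# the main process has died.
subprocess.Popen(['sleep', '60'], close_fds=True)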

Regards,
Martin