Re: [openstack-dev] Cinder: What's the way to do cleanup during service shutdown / restart?

2014-04-08 Thread Duncan Thomas
Certainly adding an explicit shutdown or terminate call to the driver
seems reasonable - a blueprint to this effect would be welcome.
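
As an illustration of what such an explicit hook might look like, here is a
minimal, self-contained sketch; the shutdown() method and the stand-in class
are hypothetical (no such driver interface existed at the time of this
thread), while _mounted_shares and _get_mount_point_for_share mirror the
RemoteFsDriver helpers quoted further down.

    # Sketch only: shutdown() and its call site in the volume service are
    # hypothetical, not an existing Cinder interface.
    import subprocess


    class RemoteFsDriverSketch(object):
        """Stand-in for nfs.RemoteFsDriver, just enough to show the hook."""

        def __init__(self, mount_base='/var/lib/cinder/mnt'):
            self._mount_base = mount_base
            self._mounted_shares = []

        def _get_mount_point_for_share(self, share):
            return '%s/%s' % (self._mount_base, share.replace('/', '_'))

        def shutdown(self):
            """Hypothetical hook the service would invoke on clean shutdown."""
            for share in list(self._mounted_shares):
                mount_path = self._get_mount_point_for_share(share)
                # Best-effort unmount; never let cleanup block the shutdown path.
                subprocess.call(['umount', mount_path])
                self._mounted_shares.remove(share)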

On 7 April 2014 06:13, Deepak Shetty dpkshe...@gmail.com wrote:
 To add:
 I was looking at the Nova code and it seems there is a framework for cleanup
 using the terminate calls. IIUC, this works because libvirt calls terminate
 on the Nova instance when the VM is shutting down/being destroyed, hence
 terminate seems to be a good place to do cleanup on the Nova side. Something
 similar is missing on the Cinder side, and the __del__ way of cleanup isn't
 working, as I posted above.


 On Mon, Apr 7, 2014 at 10:24 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Duncan,
 Thanks for your response. Though I agree with what you said, I am still
 trying to understand why I see what I see, i.e. why the base class
 variable (_mounted_shares) shows up empty in __del__.
 I am assuming here that the object is not completely gone/deleted, so its
 vars must still be in scope and valid, but the debug prints suggest
 otherwise :(


 On Sun, Apr 6, 2014 at 12:07 PM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 I'm not yet sure of the right way to do cleanup on shutdown, but any
 driver should do as much checking as possible on startup - the service
 might not have gone down cleanly (kill -9, SEGFAULT, etc), or
 something might have gone wrong during clean shutdown. The driver
 coming up should therefore not make any assumptions it doesn't
 absolutely have to, but rather should check and attempt cleanup
 itself, on startup.
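
A minimal sketch of that startup-side cleanup, for illustration only: the
mount base path is an assumption, and do_setup() (the usual Cinder driver
initialization entry point) is where something like this would be called
from; scanning /proc/mounts is just one best-effort way to find leftovers.

    # Sketch: best-effort cleanup of mounts left behind by a previous run,
    # intended to be called from the driver's startup path (e.g. do_setup()).
    # The mount base below is an assumed default, not a Cinder constant.
    import subprocess


    def cleanup_stale_mounts(mount_base='/var/lib/cinder/mnt'):
        """Unmount anything under mount_base that a crashed c-vol left mounted."""
        stale = []
        with open('/proc/mounts') as mounts:
            for line in mounts:
                mount_point = line.split()[1]
                if mount_point.startswith(mount_base):
                    stale.append(mount_point)
        for mount_point in stale:
            # Lazy unmount so a hung gluster/NFS server cannot block startup.
            subprocess.call(['umount', '-l', mount_point])


    if __name__ == '__main__':
        cleanup_stale_mounts()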

 On 3 April 2014 15:14, Deepak Shetty dpkshe...@gmail.com wrote:
 
  Hi,
  I am looking to umount the glusterfs shares that are mounted as part of
  the gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in a
  devstack env) or when the c-vol service is being shut down.
 
  I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it
  didn't work:

      def __del__(self):
          LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s") %
                   self._mounted_shares)
          for share in self._mounted_shares:
              mount_path = self._get_mount_point_for_share(share)
              command = ['umount', mount_path]
              self._do_umount(command, True, share)
 
  self._mounted_shares is defined in the base class (RemoteFsDriver)
 
  ^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-] Caught SIGINT, stopping children
  2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
  2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
  2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-] Waiting on 2 children to exit
  2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-] Child 30185 exited with status 1
  2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-] DPKS: Inside __del__ Hurray!, shares=[]
  2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-] Child 30186 exited with status 1
  Exception TypeError: "'NoneType' object is not callable" in <bound method GlusterfsDriver.__del__ of <cinder.volume.drivers.glusterfs.GlusterfsDriver object at 0x2777ed0>> ignored
  [stack@devstack-vm tempest]$
 
  So _mounted_shares is empty ([]), which isn't true since I have 2
  glusterfs shares mounted, and when I print _mounted_shares in other parts
  of the code it does show me the right thing, as below...
 
  From volume/drivers/glusterfs.py @ line 1062:
  LOG.debug(_('Available shares: %s') % self._mounted_shares)
 
  which dumps the debug print as below...

  2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares: [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1'] from (pid=30185) _ensure_shares_mounted /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061
 
  This brings up a few questions (I am using a devstack env):

  1) Is __del__ the right way to do cleanup for a Cinder driver? I have 2
  gluster backends set up, hence 2 cinder-volume instances, but I see
  __del__ being called only once (as per the debug prints above).
  2) I tried atexit and registering a function to do the cleanup. Ctrl-C'ing
  c-vol (from screen) gives the same issue (shares is empty, []), but this
  time I see that my atexit handler is called twice (once for each backend).
  3) In general, what's the right way to do cleanup inside a Cinder volume
  driver when a service is going down or being restarted?
  4) The solution should work in both devstack (Ctrl-C to shut down the
  c-vol service) and production (where we restart the c-vol service).
 
  Would appreciate a response
 
  thanx,
  deepak
 
 



 --
 Duncan Thomas


Re: [openstack-dev] Cinder: What's the way to do cleanup during service shutdown / restart?

2014-04-06 Thread Deepak Shetty
Duncan,
Thanks for your response. Though I agree with what you said, I am still
trying to understand why I see what I see, i.e. why the base class
variable (_mounted_shares) shows up empty in __del__.
I am assuming here that the object is not completely gone/deleted, so its
vars must still be in scope and valid, but the debug prints suggest
otherwise :(


On Sun, Apr 6, 2014 at 12:07 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 I'm not yet sure of the right way to do cleanup on shutdown, but any
 driver should do as much checking as possible on startup - the service
 might not have gone down cleanly (kill -9, SEGFAULT, etc), or
 something might have gone wrong during clean shutdown. The driver
 coming up should therefore not make any assumptions it doesn't
 absolutely have to, but rather should check and attempt cleanup
 itself, on startup.

 On 3 April 2014 15:14, Deepak Shetty dpkshe...@gmail.com wrote:
 
  Hi,
  I am looking to umount the glusterfs shares that are mounted as part of
  the gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in a
  devstack env) or when the c-vol service is being shut down.
 
  I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it
  didn't work:

      def __del__(self):
          LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s") %
                   self._mounted_shares)
          for share in self._mounted_shares:
              mount_path = self._get_mount_point_for_share(share)
              command = ['umount', mount_path]
              self._do_umount(command, True, share)
 
  self._mounted_shares is defined in the base class (RemoteFsDriver)
 
  ^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-] Caught SIGINT, stopping children
  2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
  2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
  2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-] Waiting on 2 children to exit
  2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-] Child 30185 exited with status 1
  2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-] DPKS: Inside __del__ Hurray!, shares=[]
  2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-] Child 30186 exited with status 1
  Exception TypeError: "'NoneType' object is not callable" in <bound method GlusterfsDriver.__del__ of <cinder.volume.drivers.glusterfs.GlusterfsDriver object at 0x2777ed0>> ignored
  [stack@devstack-vm tempest]$
 
  So _mounted_shares is empty ([]), which isn't true since I have 2
  glusterfs shares mounted, and when I print _mounted_shares in other parts
  of the code it does show me the right thing, as below...
 
  From volume/drivers/glusterfs.py @ line 1062:
  LOG.debug(_('Available shares: %s') % self._mounted_shares)
 
  which dumps the debug print as below...

  2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares: [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1'] from (pid=30185) _ensure_shares_mounted /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061
 
  This brings up a few questions (I am using a devstack env):

  1) Is __del__ the right way to do cleanup for a Cinder driver? I have 2
  gluster backends set up, hence 2 cinder-volume instances, but I see
  __del__ being called only once (as per the debug prints above).
  2) I tried atexit and registering a function to do the cleanup. Ctrl-C'ing
  c-vol (from screen) gives the same issue (shares is empty, []), but this
  time I see that my atexit handler is called twice (once for each backend).
  3) In general, what's the right way to do cleanup inside a Cinder volume
  driver when a service is going down or being restarted?
  4) The solution should work in both devstack (Ctrl-C to shut down the
  c-vol service) and production (where we restart the c-vol service).
 
  Would appreciate a response
 
  thanx,
  deepak
 
 



 --
 Duncan Thomas



Re: [openstack-dev] Cinder: What's the way to do cleanup during service shutdown / restart?

2014-04-06 Thread Deepak Shetty
To add:
I was looking at the Nova code and it seems there is a framework for
cleanup using the terminate calls. IIUC, this works because libvirt calls
terminate on the Nova instance when the VM is shutting down/being destroyed,
hence terminate seems to be a good place to do cleanup on the Nova side.
Something similar is missing on the Cinder side, and the __del__ way of
cleanup isn't working, as I posted above.
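
In the absence of such a hook, one generic stop-gap is a process-level
signal handler that unmounts whatever was tracked; the sketch below is an
illustration only, and whether it would coexist cleanly with the signal
handling the oslo service launcher already does (see the Caught
SIGINT/SIGTERM lines in the log further down) is exactly the open question
here.

    # Generic sketch: signal handlers that unmount tracked shares before
    # exiting.  MOUNTED is a stand-in for driver state; a real driver would
    # consult self._mounted_shares instead.
    import signal
    import subprocess
    import sys

    MOUNTED = {}  # share -> mount point, maintained as shares get mounted


    def _cleanup_and_exit(signum, frame):
        for share, mount_path in list(MOUNTED.items()):
            subprocess.call(['umount', mount_path])  # best effort
            MOUNTED.pop(share, None)
        sys.exit(0)


    signal.signal(signal.SIGTERM, _cleanup_and_exit)
    signal.signal(signal.SIGINT, _cleanup_and_exit)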


On Mon, Apr 7, 2014 at 10:24 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Duncan,
 Thanks for your response. Though I agree with what you said, I am still
 trying to understand why I see what I see, i.e. why the base class
 variable (_mounted_shares) shows up empty in __del__.
 I am assuming here that the object is not completely gone/deleted, so its
 vars must still be in scope and valid, but the debug prints suggest
 otherwise :(


 On Sun, Apr 6, 2014 at 12:07 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 I'm not yet sure of the right way to do cleanup on shutdown, but any
 driver should do as much checking as possible on startup - the service
 might not have gone down cleanly (kill -9, SEGFAULT, etc), or
 something might have gone wrong during clean shutdown. The driver
 coming up should therefore not make any assumptions it doesn't
 absolutely have to, but rather should check and attempt cleanup
 itself, on startup.

 On 3 April 2014 15:14, Deepak Shetty dpkshe...@gmail.com wrote:
 
  Hi,
  I am looking to umount the glusterfs shares that are mounted as part of
  the gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in a
  devstack env) or when the c-vol service is being shut down.
 
  I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it
  didn't work:

      def __del__(self):
          LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s") %
                   self._mounted_shares)
          for share in self._mounted_shares:
              mount_path = self._get_mount_point_for_share(share)
              command = ['umount', mount_path]
              self._do_umount(command, True, share)
 
  self._mounted_shares is defined in the base class (RemoteFsDriver)
 
  ^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-] Caught SIGINT, stopping children
  2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
  2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
  2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-] Waiting on 2 children to exit
  2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-] Child 30185 exited with status 1
  2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-] DPKS: Inside __del__ Hurray!, shares=[]
  2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-] Child 30186 exited with status 1
  Exception TypeError: "'NoneType' object is not callable" in <bound method GlusterfsDriver.__del__ of <cinder.volume.drivers.glusterfs.GlusterfsDriver object at 0x2777ed0>> ignored
  [stack@devstack-vm tempest]$
 
  So _mounted_shares is empty ([]), which isn't true since I have 2
  glusterfs shares mounted, and when I print _mounted_shares in other parts
  of the code it does show me the right thing, as below...
 
  From volume/drivers/glusterfs.py @ line 1062:
  LOG.debug(_('Available shares: %s') % self._mounted_shares)
 
  which dumps the debug print as below...

  2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares: [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1'] from (pid=30185) _ensure_shares_mounted /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061
 
  This brings up a few questions (I am using a devstack env):

  1) Is __del__ the right way to do cleanup for a Cinder driver? I have 2
  gluster backends set up, hence 2 cinder-volume instances, but I see
  __del__ being called only once (as per the debug prints above).
  2) I tried atexit and registering a function to do the cleanup. Ctrl-C'ing
  c-vol (from screen) gives the same issue (shares is empty, []), but this
  time I see that my atexit handler is called twice (once for each backend).
  3) In general, what's the right way to do cleanup inside a Cinder volume
  driver when a service is going down or being restarted?
  4) The solution should work in both devstack (Ctrl-C to shut down the
  c-vol service) and production (where we restart the c-vol service).
 
  Would appreciate a response
 
  thanx,
  deepak
 
 



 --
 Duncan Thomas




[openstack-dev] [Cinder] What's the way to do cleanup during service shutdown / restart?

2014-04-04 Thread Deepak Shetty
Resending it with the correct Cinder prefix in the subject.

thanx,
deepak


On Thu, Apr 3, 2014 at 7:44 PM, Deepak Shetty dpkshe...@gmail.com wrote:


 Hi,
 I am looking to umount the glusterfs shares that are mounted as part of
 the gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in a
 devstack env) or when the c-vol service is being shut down.

 I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it
 didn't work:

     def __del__(self):
         LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s") %
                  self._mounted_shares)
         for share in self._mounted_shares:
             mount_path = self._get_mount_point_for_share(share)
             command = ['umount', mount_path]
             self._do_umount(command, True, share)

 self._mounted_shares is defined in the base class (RemoteFsDriver)

1. ^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-] Caught SIGINT, stopping children
2. 2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
3. 2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
4. 2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-] Waiting on 2 children to exit
5. 2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-] Child 30185 exited with status 1
6. 2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-] DPKS: Inside __del__ Hurray!, shares=[]
7. 2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-] Child 30186 exited with status 1
8. Exception TypeError: "'NoneType' object is not callable" in <bound method GlusterfsDriver.__del__ of <cinder.volume.drivers.glusterfs.GlusterfsDriver object at 0x2777ed0>> ignored
9. [stack@devstack-vm tempest]$

 So _mounted_shares is empty ([]), which isn't true since I have 2
 glusterfs shares mounted, and when I print _mounted_shares in other parts of
 the code it does show me the right thing, as below...

 From volume/drivers/glusterfs.py @ line 1062:
 LOG.debug(_('Available shares: %s') % self._mounted_shares)

 which dumps the debug print as below...

 2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares: [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1'] from (pid=30185) _ensure_shares_mounted /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061

 This brings up a few questions (I am using a devstack env):

 1) Is __del__ the right way to do cleanup for a Cinder driver? I have 2
 gluster backends set up, hence 2 cinder-volume instances, but I see __del__
 being called only once (as per the debug prints above).
 2) I tried atexit and registering a function to do the cleanup. Ctrl-C'ing
 c-vol (from screen) gives the same issue (shares is empty, []), but this
 time I see that my atexit handler is called twice (once for each backend).
 3) In general, what's the right way to do cleanup inside a Cinder volume
 driver when a service is going down or being restarted?
 4) The solution should work in both devstack (Ctrl-C to shut down the c-vol
 service) and production (where we restart the c-vol service).

 Would appreciate a response

 thanx,
 deepak




[openstack-dev] Cinder: What's the way to do cleanup during service shutdown / restart?

2014-04-03 Thread Deepak Shetty
Hi,
I am looking to umount the glusterfs shares that are mounted as part of the
gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in a devstack
env) or when the c-vol service is being shut down.

I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it didn't
work:

    def __del__(self):
        LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s") %
                 self._mounted_shares)
        for share in self._mounted_shares:
            mount_path = self._get_mount_point_for_share(share)
            command = ['umount', mount_path]
            self._do_umount(command, True, share)

self._mounted_shares is defined in the base class (RemoteFsDriver)

   1. ^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-] Caught SIGINT, stopping children
   2. 2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
   3. 2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
   4. 2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-] Waiting on 2 children to exit
   5. 2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-] Child 30185 exited with status 1
   6. 2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-] DPKS: Inside __del__ Hurray!, shares=[]
   7. 2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-] Child 30186 exited with status 1
   8. Exception TypeError: "'NoneType' object is not callable" in <bound method GlusterfsDriver.__del__ of <cinder.volume.drivers.glusterfs.GlusterfsDriver object at 0x2777ed0>> ignored
   9. [stack@devstack-vm tempest]$

So _mounted_shares is empty ([]), which isn't true since I have 2 glusterfs
shares mounted, and when I print _mounted_shares in other parts of the code,
it does show me the right thing, as below...

From volume/drivers/glusterfs.py @ line 1062:
LOG.debug(_('Available shares: %s') % self._mounted_shares)

which dumps the debug print as below...

2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares: [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1'] from (pid=30185) _ensure_shares_mounted /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061

This brings up a few questions (I am using a devstack env):

1) Is __del__ the right way to do cleanup for a Cinder driver? I have 2
gluster backends set up, hence 2 cinder-volume instances, but I see __del__
being called only once (as per the debug prints above).
2) I tried atexit and registering a function to do the cleanup (a minimal
sketch of that registration pattern follows this list). Ctrl-C'ing c-vol
(from screen) gives the same issue (shares is empty, []), but this time I
see that my atexit handler is called twice (once for each backend).
3) In general, what's the right way to do cleanup inside a Cinder volume
driver when a service is going down or being restarted?
4) The solution should work in both devstack (Ctrl-C to shut down the c-vol
service) and production (where we restart the c-vol service).
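
The registration tried in (2) would look roughly like the sketch below
(reconstructed for illustration; the actual handler is not shown in this
thread). The registration itself is straightforward; the reported problem is
that by the time the handler runs, _mounted_shares is already empty.

    # Sketch of registering a cleanup function with atexit, as described in
    # (2) above.  The driver argument is any object exposing _mounted_shares
    # and _get_mount_point_for_share, as RemoteFsDriver does.
    import atexit
    import subprocess


    def register_unmount_on_exit(driver):
        def _unmount_all():
            for share in list(driver._mounted_shares):
                mount_path = driver._get_mount_point_for_share(share)
                subprocess.call(['umount', mount_path])  # best effort
        atexit.register(_unmount_all)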

Would appreciate a response

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev