Re: [openstack-dev] [nova] Deleting 'default' security group for deleted tenant with nova-net

2015-05-07 Thread Chris St. Pierre
Jinkies, that sounds like *work*. Got any links to docs I can start diving
into? In particular, keystone audit events and anything that might be handy
about the solution proposal you mention. Keystone is mostly foreign
territory to me so some learning will be in order.

Thanks!

On Thu, May 7, 2015 at 12:49 PM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 Hi Chris,

 So there is no rule saying you can't ask keystone. However, we do emit
 audit events (which need to be configured) to the message bus when
 tenants (or, in v3 parlance, projects) are deleted. This allows nova to
 mark things for cleanup or do the cleanup directly.

 There have been a few conversations about this, but we haven't made
 significant progress (as far as I know) on this topic.

 The best solution proposal (IIRC) was that we need to create a listener
 or similar that the other services could hook a callback into, which
 would do the cleanup directly rather than block the main API on it.

 Keystone is open to these improvements and ideas. It just doesn't scale
 if every action from every service has to ask keystone whether a thing
 still exists. Let's make sure we don't start using a pattern that will
 cause significant issues down the road.
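
A rough sketch of the listener-with-callbacks idea Morgan describes, under some loudly stated assumptions: the event_type string and payload shape below follow Keystone's audit notifications ('identity.project.deleted' carrying the project id in 'resource_info'), and a real listener would be wired to the message bus via oslo.messaging's notification-listener API rather than invoked directly as it is here for illustration.

```python
# Assumption: Keystone's audit notifications arrive with
# event_type 'identity.project.deleted' and the project id in
# payload['resource_info']. Verify against your deployment.
DELETED_PROJECTS = []

class ProjectDeletedEndpoint(object):
    """Endpoint in the shape oslo.messaging notification listeners use."""

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        if event_type == 'identity.project.deleted':
            self._cleanup(payload.get('resource_info'))

    def _cleanup(self, project_id):
        # Placeholder for the real work: e.g. deleting the orphaned
        # 'default' security group rows for this project.
        DELETED_PROJECTS.append(project_id)

# Simulate receiving one audit event off the bus:
endpoint = ProjectDeletedEndpoint()
endpoint.info({}, 'identity.host1', 'identity.project.deleted',
              {'resource_info': 'abc123'}, {})
print(DELETED_PROJECTS)  # ['abc123']
```

The point of this shape is the one Morgan makes: cleanup happens asynchronously off the bus, not by blocking the main API path.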

 --Morgan

 Sent via mobile

 On May 7, 2015, at 09:37, Chris St. Pierre chris.a.st.pie...@gmail.com
 wrote:

 This bug recently came to my attention:
 https://bugs.launchpad.net/nova/+bug/1241587

 I've reopened it, because it is an actual problem, especially for people
 using nova-network and Rally, which creates and deletes tons of tenants.

 The obvious simple solution is to allow deletion of the 'default' security
 group if it is assigned to a tenant that doesn't exist, but I wasn't sure
 what the most acceptable way to do that within Nova would be. Is it
 acceptable to perform a call to the Keystone API to check for the tenant?
 Or is there another, better way?

 Alternatively, is there a better way to tackle the problem altogether?

 Thanks!

 --
 Chris St. Pierre

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Chris St. Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Deleting 'default' security group for deleted tenant with nova-net

2015-05-07 Thread Chris St. Pierre
This bug recently came to my attention:
https://bugs.launchpad.net/nova/+bug/1241587

I've reopened it, because it is an actual problem, especially for people
using nova-network and Rally, which creates and deletes tons of tenants.

The obvious simple solution is to allow deletion of the 'default' security
group if it is assigned to a tenant that doesn't exist, but I wasn't sure
what the most acceptable way to do that within Nova would be. Is it
acceptable to perform a call to the Keystone API to check for the tenant?
Or is there another, better way?
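
For concreteness, a minimal sketch of what the "ask Keystone" check could look like. The FakeKeystone class below stands in for a real python-keystoneclient client (whose tenants.get() raises a NotFound exception for missing tenants); all names here are illustrative, not Nova's actual code, and the thread replies explain why this pattern doesn't scale.

```python
# Hypothetical stand-in for a keystoneclient NotFound exception.
class NotFound(Exception):
    pass

class FakeTenants(object):
    def __init__(self, existing):
        self.existing = set(existing)

    def get(self, tenant_id):
        # Mimics keystoneclient: raise for tenants that don't exist.
        if tenant_id not in self.existing:
            raise NotFound(tenant_id)
        return tenant_id

class FakeKeystone(object):
    def __init__(self, existing):
        self.tenants = FakeTenants(existing)

def tenant_exists(keystone, tenant_id):
    try:
        keystone.tenants.get(tenant_id)
        return True
    except NotFound:
        return False

def can_delete_default_secgroup(keystone, group):
    # Only allow deleting a 'default' group once its tenant is gone.
    if group['name'] != 'default':
        return True
    return not tenant_exists(keystone, group['tenant_id'])

ks = FakeKeystone(existing=['live-tenant'])
print(can_delete_default_secgroup(
    ks, {'name': 'default', 'tenant_id': 'dead-tenant'}))  # True
print(can_delete_default_secgroup(
    ks, {'name': 'default', 'tenant_id': 'live-tenant'}))  # False
```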

Alternatively, is there a better way to tackle the problem altogether?

Thanks!

-- 
Chris St. Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-novaclient] Better wording for secgroup-*-default-rules? help text

2015-03-10 Thread Chris St. Pierre
I've just filed a bug on the confusing wording of help text for the
secgroup-{add,delete,list}-default-rules? commands:
https://bugs.launchpad.net/python-novaclient/+bug/1430354

As I note in the bug, though, I'm not sure the best way to fix this. In an
unconstrained world, I'd like to see something like:

secgroup-add-default-rule   Add a rule to the set of rules that will be
added to the 'default' security group in a newly-created tenant.

But that's obviously excessively verbose. And the help strings are pulled
from the docstrings of the functions that implement the commands, so we're
limited to what can fit in a one-line docstring. (We could add another
source of help documentation -- e.g., `desc = getattr(callback, 'help',
callback.__doc__) or ''` on novaclient/shell.py line 531 -- but that seems
like it should be a last resort.)
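
A sketch of that parenthetical fallback idea, for clarity. The `help` attribute name is illustrative (it is not novaclient's actual API): an explicit attribute on the command callback wins, else the one-line docstring is used.

```python
# Prefer an explicit (hypothetical) `help` attribute on the command
# callback; fall back to its docstring otherwise.
def help_for(callback):
    return getattr(callback, 'help', None) or (callback.__doc__ or '')

def secgroup_add_default_rule(args):
    """Add a rule to the default-rules template."""

LONG_HELP = ("Add a rule to the set of rules that will be added to the "
             "'default' security group in a newly-created tenant.")
secgroup_add_default_rule.help = LONG_HELP

print(help_for(secgroup_add_default_rule) == LONG_HELP)  # True
```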

How can we clean up the wording to make it clear that the "default
security group" is, in fact, not the 'default' security group or the
security group which is default, but rather another beast entirely, one
which isn't even actually a security group?

Naming: still the hardest problem in computer science. :(

-- 
Chris St. Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-novaclient] Better wording for secgroup-*-default-rules? help text

2015-03-10 Thread Chris St. Pierre
On Tue, Mar 10, 2015 at 4:50 PM, melanie witt melwi...@gmail.com wrote:

 I don't think your suggestion for the help text is excessively verbose.
 There are already longer help texts for some commands than that, and I
 think it's important to accurately explain what commands do. You can use a
 multiline docstring to have a longer help text.


Ah, look at that! In some other projects, flake8 complains about a
docstring whose first line doesn't end in a period, so I didn't think it'd
be possible. If you don't think that's excessively verbose, there'll be a
patch in shortly. Thanks!

Why do you say the "default security group" isn't actually a security
 group? The fact that it's per-tenant and therefore not necessarily
 consistent?


That's precisely the confusion -- the security group named 'default' is,
of course, a security group. But the "default security group," as
referenced by the help text for these commands, is actually a sort of
meta-security-group object that is only used to populate the 'default'
security group in new tenants. It is not, in and of itself, an actual
security group. That is, adding a new rule with 'nova
secgroup-add-default-rules' has absolutely no effect on what network
traffic is allowed between guests; it only affects new tenants created
afterwards.

-- 
Chris St. Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] File-backed glance scrubber queue

2015-02-13 Thread Chris St. Pierre
That's good to know, but I'm still just the weensiest bit confused. The
code is unreachable and unusable -- which is a bit more forceful than just
"redundant" or "deprecated." Can it be removed? Does Zhi Yan have plans to
do that? Is there anything I can do to help?

Thanks!

On Fri, Feb 13, 2015 at 5:19 AM, Flavio Percoco fla...@redhat.com wrote:

 On 12/02/15 09:34 -0800, Chris St. Pierre wrote:

 Yeah, that commit definitely disables the file-backed queue -- it
 certainly
 *looks* like we want to be rid of it, but all of the code is left in
 place and
 even updated to support the new format. So my confusion remains.
 Hopefully Zhi
 Yan can clarify.

 Link added. Thanks.



 Hi Chris,

 I touched base with Zhi Yan and my understanding is right. Since
 Juno, we switched to using a database-backed queue instead of the
 file-based one, and the file queue is considered redundant and on its
 way to being deprecated.

 I'll also reply on the review,

 Thanks for bringing this up,
 Flavio


 On Thu, Feb 12, 2015 at 12:59 AM, Flavio Percoco fla...@redhat.com
 wrote:

On 11/02/15 13:42 -0800, Chris St. Pierre wrote:

I recently proposed a change to glance to turn the file-backed scrubber
queue files into JSON: https://review.openstack.org/#/c/145223/

As I looked into it more, though, it turns out that the file-backed
queue is no longer usable; it was killed by the implementation of this
blueprint:
https://blueprints.launchpad.net/glance/+spec/image-location-status

But what's not clear is if the implementation of that blueprint should
have killed the file-backed scrubber queue, or if that was even
intended. Two things contribute to the lack of clarity:

1. The file-backed scrubber code was left in, even though it is
unreachable.

2. The ordering of the commits is strange. Namely, commit 66d24bb
(https://review.openstack.org/#/c/67115/) killed the file-backed queue,
and then, *after* that change, 70e0a24
(https://review.openstack.org/#/c/67122/) updates the queue file
format. So it's not clear why the queue file format would be updated if
it was intended that the file-backed queue was no longer usable.

Can someone clarify what was intended here? If killing the file-backed
scrubber queue was deliberate, then let's finish the job and excise
that code. If not, then let's make sure that code is reachable again,
and I'll resurrect my blueprint to make the queue files suck less.

Either way I'm happy to make the changes, I'm just not sure what the
goal of these changes was, and how to properly proceed.

Thanks for any clarification anyone can offer.


I believe the commit you're looking for is this one:
f338a5c870a36e493f8c818fa783942d1e0565a4

There the scrubber queue was switched on purpose, which leads to the
conclusion that we're moving away from it. I've not participated in
discussions around the change related to the scrubber queue so I'll
let Zhi Yan weigh in here.

Thanks for bringing this up,
Flavio

P.S: Would you mind putting a link to this discussion on the spec
review?





--
Chris St. Pierre



 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?
subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco
  
 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --
 Chris St. Pierre


  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 @flaper87
 Flavio Percoco

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Chris St. Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] File-backed glance scrubber queue

2015-02-12 Thread Chris St. Pierre
Yeah, that commit definitely disables the file-backed queue -- it certainly
*looks* like we want to be rid of it, but all of the code is left in place
and even updated to support the new format. So my confusion remains.
Hopefully Zhi Yan can clarify.

Link added. Thanks.

On Thu, Feb 12, 2015 at 12:59 AM, Flavio Percoco fla...@redhat.com wrote:

 On 11/02/15 13:42 -0800, Chris St. Pierre wrote:

 I recently proposed a change to glance to turn the file-backed scrubber
 queue
 files into JSON: https://review.openstack.org/#/c/145223/

 As I looked into it more, though, it turns out that the file-backed queue
 is no
 longer usable; it was killed by the implementation of this
 blueprint: https://
 blueprints.launchpad.net/glance/+spec/image-location-status

 But what's not clear is if the implementation of that blueprint should
 have
 killed the file-backed scrubber queue, or if that was even intended. Two
 things
 contribute to the lack of clarity:

 1. The file-backed scrubber code was left in, even though it is
 unreachable.

 2. The ordering of the commits is strange. Namely, commit 66d24bb
 (https://
 review.openstack.org/#/c/67115/) killed the file-backed queue, and then,
 *after* that change, 70e0a24 (https://review.openstack.org/#/c/67122/)
 updates
 the queue file format. So it's not clear why the queue file format would
 be
 updated if it was intended that the file-backed queue was no longer
 usable.

 Can someone clarify what was intended here? If killing the file-backed
 scrubber
 queue was deliberate, then let's finish the job and excise that code. If
 not,
 then let's make sure that code is reachable again, and I'll resurrect my
 blueprint to make the queue files suck less.

 Either way I'm happy to make the changes, I'm just not sure what the goal
 of
 these changes was, and how to properly proceed.

 Thanks for any clarification anyone can offer.


 I believe the commit you're looking for is this one:
 f338a5c870a36e493f8c818fa783942d1e0565a4

 There the scrubber queue was switched on purpose, which leads to the
 conclusion that we're moving away from it. I've not participated in
 discussions around the change related to the scrubber queue so I'll
 let Zhi Yan weigh in here.

 Thanks for bringing this up,
 Flavio

 P.S: Would you mind putting a link to this discussion on the spec
 review?




 --
 Chris St. Pierre


  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 @flaper87
 Flavio Percoco

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Chris St. Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] File-backed glance scrubber queue

2015-02-11 Thread Chris St. Pierre
I recently proposed a change to glance to turn the file-backed scrubber
queue files into JSON: https://review.openstack.org/#/c/145223/

As I looked into it more, though, it turns out that the file-backed queue
is no longer usable; it was killed by the implementation of this blueprint:
https://blueprints.launchpad.net/glance/+spec/image-location-status

But what's not clear is if the implementation of that blueprint should have
killed the file-backed scrubber queue, or if that was even intended. Two
things contribute to the lack of clarity:

1. The file-backed scrubber code was left in, even though it is unreachable.

2. The ordering of the commits is strange. Namely, commit 66d24bb (
https://review.openstack.org/#/c/67115/) killed the file-backed queue, and
then, *after* that change, 70e0a24 (https://review.openstack.org/#/c/67122/)
updates the queue file format. So it's not clear why the queue file format
would be updated if it was intended that the file-backed queue was no
longer usable.

Can someone clarify what was intended here? If killing the file-backed
scrubber queue was deliberate, then let's finish the job and excise that
code. If not, then let's make sure that code is reachable again, and I'll
resurrect my blueprint to make the queue files suck less.

Either way I'm happy to make the changes, I'm just not sure what the goal
of these changes was, and how to properly proceed.

Thanks for any clarification anyone can offer.

-- 
Chris St. Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-18 Thread Chris St. Pierre
I wasn't suggesting that we *actually* use filesystem link count, and make
hard links or whatever for every time the image is used. That's prima facie
absurd, for a lot more reasons that you point out. I was suggesting a new
database field that tracks the number of times an image is in use, by
*analogy* with filesystem link counts. (If I wanted to be unnecessarily
abrasive I might say, This is a textbook example of something called an
analogy, but I'm not interested in being unnecessarily abrasive.)

Overloading the protected flag is *still* a terrible hack. Even if we
tracked the initial state of protected and restored that state when an
image went out of use, that would negate the ability to make an image
protected while it was in use and expect that change to remain in place. So
that just violates the principle of least surprise. Of course, we could
have glance modify the original_protected_state flag when that flag is
non-null and the user changes the actual protected flag, but this is just
layering hacks upon hacks. By actually tracking the number of times an
image is in use, we can preserve the ability to protect images *and* avoid
deleting images in use.

On Thu, Dec 18, 2014 at 5:27 AM, Kuvaja, Erno kuv...@hp.com wrote:

  I think that's a horrible idea. How do we do that store-independently
 with the linking dependencies?

  We should not make a universal use case like this depend on a limited
 subset of backends, especially non-OpenStack ones. Neither Glance nor
 Nova should ever depend on having direct access to the actual medium
 where the images are stored. I think this is a schoolbook example of
 something called a database. Well, it's arguable whether this should be
 tracked in Glance or Nova, but it should definitely not be a dirty hack
 expecting specific backend characteristics.



  As mentioned before, the protected image property is there to ensure
 that the image does not get deleted, and that is also easy to track when
 the images are queried. Perhaps the record needs to track the original
 state of the protected flag, the image id, and the use count: a
 three-column table and a couple of API calls. Let's at least not make it
 any more complicated than it needs to be if such functionality is desired.



 -  Erno



 *From:* Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
 *Sent:* 17 December 2014 20:34

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in
 use?



 Guess that's an implementation detail. Depends on the way you go about
 using what's available now, I suppose.



 Thanks,
 -Nikhil
   --

 *From:* Chris St. Pierre [chris.a.st.pie...@gmail.com]
 *Sent:* Wednesday, December 17, 2014 2:07 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in
 use?

 I was assuming atomic increment/decrement operations, in which case I'm
 not sure I see the race conditions. Or is atomism assuming too much?



 On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar 
 nikhil.koma...@rackspace.com wrote:

  That looks like a decent alternative if it works. However, it would be
 too racy unless we implement a test-and-set for such properties or there
 is a different job which queues up these requests and performs them
 sequentially for each tenant.



 Thanks,
 -Nikhil
   --

 *From:* Chris St. Pierre [chris.a.st.pie...@gmail.com]
 *Sent:* Wednesday, December 17, 2014 10:23 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in
 use?

 That's unfortunately too simple. You run into one of two cases:



 1. If the job automatically removes the protected attribute when an image
 is no longer in use, then you lose the ability to use protected on images
 that are not in use. I.e., there's no way to say, nothing is currently
 using this image, but please keep it around. (This seems particularly
 useful for snapshots, for instance.)



 2. If the job does not automatically remove the protected attribute, then
 an image would be protected if it had ever been in use; to delete an image,
 you'd have to manually un-protect it, which is a workflow that quite
 explicitly defeats the whole purpose of flagging images as protected when
 they're in use.



 It seems like flagging an image as *not* in use is actually a fairly
 difficult problem, since it requires consensus among all components that
 might be using images.



 The only solution that readily occurs to me would be to add something like
 a filesystem link count to images in Glance. Then when Nova spawns an
 instance, it increments the usage count; when the instance is destroyed,
 the usage count is decremented. And similarly with other components that
 use images. An image could only be deleted when its usage count was zero.



 There are ample opportunities to get out of sync there, but it's at least

Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-18 Thread Chris St. Pierre
Presumably to prevent images from being deleted for arbitrary reasons that
are left to the administrator(s) of each individual implementation of
OpenStack, though. Using the protected flag to prevent images that are in
use from being deleted obviates the ability to use it for arbitrary
protection. That is, it can either be used as a general purpose flag to
prevent deletion of an image; or it can be used as a flag for images that
are in use and thus must not be deleted; but it cannot be used for both.
(At least, not without a wild and woolly network of hacks to ensure that it
can serve both purposes.)

Given the general-purpose nature of the flag, I don't think that's
something that should be taken away from the administrators. And yet, it's very
desirable to prevent deletion of images that are in use. I think both of
these things should be supported, at the same time on the same
installation. I do not consider it a solution to the problem that images
can be deleted in use to take the protected flag away from arbitrary,
bespoke use.

On Thu, Dec 18, 2014 at 6:44 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 12/18/2014 02:08 PM, Chris St. Pierre wrote:

 I wasn't suggesting that we *actually* use filesystem link count, and
 make hard links or whatever for every time the image is used. That's
 prima facie absurd, for a lot more reasons that you point out. I was
 suggesting a new database field that tracks the number of times an image
 is in use, by *analogy* with filesystem link counts. (If I wanted to be
 unnecessarily abrasive I might say, This is a textbook example of
 something called an analogy, but I'm not interested in being
 unnecessarily abrasive.)

 Overloading the protected flag is *still* a terrible hack. Even if we
 tracked the initial state of protected and restored that state when an
 image went out of use, that would negate the ability to make an image


 I guess I don't understand what you consider to be overloading of the
 protected flag. The original purpose of the protected flag was to protect
 images from being deleted.

 Best,
 -jay

  protected while it was in use and expect that change to remain in place.
 So that just violates the principle of least surprise. Of course, we
 could have glance modify the original_protected_state flag when that
 flag is non-null and the user changes the actual protected flag, but
 this is just layering hacks upon hacks. By actually tracking the number
 of times an image is in use, we can preserve the ability to protect
 images *and* avoid deleting images in use.

 On Thu, Dec 18, 2014 at 5:27 AM, Kuvaja, Erno kuv...@hp.com wrote:

 I think that's a horrible idea. How do we do that store-independently
 with the linking dependencies?

 We should not make a universal use case like this depend on a limited
 subset of backends, especially non-OpenStack ones. Neither Glance nor
 Nova should ever depend on having direct access to the actual medium
 where the images are stored. I think this is a schoolbook example of
 something called a database. Well, it's arguable whether this should
 be tracked in Glance or Nova, but it should definitely not be a dirty
 hack expecting specific backend characteristics.

 As mentioned before, the protected image property is there to ensure
 that the image does not get deleted, and that is also easy to track
 when the images are queried. Perhaps the record needs to track the
 original state of the protected flag, the image id, and the use count:
 a three-column table and a couple of API calls. Let's at least not
 make it any more complicated than it needs to be if such functionality
 is desired.

 - Erno

 *From:* Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
 *Sent:* 17 December 2014 20:34
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [glance] Option to skip deleting
 images in use?

 Guess that's an implementation detail. Depends on the way you go
 about using what's available now, I suppose.

 Thanks,
 -Nikhil

 *From:* Chris St. Pierre [chris.a.st.pie...@gmail.com]
 *Sent:* Wednesday, December 17, 2014 2:07 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [glance] Option to skip deleting
 images in use?

 I was assuming atomic increment/decrement operations, in which case
 I'm not sure I see the race conditions. Or is atomism assuming too
 much?

 On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar
 nikhil.koma...@rackspace.com wrote:

 That looks like a decent alternative if it works. However

Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-17 Thread Chris St. Pierre
That's unfortunately too simple. You run into one of two cases:

1. If the job automatically removes the protected attribute when an image
is no longer in use, then you lose the ability to use "protected" on
images that are not in use. I.e., there's no way to say, "nothing is
currently using this image, but please keep it around." (This seems
particularly useful for snapshots, for instance.)

2. If the job does not automatically remove the protected attribute, then
an image would be protected if it had ever been in use; to delete an image,
you'd have to manually un-protect it, which is a workflow that quite
explicitly defeats the whole purpose of flagging images as protected when
they're in use.

It seems like flagging an image as *not* in use is actually a fairly
difficult problem, since it requires consensus among all components that
might be using images.

The only solution that readily occurs to me would be to add something like
a filesystem link count to images in Glance. Then when Nova spawns an
instance, it increments the usage count; when the instance is destroyed,
the usage count is decremented. And similarly with other components that
use images. An image could only be deleted when its usage count was zero.

There are ample opportunities to get out of sync there, but it's at least a
sketch of something that might work, and isn't *too* horribly hackish.
Thoughts?
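
As a sketch of that link-count idea: if each increment, decrement, and check is a single atomic database statement, the obvious races largely disappear. The table and column names below are illustrative, not Glance's actual schema; SQLite stands in for whatever database Glance uses.

```python
# Hypothetical refcount table; each operation is one atomic UPDATE.
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE image_refs (image_id TEXT PRIMARY KEY, uses INT)')
db.execute("INSERT INTO image_refs VALUES ('img-1', 0)")

def acquire(image_id):
    # Called when e.g. Nova spawns an instance from the image.
    db.execute('UPDATE image_refs SET uses = uses + 1 WHERE image_id = ?',
               (image_id,))

def release(image_id):
    # Called when the instance is destroyed; guard against underflow.
    db.execute('UPDATE image_refs SET uses = uses - 1 '
               'WHERE image_id = ? AND uses > 0', (image_id,))

def deletable(image_id):
    # An image may only be deleted when nothing references it.
    (uses,) = db.execute('SELECT uses FROM image_refs WHERE image_id = ?',
                         (image_id,)).fetchone()
    return uses == 0

acquire('img-1')
acquire('img-1')
release('img-1')
print(deletable('img-1'))  # False: one instance still uses it
release('img-1')
print(deletable('img-1'))  # True
```

The "get out of sync" risk mentioned above lives in the callers: every component that uses images has to pair its acquire with a release.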

On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya vishvana...@gmail.com
wrote:

 A simple solution that wouldn’t require modification of glance would be a
 cron job
 that lists images and snapshots and marks them protected while they are in
 use.

 Vish

 On Dec 16, 2014, at 3:19 PM, Collins, Sean 
 sean_colli...@cable.comcast.com wrote:

  On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote:
  No, I'm looking to prevent images that are in use from being deleted.
  "In use" and "protected" are disjoint sets.
 
  I have seen multiple cases of images (and snapshots) being deleted while
  still in use in Nova, which leads to some very, shall we say,
  interesting bugs and support problems.
 
  I do think that we should try and determine a way forward on this, they
  are indeed disjoint sets. Setting an image as protected is a proactive
  measure, we should try and figure out a way to keep tenants from
  shooting themselves in the foot if possible.
 
  --
  Sean M. Collins
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






-- 
Chris St. Pierre


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-17 Thread Chris St. Pierre
I was assuming atomic increment/decrement operations, in which case I'm not
sure I see the race conditions. Or is atomism assuming too much?

On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar 
nikhil.koma...@rackspace.com wrote:

  That looks like a decent alternative if it works. However, it would be
 too racy unless we implement a test-and-set for such properties or there
 is a different job which queues up these requests and performs them
 sequentially for each tenant.

 Thanks,
 -Nikhil
   --
 *From:* Chris St. Pierre [chris.a.st.pie...@gmail.com]
 *Sent:* Wednesday, December 17, 2014 10:23 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in
 use?

   That's unfortunately too simple. You run into one of two cases:

  1. If the job automatically removes the protected attribute when an
 image is no longer in use, then you lose the ability to use protected on
 images that are not in use. I.e., there's no way to say, nothing is
 currently using this image, but please keep it around. (This seems
 particularly useful for snapshots, for instance.)

  2. If the job does not automatically remove the protected attribute,
 then an image would be protected if it had ever been in use; to delete an
 image, you'd have to manually un-protect it, which is a workflow that quite
 explicitly defeats the whole purpose of flagging images as protected when
 they're in use.

  It seems like flagging an image as *not* in use is actually a fairly
 difficult problem, since it requires consensus among all components that
 might be using images.

  The only solution that readily occurs to me would be to add something
 like a filesystem link count to images in Glance. Then when Nova spawns an
 instance, it increments the usage count; when the instance is destroyed,
 the usage count is decremented. And similarly with other components that
 use images. An image could only be deleted when its usage count was zero.

  There are ample opportunities to get out of sync there, but it's at
 least a sketch of something that might work, and isn't *too* horribly
 hackish. Thoughts?

 On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya vishvana...@gmail.com
  wrote:

 A simple solution that wouldn’t require modification of glance would be a
 cron job
 that lists images and snapshots and marks them protected while they are
 in use.

 Vish

 On Dec 16, 2014, at 3:19 PM, Collins, Sean 
 sean_colli...@cable.comcast.com wrote:

  On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote:
  No, I'm looking to prevent images that are in use from being deleted.
 In
  use and protected are disjoint sets.
 
  I have seen multiple cases of images (and snapshots) being deleted while
  still in use in Nova, which leads to some very, shall we say,
  interesting bugs and support problems.
 
  I do think that we should try and determine a way forward on this, they
  are indeed disjoint sets. Setting an image as protected is a proactive
  measure, we should try and figure out a way to keep tenants from
  shooting themselves in the foot if possible.
 
  --
  Sean M. Collins
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






  --
 Chris St. Pierre





-- 
Chris St. Pierre


[openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Chris St. Pierre
Currently, with delay_delete enabled, the Glance scrubber happily deletes
whatever images you ask it to. That includes images that are currently in
use by Nova guests, which can really hose things. It'd be nice to have an
option to tell the scrubber to skip deletion of images that are currently
in use, which is fairly trivial to check for and provides a nice measure of
protection.

Without delay_delete enabled, checking for images in use likely takes too
much time, so this would be limited to just images that are scrubbed with
delay_delete.
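The in-use check itself is simple once you have a server listing; a minimal sketch (the function name and data shape are hypothetical, and a real implementation would get the listing from the Nova API):

```python
def images_safe_to_scrub(candidate_image_ids, servers):
    """Return the subset of candidate images no server references.

    `servers` is any iterable of dicts with an "image_id" key, e.g.
    a cached or freshly fetched listing of Nova instances.
    """
    in_use = {server["image_id"] for server in servers}
    return [img for img in candidate_image_ids if img not in in_use]
```

With delay_delete, this filter would run once per scrub pass over the queued images rather than on every delete request.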

I wanted to bring this up here before I go to the trouble of writing a spec
for it, particularly since it doesn't appear that glance currently talks to
Nova as a client at all. Is this something that folks would be interested
in having? Thanks!

-- 
Chris St. Pierre


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Chris St. Pierre
The goal here is protection against deletion of in-use images, not a
workaround that can be executed by an admin. For instance, someone without
admin can't use that workaround at all, and someone with a fat finger can
still delete images in use.

"Don't lose your data" is a fine workaround for taking backups, but most of
us take backups anyway. Same deal.

On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes jaypi...@gmail.com wrote:

 Just set the images to is_public=False as an admin and they'll disappear
 from everyone except the admin.

 -jay


 On 12/16/2014 03:09 PM, Chris St. Pierre wrote:

 Currently, with delay_delete enabled, the Glance scrubber happily
 deletes whatever images you ask it to. That includes images that are
 currently in use by Nova guests, which can really hose things. It'd be
 nice to have an option to tell the scrubber to skip deletion of images
 that are currently in use, which is fairly trivial to check for and
 provides a nice measure of protection.

 Without delay_delete enabled, checking for images in use likely takes
 too much time, so this would be limited to just images that are scrubbed
 with delay_delete.

 I wanted to bring this up here before I go to the trouble of writing a
 spec for it, particularly since it doesn't appear that glance currently
 talks to Nova as a client at all. Is this something that folks would be
 interested in having? Thanks!

 --
 Chris St. Pierre








-- 
Chris St. Pierre


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Chris St. Pierre
No, I'm looking to prevent images that are in use from being deleted. "In
use" and "protected" are disjoint sets.

On Tue, Dec 16, 2014 at 3:36 PM, Fei Long Wang feil...@catalyst.net.nz
wrote:

  Hi Chris,

 Are you looking for the 'protected' attribute? You can mark an image with
 'protected'=True, and then the image can't be deleted accidentally.

 On 17/12/14 10:23, Chris St. Pierre wrote:

 The goal here is protection against deletion of in-use images, not a
 workaround that can be executed by an admin. For instance, someone without
 admin still can't do that, and someone with a fat finger can still delete
 images in use.

  "Don't lose your data" is a fine workaround for taking backups, but most
 of us take backups anyway. Same deal.





  --
 Chris St. Pierre




 --
 Cheers  Best regards,
 Fei Long Wang (王飞龙)
 --
 Senior Cloud Software Engineer
 Tel: +64-48032246
 Email: flw...@catalyst.net.nz
 Catalyst IT Limited
 Level 6, Catalyst House, 150 Willis Street, Wellington
 --






-- 
Chris St. Pierre


Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-18 Thread Chris St. Pierre
On Thu, Sep 18, 2014 at 4:19 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:

 Correct validation history is

 Essex: use anything you want!
 Folsom: strict ASCII!
 [..]
 Juno: strict ASCII!


I'm not sure that's quite right. My patch doesn't actually add Unicode
support; that was already added in
825499fffc7a320466e976d2842e175c2d158c0e, which appears to have gone in for
Icehouse.  So:

Essex: Use anything you want
Folsom: Strict ASCII, inconsistent restrictions
Grizzly: Strict ASCII, inconsistent restrictions
Icehouse: Unicode, inconsistent restrictions
Juno: Unicode, consistent restrictions
Kilo (?): Use anything you want

At any rate, if accepting Unicode is an issue, then it's been an issue for
a while.

-- 
Chris St. Pierre
Senior Software Engineer
metacloud.com


Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-15 Thread Chris St. Pierre
On Mon, Sep 15, 2014 at 4:34 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 To arbitrarily restrict the user is a bug.


QFT.

This is why I don't feel like a blueprint should be necessary -- this is a
fairly simple change that fixes what's pretty undeniably a bug. I also
don't see much consensus on whether or not I need to go through the
interminable blueprint process to get this accepted.

So since everyone seems to think that this is at least not a bad idea, and
since no one seems to know why it was originally changed, what stands
between me and a +2?

Thanks.

-- 
Chris St. Pierre
Senior Software Engineer
metacloud.com


Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-15 Thread Chris St. Pierre
On Mon, Sep 15, 2014 at 11:16 AM, Jay Pipes jaypi...@gmail.com wrote:

 I believe I did:

 http://lists.openstack.org/pipermail/openstack-dev/2014-
 September/045924.html


Sorry, missed your explanation. I think Sean's suggestion -- to keep ID
fields restricted, but de-restrict name fields -- walks a nice middle
ground between database bloat/performance concerns and user experience.


  what

 stands between me and a +2?


 Bug fix priorities, feature freeze exceptions, and review load.


Well, sure. I meant other than that. :)

My review is at https://review.openstack.org/#/c/119421/ if anyone does
find time to +N it. Thanks all!

-- 
Chris St. Pierre
Senior Software Engineer
metacloud.com


Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-15 Thread Chris St. Pierre
Linking clearly isn't my strong suit:
https://review.openstack.org/#/c/119741/





-- 
Chris St. Pierre
Senior Software Engineer
metacloud.com


[openstack-dev] [nova] Expand resource name allowed characters

2014-09-12 Thread Chris St. Pierre
We have proposed that the allowed characters for all resource names in Nova
(flavors, aggregates, etc.) be expanded to all printable unicode characters
and horizontal spaces: https://review.openstack.org/#/c/119741

Currently, the only allowed characters in most resource names are
alphanumeric, space, and [.-_].
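For illustration, the two rules side by side (the regex is a paraphrase of the current restriction's spirit, not the literal pattern from Nova's source):

```python
import re

# Roughly the current rule: alphanumerics, space, and [.-_] only.
CURRENT_NAME_RE = re.compile(r'^[a-zA-Z0-9 ._-]+$')

def valid_today(name):
    return bool(CURRENT_NAME_RE.match(name))

def valid_proposed(name):
    # Proposed rule: any printable Unicode character plus horizontal
    # whitespace. Space already counts as printable; allow tab too.
    return len(name) > 0 and all(
        ch.isprintable() or ch == '\t' for ch in name
    )
```

Under the proposal, 'm1.tiny' stays valid, U+1F4A9 becomes valid, and control characters such as newlines remain rejected.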

We have proposed this change for two principal reasons:

1. We have customers who have migrated data forward since Essex, when no
restrictions were in place, and thus have characters in resource names that
are disallowed in the current version of OpenStack. This is only likely to
be useful to people migrating from Essex or earlier, since the current
restrictions were added in Folsom.

2. It's pretty much always a bad idea to add unnecessary restrictions
without a good reason. While we don't have an immediate need to use, for
example, the ever-useful http://codepoints.net/U+1F4A9 in a flavor name,
it's hard to come up with a reason people *shouldn't* be allowed to use it.

That said, apparently people have had a need to not be allowed to use some
characters, but it's not clear why:
https://bugs.launchpad.net/nova/+bug/977187

So I guess if anyone knows any reason why these printable characters should
not be joined in holy resource naming, speak now or forever hold your peace.

Thanks!

-- 
Chris St. Pierre
Senior Software Engineer
metacloud.com