Re: [openstack-dev] [Nova][Heat] Where does Shelving belong
I think the allure of labeling this as 'orchestration' comes from the reliance on multiple services to make this feature work. Heck, booting an instance is something that should be handled by 'orchestration': it requires the cooperation of many services, and while I won't go into the details, I still have high hopes that we can steer this logic out of Nova and into other services to which it is better suited. Does that mean Heat? No. http://i.imgur.com/QZxAv.gif -- Heat is a different type of 'orchestration'. The 'orchestration' logic I'm talking about is currently handled by Nova's conductor.

Long story short -- I think the change, where you have it, should work well and is the correct place to hold this logic when compared with similar tasks in Nova.

Brian

On Jun 25, 2013, at 10:22 AM, Andrew Laski andrew.la...@rackspace.com wrote:

> I have a couple of reviews up to introduce the concept of shelving an instance into Nova. The question has been raised as to whether or not this belongs in Nova, or more rightly belongs in Heat. The blueprint for this feature can be found at https://blueprints.launchpad.net/nova/+spec/shelve-instance, but to make things easy I'll outline some of the goals here.
>
> The main use case being targeted is a user who wishes to stop an instance at the end of a workday and then restart it at the start of their next workday, either the next day or after a weekend. From a service provider standpoint, the difference between shelving and stopping an instance is that the contract allows removing that instance from the hypervisor at any point, so unshelving may move it to another host.
>
> From a user standpoint, what they're looking for is:
>
> - The ability to retain the endpoint for API calls on that instance, so v2/tenant_id/servers/server_id continues to work after the instance is unshelved.
> - All networking, attached volumes, admin pass, metadata, and other user-configurable properties remain unchanged when shelved/unshelved. Other properties like task/vm/power state, host, and the *_at timestamps may change.
> - The ability to see that instance in their list of servers when shelved.
>
> Again, the objection that has been raised is that it seems like orchestration and therefore would belong in Heat. While this is somewhat similar to a snapshot/destroy/rebuild workflow, there are certain properties of shelving in Nova that I can't see how to reproduce by handling this externally. At least not without exposing Nova internals beyond a comfortable level. So I'd like to understand what the thinking is around why this belongs in Heat, and how that could be accomplished.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
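[The user-visible contract Andrew describes can be sketched as a toy model. All names below are hypothetical illustrations, not Nova code: user-configurable properties survive shelve/unshelve and the uuid (and hence the API endpoint) is retained, while placement properties are allowed to change.]

```python
from dataclasses import dataclass, field, replace
import uuid

# Toy model of the shelve contract described above (hypothetical, not
# Nova code): user-configurable properties survive shelve/unshelve,
# while placement properties (host, vm_state) may change.

@dataclass(frozen=True)
class Server:
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    metadata: tuple = ()          # user-configurable: must survive
    attached_volumes: tuple = ()  # user-configurable: must survive
    host: str = ""                # placement: may change
    vm_state: str = "active"      # may change

def shelve(server):
    # Snapshot and remove from the hypervisor; the id is retained, so
    # v2/tenant_id/servers/server_id keeps working afterwards.
    return replace(server, host="", vm_state="shelved")

def unshelve(server, scheduler_pick):
    # Unshelving may land on a different host; everything
    # user-visible is unchanged.
    return replace(server, host=scheduler_pick, vm_state="active")

s = Server(metadata=(("owner", "alice"),),
           attached_volumes=("vol-1",), host="compute-1")
restored = unshelve(shelve(s), scheduler_pick="compute-2")
```

Here `restored.id` and the user-configurable properties equal those of `s`, while `restored.host` is whatever the scheduler picked.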
Re: [openstack-dev] [Nova][Heat] Where does Shelving belong
On Tue, Jun 25, 2013 at 7:22 AM, Andrew Laski andrew.la...@rackspace.com wrote:

> I have a couple of reviews up to introduce the concept of shelving an instance into Nova. The question has been raised as to whether or not this belongs in Nova, or more rightly belongs in Heat. The blueprint for this feature can be found at https://blueprints.launchpad.net/nova/+spec/shelve-instance, but to make things easy I'll outline some of the goals here.
>
> The main use case being targeted is a user who wishes to stop an instance at the end of a workday and then restart it at the start of their next workday, either the next day or after a weekend. From a service provider standpoint, the difference between shelving and stopping an instance is that the contract allows removing that instance from the hypervisor at any point, so unshelving may move it to another host.

The part that caught my eye as something that *may* be in Heat's domain, and is at least worth a discussion, is the snapshotting and periodic task part. From what I can tell, the use case for this is: I want to 'shutdown' my VM overnight and save money since I am not using it, but I want to keep everything looking the same.

But in this use case I would want to automatically 'shelve' my instance off the compute server every night (not leave it on the server), and every morning I would want it to autostart before I get to work (and re-attach my volume and re-associate my floating IP). All of this sounds much closer to using Heat and snapshotting than to 'shelving.'

Additionally, storing the shelved instance locally on the compute node until a periodic task migrates 'shelved' instances off into deep storage seems like it has undesired side effects. For example, as long as the shelved instance is on a compute node you have to reserve CPU resources for it; otherwise the instance may not be able to resume on the same compute node, invalidating the benefits (as far as I can tell) of keeping the instance locally snapshotted.

> From a user standpoint, what they're looking for is:
>
> - The ability to retain the endpoint for API calls on that instance, so v2/tenant_id/servers/server_id continues to work after the instance is unshelved.
> - All networking, attached volumes, admin pass, metadata, and other user-configurable properties remain unchanged when shelved/unshelved. Other properties like task/vm/power state, host, and the *_at timestamps may change.
> - The ability to see that instance in their list of servers when shelved.

This sounds like a good reason to keep this in Nova.

> Again, the objection that has been raised is that it seems like orchestration and therefore would belong in Heat. While this is somewhat similar to a snapshot/destroy/rebuild workflow, there are certain properties of shelving in Nova that I can't see how to reproduce by handling this externally. At least not without exposing Nova internals beyond a comfortable level.

What properties are those, and more importantly why do I need them?

> So I'd like to understand what the thinking is around why this belongs in Heat, and how that could be accomplished.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
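[The periodic-task design being debated here can be sketched roughly as follows. This is a minimal illustration with hypothetical names and a made-up grace period; the actual mechanism was still under discussion at the time: a shelved instance stays on its compute node for a while so the day/weekend case unshelves quickly, then gets offloaded to deep storage so the deployer can reclaim resources.]

```python
import time

# Hypothetical sketch of the periodic reclaim task under discussion:
# shelved instances remain on the compute node for a grace period,
# after which they are candidates for offload to image storage.
OFFLOAD_AFTER_SECONDS = 3 * 24 * 3600  # assumption: long-weekend grace period

def instances_to_offload(instances, now=None):
    """Return shelved instances whose grace period has expired."""
    now = time.time() if now is None else now
    return [
        inst for inst in instances
        if inst["vm_state"] == "shelved"
        and now - inst["shelved_at"] > OFFLOAD_AFTER_SECONDS
    ]

servers = [
    {"id": "a", "vm_state": "shelved", "shelved_at": 0},        # shelved long ago
    {"id": "b", "vm_state": "shelved", "shelved_at": 900_000},  # shelved recently
    {"id": "c", "vm_state": "active",  "shelved_at": 0},        # not shelved
]
# At now=1_000_000 only "a" has exceeded the 3-day grace period.
expired = instances_to_offload(servers, now=1_000_000)
```

Joe's objection applies to the window before the task fires: during the grace period the compute node must still hold resources for the instance, or resuming in place may fail.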
Re: [openstack-dev] [Nova][Heat] Where does Shelving belong
On 06/25/13 at 09:42am, Joe Gordon wrote:

> On Tue, Jun 25, 2013 at 7:22 AM, Andrew Laski andrew.la...@rackspace.com wrote:
>
>> I have a couple of reviews up to introduce the concept of shelving an instance into Nova. The question has been raised as to whether or not this belongs in Nova, or more rightly belongs in Heat. The blueprint for this feature can be found at https://blueprints.launchpad.net/nova/+spec/shelve-instance, but to make things easy I'll outline some of the goals here.
>>
>> The main use case being targeted is a user who wishes to stop an instance at the end of a workday and then restart it at the start of their next workday, either the next day or after a weekend. From a service provider standpoint, the difference between shelving and stopping an instance is that the contract allows removing that instance from the hypervisor at any point, so unshelving may move it to another host.
>
> The part that caught my eye as something that *may* be in Heat's domain, and is at least worth a discussion, is the snapshotting and periodic task part. From what I can tell, the use case for this is: I want to 'shutdown' my VM overnight and save money since I am not using it, but I want to keep everything looking the same.
>
> But in this use case I would want to automatically 'shelve' my instance off the compute server every night (not leave it on the server), and every morning I would want it to autostart before I get to work (and re-attach my volume and re-associate my floating IP). All of this sounds much closer to using Heat and snapshotting than to 'shelving.'

The periodic task for removing a shelved instance from the hypervisor is a first-pass attempt at a mechanism for reclaiming resources; it is under discussion and will probably evolve over time.
But the motivation for reclaiming resources will be driven by deployment capacity, or the desire to reshuffle instances, or maybe something else that's important to a deployer -- not the user. Since I see Heat as an advocate for user requests, not deployer concerns, I still think this falls outside of its scope.

There's no concept of autostart included in shelving. I agree that that gets beyond what should be performed in Nova.

> Additionally, storing the shelved instance locally on the compute node until a periodic task migrates 'shelved' instances off into deep storage seems like it has undesired side effects. For example, as long as the shelved instance is on a compute node you have to reserve CPU resources for it; otherwise the instance may not be able to resume on the same compute node, invalidating the benefits (as far as I can tell) of keeping the instance locally snapshotted.

You're correct that there's not a large benefit to a deployer unless resources are reclaimed -- perhaps some small power savings, and the freedom to migrate the instance transparently if desired. I would prefer to remove the instance when it's shelved rather than waiting for something, like a periodic task or admin API call, to trigger it. But booting disk-based images can take a fairly long time, so I've optimized for the case of an instance being shelved for a day or a weekend. That way users get acceptable unshelve times for the expected case, and deployers benefit when an instance is shelved longer term. I don't think this needs to be set in stone, and the internal workings can be modified as we find ways to improve it.

>> From a user standpoint, what they're looking for is:
>>
>> - The ability to retain the endpoint for API calls on that instance, so v2/tenant_id/servers/server_id continues to work after the instance is unshelved.
>> - All networking, attached volumes, admin pass, metadata, and other user-configurable properties remain unchanged when shelved/unshelved. Other properties like task/vm/power state, host, and the *_at timestamps may change.
>> - The ability to see that instance in their list of servers when shelved.
>
> This sounds like a good reason to keep this in Nova.
>
>> Again, the objection that has been raised is that it seems like orchestration and therefore would belong in Heat. While this is somewhat similar to a snapshot/destroy/rebuild workflow, there are certain properties of shelving in Nova that I can't see how to reproduce by handling this externally. At least not without exposing Nova internals beyond a comfortable level.
>
> What properties are those, and more importantly why do I need them?

Mainly uuid, but also the server listing. If Heat snapshots and removes an instance, it has no way to recreate it with the same uuid. As much as I wish it wasn't the case, this is important to users.

>> So I'd like to understand what the thinking is around why this belongs in Heat, and how that could be accomplished.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
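[Andrew's closing point -- that an external snapshot/destroy/rebuild workflow cannot preserve the server uuid -- can be illustrated with a toy sketch. The function names below are hypothetical stand-ins for the Nova API: server creation mints a new uuid server-side, so rebuilding from a snapshot never yields the original id.]

```python
import uuid

# Toy sketch (not Nova code): creating a server mints a fresh id
# server-side, so a Heat-driven snapshot/destroy/rebuild workflow
# cannot keep the original endpoint alive.

def create_server(image):
    # stand-in for POST /v2/{tenant_id}/servers
    return {"id": str(uuid.uuid4()), "image": image}

def snapshot(server):
    # stand-in for a createImage-style action
    return "snap-of-%s" % server["id"]

original = create_server(image="ubuntu")
snap = snapshot(original)
# An external workflow would now destroy `original` and rebuild it:
rebuilt = create_server(image=snap)
# The endpoint v2/tenant_id/servers/<original id> is gone for good.
endpoint_preserved = (rebuilt["id"] == original["id"])
```

Shelving sidesteps this because the server row, and therefore its uuid and listing entry, never leaves Nova.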