Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions
Excerpts from Simon Pasquier's message of 2013-10-03 07:12:51 -0700:
> Hi Clint,
>
> Thanks for the reply! I'll update the bug you raised with more
> information. In the meantime, I agree with you that cfn-hup is enough
> for now. BTW, is there any bug or missing feature that would prevent me
> from replacing cfn-hup with os-collect-config?

The only problem might be that os-collect-config can currently only watch one path, although it was designed to watch multiple paths. That is just a bug, and hopefully it will get fixed soon.

Also, cfn-init will not know how to read the config info that os-collect-config produces, so if you are using cfn-init it is still better to use cfn-hup.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
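For readers unfamiliar with the cfn-hup approach being discussed, it is driven by a small INI configuration plus a hook file. The sketch below follows the cfn-hup configuration format; the stack name, resource name, paths, and handle URL are invented for illustration and would need to match your own template:

```ini
; /etc/cfn/cfn-hup.conf -- main configuration (names are illustrative)
[main]
stack=my-stack
credential-file=/etc/cfn/cfn-credentials
interval=5

; /etc/cfn/hooks.d/resignal.conf -- re-run cfn-signal whenever the
; instance's metadata changes after a stack update
[signal-on-update]
triggers=post.update
path=Resources.ComputeGroup.Metadata
action=/opt/aws/bin/cfn-signal -i "$(hostname)" 'WAIT_HANDLE_URL_GOES_HERE'
runas=root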
Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions
Hi Christopher,

Thanks for replying! I was out last week, hence this late email.

On 20/09/2013 21:22, Christopher Armstrong wrote:
> Hello Simon! I've put responses below. I'm kind of confused about your
> examples though, because you don't show anything that depends on
> ComputeReady in your template. I guess I can imagine some scenarios, but
> it's not very clear to me how this works. It'd be nice to make sure the
> new autoscaling solution that we're working on will support your case in
> a nice way, but I think we need some more information about what you're
> doing.
>
> The only time this would have an effect is if there's another resource
> depending on ComputeReady *that's also being updated at the same time*,
> because the only effect that a dependency has is to wait until it is met
> before performing create, update, or delete operations on other
> resources. So I think it would be nice to understand your use case a
> little bit more before continuing discussion.

I'm not sure I understand which template you're talking about: is it [1] or [2]? In both cases, nothing depends on ComputeReady: it is the guard condition and the last resource being created. And since it depends on the NumberOfComputes or NumberOfWaitConditions parameter, it gets updated when I update either of these.

[1] http://paste.openstack.org/show/47142/
[2] http://paste.openstack.org/show/47148/

--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com
Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions
Hi Clint,

Thanks for the reply! I'll update the bug you raised with more information. In the meantime, I agree with you that cfn-hup is enough for now. BTW, is there any bug or missing feature that would prevent me from replacing cfn-hup with os-collect-config?

Simon

On 20/09/2013 22:12, Clint Byrum wrote:
> Excerpts from Simon Pasquier's message of 2013-09-17 05:57:58 -0700:
>> Hello,
>>
>> I'm testing stack updates with instance groups and wait conditions and
>> I'd like to get feedback from the Heat community.
>>
>> My template declares an instance group resource with size = N and a
>> wait condition resource with count = N (N being passed as a parameter
>> of the template). Each of the group's instances calls cfn-signal (with
>> a different id!) at the end of its user data script, and my stack
>> creates with no error.
>>
>> Now when I update my stack to run N+X instances, the instance group
>> gets updated with size = N+X, but since the wait condition is deleted
>> and recreated, either the count value should be updated to X or my
>> existing instances should re-execute cfn-signal.
>
> That is a bug; the count should be something that can be updated
> in-place. https://bugs.launchpad.net/heat/+bug/1228362
>
> Once that is fixed, there will still be an odd interaction between the
> group and the wait condition: any new instance will add to the count,
> but removed instances will not decrease it. I'm not sure how to deal
> with that particular quirk.
>
> That said, rolling updates will likely produce some changes to the way
> updates interact with wait conditions, so that we can let instances
> and/or monitoring systems feed back when an instance is ready. That
> will also help deal with the problem you are seeing.
>
> In the meantime, cfn-hup is exactly what you want, and I see no problem
> with re-running cfn-signal after an update to signal that the update
> has applied.

--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com
Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions
Hello Simon! I've put responses below.

On Tue, Sep 17, 2013 at 7:57 AM, Simon Pasquier simon.pasqu...@bull.net wrote:
> Hello,
>
> I'm testing stack updates with instance groups and wait conditions and
> I'd like to get feedback from the Heat community.
>
> My template declares an instance group resource with size = N and a wait
> condition resource with count = N (N being passed as a parameter of the
> template). Each of the group's instances calls cfn-signal (with a
> different id!) at the end of its user data script, and my stack creates
> with no error.
>
> Now when I update my stack to run N+X instances, the instance group gets
> updated with size = N+X, but since the wait condition is deleted and
> recreated, either the count value should be updated to X or my existing
> instances should re-execute cfn-signal.

This is a pretty interesting scenario; I don't think we have a very good solution for it yet.

> To cope with this situation, I've found 2 options:
>
> 1/ declare 2 parameters in my template: the number of instances (N for
> creation, N+X for update) and the count of wait conditions (N for
> creation, X for update). See [1] for the details.
>
> 2/ declare only one parameter in my template (the size of the group) and
> leverage cfn-hup on the existing instances to re-execute cfn-signal.
> See [2] for the details.
>
> Solution 1 is not really user-friendly, and I found that solution 2 is a
> bit complicated. Does anybody know a simpler way to achieve the same
> result?

I definitely think #1 is better than #2, but you're right, it's also not very nice.

I'm kind of confused about your examples though, because you don't show anything that depends on ComputeReady in your template. I guess I can imagine some scenarios, but it's not very clear to me how this works. It'd be nice to make sure the new autoscaling solution that we're working on will support your case in a nice way, but I think we need some more information about what you're doing.

The only time this would have an effect is if there's another resource depending on ComputeReady *that's also being updated at the same time*, because the only effect that a dependency has is to wait until it is met before performing create, update, or delete operations on other resources. So I think it would be nice to understand your use case a little bit more before continuing discussion.

--
IRC: radix
Christopher Armstrong
Rackspace
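To make the dependency point above concrete: in a CFN-style Heat template, only a resource that explicitly depends on the wait condition, and that is itself touched by the same stack update, would be held back by it. A minimal sketch, with an invented resource name (`PostScaleJob`) and placeholder property values:

```json
{
  "Resources": {
    "PostScaleJob": {
      "Type": "AWS::EC2::Instance",
      "DependsOn": "ComputeReady",
      "Properties": {
        "ImageId": "some-image",
        "InstanceType": "m1.small"
      }
    }
  }
}
```

If no such resource exists, the wait condition acts purely as a guard on the stack reaching CREATE_COMPLETE / UPDATE_COMPLETE, which matches Simon's description of it as the last resource created.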
Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions
Excerpts from Simon Pasquier's message of 2013-09-17 05:57:58 -0700:
> Hello,
>
> I'm testing stack updates with instance groups and wait conditions and
> I'd like to get feedback from the Heat community.
>
> My template declares an instance group resource with size = N and a wait
> condition resource with count = N (N being passed as a parameter of the
> template). Each of the group's instances calls cfn-signal (with a
> different id!) at the end of its user data script, and my stack creates
> with no error.
>
> Now when I update my stack to run N+X instances, the instance group gets
> updated with size = N+X, but since the wait condition is deleted and
> recreated, either the count value should be updated to X or my existing
> instances should re-execute cfn-signal.

That is a bug; the count should be something that can be updated in-place. https://bugs.launchpad.net/heat/+bug/1228362

Once that is fixed, there will still be an odd interaction between the group and the wait condition: any new instance will add to the count, but removed instances will not decrease it. I'm not sure how to deal with that particular quirk.

That said, rolling updates will likely produce some changes to the way updates interact with wait conditions, so that we can let instances and/or monitoring systems feed back when an instance is ready. That will also help deal with the problem you are seeing.

In the meantime, cfn-hup is exactly what you want, and I see no problem with re-running cfn-signal after an update to signal that the update has applied.
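The quirk described above can be illustrated with a toy model (plain Python, not Heat code): once count becomes updatable in-place, signals from newly added instances accumulate against the required count, but instances removed by a scale-down never withdraw the signals they already sent, so the condition can stay satisfied with fewer live instances than signals.

```python
# Toy model of a wait condition whose count is updatable in-place.
# This is a sketch of the described behavior, not Heat's implementation.
class WaitConditionModel:
    def __init__(self, count):
        self.count = count      # number of signals required
        self.signals = set()    # unique signal ids received so far

    def signal(self, uid):
        # Each instance signals with its own unique id, as in the thread.
        self.signals.add(uid)

    def is_ready(self):
        return len(self.signals) >= self.count

# Create with N=2: both instances run cfn-signal and the stack completes.
wc = WaitConditionModel(count=2)
wc.signal("instance-1")
wc.signal("instance-2")
assert wc.is_ready()

# Scale down to 1: the departed instance's signal is never subtracted,
# so the condition remains satisfied without any instance re-signaling.
wc.count = 1
assert wc.is_ready()

# Scale up to 3: the condition now waits only for the one new signal,
# since the two original signals still count.
wc.count = 3
assert not wc.is_ready()
wc.signal("instance-3")
assert wc.is_ready()
```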
[openstack-dev] [Heat] question about stack updates, instance groups and wait conditions
Hello,

I'm testing stack updates with instance groups and wait conditions and I'd like to get feedback from the Heat community.

My template declares an instance group resource with size = N and a wait condition resource with count = N (N being passed as a parameter of the template). Each of the group's instances calls cfn-signal (with a different id!) at the end of its user data script, and my stack creates with no error.

Now when I update my stack to run N+X instances, the instance group gets updated with size = N+X, but since the wait condition is deleted and recreated, either the count value should be updated to X or my existing instances should re-execute cfn-signal.

To cope with this situation, I've found 2 options:

1/ declare 2 parameters in my template: the number of instances (N for creation, N+X for update) and the count of wait conditions (N for creation, X for update). See [1] for the details.

2/ declare only one parameter in my template (the size of the group) and leverage cfn-hup on the existing instances to re-execute cfn-signal. See [2] for the details.

Solution 1 is not really user-friendly, and I found that solution 2 is a bit complicated. Does anybody know a simpler way to achieve the same result?

Regards,

[1] http://paste.openstack.org/show/47142/
[2] http://paste.openstack.org/show/47148/

--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com
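The pastes [1] and [2] are the authoritative versions of the templates being discussed. As a rough sketch of the shape described above, in CFN-style Heat JSON (resource and parameter names are guesses, and the launch configuration that runs cfn-signal in its user data is omitted):

```json
{
  "Parameters": {
    "NumberOfComputes": { "Type": "Number", "Default": "2" }
  },
  "Resources": {
    "ComputeGroup": {
      "Type": "OS::Heat::InstanceGroup",
      "Properties": {
        "Size": { "Ref": "NumberOfComputes" },
        "LaunchConfigurationName": { "Ref": "ComputeConfig" },
        "AvailabilityZones": { "Fn::GetAZs": "" }
      }
    },
    "ComputeReadyHandle": {
      "Type": "AWS::CloudFormation::WaitConditionHandle"
    },
    "ComputeReady": {
      "Type": "AWS::CloudFormation::WaitCondition",
      "DependsOn": "ComputeGroup",
      "Properties": {
        "Handle": { "Ref": "ComputeReadyHandle" },
        "Count": { "Ref": "NumberOfComputes" },
        "Timeout": "600"
      }
    }
  }
}
```

The update problem follows directly from this layout: changing NumberOfComputes replaces ComputeReady with a fresh wait condition expecting N+X signals, while only the X new instances will ever run cfn-signal.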