2014-12-22 21:50 GMT+08:00 Sylvain Bauza <[email protected]>:
>
> On 22/12/2014 13:37, Alex Xu wrote:
>
> 2014-12-22 10:36 GMT+08:00 Lingxian Kong <[email protected]>:
>
>> 2014-12-22 9:21 GMT+08:00 Alex Xu <[email protected]>:
>> >
>> > 2014-12-22 9:01 GMT+08:00 Lingxian Kong <[email protected]>:
>> >>
>> >> but what if the compute node comes back to normal? There will be
>> >> instances in the same server group with the affinity policy, but
>> >> located on different hosts.
>> >>
>> >
>> > If the operator decides to evacuate the instances from the failed host,
>> > we should fence the failed host first.
>>
>> Yes, exactly. I mean the recommendation or prerequisite should be
>> emphasized somewhere, e.g. in the Operations Guide, otherwise it'll make
>> things more confusing. But the issue you are working around is indeed a
>> problem we should solve.
>>
> Yeah, you are right, we should document it if we think this makes sense.
> Thanks!
>
> As I said, I'm not in favor of adding more complexity to the instance
> group setup that is done in the conductor, for basic race condition
> reasons.
>
Emm... anyway, can we resolve it for now?

> If I understand correctly, the problem is: when there is only one host for
> all the instances belonging to a group with the affinity filter and this
> host is down, then the filter will deny any other host, and consequently
> the request will fail while it should succeed.
>

Yes, you understand correctly. Thanks for explaining that; it makes it
easier for other people to follow what we are talking about.

> Is this really a problem? I mean, it appears to me that's normal
> behaviour, because a filter is by definition a *hard* policy.
>

Yeah, it isn't a problem in the normal case, but it is a problem for VM HA.
So I want to ask whether we should tell users that using the *hard* policy
means giving up VM HA. If we choose that, we should probably document it
somewhere so users are aware. But if users can have the *hard* policy and
VM HA at the same time, and we don't break anything (except slightly more
complex code), that sounds better for users.

> So, provided you would like to implement *soft* policies, that sounds more
> like a *weigher* that you would like to have: i.e. make sure that hosts
> running existing instances of the group are weighted higher than other
> ones, so they'll be chosen every time; but in case they're down, allow the
> scheduler to pick other hosts.
>

Yes, a soft policy wouldn't have this problem.

> HTH,
> -Sylvain
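
For what it's worth, here is a rough sketch of what such a soft-affinity
weigher could look like. It assumes the existing BaseHostWeigher interface
and the 'group_hosts' information the scheduler already computes for the
affinity filter; the class name and weight values are just illustrative,
not an actual patch:

    from nova.scheduler import weights

    class SoftAffinityWeigher(weights.BaseHostWeigher):
        """Prefer hosts already running members of the instance group,
        without excluding any host (unlike the hard affinity filter)."""

        def _weigh_object(self, host_state, weight_properties):
            group_hosts = weight_properties.get('group_hosts') or []
            # Hosts that already run group members get a higher weight, so
            # they win whenever they are up; if they are all down, the
            # scheduler can still fall back to any other host instead of
            # failing the whole request.
            return 1.0 if host_state.host in group_hosts else 0.0

Such a weigher would then be enabled via the scheduler_weight_classes
option, instead of (or alongside) the hard ServerGroupAffinityFilter.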
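
And on the fence-then-evacuate point earlier in the thread, this is roughly
what that operator flow could look like, sketched with python-novaclient.
Note that real fencing means powering the node off (IPMI/Pacemaker, outside
Nova); the service-disable call below only keeps the scheduler away from
the dead host, and the credentials/host name are placeholders:

    from novaclient import client

    USERNAME, PASSWORD = 'admin', 'secret'            # placeholders
    PROJECT, AUTH_URL = 'admin', 'http://keystone:5000/v2.0'
    FAILED_HOST = 'compute-1'                         # the node that went down

    nova = client.Client('2', USERNAME, PASSWORD, PROJECT, AUTH_URL)

    # 1. "Fence" the host from Nova's point of view: disable its compute
    #    service so the scheduler never picks it while it is recovered.
    nova.services.disable(FAILED_HOST, 'nova-compute')

    # 2. Evacuate (rebuild elsewhere) every instance that was running there.
    #    host=None lets the scheduler pick a target; older releases may
    #    require an explicit target host.
    for server in nova.servers.list(search_opts={'host': FAILED_HOST,
                                                 'all_tenants': 1}):
        nova.servers.evacuate(server, host=None, on_shared_storage=True)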
