weizhouapache commented on issue #5527:
URL: https://github.com/apache/cloudstack/issues/5527#issuecomment-1041968420

> I think anti affinity groups need to have more attributes. A strict anti-affinity might require cluster spreading or even pod spreading for a VM. In those cases a VM should not be started at all if any VM in the group is already running on each host/cluster/pod. If the anti-affinity is not strict one might argue for instance, "no third VM may be started if any resource (host/cluster/pod) doesn't host two VMs from the group yet".
   
@DaanHoogland
This is how it works in CloudStack for now: VMs in the same host anti-affinity group cannot be deployed to the same host. This leads to the problem that users cannot deploy their VMs if there are not enough hosts.
   
We could rename the current "host anti-affinity" to "Strict host anti-affinity" and create a new type such as "Loose host anti-affinity". The new type would work like the guest OS preference: VMs are preferably allocated to different hosts, but if there are not enough hosts, a VM can be deployed to the same host as other VMs in the same affinity group.
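
For illustration, here is a minimal sketch of how host selection could differ between the two types; the class and method names are hypothetical and this is not the actual CloudStack planner code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical illustration of strict vs. loose host anti-affinity selection.
public class AntiAffinityExample {

    enum AffinityMode { STRICT, LOOSE }

    /**
     * Returns the hosts on which a new VM of the group may be placed.
     * STRICT: only hosts running no VM of the group; an empty result means
     *         the deployment fails.
     * LOOSE:  prefer hosts running no VM of the group, but fall back to all
     *         candidate hosts when none are free.
     */
    static List<String> candidateHosts(List<String> allHosts,
                                       Set<String> hostsUsedByGroup,
                                       AffinityMode mode) {
        List<String> free = new ArrayList<>();
        for (String host : allHosts) {
            if (!hostsUsedByGroup.contains(host)) {
                free.add(host);
            }
        }
        if (!free.isEmpty() || mode == AffinityMode.STRICT) {
            return free; // strict mode fails (empty list) when no free host exists
        }
        return allHosts; // loose mode: co-locate rather than fail
    }

    public static void main(String[] args) {
        List<String> hosts = List.of("host1", "host2");
        Set<String> used = Set.of("host1", "host2"); // two VMs of the group already placed

        // Third VM of the group with only two hosts available:
        System.out.println(candidateHosts(hosts, used, AffinityMode.STRICT)); // []
        System.out.println(candidateHosts(hosts, used, AffinityMode.LOOSE));  // [host1, host2]
    }
}
```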
   
Anyway, I believe that is not the root cause of this issue.
   
> of course there are monsters hiding in all details for this.
>
> > @Pearl1594 @davidjumani @weizhouapache @rohityadavcloud @DaanHoogland @andrijapanicsb this will a proper definition, VM deployments are always failing considering the anti-affinity groups, however if the VMs are stopped/started belonging to the anti-affinity group it is possible to have VMs running on the same host as this issue describes (anti affinity for 3 VMs with 2 hosts). Should stop/start also fail if the anti-affinity rules are not met? Looking forward to hearing your input
   
   

