btzq opened a new issue, #9145:
URL: https://github.com/apache/cloudstack/issues/9145

   <!--
   Verify first that your issue/request is not already reported on GitHub.
   Also test if the latest release and main branch are affected too.
   Always add information AFTER these HTML comments, but there is no need to delete
the comments.
   -->
   
   ##### ISSUE TYPE
   <!-- Pick one below and delete the rest -->
    * Bug Report
   
   ##### COMPONENT NAME
   <!--
   Categorize the issue, e.g. API, VR, VPN, UI, etc.
   -->
   ~~~
   Autoscaling, Load Balancer
   ~~~
   
   ##### CLOUDSTACK VERSION
   <!--
   New line separated list of affected versions, commit ID for issues on main 
branch.
   -->
   
   ~~~
   4.19.0
   ~~~
   
   ##### CONFIGURATION
   <!--
   Information about the configuration if relevant, e.g. basic network, 
advanced networking, etc.  N/A otherwise
   -->
   
   
   ##### OS / ENVIRONMENT
   <!--
   Information about the environment if relevant, N/A otherwise
   -->
   
   
   ##### SUMMARY
   <!-- Explain the problem/feature briefly -->
   
   We have a few Autoscale Rules running in production. For simplicity, let's
say we have 1 Autoscale Rule, with 4 VMs currently under it.
   
   2 of the VMs were running on a host that experienced a node failure.
   
   During the node failure, these 2 VMs are restarted on a new host. When you
then go to the Autoscale Load Balancing Rule, you will notice that the 2 VMs
are no longer under the Autoscale Rule. They are now orphans.
   
   However, the autoscale rule still reports a total of 4 scaled-up VMs.
   
   In summary:
   - Total no. of VMs resulting from autoscale: 4 VMs
   - No. of VMs under the LB rule: 2 VMs
   - No. of orphaned VMs: 2 VMs
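
   The mismatch can also be checked programmatically. Below is a minimal sketch
(Python). The dict shapes and the `vmcount` field are illustrative stand-ins
for the `listAutoScaleVmGroups` and `listLoadBalancerRuleInstances` API
responses, not verbatim CloudStack output:

   ~~~python
def count_orphans(group, lb_members):
    """Compare the VM count an autoscale group reports against the
    VMs actually attached to its load balancer rule."""
    reported = group["vmcount"]  # hypothetical field name
    attached = len(lb_members)
    return reported, attached, reported - attached

# Illustrative data mirroring the state described above.
group = {"id": "asg-1", "vmcount": 4}
lb_members = [{"id": "vm-1"}, {"id": "vm-2"}]  # only 2 VMs still under the rule

print(count_orphans(group, lb_members))  # (4, 2, 2)
   ~~~

   A non-zero third value indicates orphaned VMs that autoscale still counts
but the LB rule no longer tracks.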
   
   ##### STEPS TO REPRODUCE
   <!--
   For bugs, show exactly how to reproduce the problem, using a minimal 
test-case. Use Screenshots if accurate.
   
   For new features, show how the feature would be used.
   -->
   
   <!-- Paste example playbooks or commands between quotes below -->
   ~~~
   - Create an Autoscale Rule with multiple VMs
   - Live-migrate the VMs to another host
   - Force power off that host to simulate a node failure
   - Wait for the VMs to restart on a new host
   - Go to the Autoscale Load Balancer and display the list of VMs under the
load balancing rule
   - If the issue does not reproduce, retry the steps; it may be intermittent
   ~~~
   
   <!-- You can also paste gist.github.com links for larger files -->
   
   ##### EXPECTED RESULTS
   <!-- What did you expect to happen when running the steps above? -->
   
   ~~~
    Logic should be implemented to:
    - Scale up replacement VMs when the node is detected as down (to fulfill
the scale-up requirement)
    - Not restart the VMs from the failed host, to prevent confusion
   ~~~
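
   The expected behaviour could be sketched as a small recovery-planning step.
This is a hypothetical sketch with illustrative function and field names;
CloudStack's actual HA and autoscale managers are far more involved:

   ~~~python
def plan_recovery(vms_on_failed_host):
    """Return the recovery action for each VM on a failed host.

    For autoscale-managed VMs, expunge the failed VM and let the
    autoscale group scale up a fresh replacement, instead of an HA
    restart that leaves the VM orphaned from the LB rule.
    """
    actions = []
    for vm in vms_on_failed_host:
        if vm.get("autoscale_group"):
            actions.append((vm["id"], "expunge-and-scale-up"))
        else:
            actions.append((vm["id"], "ha-restart"))
    return actions

# Illustrative data: one autoscale-managed VM, one standalone VM.
vms = [
    {"id": "vm-1", "autoscale_group": "asg-1"},
    {"id": "vm-3", "autoscale_group": None},
]
print(plan_recovery(vms))
# [('vm-1', 'expunge-and-scale-up'), ('vm-3', 'ha-restart')]
   ~~~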
   
   ##### ACTUAL RESULTS
   <!-- What actually happened? -->
   
   <!-- Paste verbatim command output between quotes below -->
   ~~~
   VMs are missing from the load balancing rule
   ~~~
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
