satishglondhe commented on issue #5976:
URL: https://github.com/apache/cloudstack/issues/5976#issuecomment-1053948242


   Yes, the Kubernetes node names and the names of the VMs in the CloudStack UI match. I deployed the metrics server as you mentioned, created an HPA, and increased the load, following this article: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/. I could see the HPA creating new pods as the load increased, but once the node's resources reached their limit, the remaining pods went into the Pending state. I monitored the cluster-autoscaler logs for quite a while, but they only reported "Failed to get node infos for groups". The autoscaler never detected that the pods were unschedulable and never came up with a plan to scale up. I am attaching the log cycle for the cluster-autoscaler pod below, after a rough outline of the steps I followed.
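   For reference, the steps from that walkthrough look roughly like this; the sample deployment, image and CPU threshold are the walkthrough's defaults, so my exact values may have differed:

   # metrics server (already installed, as mentioned above)
   kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

   # sample php-apache workload and an HPA targeting 50% CPU
   kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
   kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

   # generate load and watch the HPA add replicas
   kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
   kubectl get hpa php-apache --watch

   # cross-check Kubernetes node names against the VM names shown in the CloudStack UI
   kubectl get nodes -o wide
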
   root@ip-172-31-92-239:~# kubectl logs -f cluster-autoscaler-6fdd97ff58-c5nw2 
-n kube-system
   I0226 19:37:39.546044       1 flags.go:52] FLAG: --add-dir-header="false"
   I0226 19:37:39.546130       1 flags.go:52] FLAG: --address=":8085"
   I0226 19:37:39.546149       1 flags.go:52] FLAG: --alsologtostderr="false"
   I0226 19:37:39.546163       1 flags.go:52] FLAG: 
--aws-use-static-instance-list="false"
   I0226 19:37:39.546175       1 flags.go:52] FLAG: 
--balance-similar-node-groups="false"
   I0226 19:37:39.546188       1 flags.go:52] FLAG: 
--balancing-ignore-label="[]"
   I0226 19:37:39.546200       1 flags.go:52] FLAG: 
--cloud-config="/config/cloud-config"
   I0226 19:37:39.546213       1 flags.go:52] FLAG: 
--cloud-provider="cloudstack"
   I0226 19:37:39.546250       1 flags.go:52] FLAG: 
--cloud-provider-gce-l7lb-src-cidrs="130.211.0.0/22,35.191.0.0/16"
   I0226 19:37:39.546277       1 flags.go:52] FLAG: 
--cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
   I0226 19:37:39.546302       1 flags.go:52] FLAG: --cluster-name=""
   I0226 19:37:39.546315       1 flags.go:52] FLAG: 
--clusterapi-cloud-config-authoritative="false"
   I0226 19:37:39.546328       1 flags.go:52] FLAG: --cores-total="0:320000"
   I0226 19:37:39.546348       1 flags.go:52] FLAG: --estimator="binpacking"
   I0226 19:37:39.546386       1 flags.go:52] FLAG: --expander="random"
   I0226 19:37:39.546417       1 flags.go:52] FLAG: 
--expendable-pods-priority-cutoff="-10"
   I0226 19:37:39.546439       1 flags.go:52] FLAG: --gpu-total="[]"
   I0226 19:37:39.546453       1 flags.go:52] FLAG: 
--ignore-daemonsets-utilization="false"
   I0226 19:37:39.546473       1 flags.go:52] FLAG: 
--ignore-mirror-pods-utilization="false"
   I0226 19:37:39.546486       1 flags.go:52] FLAG: --ignore-taint="[]"
   I0226 19:37:39.546499       1 flags.go:52] FLAG: --kubeconfig=""
   I0226 19:37:39.546534       1 flags.go:52] FLAG: --kubernetes=""
   I0226 19:37:39.546564       1 flags.go:52] FLAG: --leader-elect="true"
   I0226 19:37:39.546587       1 flags.go:52] FLAG: 
--leader-elect-lease-duration="15s"
   I0226 19:37:39.546601       1 flags.go:52] FLAG: 
--leader-elect-renew-deadline="10s"
   I0226 19:37:39.546614       1 flags.go:52] FLAG: 
--leader-elect-resource-lock="leases"
   I0226 19:37:39.546628       1 flags.go:52] FLAG: 
--leader-elect-resource-name=""
   I0226 19:37:39.546648       1 flags.go:52] FLAG: 
--leader-elect-resource-namespace=""
   I0226 19:37:39.546679       1 flags.go:52] FLAG: 
--leader-elect-retry-period="2s"
   I0226 19:37:39.546709       1 flags.go:52] FLAG: --log-backtrace-at=":0"
   I0226 19:37:39.546727       1 flags.go:52] FLAG: --log-dir=""
   I0226 19:37:39.546749       1 flags.go:52] FLAG: --log-file=""
   I0226 19:37:39.546762       1 flags.go:52] FLAG: --log-file-max-size="1800"
   I0226 19:37:39.546775       1 flags.go:52] FLAG: --logtostderr="true"
   I0226 19:37:39.546795       1 flags.go:52] FLAG: 
--max-autoprovisioned-node-group-count="15"
   I0226 19:37:39.546829       1 flags.go:52] FLAG: 
--max-bulk-soft-taint-count="10"
   I0226 19:37:39.546854       1 flags.go:52] FLAG: 
--max-bulk-soft-taint-time="3s"
   I0226 19:37:39.546876       1 flags.go:52] FLAG: --max-empty-bulk-delete="10"
   I0226 19:37:39.546889       1 flags.go:52] FLAG: --max-failing-time="15m0s"
   I0226 19:37:39.546902       1 flags.go:52] FLAG: 
--max-graceful-termination-sec="600"
   I0226 19:37:39.546914       1 flags.go:52] FLAG: --max-inactivity="10m0s"
   I0226 19:37:39.546934       1 flags.go:52] FLAG: 
--max-node-provision-time="15m0s"
   I0226 19:37:39.546965       1 flags.go:52] FLAG: --max-nodes-total="0"
   I0226 19:37:39.546995       1 flags.go:52] FLAG: 
--max-total-unready-percentage="45"
   I0226 19:37:39.547017       1 flags.go:52] FLAG: --memory-total="0:6400000"
   I0226 19:37:39.547032       1 flags.go:52] FLAG: --min-replica-count="0"
   I0226 19:37:39.547044       1 flags.go:52] FLAG: --namespace="kube-system"
   I0226 19:37:39.547057       1 flags.go:52] FLAG: 
--new-pod-scale-up-delay="0s"
   I0226 19:37:39.547069       1 flags.go:52] FLAG: 
--node-autoprovisioning-enabled="false"
   I0226 19:37:39.547105       1 flags.go:52] FLAG: 
--node-deletion-delay-timeout="2m0s"
   I0226 19:37:39.547135       1 flags.go:52] FLAG: 
--node-group-auto-discovery="[]"
   I0226 19:37:39.547157       1 flags.go:52] FLAG: 
--nodes="[1:3:1038d730-7444-4fde-bf1b-9f86712943df]"
   I0226 19:37:39.547171       1 flags.go:52] FLAG: --ok-total-unready-count="3"
   I0226 19:37:39.547191       1 flags.go:52] FLAG: --profiling="false"
   I0226 19:37:39.547204       1 flags.go:52] FLAG: --regional="false"
   I0226 19:37:39.547216       1 flags.go:52] FLAG: 
--scale-down-candidates-pool-min-count="50"
   I0226 19:37:39.547251       1 flags.go:52] FLAG: 
--scale-down-candidates-pool-ratio="0.1"
   I0226 19:37:39.547281       1 flags.go:52] FLAG: 
--scale-down-delay-after-add="10m0s"
   I0226 19:37:39.547302       1 flags.go:52] FLAG: 
--scale-down-delay-after-delete="0s"
   I0226 19:37:39.547316       1 flags.go:52] FLAG: 
--scale-down-delay-after-failure="3m0s"
   I0226 19:37:39.547328       1 flags.go:52] FLAG: --scale-down-enabled="true"
   I0226 19:37:39.547348       1 flags.go:52] FLAG: 
--scale-down-gpu-utilization-threshold="0.5"
   I0226 19:37:39.547362       1 flags.go:52] FLAG: 
--scale-down-non-empty-candidates-count="30"
   I0226 19:37:39.547393       1 flags.go:52] FLAG: 
--scale-down-unneeded-time="10m0s"
   I0226 19:37:39.547423       1 flags.go:52] FLAG: 
--scale-down-unready-time="20m0s"
   I0226 19:37:39.547438       1 flags.go:52] FLAG: 
--scale-down-utilization-threshold="0.5"
   I0226 19:37:39.547451       1 flags.go:52] FLAG: --scale-up-from-zero="true"
   I0226 19:37:39.547463       1 flags.go:52] FLAG: --scan-interval="10s"
   I0226 19:37:39.547475       1 flags.go:52] FLAG: --skip-headers="false"
   I0226 19:37:39.547488       1 flags.go:52] FLAG: --skip-log-headers="false"
   I0226 19:37:39.547508       1 flags.go:52] FLAG: 
--skip-nodes-with-local-storage="false"
   I0226 19:37:39.547543       1 flags.go:52] FLAG: 
--skip-nodes-with-system-pods="true"
   I0226 19:37:39.547567       1 flags.go:52] FLAG: --stderrthreshold="0"
   I0226 19:37:39.547589       1 flags.go:52] FLAG: 
--unremovable-node-recheck-timeout="5m0s"
   I0226 19:37:39.547618       1 flags.go:52] FLAG: --v="4"
   I0226 19:37:39.547644       1 flags.go:52] FLAG: --vmodule=""
   I0226 19:37:39.547692       1 flags.go:52] FLAG: 
--write-status-configmap="true"
   I0226 19:37:39.547723       1 main.go:378] Cluster Autoscaler 1.19.0-beta.1
   I0226 19:37:39.557847       1 leaderelection.go:246] attempting to acquire 
leader lease kube-system/cluster-autoscaler...
   I0226 19:37:39.560045       1 leaderelection.go:345] lock is held by 
cluster-autoscaler-6f546b8c58-g74wq and has not yet expired
   I0226 19:37:39.560103       1 leaderelection.go:251] failed to acquire lease 
kube-system/cluster-autoscaler
   I0226 19:37:43.014256       1 leaderelection.go:345] lock is held by 
cluster-autoscaler-6f546b8c58-g74wq and has not yet expired
   I0226 19:37:43.014273       1 leaderelection.go:251] failed to acquire lease 
kube-system/cluster-autoscaler
   I0226 19:37:47.274930       1 leaderelection.go:345] lock is held by 
cluster-autoscaler-6f546b8c58-g74wq and has not yet expired
   I0226 19:37:47.274944       1 leaderelection.go:251] failed to acquire lease 
kube-system/cluster-autoscaler
   I0226 19:37:50.872948       1 leaderelection.go:345] lock is held by 
cluster-autoscaler-6f546b8c58-g74wq and has not yet expired
   I0226 19:37:50.872961       1 leaderelection.go:251] failed to acquire lease 
kube-system/cluster-autoscaler
   I0226 19:37:53.926798       1 leaderelection.go:345] lock is held by 
cluster-autoscaler-6f546b8c58-g74wq and has not yet expired
   I0226 19:37:53.926944       1 leaderelection.go:251] failed to acquire lease 
kube-system/cluster-autoscaler
   I0226 19:37:56.953890       1 leaderelection.go:256] successfully acquired 
lease kube-system/cluster-autoscaler
   I0226 19:37:56.954242       1 event_sink_logging_wrapper.go:48] 
Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", 
Name:"cluster-autoscaler", UID:"f8780dc2-891a-4515-9a73-d6e4de715780", 
APIVersion:"coordination.k8s.io/v1", ResourceVersion:"64606", FieldPath:""}): 
type: 'Normal' reason: 'LeaderElection' cluster-autoscaler-6fdd97ff58-c5nw2 
became leader
   I0226 19:37:56.958415       1 reflector.go:210] Starting reflector *v1.Pod 
(1h0m0s) from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:188
   I0226 19:37:56.958755       1 reflector.go:246] Listing and watching *v1.Pod 
from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:188
   I0226 19:37:56.958631       1 reflector.go:210] Starting reflector *v1.Pod 
(1h0m0s) from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:212
   I0226 19:37:56.959304       1 reflector.go:246] Listing and watching *v1.Pod 
from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:212
   I0226 19:37:56.958671       1 reflector.go:210] Starting reflector *v1.Node 
(1h0m0s) from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
   I0226 19:37:56.959468       1 reflector.go:246] Listing and watching 
*v1.Node from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
   I0226 19:37:56.958680       1 reflector.go:210] Starting reflector *v1.Node 
(1h0m0s) from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
   I0226 19:37:56.959610       1 reflector.go:246] Listing and watching 
*v1.Node from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
   I0226 19:37:56.958686       1 reflector.go:210] Starting reflector 
*v1beta1.PodDisruptionBudget (1h0m0s) from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309
   I0226 19:37:56.959774       1 reflector.go:246] Listing and watching 
*v1beta1.PodDisruptionBudget from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309
   I0226 19:37:56.958699       1 reflector.go:210] Starting reflector 
*v1.DaemonSet (1h0m0s) from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:320
   I0226 19:37:56.959956       1 reflector.go:246] Listing and watching 
*v1.DaemonSet from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:320
   I0226 19:37:56.958707       1 reflector.go:210] Starting reflector 
*v1.ReplicationController (1h0m0s) from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:329
   I0226 19:37:56.960207       1 reflector.go:246] Listing and watching 
*v1.ReplicationController from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:329
   I0226 19:37:56.958725       1 reflector.go:210] Starting reflector *v1.Job 
(1h0m0s) from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:338
   I0226 19:37:56.960784       1 reflector.go:246] Listing and watching *v1.Job 
from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:338
   I0226 19:37:56.958733       1 reflector.go:210] Starting reflector 
*v1.ReplicaSet (1h0m0s) from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:347
   I0226 19:37:56.961023       1 reflector.go:246] Listing and watching 
*v1.ReplicaSet from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:347
   I0226 19:37:56.958740       1 reflector.go:210] Starting reflector 
*v1.StatefulSet (1h0m0s) from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:356
   I0226 19:37:57.044500       1 reflector.go:246] Listing and watching 
*v1.StatefulSet from 
/home/djumani/lab/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:356
   I0226 19:37:57.346841       1 registry.go:166] Registering SelectorSpread 
plugin
   I0226 19:37:57.346860       1 registry.go:166] Registering SelectorSpread 
plugin
   I0226 19:37:57.355433       1 cloud_provider_builder.go:29] Building 
cloudstack cloud provider.
   I0226 19:37:57.355701       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:37:57.448589       1 reflector.go:210] Starting reflector 
*v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.448616       1 reflector.go:246] Listing and watching 
*v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.448934       1 reflector.go:210] Starting reflector 
*v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.448947       1 reflector.go:246] Listing and watching 
*v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.449213       1 reflector.go:210] Starting reflector 
*v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.449239       1 reflector.go:246] Listing and watching 
*v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.449511       1 reflector.go:210] Starting reflector *v1.Pod 
(0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.449524       1 reflector.go:246] Listing and watching *v1.Pod 
from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.544676       1 reflector.go:210] Starting reflector *v1.Node 
(0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.544720       1 reflector.go:246] Listing and watching 
*v1.Node from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.545074       1 reflector.go:210] Starting reflector 
*v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.545086       1 reflector.go:246] Listing and watching 
*v1.Service from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.545412       1 reflector.go:210] Starting reflector 
*v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.545424       1 reflector.go:246] Listing and watching 
*v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.545689       1 reflector.go:210] Starting reflector 
*v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.545701       1 reflector.go:246] Listing and watching 
*v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.545951       1 reflector.go:210] Starting reflector 
*v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.545963       1 reflector.go:246] Listing and watching 
*v1.StorageClass from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.546269       1 reflector.go:210] Starting reflector 
*v1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.546281       1 reflector.go:246] Listing and watching 
*v1.CSINode from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.546558       1 reflector.go:210] Starting reflector 
*v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
   I0226 19:37:57.546571       1 reflector.go:246] Listing and watching 
*v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
   W0226 19:37:57.547858       1 warnings.go:70] policy/v1beta1 
PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use 
policy/v1 PodDisruptionBudget
   W0226 19:37:57.550087       1 warnings.go:70] policy/v1beta1 
PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use 
policy/v1 PodDisruptionBudget
   I0226 19:37:57.644331       1 request.go:581] Throttling request took 
97.592619ms, request: 
GET:https://10.96.0.1:443/api/v1/replicationcontrollers?limit=500&resourceVersion=0
   W0226 19:37:57.645374       1 warnings.go:70] policy/v1beta1 
PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use 
policy/v1 PodDisruptionBudget
   W0226 19:37:57.750211       1 warnings.go:70] policy/v1beta1 
PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use 
policy/v1 PodDisruptionBudget
   I0226 19:37:57.844566       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:37:57.845813       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc00112a180 
0xc00112a1b0]}
   I0226 19:37:57.846030       1 main.go:280] Registered cleanup signal handler
   I0226 19:37:57.846150       1 node_instances_cache.go:156] Start refreshing 
cloud provider node instances cache
   I0226 19:37:57.846239       1 node_instances_cache.go:168] Refresh cloud 
provider node instances cache finished, refresh took 4.652µs
   I0226 19:38:07.846246       1 static_autoscaler.go:229] Starting main loop
   I0226 19:38:07.846488       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:38:07.881961       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:38:07.882417       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000568cc0 
0xc000569230]}
   E0226 19:38:07.882509       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:38:17.882782       1 static_autoscaler.go:229] Starting main loop
   I0226 19:38:17.882968       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:38:17.915704       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:38:17.916052       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc0010b6510 
0xc0010b6540]}
   E0226 19:38:17.916262       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:38:27.916535       1 static_autoscaler.go:229] Starting main loop
   I0226 19:38:27.916742       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:38:27.949574       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:38:27.949944       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc0010de2d0 
0xc0010de300]}
   E0226 19:38:27.950067       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:38:37.950294       1 static_autoscaler.go:229] Starting main loop
   I0226 19:38:37.950724       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:38:37.986748       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:38:37.988575       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000c74d20 
0xc000c74d50]}
   E0226 19:38:37.988820       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-node-17f362f60c4 in cluster
   I0226 19:38:47.990150       1 static_autoscaler.go:229] Starting main loop
   I0226 19:38:47.990450       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:38:48.025570       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:38:48.025909       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc00112bb60 
0xc00112bb90]}
   E0226 19:38:48.025983       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:38:58.026181       1 static_autoscaler.go:229] Starting main loop
   I0226 19:38:58.026322       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:38:58.059232       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:38:58.059452       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000b83800 
0xc000b83830]}
   E0226 19:38:58.059515       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:39:08.059645       1 static_autoscaler.go:229] Starting main loop
   I0226 19:39:08.059759       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:39:08.097115       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:39:08.097587       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000b34600 
0xc000b34630]}
   E0226 19:39:08.097712       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:39:18.097917       1 static_autoscaler.go:229] Starting main loop
   I0226 19:39:18.098085       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:39:18.135679       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:39:18.136097       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000bb2330 
0xc000bb2390]}
   E0226 19:39:18.136211       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-node-17f362f60c4 in cluster
   I0226 19:39:28.136375       1 static_autoscaler.go:229] Starting main loop
   I0226 19:39:28.136557       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:39:28.170063       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:39:28.170688       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000b5c1e0 
0xc000b5c210]}
   E0226 19:39:28.170871       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:39:38.171127       1 static_autoscaler.go:229] Starting main loop
   I0226 19:39:38.171272       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:39:38.204127       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:39:38.204608       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000b53530 
0xc000b53560]}
   E0226 19:39:38.204739       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:39:48.204940       1 static_autoscaler.go:229] Starting main loop
   I0226 19:39:48.205066       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:39:48.235646       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:39:48.236045       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000bbb4d0 
0xc000bbb500]}
   E0226 19:39:48.236109       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:39:57.846437       1 node_instances_cache.go:156] Start refreshing 
cloud provider node instances cache
   I0226 19:39:57.846502       1 node_instances_cache.go:168] Refresh cloud 
provider node instances cache finished, refresh took 5.778µs
   I0226 19:39:58.236278       1 static_autoscaler.go:229] Starting main loop
   I0226 19:39:58.236397       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:39:58.270179       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:39:58.270559       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000eb2f30 
0xc000eb2f60]}
   E0226 19:39:58.270726       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-node-17f362f60c4 in cluster
   I0226 19:40:08.271034       1 static_autoscaler.go:229] Starting main loop
   I0226 19:40:08.271154       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:40:08.303001       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:40:08.303372       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000f7a780 
0xc000f7a7b0]}
   E0226 19:40:08.303450       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:40:18.303605       1 static_autoscaler.go:229] Starting main loop
   I0226 19:40:18.303732       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:40:18.339958       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:40:18.340366       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc001088b40 
0xc001088b70]}
   E0226 19:40:18.340562       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:40:28.340915       1 static_autoscaler.go:229] Starting main loop
   I0226 19:40:28.341042       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:40:28.377760       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:40:28.378288       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc001096e10 
0xc001096e40]}
   E0226 19:40:28.378496       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:40:38.378731       1 static_autoscaler.go:229] Starting main loop
   I0226 19:40:38.378897       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:40:38.410575       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:40:38.410884       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc000ffac90 
0xc000ffacc0]}
   E0226 19:40:38.410944       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster
   I0226 19:40:48.411112       1 static_autoscaler.go:229] Starting main loop
   I0226 19:40:48.411270       1 client.go:169] NewAPIRequest API request 
URL:http://103.13.114.141:8080/client/api?apiKey=*&command=listKubernetesClusters&id=1038d730-7444-4fde-bf1b-9f86712943df&response=json&signature=*
   I0226 19:40:48.450260       1 client.go:175] NewAPIRequest response status 
code:200
   I0226 19:40:48.450577       1 cloudstack_manager.go:88] Got cluster : 
&{1038d730-7444-4fde-bf1b-9f86712943df maroon 0 0 1 1 [0xc0010bf2f0 
0xc0010bf320]}
   E0226 19:40:48.450672       1 static_autoscaler.go:271] Failed to get node 
infos for groups: Unable to find node maroon-control-17f362f2d09 in cluster

