GitHub user n4l5u0r added a comment to the discussion: VR only starts to work 
at 100% after migration

> ok, so it looks like it is not a VLAN misconfiguration issue of the guest network
>
> can you check if the public IPs of the CKS nodes are reachable from the management
> server when the VR is running on various hosts? ACS configures the CKS nodes via
> the public IP.
>
> ```
> ssh -i ~cloud/.ssh/id_rsa -p 2222 cloud@<Public ip address of CKS cluster>
> ```
>
> just to verify whether the issue is with the public network

It is accessible from the KVM hosts but not from the management hosts, as the 
management network is isolated on 10.10.0.0/20 and the public network is on 
10.40.0.0/20.


We only have a proxy on port 8080 from the public network to the management 
network so the CKS nodes can reach the `endpoint.url` and update the CKS status. 
This endpoint is triggered properly once the VR has been migrated once.

Our current setup:

[Mgmt hosts <- VIP]<-[haproxy<-KVM hosts<-VIP]<-VR

HAProxy forwards the HTTP traffic on port 8080 from 10.40.0.0/20 to 10.10.0.0/20.

The VIP is managed through keepalived on each host.
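
The forwarding described above could be sketched as a minimal HAProxy frontend/backend pair. This is only an illustrative fragment, not our actual configuration; the bind address, backend IPs, and section names are hypothetical placeholders chosen to match the subnets mentioned above:

```
# Hypothetical sketch of the port-8080 forwarding; all addresses are placeholders.
frontend cks_endpoint
    # keepalived-managed VIP on the public network (10.40.0.0/20)
    bind 10.40.0.10:8080
    mode http
    default_backend mgmt_servers

backend mgmt_servers
    mode http
    # Management servers on the isolated management network (10.10.0.0/20)
    server mgmt1 10.10.0.11:8080 check
    server mgmt2 10.10.0.12:8080 check
```

With keepalived running on each KVM host, the frontend VIP floats between hosts, so the CKS nodes always reach `endpoint.url` through whichever host currently holds the VIP.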

GitHub link: 
https://github.com/apache/cloudstack/discussions/12209#discussioncomment-15205819

----
This is an automatically sent email for [email protected].
To unsubscribe, please send an email to: [email protected]