Hello list,

A few hours ago I successfully (?) created a cluster and added two nodes, ve1
and ve2. Each node has vmbr0, which is bridged to eth0 and has public IPs
from the datacenter, and vmbr1, which is bridged to eth1. eth1 on both nodes
is linked to a switch and uses a 10.0.0.0/16 internal network.
Everything is working fine and both hosts can ping each other via vmbr1...
except that VM migration happens via vmbr0 at ~50 MB/s. I'd like it to go
over vmbr1 and our internal network instead, and faster.

Actually, "pvecm status" on both nodes shows the public IP as the Node
Address... is there a way to change those IPs to the internal vmbr1 ones
without pain?
Is it only an /etc/hosts issue, or is that kind of node configuration stored
elsewhere?
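For reference, the change I had in mind is just pointing each node name at
its internal address (hypothetical addresses below, assuming the node names
currently resolve to the public IPs via /etc/hosts on each node):

```
# /etc/hosts on ve1 (and the analogous entries on ve2)
# hypothetical internal IPs on the 10.0.0.0/16 network
10.0.0.1    ve1.example.com ve1
10.0.0.2    ve2.example.com ve2
```

But I'm not sure whether the cluster caches the address it saw at join time
somewhere else, which is why I'm asking before touching anything.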

thanks in advance

-- 
Giampaolo Bozzali a.k.a Panda^(funk) - http://pandafunk.blogspot.com
_______________________________________________
pve-user mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
