On 26.09.2018 17:26, lists wrote:
Hi all,
I am going to make some changes to our Proxmox networking, and I'd like
some fresh eyes to take a look at my plans... :-)

We have three pve hosts; the ceph network is a meshed 10G setup that
connects the pve hosts directly to each other. Client access is on a
'regular' 1G NIC.
Sample /etc/network/interfaces (server pve10) for the current config:
# client access
auto vmbr0
iface vmbr0 inet static
    address 192.168.89.10
    netmask 255.255.255.0
    gateway 192.168.89.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# to pve2/ceph
auto eth2
iface eth2 inet static
    address 10.10.89.1
    netmask 255.255.255.0
    mtu 9000
    up route add -net 10.10.89.2 netmask 255.255.255.255 dev eth2
    down route del -net 10.10.89.2 netmask 255.255.255.255 dev eth2

# to pve3/ceph
auto eth3
iface eth3 inet static
    address 10.10.89.1
    netmask 255.255.255.0
    mtu 9000
    up route add -net 10.10.89.3 netmask 255.255.255.255 dev eth3
    down route del -net 10.10.89.3 netmask 255.255.255.255 dev eth3
For more info on the meshed network, see
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server. It
works very nicely.
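For what it's worth, a quick sanity check for mesh links like these is
something along the lines of (run on pve10, addresses as in the config
above; exact output may vary by kernel):

ip route get 10.10.89.2
# should resolve directly via the mesh link: 10.10.89.2 dev eth2 src 10.10.89.1
ping -M do -s 8972 10.10.89.2
# -M do forbids fragmentation; 8972 data + 28 bytes IP/ICMP headers = 9000 MTU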
Now we want to change the networking from the above to dual 10G LACP
bonds per server, going to our HP ProCurve chassis.

So, in order to change as little as possible, I would like to keep the
ceph config the same, meaning: retain all IPs/config, and use
something like this:
auto bond0
iface bond0 inet manual
    slaves eth2 eth3
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer3+4

allow-hotplug vmbr0
auto vmbr0
# client access
iface vmbr0 inet static
    address 192.168.89.10
    netmask 255.255.255.0
    gateway 192.168.89.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

# ip for access to other cephs
iface vmbr0 inet static
    address 10.10.89.1
    netmask 255.255.255.0
Then cable eth2 / eth3 to the LACP ports on the HP ProCurve.
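On the switch side, I expect each server's pair of ports to be bundled
into an LACP trunk; on the ProCurve that would be something like this
(a sketch; the port names A1/A2 and trunk name trk1 are just examples):

trunk A1-A2 trk1 lacp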
I am assuming that this would make all traffic (ceph and VMs) flow
over the same two 10G LACP links, and that neither ceph nor the VMs
would notice any difference. I'm also assuming that no other config
changes would be required at all.
So, any errors in the above reasoning? I realise we cannot have jumbo
frames in this setup, but I don't think I mind. I also realise that we
currently have ceph and VM traffic separated, and in the new situation
we won't anymore, but this seems accepted (perhaps even recommended)
for small networks like ours on the ceph mailing list nowadays.
So... feedback to all of the above please... :-)
Multiple identical stanzas like

iface vmbr0 inet static

will likely fail.

You can (but should not) add multiple IP addresses on an interface;
if you do, use something like

up ip addr add 1.2.3.4/24 dev vmbr0

in the first entry.
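Applied to your addresses, that first entry could look something like
this (a sketch; untested):

auto vmbr0
iface vmbr0 inet static
    address 192.168.89.10
    netmask 255.255.255.0
    gateway 192.168.89.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    # second IP for ceph, added on ifup and removed on ifdown
    up ip addr add 10.10.89.1/24 dev vmbr0
    down ip addr del 10.10.89.1/24 dev vmbr0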
What I do is:

- bundle physical links together with an LACP bond, like you do here
- run multiple VLANs over the physical bond (I have VMs in many
  different VLANs)
- have a VLAN-aware bridge
- use VLAN interfaces for the IP addresses on the VLAN-aware bridge
- use the VLAN tag in the VM config to connect a VM to a given VLAN
  on the VLAN-aware bridge (see the sketch below)
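A sketch of that setup (the VLAN IDs 10 and 20 are just examples, and
the addresses are taken from your config):

auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer2+3

# VLAN aware bridge; no IP on the bridge itself
auto vmbr0
iface vmbr0 inet manual
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes

# host IP on VLAN 20 via a VLAN interface on the bridge
auto vmbr0.20
iface vmbr0.20 inet static
    address 10.10.89.1
    netmask 255.255.255.0

A VM is then connected to VLAN 10 by setting the tag in its config,
e.g. net0: virtio=<mac>,bridge=vmbr0,tag=10.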
But if you do not need the added complexity of the VLAN-aware bridge,
you can do something like this:
iface eth0 inet manual
    mtu 9000

iface eth1 inet manual
    mtu 9000

# bond of interfaces
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer2+3
    mtu 9000
# vmbr bridge for VMs on VLAN 10 (bond0.10)
auto vmbr0
iface vmbr0 inet static
    address 192.168.89.10
    netmask 255.255.255.0
    gateway 192.168.89.1
    bridge_ports bond0.10
    bridge_stp off
    bridge_fd 0
# IP interface for ceph
auto bond0.20
iface bond0.20 inet static
    address 10.10.89.1
    netmask 255.255.255.0
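One caveat with the mtu 9000 lines: the switch must pass jumbo frames
as well. On a ProCurve that is enabled per VLAN, something like this
(assuming the ceph VLAN is 20, as in the sketch above):

vlan 20 jumbo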
Good luck
Ronny Aasen