LACP is used for both switches.

My Proxmox servers are using bonding mode 6, but I get strange bandwidth problems:

target host .... host20
------------------------------------------------------------------
 run 1:          42.3 Mbits/sec
 run 2:          880 Mbits/sec
 run 3:          105 Mbits/sec
 run 4:          35.9 Mbits/sec
 run 5:          36.1 Mbits/sec
------------------------------------------------------------------
 average ....... 219.86 Mbits/sec
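
For reference, mode 6 is balance-alb (adaptive load balancing), which needs no special switch-side configuration. A minimal sketch of such a bond stanza, in the same style as the config quoted further down in this thread:

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        # mode 6 = balance-alb
        bond_mode balance-alb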


-- 
Regards
 
Daniel

On 04.09.17, 15:31, "pve-user on behalf of Mark Schouten" 
<[email protected] on behalf of [email protected]> wrote:

    You cannot just run LACP across different switches. It needs to be a stack of switches.
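
    With a stack (both of a host's uplinks terminating on what is logically one switch), an 802.3ad bond on the Proxmox host could look roughly like this; this is only a sketch, with the interface names taken from the config quoted further down and the hash policy purely as an example:

    auto bond0
    iface bond0 inet manual
            slaves eno1 eno2
            bond_miimon 100
            # 802.3ad needs the two switch ports configured as one LACP group
            bond_mode 802.3ad
            bond_xmit_hash_policy layer2+3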
    
    
    Kind regards,
    
    -- 
    Kerio Operator in the cloud? https://www.kerioindecloud.nl/
    Mark Schouten  | Tuxis Internet Engineering
    KvK: 61527076 | http://www.tuxis.nl/
    T: 0318 200208 | [email protected]
    
    
    
     From:   Daniel <[email protected]> 
     To:   PVE User List <[email protected]> 
     Sent:   1-9-2017 22:22 
     Subject:   [PVE-User] Bonding and packetloss 
    
    Hi there, 
     
    here is a small overview of my network: 
     
    2x HP switches. Both are connected to each other with a 4x 1 Gbit LACP trunk; working as expected. 
     
    Now to my problem: I configured all my hosts with bond mode 6 and connected one NIC to switch one and the other to switch two. 
    Sometimes I get packet loss and see a kernel error like this: vmbr0: received packet on bond0 with own address as source address (addr:0c:c4:7a:aa:5c:e4, vlan:0) 
     
    Some hosts are working pretty well and some have packet loss. 
    After adding some “rules” to a host that has loss, the error messages disappear, but some loss (less than before) still exists. 
    Is there any special hint as to what could be the matter? When I change to active/passive mode, everything is fine. 
     
    This is the interfaces config of a host which has packet loss: 
     
    auto lo 
    iface lo inet loopback 
     
    iface eno1 inet manual 
     
    iface eno2 inet manual 
     
    auto bond0 
    iface bond0 inet manual 
                    slaves eno1 eno2 
                    bond_miimon 100 
                    bond_mode 6 
     
    auto vmbr0 
    iface vmbr0 inet static 
                    address  10.0.2.111 
                    netmask  255.255.255.0 
                    gateway  10.0.2.1 
                    bridge_ports bond0 
                    bridge_stp off 
                    bridge_fd 0 
                    bridge_maxage 0 
                    bridge_ageing 0 
                    bridge_maxwait 0 
     
    I am absolutely without any clue ☹ I have tested a lot and nothing really helps to solve this problem. 
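
    For comparison, a rough sketch of the active/passive variant (mode 1, active-backup) of the bond above; the bond_primary line is optional and only an example, not taken from my actual config:

    auto bond0
    iface bond0 inet manual
                    slaves eno1 eno2
                    bond_miimon 100
                    # mode 1 = active-backup: only one slave carries traffic at a time
                    bond_mode active-backup
                    # optional: prefer eno1 whenever it is up
                    bond_primary eno1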
     
    -- 
    Regards 
     
    Daniel 