Re: [ClusterLabs] corosync 3.0.1 on Debian/Buster reports some MTU errors

2019-11-21 Thread Jean-Francois Malouin
Hi,

* christine caulfield [20191121 03:19]:
> On 18/11/2019 21:31, Jean-Francois Malouin wrote:
> > Hi,
> >
> > Maybe not directly a pacemaker question but maybe some of you have seen this
> > problem:
> >
> > A 2 node pacemaker cluster running corosync-3.0.1 with dual communication
> > ring

Re: [ClusterLabs] corosync 3.0.1 on Debian/Buster reports some MTU errors

2019-11-21 Thread christine caulfield
On 18/11/2019 21:31, Jean-Francois Malouin wrote:
> Hi,
>
> Maybe not directly a pacemaker question but maybe some of you have seen this
> problem:
>
> A 2 node pacemaker cluster running corosync-3.0.1 with dual communication
> ring sometimes reports errors like this in the corosync log file:
>
> [KNET ]

Re: [ClusterLabs] corosync 3.0.1 on Debian/Buster reports some MTU errors

2019-11-20 Thread Jean-Francois Malouin
No one is willing to take a shot at this? I had a fencing event related to
that yesterday morning:

Nov 19 08:04:01 node2 corosync[14399]:  [KNET ] link: host: 1 link: 0 is down
Nov 19 08:04:01 node2 corosync[14399]:  [KNET ] host: host: 1 (passive) best link: 1 (pri: 1)
...
Nov 19 08:05:04
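When correlating a fencing event with knet link flaps, it helps to pull the link-down transitions out of syslog rather than eyeballing them. Below is a minimal sketch of such a filter; the regex assumes the exact log format shown above (syslog timestamp, hostname, `corosync[pid]:`, then the `[KNET ]` tag), and the function name `link_down_events` is mine, not part of any corosync tooling.

```python
import re

# Matches knet link-down lines of the form shown in the post, e.g.:
#   Nov 19 08:04:01 node2 corosync[14399]:  [KNET ] link: host: 1 link: 0 is down
LINK_DOWN = re.compile(
    r"^(?P<ts>\w+ +\d+ [\d:]+) \S+ corosync\[\d+\]:\s+\[KNET \] "
    r"link: host: (?P<host>\d+) link: (?P<link>\d+) is down"
)

def link_down_events(lines):
    """Return (timestamp, host_id, link_id) for every knet link-down line."""
    events = []
    for line in lines:
        m = LINK_DOWN.match(line)
        if m:
            events.append((m.group("ts"), int(m.group("host")), int(m.group("link"))))
    return events

sample = [
    "Nov 19 08:04:01 node2 corosync[14399]:  [KNET ] link: host: 1 link: 0 is down",
    "Nov 19 08:04:01 node2 corosync[14399]:  [KNET ] host: host: 1 (passive) best link: 1 (pri: 1)",
]
print(link_down_events(sample))  # only the "is down" line matches
```

Feeding it `journalctl -u corosync --no-pager` output (or the corosync log file) gives a quick timeline of which host/link pairs flapped before the fence.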

[ClusterLabs] corosync 3.0.1 on Debian/Buster reports some MTU errors

2019-11-18 Thread Jean-Francois Malouin
Hi,

Maybe not directly a pacemaker question but maybe some of you have seen this
problem:

A 2 node pacemaker cluster running corosync-3.0.1 with dual communication ring
sometimes reports errors like this in the corosync log file:

[KNET ] pmtud: PMTUD link change for host: 2 link: 0 from 470 to
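For context on the PMTUD messages: in corosync 3.x the knet transport re-probes the path MTU periodically, and the interval is tunable in the totem section of corosync.conf. The fragment below is a sketch only, using option names as documented in corosync.conf(5) for corosync 3.x; values and the surrounding layout are illustrative, not a recommendation.

```
totem {
    version: 2
    transport: knet

    # How often (in seconds) knet re-runs path MTU discovery on each link.
    # Default is 30; raising it reduces how often PMTUD link-change
    # messages can appear, but slows reaction to real MTU changes.
    knet_pmtud_interval: 30
}
```

If the discovered MTU genuinely oscillates (as in the 470-byte value above), the usual suspects are an interface or switch port with a mismatched MTU, or a VPN/tunnel on one of the rings, so checking `ip link` MTUs on both rings of both nodes is worth doing before tuning corosync.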