Split brain. Do you use the two_node option in corosync?
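(For reference, a two-node cluster is usually enabled in the quorum section of /etc/pve/corosync.conf, roughly like the sketch below; two_node: 1 also implies wait_for_all, so both nodes have to be up once after a cold start. Check the votequorum man page before editing.)

quorum {
  provider: corosync_votequorum
  two_node: 1
}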
On May 23, 2016 5:24:45 AM GMT+02:00, haoyun wrote:
>Hello everyone,
>my PVE cluster uses shared storage.
>I run a VM on this physical machine, and I activated these volumes in
I just upgraded all my cluster nodes and rebooted. Upon reboot, two of them
will not join the cluster.
When I run pvecm status on the ones that can see each other I get:
Quorum information
------------------
Date: Sun May 22 21:50:10 2016
Quorum provider: corosync_votequorum
Nodes: 4
Node ID:
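(On the two nodes that refuse to join, it is usually worth checking corosync and pmxcfs directly first; a rough sketch, assuming PVE 4.x with systemd, and the node name is a placeholder:)

root@nodeX:~# systemctl status corosync pve-cluster
root@nodeX:~# journalctl -u corosync -b | tail -n 50
root@nodeX:~# pvecm status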
Hello Daniel Eschner,
if you can, let us know what settings you used to fix the issue.
I'll put it on the PVE wiki sometime.
Did you use the CLI or HTTP?
On Sun, May 22, 2016 at 12:48 PM, Daniel Eschner
wrote:
> Problem is fixed. It was a configuration problem with my switches.
Problem is fixed. It was a configuration problem with my switches.
It seems that they have multicast groups and several ports.
> On 22.05.2016 at 18:01, Daniel Eschner wrote:
>
> Hi all,
>
> Does anyone know what happened with that issue:
>
> root@host01:~# omping -c 60 -i 1
Hi all,
Does anyone know what happened with that issue:
root@host01:~# omping -c 60 -i 1 -q host01 host02 host03 host04 host05 host06 host07 | grep multicast
host02 : multicast, xmt/rcv/%loss = 60/60/0%, min/avg/max/std-dev =
0.076/0.155/0.274/0.048
host03 : multicast, xmt/rcv/%loss = 60/60/0%,
Yep, it was my fault ;)
You need to start that command on all nodes ;)
Then it works without problems.
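(For anyone finding this later: omping only reports multicast statistics once the same command is running on every listed host at the same time, so with the hosts and flags from above it looks roughly like this:)

root@host01:~# omping -c 60 -i 1 -q host01 host02 host03 host04 host05 host06 host07
root@host02:~# omping -c 60 -i 1 -q host01 host02 host03 host04 host05 host06 host07
(... and the same command on host03 through host07 ...)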
> On 22.05.2016 at 16:18, Michael Rasmussen wrote:
>
> On Sun, 22 May 2016 16:16:29 +0200
> Daniel Eschner wrote:
>
>> Is that correct?
>>
>> root@host01:~#
On Sun, 22 May 2016 16:16:29 +0200
Daniel Eschner wrote:
> Is that correct?
>
> root@host01:~# omping host02
> omping: Can't find local address in arguments
>
> When I omping host01 it works
>
> host01 : waiting for response msg
> host01 : joined (S,G) = (*,
Is that correct?
root@host01:~# omping host02
omping: Can't find local address in arguments
When I omping host01 it works:
host01 : waiting for response msg
host01 : joined (S,G) = (*, 232.43.211.234), pinging
host01 : unicast, seq=1, size=69 bytes, dist=0, time=0.006ms
host01 : multicast,
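(That error usually just means the local hostname has to be included in the argument list as well, and the other side has to run the matching command; a rough sketch with the hosts from above:)

root@host01:~# omping host01 host02
root@host02:~# omping host01 host02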
On Sun, 22 May 2016 15:57:32 +0200
Daniel Eschner wrote:
>
> It's a typical network design without VLANs and so on. Just a simple switch
> where the servers are connected.
> Nothing special. After a couple of minutes it's running. Really strange.
Brand and model of the switch?
Mhh, maybe it could be a multicast problem :-(
host02 : response message never received
host03 : response message never received
host04 : response message never received
host05 : response message never received
That's what I see when I use omping ;(
So maybe my switch config is broken… let's see.
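(If the switch turns out to be the problem, IGMP snooping without an active IGMP querier is a common reason multicast stops working after a few minutes; that is only a guess here. A longer omping run with the flags already used above, about 10 minutes, usually shows it:)

root@host01:~# omping -c 600 -i 1 -q host01 host02 host03 host04 host05 host06 host07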
> On 22.05.2016 at 15:55, Michael Rasmussen wrote:
>
> On Sun, 22 May 2016 15:47:59 +0200
> Daniel Eschner wrote:
>
>> Mhh
>>
>> does corosync have a problem with bonding maybe?
>>
> Looks more like a multicast problem to me.
It's a typical network design
On Sun, 22 May 2016 15:47:59 +0200
Daniel Eschner wrote:
> Mhh
>
> does corosync have a problem with bonding maybe?
>
Looks more like a multicast problem to me.
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG keys:
michael rasmussen cc
Mhh,
does corosync have a problem with bonding maybe?
May 22 15:45:23 host01 pmxcfs[2046]: [status] notice: node lost quorum
May 22 15:45:23 host01 pmxcfs[2046]: [dcdb] crit: received write while not quorate - trigger resync
May 22 15:45:23 host01 pmxcfs[2046]: [dcdb] crit: leaving CPG group
May 22
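(When pmxcfs logs "node lost quorum" like that, corosync's own view can help separate a flapping bond from a quorum problem; a quick sketch:)

root@host01:~# corosync-quorumtool -s
root@host01:~# corosync-cfgtool -s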
On Sun, 22 May 2016 14:20:23 +0200
Daniel Eschner wrote:
> Hope so ;)
>
> Just one node out of 10 nodes is making trouble :-(
> Is there any way to test it easily?
>
https://pve.proxmox.com/wiki/Troubleshooting_multicast,_quorum_and_cluster_issues
--
Hilsen/Regards
Michael
Syslog warning?
Sent from my iPhone
On 22 May 2016 at 14:33, Dietmar Maurer wrote the following:
>> That's what's not working :-(
>
> What does not work exactly? Do you get an error message?
> Or does the removed node still appear in the GUI?
>
>
> That's what's not working :-(
What does not work exactly? Do you get an error message?
Or does the removed node still appear in the GUI?
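(For context, removing a dead node is normally done from a node that still has quorum, roughly like the sketch below; the node name is a placeholder:)

root@host01:~# pvecm nodes
root@host01:~# pvecm delnode <nodename>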
Hope so ;)
Just one node out of 10 nodes is making trouble :-(
Is there any way to test it easily?
> On 22.05.2016 at 14:12, Michael Rasmussen wrote:
>
> Have you verified that multicast is working properly?
>
> On May 22, 2016 1:50:16 PM GMT+02:00, Daniel Eschner
That's what's not working :-(
I am reinstalling the whole cluster now, again :-(
Third try :-(
> On 22.05.2016 at 07:05, Dietmar Maurer wrote:
>
>> Is it possible to force delnode? Don't know why, but it seems I have a lot of
>> trouble with the Proxmox cluster.
>>
>> Nothing happens