Re: [PVE-User] run VM on two physical machines with the same disk image

2016-05-22 Thread Michael Rasmussen
Split brain. Do you use the two_node option in corosync? On May 23, 2016 5:24:45 AM GMT+02:00, haoyun wrote: > hello everyone~ > my PVE cluster has shared storage, > I run a VM on this physical machine, and I activate this volume in
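For reference, the two_node option Michael mentions lives in the quorum section of corosync.conf (/etc/pve/corosync.conf on PVE). A minimal sketch of that block for a plain two-node cluster, not taken from the thread:

    quorum {
      provider: corosync_votequorum
      two_node: 1   # also implies wait_for_all unless explicitly disabled
    }

With two_node set, each node keeps quorum when the other disappears, which is exactly the situation in which both hosts can end up activating the same disk image.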

[PVE-User] Upgraded all nodes. Now two of them will not rejoin cluster

2016-05-22 Thread Ben Hambleton
I just upgraded all my cluster nodes and rebooted. Upon reboot, two of them will not join the cluster. When I run pvecm status on the nodes that can see each other, I get: Quorum information -- Date: Sun May 22 21:50:10 2016 Quorum provider: corosync_votequorum Nodes: 4 Node ID:
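A quick way to compare what each side sees is to run the same checks on a node that rejoined and on one that did not (a generic sketch; both commands ship with pve-cluster):

    pvecm status    # compare "Expected votes" vs. "Total votes" and the Quorate flag
    pvecm nodes     # list the members corosync currently sees

If the two rebooted nodes only see each other, corosync traffic (usually multicast) between them and the rest of the cluster is the first thing to check.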

Re: [PVE-User] Multicast Problems

2016-05-22 Thread Robert Fantini
Hello Daniel Eschner, if you can, let us know what settings you used to fix the issue. I'll put it on the PVE wiki sometime. Did you use the CLI or HTTP? On Sun, May 22, 2016 at 12:48 PM, Daniel Eschner wrote: > The problem is fixed. It was a configuration problem with my switches.

Re: [PVE-User] Multicast Problems

2016-05-22 Thread Daniel Eschner
The problem is fixed. It was a configuration problem with my switches. It seems that they have multicast groups and several ports. > On 22.05.2016 at 18:01, Daniel Eschner wrote: > > Hi all, > > does anyone know what happened with this issue: > > root@host01:~# omping -c 60 -i 1

[PVE-User] Multicast Problems

2016-05-22 Thread Daniel Eschner
Hi all, does anyone know what happened with this issue: root@host01:~# omping -c 60 -i 1 -q host01 host02 host03 host04 host05 host06 host07 |grep multicast host02 : multicast, xmt/rcv/%loss = 60/60/0%, min/avg/max/std-dev = 0.076/0.155/0.274/0.048 host03 : multicast, xmt/rcv/%loss = 60/60/0%,
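The run above sends 60 probes per host (-c 60), one per second (-i 1), with only the summary printed (-q), and the same omping command has to be started on every listed host at roughly the same time. As an additional check, not something from the thread, a longer run can expose IGMP-snooping timeouts that only show up after several minutes:

    omping -c 600 -i 1 -q host01 host02 host03 host04 host05 host06 host07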

Re: [PVE-User] delnode not possible

2016-05-22 Thread Daniel Eschner
Yep, it was my fault ;) I need to start that command on all nodes ;) Then it works without problems > On 22.05.2016 at 16:18, Michael Rasmussen wrote: > > On Sun, 22 May 2016 16:16:29 +0200 > Daniel Eschner wrote: > >> Is that correct? >> >> root@host01:~#

Re: [PVE-User] delnode not possible

2016-05-22 Thread Michael Rasmussen
On Sun, 22 May 2016 16:16:29 +0200 Daniel Eschner wrote: > Is that correct? > > root@host01:~# omping host02 > omping: Can't find local address in arguments > > when I omping host01 it works > > host01 : waiting for response msg > host01 : joined (S,G) = (*,

Re: [PVE-User] delnode not possible

2016-05-22 Thread Daniel Eschner
Is that correct? root@host01:~# omping host02 omping: Can't find local address in arguments when I omping host01 it works host01 : waiting for response msg host01 : joined (S,G) = (*, 232.43.211.234), pinging host01 : unicast, seq=1, size=69 bytes, dist=0, time=0.006ms host01 : multicast,
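The error itself is expected: omping needs the local node's own hostname (or address) to be part of the argument list, and the identical command must run on every listed host at the same time. A minimal two-node sketch using the names above:

    # run this simultaneously on host01 and on host02
    omping host01 host02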

Re: [PVE-User] delnode not possible

2016-05-22 Thread Michael Rasmussen
On Sun, 22 May 2016 15:57:32 +0200 Daniel Eschner wrote: > > It's a typical network design without VLANs and so on. Just a simple switch > where the servers are connected. > Nothing special. After a couple of minutes it's running. Really strange. Brand and model of the switch?

Re: [PVE-User] delnode not possible

2016-05-22 Thread Daniel Eschner
Hmm, maybe it is a multicast problem :-( host02 : response message never received host03 : response message never received host04 : response message never received host05 : response message never received That's what I see when I use omping ;( So maybe my switch config is broken… let's see

Re: [PVE-User] delnode not possible

2016-05-22 Thread Daniel Eschner
> On 22.05.2016 at 15:55, Michael Rasmussen wrote: > > On Sun, 22 May 2016 15:47:59 +0200 > Daniel Eschner wrote: > >> Hmm >> >> is it a corosync problem with bonding, maybe? >> > Looks more like a multicast problem to me. It's a typical network design

Re: [PVE-User] delnode not possible

2016-05-22 Thread Michael Rasmussen
On Sun, 22 May 2016 15:47:59 +0200 Daniel Eschner wrote: > Hmm > > is it a corosync problem with bonding, maybe? > Looks more like a multicast problem to me. -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc

Re: [PVE-User] delnode not possible

2016-05-22 Thread Daniel Eschner
Hmm, is it a corosync problem with bonding, maybe? May 22 15:45:23 host01 pmxcfs[2046]: [status] notice: node lost quorum May 22 15:45:23 host01 pmxcfs[2046]: [dcdb] crit: received write while not quorate - trigger resync May 22 15:45:23 host01 pmxcfs[2046]: [dcdb] crit: leaving CPG group May 22
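When pmxcfs logs "node lost quorum" like this, a quick sanity check (a generic sketch, not commands from the thread) is to ask corosync directly and look at the cluster services on the affected node:

    corosync-quorumtool -s                   # quorum state as corosync sees it
    systemctl status corosync pve-cluster    # pve-cluster is the pmxcfs service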

Re: [PVE-User] delnode not possible

2016-05-22 Thread Michael Rasmussen
On Sun, 22 May 2016 14:20:23 +0200 Daniel Eschner wrote: > I hope so ;) > > Just one node out of 10 is making trouble :-( > Is there any way to test it easily? > https://pve.proxmox.com/wiki/Troubleshooting_multicast,_quorum_and_cluster_issues -- Hilsen/Regards Michael

Re: [PVE-User] delnode not possible

2016-05-22 Thread Bart Lageweg | Bizway
Syslog warning? Sent from my iPhone On 22 May 2016 at 14:33, Dietmar Maurer wrote: >> That's what's not working :-( > > What does not work exactly? Do you get an error message? > Or does the removed node still appear in the GUI? > >

Re: [PVE-User] delnode not possible

2016-05-22 Thread Dietmar Maurer
> That's what's not working :-( What does not work exactly? Do you get an error message? Or does the removed node still appear in the GUI?

Re: [PVE-User] delnode not possible

2016-05-22 Thread Daniel Eschner
I hope so ;) Just one node out of 10 is making trouble :-( Is there any way to test it easily? > On 22.05.2016 at 14:12, Michael Rasmussen wrote: > > Have you verified that multicast is working properly? > > On May 22, 2016 1:50:16 PM GMT+02:00, Daniel Eschner

Re: [PVE-User] delnode not possible

2016-05-22 Thread Daniel Eschner
That's what's not working :-( I am reinstalling the whole cluster now, again :-( Third try :-( > On 22.05.2016 at 07:05, Dietmar Maurer wrote: > >> Is it possible to force delnode? I don't know why, but it seems I have a lot of trouble with the Proxmox cluster. >> >> Nothing happens
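For the record, the supported removal path (a sketch; the host name is a placeholder, not the node from the thread) is to power the node off for good and then, from a node that is still quorate, run:

    pvecm delnode hostXX
    pvecm nodes      # verify the entry is gone

delnode only works while the remaining cluster has quorum, which is likely why the multicast and quorum problems discussed above keep it from succeeding.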