Re: [PVE-User] Ris: Ceph MON quorum problem

2019-09-16 Thread Fabrizio Cuseo
Answering inline: - On 16 Sep 2019, at 14:49, Ronny Aasen ronny+pve-u...@aasen.cx wrote: > with 2 rooms there is no way to avoid a split-brain situation unless you > have a tiebreaker outside one of those 2 rooms. > > Running a MON in a neutral third location is the quick, correct, and
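A minimal sketch of what that tiebreaker could look like on a Proxmox Ceph cluster, assuming the neutral third site hosts a small node that is already joined to the cluster (the node can be modest, since it only contributes a monitor vote):

    # On the node at the neutral third site, create an additional monitor:
    pveceph mon create

    # Check that the MON map now has an odd number of monitors and that
    # quorum would survive the loss of either room:
    ceph mon stat
    ceph quorum_status --format json-pretty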

Re: [PVE-User] Ceph MON quorum problem

2019-09-16 Thread Fabrizio Cuseo
Thank you Humberto, but my problem is not related to the Proxmox quorum, but to the Ceph MON quorum. Regards, Fabrizio - On 16 Sep 2019, at 12:58, Humberto Jose De Sousa wrote: > Hi. > You could try the qdevice: >
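For anyone hitting the same confusion: the two quorums are separate mechanisms and can be inspected independently. A quick sketch, using only standard commands on a stock Proxmox Ceph node:

    # Proxmox cluster (corosync) quorum:
    pvecm status

    # Ceph monitor quorum, with its own membership and election state:
    ceph quorum_status --format json-pretty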

Re: [PVE-User] Kernel 5.3 and Proxmox Ceph nodes

2019-09-16 Thread Ricardo Correa
Another 5.3 fix that might be interesting for some is https://github.com/lxc/lxd/issues/5193#issuecomment-502857830 which allows (or takes us one step closer to) running a kubelet in LXC containers. On 16.09.19, 12:55, "pve-user on behalf of Gilberto Nunes" wrote: Oh! I'm sorry! I didn't
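For context, a kubelet in LXC usually also needs relaxed container settings on the host. A rough sketch of the kind of options involved on a Proxmox node; the container ID 100 is hypothetical and the exact set depends on the kernel and kubelet versions, so treat this as illustrative only:

    # /etc/pve/lxc/100.conf -- illustrative additions, not a tested recipe
    lxc.apparmor.profile: unconfined
    lxc.cgroup.devices.allow: a
    lxc.cap.drop:
    lxc.mount.auto: proc:rw sys:rw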

Re: [PVE-User] Ceph MON quorum problem

2019-09-16 Thread Humberto Jose De Sousa via pve-user
Hi. You could try the qdevice: https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support Humberto From: "Fabrizio Cuseo" To: "pve-user" Sent: Friday, 13 September 2019 16:42:06 Subject: [PVE-User] Ceph MON quorum problem
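The linked chapter boils down to a few commands. A sketch assuming an external host at 10.0.0.10 (the address is hypothetical) sitting outside both rooms to provide the extra vote:

    # On the external tie-breaker host (a plain Debian box will do):
    apt install corosync-qnetd

    # On the cluster nodes (the qdevice package is needed on all of them),
    # then register the external vote server from any one node:
    apt install corosync-qdevice
    pvecm qdevice setup 10.0.0.10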

Re: [PVE-User] Kernel 5.3 and Proxmox Ceph nodes

2019-09-16 Thread Gilberto Nunes
Oh! I'm sorry! I didn't send the link I was referring to: https://www.phoronix.com/scan.php?page=news_item&px=Ceph-Linux-5.3-Changes --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 On Mon, 16 Sep 2019 at 05:50, Ronny Aasen

Re: [PVE-User] Empty virtual disk

2019-09-16 Thread Ronny Aasen
On 15.09.2019 22:55, Joe Garvey wrote: Hello all, I had to reboot a QEMU-based VM yesterday and after rebooting it reported there was no boot disk. The disk has lost all of its content; there isn't even a partition table. I booted the VM with Acronis disk recovery and it showed the
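When a guest suddenly reports an empty disk, it can be worth checking from the host whether the underlying image still holds data before running recovery tools inside the VM. A sketch assuming a qcow2 image on local storage (the path and VM ID are hypothetical):

    # Does the image still contain allocated data?
    qemu-img info /var/lib/vz/images/101/vm-101-disk-0.qcow2

    # List allocated ranges; an image that is almost entirely holes
    # suggests the content really is gone rather than just unbootable:
    qemu-img map /var/lib/vz/images/101/vm-101-disk-0.qcow2 | head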

Re: [PVE-User] Kernel 5.3 and Proxmox Ceph nodes

2019-09-16 Thread Ronny Aasen
On 16.09.2019 03:17, Gilberto Nunes wrote: Hi there. I read this about kernel 5.3 and Ceph, and I am curious... I have a 6-node Proxmox Ceph cluster with Luminous... Would it be a good idea to use kernel 5.3 from here: https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.3/ --- Gilberto Nunes
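For reference, the builds on that page are plain .deb packages, so trying one is straightforward, but these mainline kernels are not supported by Proxmox; a sketch for a single test node only (file names vary per build, hence the globs):

    # After downloading the generic amd64 image/modules/headers .debs
    # from https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.3/ :
    dpkg -i linux-*5.3*.deb
    reboot

    # To roll back, boot the previous PVE kernel from the GRUB menu and
    # purge the mainline packages again.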

[PVE-User] Recurring crashes after cluster upgrade from 5 to 6

2019-09-16 Thread Laurent CARON
Hi, After upgrading our 4-node cluster from PVE 5 to 6, we experience constant crashes (about once every 2 days). Those crashes seem related to corosync. Since numerous users are reporting such issues (broken cluster after upgrade, instabilities, ...), I wonder if it is possible to downgrade
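Before downgrading, it may help to capture what corosync actually logs around a crash and which exact versions are in play, since several of the post-upgrade reports centered on corosync 3 / kronosnet. A quick sketch using standard commands:

    # Collect corosync logs covering the last crash window:
    journalctl -u corosync --since "3 days ago" > corosync-crash.log

    # Record the exact corosync/knet versions when reporting the issue:
    pveversion -v | grep -E 'corosync|knet'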