Hello, Adam Thompson!
On that day you wrote...
> Soft-mounting NFS introduces the *possibility* of accidental (and silent)
> data loss in corner cases, however. There's a lot of strident "soft-mounting
> is dangerous!" rhetoric on the net, but it is a reasonable option in many
> cases as long
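For context, the hard/soft behaviour being debated is selected purely by mount options. A minimal sketch of the two variants — the server name and export path here are placeholders, not from the thread:

```shell
# Hypothetical server/export names, for illustration only.

# Hard mount (the default): I/O retries indefinitely, so a dead NFS
# server hangs client processes until it returns.
mount -t nfs -o hard nfs-server:/export/vmstore /mnt/vmstore

# Soft mount: I/O gives up with an error after 'retrans' retries of
# 'timeo' (tenths of a second) each. Applications must handle the
# resulting EIO, or writes can be lost silently -- which is the
# corner-case data loss mentioned above.
mount -t nfs -o soft,timeo=100,retrans=3 nfs-server:/export/vmstore /mnt/vmstore
```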
> er-boun...@pve.proxmox.com] On
> Behalf Of Lindsay Mathieson
> Sent: September 16, 2016 17:08
> To: PVE User List <pve-user@pve.proxmox.com>
> Subject: Re: [PVE-User] Ceph and cold bootstrap...
>
> On 17/09/2016 12:54 AM, Brian :: wrote:
> > When NFS hangs you practically nee
I have set up a 3-node PVE+Ceph cluster and relocated it without issues.
It is true that when powering on the cluster, care must be taken so that
all nodes start within about a minute; I tuned the GRUB timeout for this,
so that fast nodes wait for slow nodes.
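The GRUB delay described above is set in /etc/default/grub; a sketch, assuming a Debian-based PVE node (the 30-second value is an example to tune per node, not a recommendation from the thread):

```shell
# /etc/default/grub -- on the fast nodes, hold the boot menu long
# enough for the slow nodes to catch up, so all nodes come up within
# roughly the same window.
GRUB_TIMEOUT=30
```

After editing, regenerate the boot configuration with `update-grub` (the Debian/Proxmox wrapper around grub-mkconfig) so the new timeout takes effect on the next boot.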
On 16/09/16 at 14:02, Adam
We've observed that if any node boots much faster or slower than the others,
it causes big problems with both CEPH and PVE, particularly quorum issues.
I've just finished switching a 9-node cluster to NFS because CEPH was too
unreliable after repeated power failure crashes.
Hello, Fabian Grünbichler!
On that day you wrote...
> two ceph nodes, two mons and two osds are all way too few for a
> (production) ceph setup.
I know, this is my 'test' ceph cluster as stated... ;-)
> at least three nodes/mons (for quorum reasons),
> and multiple osds per storage node
I'm testing some error conditions on my test ceph storage cluster.
Today I booted it (cold boot; it had been off since yesterday).
The log says:
2016-09-16 09:38:38.015517 mon.0 10.27.251.7:6789/0 59 : cluster [INF] mon.0
calling new monitor election
2016-09-16 09:38:38.078034 mon.1
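When bringing a cluster up cold like this, the monitor election can be watched and quorum verified from any node with the standard ceph CLI; a sketch (these are stock commands, not specific to this cluster):

```shell
# Overall cluster health, including whether the mons have quorum yet.
ceph -s

# Which monitors are in quorum and which are still outside it.
ceph quorum_status --format json-pretty

# Follow the cluster log live while the election settles (Ctrl-C to stop).
ceph -w
```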