...@fisk.me.uk wrote:
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jason Dillaman
Sent: 14 July 2017 16:40
To: li...@marcelofrota.info
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Ceph mount rbd
On Fri, Jul 14, 2017 at 9:44 AM, wrote:
Gonzalo,
You are right, I said a lot about my current environment and maybe I didn't explain my problem in the best way. With Ceph at the moment, multiple client hosts can mount and write data to my system, and this is a problem, because I could end up with filesystem corruption.
Example,
Hi,
Why would you want to maintain copies yourself? You replicate in Ceph, and then again across different files inside Ceph? Let Ceph handle the replication. Create a pool with 3 or more copies and let Ceph take care of what is stored and where.
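As a rough sketch of that suggestion (the pool name `rbd_data` and the PG counts are illustrative, not from this thread; size them for your own cluster), a replicated pool keeping 3 copies could be set up like this:

```shell
# Create a replicated pool (pool name and PG count are examples only).
ceph osd pool create rbd_data 128 128 replicated

# Keep 3 copies of every object ...
ceph osd pool set rbd_data size 3

# ... and refuse I/O when fewer than 2 copies are available.
ceph osd pool set rbd_data min_size 2
```

With `size 3`, Ceph itself decides on which OSDs the copies live and re-replicates automatically when an OSD fails, which is the point being made above.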
Best regards,
On 13/07/17 at 17:06,
I will explain more about my current system. At the moment I have 2 machines using DRBD in master/slave mode, and I run the application on the master machine, but there are 2 important issues in my current DRBD environment:
1 - If machine one is the master and mounts the partitions, the slave
----- Original Message -----
From: "Jason Dillaman" <jdill...@redhat.com>
To: "Maged Mokhtar" <mmokh...@petasan.org>
Cc: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Thursday, 29 June 2017 02:02:44
Subject: Re: [ceph-users] Ceph mount rbd
... additionally, the forthcoming 4.12 kernel release will support
non-cooperative exclusive locking. By default, since 4.9, when the
exclusive-lock feature is enabled, only a single client can write to the
block device at a time -- but they will cooperatively pass the lock back
and forth upon
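For reference (pool and image names below are examples, not taken from this thread), the exclusive-lock feature being discussed is a per-image RBD feature that can be managed with the `rbd` CLI:

```shell
# Create an image with exclusive-lock enabled from the start
# (pool/image names are illustrative).
rbd create --size 10G --image-feature exclusive-lock rbd_data/myimage

# Or enable the feature on an existing image:
rbd feature enable rbd_data/myimage exclusive-lock

# Show the current lock holder, if any:
rbd lock ls rbd_data/myimage
```

With the feature enabled, only the client holding the lock may write; as described above, pre-4.12 kernel clients pass the lock back and forth cooperatively rather than fencing each other out.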