Re: [ceph-users] Ceph mount rbd

2017-07-17 Thread lista

Dear all,

From your last message I understood that exclusive-lock works in kernel 4.9 
or higher and could help me prevent writes from two machines at once, but 
the non-cooperative ("exclusive") map option is only available in kernel 
4.12 -- is that right?

I will read more about Pacemaker. In my test environment I would have used 
Heartbeat, but Pacemaker seems to be the better alternative.



Thanks a lot,
Marcelo 

By default, since 4.9, when the exclusive-lock feature is enabled, only a 
single client can write to the block device at a time --
Em 14/07/2017, Nick Fisk n...@fisk.me.uk escreveu:

 [...]

Re: [ceph-users] Ceph mount rbd

2017-07-14 Thread Nick Fisk


> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
> Jason Dillaman
> Sent: 14 July 2017 16:40
> To: li...@marcelofrota.info
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] Ceph mount rbd
> 
> On Fri, Jul 14, 2017 at 9:44 AM,  <li...@marcelofrota.info> wrote:
> > Gonzalo,
> >
> >
> >
> > You are right. I talked a lot about my current environment, and maybe I
> > didn't explain my problem as well as I could. With Ceph, at the moment,
> > multiple client hosts can mount and write data to my system, and this is
> > one problem, because I could have filesystem corruption.
> >
> >
> >
> > For example, today, if I run the following commands on two machines at
> > the same time, it will work:
> >
> >
> >
> > mount /dev/rbd0 /mnt/veeamrepo
> >
> > cd /mnt/veeamrepo ; touch testfile.txt
> >
> >
> >
> > I need to ensure that only one machine can do this.
> >
> 
> A user could do the same thing with any number of remote block devices (i.e. 
> I could map an iSCSI target multiple times). As I said
> before, you can use the "exclusive" option available since kernel 4.12, roll 
> your own solution using the advisory locks available from
> the rbd CLI, or just use CephFS if you want to be able to access a file 
> system on multiple hosts.

Pacemaker will also prevent an RBD from being mounted multiple times, if you 
want to manage the fencing outside of Ceph.
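
For example, a minimal sketch of such a Pacemaker setup, assuming the rbd 
OCF resource agent shipped with Ceph (ocf:ceph:rbd) and an ext4 filesystem 
on the image -- agent and parameter names may differ in your release, so 
treat this as illustrative only:

# map the image on exactly one node, then mount it there
pcs resource create veeam-rbd ocf:ceph:rbd pool=rbd name=veeamrepo user=admin
pcs resource create veeam-fs ocf:heartbeat:Filesystem \
    device=/dev/rbd/rbd/veeamrepo directory=/mnt/veeamrepo fstype=ext4
# a group keeps both resources together on the same node, in order
pcs resource group add veeam-grp veeam-rbd veeam-fs

With fencing configured, Pacemaker will unmap/unmount on a failed node 
before starting the group elsewhere, which is what prevents a double mount.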

> [...]

Re: [ceph-users] Ceph mount rbd

2017-07-14 Thread Jason Dillaman
On Fri, Jul 14, 2017 at 9:44 AM,   wrote:
> Gonzalo,
>
>
>
> You are right. I talked a lot about my current environment, and maybe I
> didn't explain my problem as well as I could. With Ceph, at the moment,
> multiple client hosts can mount and write data to my system, and this is
> one problem, because I could have filesystem corruption.
>
>
>
> For example, today, if I run the following commands on two machines at the
> same time, it will work:
>
>
>
> mount /dev/rbd0 /mnt/veeamrepo
>
> cd /mnt/veeamrepo ; touch testfile.txt
>
>
>
> I need to ensure that only one machine can do this.
>

A user could do the same thing with any number of remote block devices
(i.e. I could map an iSCSI target multiple times). As I said before,
you can use the "exclusive" option available since kernel 4.12, roll
your own solution using the advisory locks available from the rbd CLI,
or just use CephFS if you want to be able to access a file system on
multiple hosts.
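
For example (a rough sketch; the lock ID and locker value are placeholders, 
and the exclusive map option requires the exclusive-lock feature to be 
enabled on the image):

# kernel >= 4.12: refuse to map while another client holds the lock
rbd map -o exclusive veeamrepo

# any kernel: advisory locks via the rbd CLI -- these do not block I/O by
# themselves, so your own mount scripts must check them before mounting
rbd lock add veeamrepo my-lock
rbd lock list veeamrepo
rbd lock remove veeamrepo my-lock client.4234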

> [...]
Re: [ceph-users] Ceph mount rbd

2017-07-14 Thread lista

Gonzalo,

You are right. I talked a lot about my current environment, and maybe I 
didn't explain my problem as well as I could. With Ceph, at the moment, 
multiple client hosts can mount and write data to my system, and this is one 
problem, because I could have filesystem corruption.

For example, today, if I run the following commands on two machines at the 
same time, it will work:

mount /dev/rbd0 /mnt/veeamrepo
cd /mnt/veeamrepo ; touch testfile.txt

I need to ensure that only one machine can do this.

Thanks a lot,
Marcelo

Em 14/07/2017, Gonzalo Aguilar Delgado gagui...@aguilardelgado.com
escreveu:
 Hi, 
 
 Why would you want to maintain copies yourself? You replicate on Ceph and 
 then again across different files inside Ceph? Let Ceph take care of the 
 copies. Create a pool with 3 or more copies and let Ceph take care of 
 what's stored and where. 
 
 Best regards, 
 
 
 El 13/07/17 a las 17:06, li...@marcelofrota.info escribió: 
  
  [...] 

Re: [ceph-users] Ceph mount rbd

2017-07-14 Thread Gonzalo Aguilar Delgado

Hi,

Why would you want to maintain copies yourself? You replicate on Ceph and 
then again across different files inside Ceph? Let Ceph take care of the 
copies. Create a pool with 3 or more copies and let Ceph take care of what's 
stored and where.
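
For example, something along these lines (a sketch; the pool name and PG 
count are placeholders you must size for your own cluster):

ceph osd pool create rbdpool 128 128
ceph osd pool set rbdpool size 3      # keep 3 copies of every object
ceph osd pool set rbdpool min_size 2  # keep serving I/O with 2 copies left

With size 3, Ceph itself places the copies on different OSDs, so there is no 
need to replicate again at the application level.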


Best regards,


El 13/07/17 a las 17:06, li...@marcelofrota.info escribió:


I will explain more about my current system. At the moment I have 2 
machines using DRBD in master/slave mode, and I run the application on the 
master machine. Two points are important in my current DRBD environment:


1 - If machine one is the master and has the partitions mounted, the slave 
cannot mount the filesystem unless a problem happens on the master. This is 
one way to prevent incorrect writes to the filesystem.


2 - When I write data on the master machine, DRBD writes the data to the 
slave machine automatically; with this, if a problem happens on the master 
node, the slave machine has a copy of the data.


At the moment, in my test environment with Ceph on kernel 4.10, I can mount 
the system on two machines at the same time; in a production environment I 
could have serious problems with this behaviour.


How can I use Ceph and ensure that these 2 behaviors are kept in a new 
environment with Ceph?


Thanks a lot,

Marcelo


Em 28/06/2017, Jason Dillaman  escreveu:
> [...]


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph mount rbd

2017-07-13 Thread lista

I will explain more about my current system. At the moment I have 2 machines 
using DRBD in master/slave mode, and I run the application on the master 
machine. Two points are important in my current DRBD environment:

1 - If machine one is the master and has the partitions mounted, the slave 
cannot mount the filesystem unless a problem happens on the master. This is 
one way to prevent incorrect writes to the filesystem.

2 - When I write data on the master machine, DRBD writes the data to the 
slave machine automatically; with this, if a problem happens on the master 
node, the slave machine has a copy of the data.

At the moment, in my test environment with Ceph on kernel 4.10, I can mount 
the system on two machines at the same time; in a production environment I 
could have serious problems with this behaviour.

How can I use Ceph and ensure that these 2 behaviors are kept in a new 
environment with Ceph?


Thanks a lot,
Marcelo

Em 28/06/2017, Jason Dillaman jdill...@redhat.com escreveu:
 ... additionally, the forthcoming 4.12 kernel release will support 
 non-cooperative exclusive locking. By default, since 4.9, when the 
 exclusive-lock feature is enabled, only a single client can write to the 
 block device at a time -- but they will cooperatively pass the lock back 
 and forth upon write request. With the new "rbd map" option, you can map an 
 image on exactly one host and prevent other hosts from mapping the image. 
 If that host should die, the exclusive-lock will automatically become 
 available to other hosts for mapping. 
 
 Of course, I always have to ask the use-case behind mapping the same image 
 on multiple hosts. Perhaps CephFS would be a better fit if you are trying 
 to serve out a filesystem? 
 
  [...] 
 
 -- 
 Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph mount rbd

2017-06-30 Thread Alexandre DERUMIER
>>Of course, I always have to ask the use-case behind mapping the same image on 
>>multiple hosts. Perhaps CephFS would be a better fit if you are trying to 
>>serve out a filesystem?

Hi Jason,

Currently I'm sharing RBD images between multiple webserver VMs, with OCFS2 
on top.

They have old kernels, so they can't use CephFS for now.

Some servers also have between 20-30 million files, so I need to test CephFS 
to see if it can handle 100-150 million files (currently spread across 5 RBD 
images).

Can CephFS handle that many files currently? (I'm waiting for Luminous to 
test it.)







- Original message -
From: "Jason Dillaman" <jdill...@redhat.com>
To: "Maged Mokhtar" <mmokh...@petasan.org>
Cc: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Thursday, 29 June 2017 02:02:44
Subject: Re: [ceph-users] Ceph mount rbd

... additionally, the forthcoming 4.12 kernel release will support 
non-cooperative exclusive locking. By default, since 4.9, when the 
exclusive-lock feature is enabled, only a single client can write to the 
block device at a time -- but they will cooperatively pass the lock back and 
forth upon write request. With the new "rbd map" option, you can map an 
image on exactly one host and prevent other hosts from mapping the image. If 
that host should die, the exclusive-lock will automatically become available 
to other hosts for mapping. 

Of course, I always have to ask the use-case behind mapping the same image 
on multiple hosts. Perhaps CephFS would be a better fit if you are trying to 
serve out a filesystem? 

On Wed, Jun 28, 2017 at 6:25 PM, Maged Mokhtar <mmokh...@petasan.org> wrote: 

[...] 




-- 
Jason 


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph mount rbd

2017-06-28 Thread Jason Dillaman
... additionally, the forthcoming 4.12 kernel release will support
non-cooperative exclusive locking. By default, since 4.9, when the
exclusive-lock feature is enabled, only a single client can write to the
block device at a time -- but they will cooperatively pass the lock back
and forth upon write request. With the new "rbd map" option, you can map an
image on exactly one host and prevent other hosts from mapping the image.
If that host should die, the exclusive-lock will automatically become
available to other hosts for mapping.

Of course, I always have to ask the use-case behind mapping the same image
on multiple hosts. Perhaps CephFS would be a better fit if you are trying
to serve out a filesystem?
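
A minimal sketch of the difference, assuming an image named veeamrepo with 
the exclusive-lock feature enabled (the quoted commands below disable it, so 
it would have to be re-enabled first):

rbd feature enable veeamrepo exclusive-lock  # undo the "feature disable" below
rbd map veeamrepo               # 4.9+: both hosts can map; writers pass the
                                # lock back and forth cooperatively
rbd map -o exclusive veeamrepo  # 4.12+: mapping is refused while another
                                # host holds the lock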

On Wed, Jun 28, 2017 at 6:25 PM, Maged Mokhtar  wrote:

> On 2017-06-28 22:55, li...@marcelofrota.info wrote:
>
> Hi People,
>
> I am testing a new environment, with Ceph + RBD on Ubuntu 16.04, and I
> have one question.
>
> I have my Ceph cluster, and I mount the image using the following commands
> in my Linux environment:
>
> rbd create veeamrepo --size 20480
> rbd --image veeamrepo info
> modprobe rbd
> rbd map veeamrepo
> rbd feature disable veeamrepo exclusive-lock object-map fast-diff
> deep-flatten
> mkdir /mnt/veeamrepo
> mount /dev/rbd0 /mnt/veeamrepo
>
> The commands work fine, but I have one problem: at the moment I can mount
> /mnt/veeamrepo on 2 machines at the same time, and that is bad for me,
> because it could corrupt the filesystem.
>
> I need only one machine to be allowed to mount and write at a time.
>
> For example, if machine1 mounts /mnt/veeamrepo and machine2 tries to mount
> it, an error should be displayed saying the machine cannot mount it because
> the filesystem is already mounted on machine1.
>
> Could someone help me with this, or give me some tips to solve my problem?
>
> Thanks a lot
>
>
>
>
> You can use Pacemaker to map the RBD and mount the filesystem on one
> server, and in case of failure switch to another server.
>
>
>
>


-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com