Thanks a lot for all your comments,

If you don't see any problem with it, I will enable the following features,
which should fit my requirements:

Layering
Striping
Exclusive locking
Object map
Fast-diff
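
For reference, these feature names map onto librbd's feature bits; a minimal
Python sketch using the standard librbd bit values (no Ceph cluster is needed
to run it):

```python
# Standard librbd feature bit values (as exposed by the rbd CLI and
# librbd's headers). fast-diff requires object-map, which in turn
# requires exclusive-lock.
RBD_FEATURES = {
    "layering":       1 << 0,   # 1  - clone support
    "striping":       1 << 1,   # 2  - striping v2
    "exclusive-lock": 1 << 2,   # 4  - single-writer lock
    "object-map":     1 << 3,   # 8  - requires exclusive-lock
    "fast-diff":      1 << 4,   # 16 - requires object-map
}

def feature_mask(names):
    """Combine feature names into the integer bitmask understood by
    'rbd create' / the 'rbd default features' config option."""
    mask = 0
    for name in names:
        mask |= RBD_FEATURES[name]
    return mask

# The minimal set '3' mentioned later in the thread is layering + striping:
print(feature_mask(["layering", "striping"]))   # 3
# The full set listed above:
print(feature_mask(RBD_FEATURES))               # 31
```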

Thanks a lot
Óscar Segarra




2017-11-14 16:56 GMT+01:00 Jason Dillaman <jdill...@redhat.com>:

> From the documentation [1]:
>
> shareable
> If present, this indicates the device is expected to be shared between
> domains (assuming the hypervisor and OS support this), which means that
> caching should be deactivated for that device.
>
> Basically, it's the use-case for putting a clustered file system (or
> similar) on top of the block device. For the vast majority of cases, you
> shouldn't enable this in libvirt.
>
> [1] https://libvirt.org/formatdomain.html#elementsDisks
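>
> For reference, a typical libvirt disk definition for a migratable RBD-backed
> VM simply omits the <shareable/> element; a hedged sketch (the pool, image,
> monitor, and secret names below are made up):
>
> ```xml
> <disk type='network' device='disk'>
>   <driver name='qemu' type='raw' cache='none'/>
>   <source protocol='rbd' name='libvirt-pool/my-vm-disk'>
>     <host name='mon1.example.com' port='6789'/>
>   </source>
>   <auth username='libvirt'>
>     <secret type='ceph' usage='client.libvirt secret'/>
>   </auth>
>   <target dev='vda' bus='virtio'/>
> </disk>
> ```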
>
> On Tue, Nov 14, 2017 at 10:49 AM, Oscar Segarra <oscar.sega...@gmail.com>
> wrote:
>
>> Hi Jason,
>>
>> The big use-case for sharing a block device is if you set up a clustered
>> file system on top of it, and I'd argue that you'd probably be better
>> off using CephFS.
>> --> Nice to know!
>>
>> Thanks a lot for your clarifications. In this case I was referring to the
>> "shareable" flag that one can see in KVM. I'd like to know the suggested
>> configuration for RBD images and live migration.
>>
>> [image: Inline image 1]
>>
>> Thanks a lot.
>>
>> 2017-11-14 16:36 GMT+01:00 Jason Dillaman <jdill...@redhat.com>:
>>
>>> On Tue, Nov 14, 2017 at 10:25 AM, Oscar Segarra <oscar.sega...@gmail.com>
>>> wrote:
>>> > In my environment, I have a CentOS 7 host updated to date, so all
>>> > features should work as expected.
>>> >
>>> > Regarding the other question, do you suggest making the virtual disk
>>> > "shareable" in rbd?
>>>
>>> Assuming you are referring to the "--image-shared" option when creating
>>> an image, the answer is no. That is just a short-cut to disable all
>>> features that depend on the exclusive lock. The big use-case for
>>> sharing a block device is if you set up a clustered file system on top
>>> of it, and I'd argue that you'd probably be better off using CephFS.
>>>
>>> > Thanks a lot
>>> >
>>> > 2017-11-14 15:58 GMT+01:00 Jason Dillaman <jdill...@redhat.com>:
>>> >>
>>> >> Concur -- there aren't any RBD image features that should prevent live
>>> >> migration when using a compatible version of librbd. If, however, you
>>> >> had two hosts where librbd versions were out-of-sync and they didn't
>>> >> support the same features, you could hit an issue if a VM with fancy
>>> >> new features was live migrated to a host where those features aren't
>>> >> supported since the destination host wouldn't be able to open the
>>> >> image.
>>> >>
>>> >> On Tue, Nov 14, 2017 at 7:55 AM, Cassiano Pilipavicius
>>> >> <cassi...@tips.com.br> wrote:
>>> >> > Hi Oscar, exclusive-locking should not interfere with live
>>> >> > migration. I have a small virtualization cluster backed by
>>> >> > ceph/rbd, and I can migrate all the VMs whose RBD images have
>>> >> > exclusive-lock enabled without any issue.
>>> >> >
>>> >> >
>>> >> >
>>> >> > On 11/14/2017 9:47 AM, Oscar Segarra wrote:
>>> >> >
>>> >> > Hi Konstantin,
>>> >> >
>>> >> > Thanks a lot for your advice...
>>> >> >
>>> >> > I'm especially interested in the "Exclusive locking" feature. Can
>>> >> > enabling this feature affect live/offline migration? In this
>>> >> > scenario (online/offline migration) I don't know if the two hosts
>>> >> > (source and destination) need access to the same RBD image at the
>>> >> > same time.
>>> >> >
>>> >> > It looks like enabling exclusive locking lets you enable some
>>> >> > other interesting features like "Object map" and/or "Fast diff"
>>> >> > for backups.
>>> >> >
>>> >> > Thanks a lot!
>>> >> >
>>> >> > 2017-11-14 12:26 GMT+01:00 Konstantin Shalygin <k0...@k0ste.ru>:
>>> >> >>
>>> >> >> On 11/14/2017 06:19 PM, Oscar Segarra wrote:
>>> >> >>
>>> >> >> What I'm trying to do is reading the documentation in order to
>>> >> >> understand how the features work and what they are for.
>>> >> >>
>>> >> >> http://tracker.ceph.com/issues/15000
>>> >> >>
>>> >> >>
>>> >> >> I would also be happy to read what features have negative sides.
>>> >> >>
>>> >> >>
>>> >> >> The problem is that the documentation is not detailed enough.
>>> >> >>
>>> >> >> The proof-test method you suggest is, I think, not a good
>>> >> >> procedure, because I want to avoid corruption in the future due
>>> >> >> to a bad configuration.
>>> >> >>
>>> >> >>
>>> >> >> So my recommendation: if you can wait, maybe you will receive
>>> >> >> new information about the features from somewhere. Otherwise,
>>> >> >> you can set the minimal features (like '3') - this is enough for
>>> >> >> virtualization (snapshots, clones).
>>> >> >>
>>> >> >> And start your project.
>>> >> >>
>>> >> >> --
>>> >> >> Best regards,
>>> >> >> Konstantin Shalygin
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >> > _______________________________________________
>>> >> > ceph-users mailing list
>>> >> > ceph-users@lists.ceph.com
>>> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Jason
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Jason
>>>
>>
>>
>
>
> --
> Jason
>
