Re: [ovirt-users] posix compliant fs with ceph rbd

2017-01-26 Thread Nir Soffer
On Thu, Jan 26, 2017 at 11:39 AM, Yura Poltoratskiy wrote:
>
>
> On 26.01.2017 11:11, Nir Soffer wrote:
>>
>> On Wed, Jan 25, 2017 at 8:55 PM, Yura Poltoratskiy wrote:
>>>
>>> Hi,
>>>
>>> I want to use Ceph with oVirt in a somewhat non-standard way. The main idea
>>> is to map an rbd volume to all compute nodes so that the same block device,
>>> say /dev/foo/bar, appears on every node, and then use the "POSIX compliant
>>> file systems" option to add a Storage Domain.
>>>
>>> Am I crazy?
>>
>> Yes
>
> Thnx :)
>
>>
>>> If not, what should I do next: create a file system on top of
>>> /dev/foo/bar, say XFS, and add a DATA Domain as POSIX compliant? Does that
>>> work, i.e. is oVirt compatible with a non-clustered file system in this
>>> scenario?
>>
>> This can work only with a clustered file system, not with XFS. Double
>> mounting will quickly corrupt the file system.
>
> Can you tell me which FS I should choose to do some experiments?

GFS2 is one example.
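
To make that concrete, here is a minimal sketch (Python driving the usual CLI
tools) of what the shared-RBD-plus-GFS2 setup could look like. The pool/image,
lock-table and mount point names are made-up placeholders, and a working
corosync/dlm cluster on every host is assumed; this is only an illustration of
the idea, not something oVirt sets up for you.

    #!/usr/bin/env python3
    # Illustrative sketch only: one RBD image shared by all hosts with GFS2 on
    # top, as discussed above. Pool/image, lock-table and mount point names are
    # placeholders; a working corosync/dlm cluster on every host is assumed and
    # not shown here.
    import os
    import subprocess

    POOL, IMAGE = "rbd", "ovirt-posix"          # hypothetical pool/image
    CLUSTER, FSNAME = "ovirtcluster", "gfs2fs"  # GFS2 lock table: cluster:fsname
    DEVICE = f"/dev/rbd/{POOL}/{IMAGE}"         # symlink created by the rbd udev rule
    MOUNTPOINT = "/mnt/ovirt-gfs2"

    def run(*cmd):
        subprocess.run(cmd, check=True)

    def make_fs_once():
        # Run on ONE host only: create the clustered file system.
        # -j is the number of journals; use at least one per mounting host.
        run("mkfs.gfs2", "-p", "lock_dlm", "-t", f"{CLUSTER}:{FSNAME}",
            "-j", "3", DEVICE)

    def map_and_mount():
        # Run on EVERY host: map the same image and mount it manually to verify
        # the clustered FS works; how the path/VFS type are then handed to the
        # "POSIX compliant FS" domain dialog is not covered here.
        run("rbd", "map", f"{POOL}/{IMAGE}")
        os.makedirs(MOUNTPOINT, exist_ok=True)
        run("mount", "-t", "gfs2", DEVICE, MOUNTPOINT)

    if __name__ == "__main__":
        map_and_mount()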

>
> And in general: what are the use cases for an option like "POSIX compliant FS"?

The main use case is to let users reuse a clustered file system they
already have in their organization.

CephFS is also a viable option if you don't want to use Cinder; however,
performance and scalability will be lower than with RBD, and the gateway
publishing the CephFS mounts will be a bottleneck.
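
As a point of reference, the CephFS route through the "POSIX compliant FS"
option boils down to an ordinary kernel cephfs mount. A hedged sketch follows;
the monitor address, user name and secret file path are placeholders, and the
exact values for the storage domain dialog (path, VFS type "ceph", mount
options) should be checked against your own cluster.

    #!/usr/bin/env python3
    # Sketch of the CephFS variant: a plain kernel cephfs mount, roughly what
    # ends up happening for a "POSIX compliant FS" domain with VFS type "ceph".
    # Monitor address, user name and secret file path are placeholders.
    import subprocess

    subprocess.run(
        ["mount", "-t", "ceph",
         "mon1.example.com:6789:/",          # path: monitor address + FS root
         "/mnt/cephfs",                      # local mount point (for testing)
         "-o", "name=admin,secretfile=/etc/ceph/admin.secret"],
        check=True)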

I think RBD is the best option if you can manage the Cinder deployment
and upgrade required to support it.
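
For context, the Cinder path means disks are created through the Cinder API
and backed by RBD. The fragment below is only a rough illustration of that
API call using python-cinderclient; the endpoint, credentials and volume type
name are placeholders, and this is not the engine's actual code.

    # Rough illustration of the Cinder path: RBD-backed volumes are requested
    # through the Cinder API, conceptually like this. Endpoint, credentials and
    # the volume type name are placeholders (the type would be mapped to the
    # RBD backend in cinder.conf).
    from cinderclient import client

    cinder = client.Client("2", "admin", "secret", "admin",
                           "http://keystone.example.com:5000/v2.0")
    vol = cinder.volumes.create(size=10, name="vm-disk-01", volume_type="ceph")
    print(vol.id, vol.status)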

We have been talking for a long time about a native Ceph storage type,
managing Ceph directly without Cinder. However, we have never found the
time to work on this.

>>> Mostly, I want to use rbd the way oVirt uses iSCSI storage, just to get
>>> scalability and high availability (for example, when one storage node
>>> fails).
>>
>> You have two ways to use Ceph:
>>
>> - via Cinder - you will get the best performance and scalability
>> - via CephFS - you will get all features; it works like fault-tolerant NFS
>>
>> Nir
>>
>>> Thanks for any advice.
>>>
>>> PS. Yes, I know about Gluster, but I want to use Ceph :)
>>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] posix compliant fs with ceph rbd

2017-01-26 Thread Yura Poltoratskiy



On 26.01.2017 11:11, Nir Soffer wrote:
> On Wed, Jan 25, 2017 at 8:55 PM, Yura Poltoratskiy wrote:
>> Hi,
>>
>> I want to use Ceph with oVirt in a somewhat non-standard way. The main idea
>> is to map an rbd volume to all compute nodes so that the same block device,
>> say /dev/foo/bar, appears on every node, and then use the "POSIX compliant
>> file systems" option to add a Storage Domain.
>>
>> Am I crazy?
>
> Yes

Thnx :)

>> If not, what should I do next: create a file system on top of
>> /dev/foo/bar, say XFS, and add a DATA Domain as POSIX compliant? Does that
>> work, i.e. is oVirt compatible with a non-clustered file system in this
>> scenario?
>
> This can work only with a clustered file system, not with XFS. Double
> mounting will quickly corrupt the file system.

Can you tell me which FS I should choose to do some experiments?

And in general: what are the use cases for an option like "POSIX compliant FS"?

>> Mostly, I want to use rbd the way oVirt uses iSCSI storage, just to get
>> scalability and high availability (for example, when one storage node
>> fails).
>
> You have two ways to use Ceph:
>
> - via Cinder - you will get the best performance and scalability
> - via CephFS - you will get all features; it works like fault-tolerant NFS
>
> Nir

>> Thanks for any advice.
>>
>> PS. Yes, I know about Gluster, but I want to use Ceph :)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] posix compliant fs with ceph rbd

2017-01-26 Thread Nir Soffer
On Wed, Jan 25, 2017 at 8:55 PM, Yura Poltoratskiy wrote:
> Hi,
>
> I want to use Ceph with oVirt in a somewhat non-standard way. The main idea is
> to map an rbd volume to all compute nodes so that the same block device, say
> /dev/foo/bar, appears on every node, and then use the "POSIX compliant file
> systems" option to add a Storage Domain.
>
> Am I crazy?

Yes

> If not, what should I do next: create a file system on top of
> /dev/foo/bar, say XFS, and add a DATA Domain as POSIX compliant? Does that
> work, i.e. is oVirt compatible with a non-clustered file system in this
> scenario?

This can work only with a clustered file system, not with XFS. Double
mounting will quickly corrupt the file system.

> Mostly, I want to use rbd the way oVirt uses iSCSI storage, just to get
> scalability and high availability (for example, when one storage node
> fails).

You have two ways to use Ceph:

- via Cinder - you will get the best performance and scalability
- via CephFS - you will get all features; it works like fault-tolerant NFS

Nir

> Thanks for any advice.
>
> PS. Yes, I know about Gluster, but I want to use Ceph :)
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] posix compliant fs with ceph rbd

2017-01-26 Thread Yura Poltoratskiy
2017-01-25 21:01 GMT+02:00 Logan Kuhn:

> We prefer Ceph too, and we've got our oVirt instance configured in two
> different ways.
>
> 1. OpenStack Cinder: each VM disk is a single volume in Ceph, with all
> volumes under the same pool.
>
I am familiar with OpenStack, but I do not want to deploy parts of it. That's
why I want to just map the rbd and use it the way VMware uses a mapped
datastore: create a file system on it and create one file per VM as a virtual
block device, or even skip the file system entirely and just use LVM.

This scenario is not far from iSCSI: one block device (with LVM on top) is
mapped across all compute nodes, the oVirt agent manages the volumes on that
block device, and it also manages the mapping itself. My idea is to map the
block device by hand and leave the rest of the process to oVirt.
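
For what it's worth, the manual part of that idea might look like the sketch
below on each host (pool/image and VG/LV names are placeholders). The caveat,
in line with Nir's warning about double mounting, is that LVM metadata changes
on a device shared by several hosts also need cluster-wide coordination; doing
them concurrently by hand is not safe.

    #!/usr/bin/env python3
    # Sketch of the "map by hand" step described above; names are placeholders.
    # Caveat: creating or changing LVM metadata on a device shared by several
    # hosts needs cluster-wide locking (or a single coordinating host); doing
    # it concurrently from multiple hosts is unsafe.
    import subprocess

    POOL, IMAGE = "rbd", "ovirt-shared"
    DEVICE = f"/dev/rbd/{POOL}/{IMAGE}"

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # On every host: map the same image so the block device appears locally.
    run("rbd", "map", f"{POOL}/{IMAGE}")

    # On ONE host only: put LVM on top and carve out one volume per VM.
    run("pvcreate", DEVICE)
    run("vgcreate", "vg_ovirt_shared", DEVICE)
    run("lvcreate", "-L", "20G", "-n", "vm01-disk", "vg_ovirt_shared")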


> 2. Export an RBD via NFS from a gateway machine; this can be a trivially
> small physical or virtual machine that just exports an NFS share backed by
> whatever RBD you choose to use.
>
I can see two cons:
1. Single point of failure.
2. Potentially higher latency.


>
> Not a direct answer to your question, but hopefully it helps.
>
> Regards,
> Logan
>
> On Wed, Jan 25, 2017 at 12:55 PM, Yura Poltoratskiy wrote:
>
>> Hi,
>>
>> I want to use Ceph with oVirt in a somewhat non-standard way. The main idea
>> is to map an rbd volume to all compute nodes so that the same block device,
>> say /dev/foo/bar, appears on every node, and then use the "POSIX compliant
>> file systems" option to add a Storage Domain.
>>
>> Am I crazy? If not, what should I do next: create a file system on top of
>> /dev/foo/bar, say XFS, and add a DATA Domain as POSIX compliant? Does that
>> work, i.e. is oVirt compatible with a non-clustered file system in this
>> scenario?
>>
>> Mostly, I want to use rbd the way oVirt uses iSCSI storage, just to get
>> scalability and high availability (for example, when one storage node
>> fails).
>>
>> Thanks for any advice.
>>
>> PS. Yes, I know about Gluster, but I want to use Ceph :)
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] posix compliant fs with ceph rbd

2017-01-25 Thread Logan Kuhn
We prefer Ceph too, and we've got our oVirt instance configured in two
different ways.

1. OpenStack Cinder: each VM disk is a single volume in Ceph, with all
volumes under the same pool.
2. Export an RBD via NFS from a gateway machine; this can be a trivially
small physical or virtual machine that just exports an NFS share backed by
whatever RBD you choose to use (a rough sketch of this setup follows below).
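
A rough sketch of that gateway setup, assuming made-up pool/image names,
export path and client network; the mkfs step runs only once, and since the
gateway is the single writer a plain local file system such as XFS is fine
here.

    #!/usr/bin/env python3
    # Sketch of option 2: a small gateway host maps one RBD image, puts a local
    # file system on it and exports it over NFS, so oVirt can consume it as an
    # ordinary NFS storage domain. Pool/image, export path and client network
    # are placeholders.
    import os
    import subprocess

    POOL, IMAGE = "rbd", "ovirt-nfs-backing"
    DEVICE = f"/dev/rbd/{POOL}/{IMAGE}"
    EXPORT = "/exports/ovirt"

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("rbd", "map", f"{POOL}/{IMAGE}")
    run("mkfs.xfs", DEVICE)        # once only; the gateway is the single writer
    os.makedirs(EXPORT, exist_ok=True)
    run("mount", DEVICE, EXPORT)

    # Publish the directory to the oVirt hosts' network and reload the exports.
    with open("/etc/exports", "a") as f:
        f.write(f"{EXPORT} 192.0.2.0/24(rw,sync,no_root_squash)\n")
    run("exportfs", "-ra")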

Not a direct answer to your question, but hopefully it helps.

Regards,
Logan

On Wed, Jan 25, 2017 at 12:55 PM, Yura Poltoratskiy wrote:

> Hi,
>
> I want to use Ceph with oVirt in a somewhat non-standard way. The main idea
> is to map an rbd volume to all compute nodes so that the same block device,
> say /dev/foo/bar, appears on every node, and then use the "POSIX compliant
> file systems" option to add a Storage Domain.
>
> Am I crazy? If not, what should I do next: create a file system on top of
> /dev/foo/bar, say XFS, and add a DATA Domain as POSIX compliant? Does that
> work, i.e. is oVirt compatible with a non-clustered file system in this
> scenario?
>
> Mostly, I want to use rbd the way oVirt uses iSCSI storage, just to get
> scalability and high availability (for example, when one storage node
> fails).
>
> Thanks for any advice.
>
> PS. Yes, I know about Gluster, but I want to use Ceph :)
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] posix compliant fs with ceph rbd

2017-01-25 Thread Yura Poltoratskiy
Hi,

I want to use Ceph with oVirt in a somewhat non-standard way. The main idea is
to map an rbd volume to all compute nodes so that the same block device, say
/dev/foo/bar, appears on every node, and then use the "POSIX compliant file
systems" option to add a Storage Domain.

Am I crazy? If not, what should I do next: create a file system on top of
/dev/foo/bar, say XFS, and add a DATA Domain as POSIX compliant? Does that
work, i.e. is oVirt compatible with a non-clustered file system in this
scenario?

Mostly, I want to use rbd the way oVirt uses iSCSI storage, just to get
scalability and high availability (for example, when one storage node
fails).

Thanks for any advice.

PS. Yes, I know about Gluster, but I want to use Ceph :)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users