On Mon, May 30, 2016 at 4:12 PM, Jens Offenbach <[email protected]> wrote:
> Hello,
> in my OpenStack Mitaka deployment, I have installed the additional service "Manila"
> with a CephFS backend. Everything is working and all shares are created successfully:
>
> manila show 9dd24065-97fb-4bcd-9ad1-ca63d40bf3a8
> +-----------------------------+------------------------------------------------------------------------------------------------------------------------+
> | Property                    | Value
> +-----------------------------+------------------------------------------------------------------------------------------------------------------------+
> | status                      | available
> | share_type_name             | cephfs
> | description                 | None
> | availability_zone           | nova
> | share_network_id            | None
> | export_locations            |
> |                             | path = 10.152.132.71:6789,10.152.132.72:6789,10.152.132.73:6789:/volumes/_nogroup/b27ad01a-245f-49e2-8974-1ed0ce8e259e
> |                             | preferred = False
> |                             | is_admin_only = False
> |                             | id = 9b7d7e9e-d661-4fa0-89d7-9727efb75554
> |                             | share_instance_id = b27ad01a-245f-49e2-8974-1ed0ce8e259e
> | share_server_id             | None
> | host                        | os-sharedfs@cephfs#cephfs
> | access_rules_status         | active
> | snapshot_id                 | None
> | is_public                   | False
> | task_state                  | None
> | snapshot_support            | True
> | id                          | 9dd24065-97fb-4bcd-9ad1-ca63d40bf3a8
> | size                        | 1
> | name                        | cephshare1
> | share_type                  | 2a62fda4-82ce-4798-9a85-c800736b01e5
> | has_replicas                | False
> | replication_type            | None
> | created_at                  | 2016-05-30T13:09:11.000000
> | share_proto                 | CEPHFS
> | consistency_group_id        | None
> | source_cgsnapshot_member_id | None
> | project_id                  | cf03dbf6f7d04ff6bda0c65cb8395ded
> | metadata                    | {}
> +-----------------------------+------------------------------------------------------------------------------------------------------------------------+
>
> When I try to mount the share on a client machine via the kernel driver:
> mount -t ceph
> os-storage01,os-storage02,os-storage03:/volumes/_nogroup/b27ad01a-245f-49e2-8974-1ed0ce8e259e
> /tmp/test -o name=testclient,secret=AQDKO0xX6UgQNhAAe/9nJH1RGes0VzTZ+I04eQ==
>
> I get the following error:
> mount error 5 = Input/output error
>
> When I use "ceph-fuse" without any further modifications, everything is
> working:
> ceph-fuse /tmp/test --id=testclient --conf=/etc/ceph/ceph.conf
> --keyring=/etc/ceph/ceph.client.testclient.keyring
> --client-mountpoint=/volumes/_nogroup/b27ad01a-245f-49e2-8974-1ed0ce8e259e
>
> I have tried to mount the "volumes" folder with the kernel driver:
> mount -t ceph os-storage01,os-storage02,os-storage03:/volumes /tmp/test -o
> name=testclient,secret=AQDKO0xX6UgQNhAAe/9nJH1RGes0VzTZ+I04eQ==
>
> That works, but when I try to list the folder's contents, I get the
> "Input/output error":
> cd /tmp/test/_nogroup
> ls
> ls: cannot access '4f6f4f58-6a54-4300-8cb3-6d3a7debad61': Input/output error
> ls: cannot access '83de097f-4fd4-4168-82e3-697dab6fa645': Input/output error
> ls: cannot access '9a27fbd1-5446-47a3-8de5-0db57f624d09': Input/output error
> ls: cannot access 'b27ad01a-245f-49e2-8974-1ed0ce8e259e': Input/output error
> 4f6f4f58-6a54-4300-8cb3-6d3a7debad61 83de097f-4fd4-4168-82e3-697dab6fa645
> 9a27fbd1-5446-47a3-8de5-0db57f624d09 b27ad01a-245f-49e2-8974-1ed0ce8e259e
>
> I am using Ubuntu Server 16.04 (Xenial) with the current kernel
> "4.4.0-23-generic".
I think this is because the kernel client doesn't support RADOS namespaces
in CephFS file layouts yet. Mitaka shares created with the CephFS native
driver use such namespaces instead of separate RADOS pools for security
isolation.
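(If you want to see this for yourself: through a ceph-fuse mount of the
filesystem root you should be able to read the namespace the driver assigned
to your share directory with something like

getfattr -n ceph.dir.layout.pool_namespace /mnt/cephfs/volumes/_nogroup/b27ad01a-245f-49e2-8974-1ed0ce8e259e

where /mnt/cephfs is wherever you mounted the root; I'm quoting the vxattr
name from the driver code, so double-check it on your version.)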
As a workaround, I was going to suggest setting the "cephfs:data_isolated=true"
extra spec on the share type, so that a separate RADOS pool is created for each
share instead of a namespace within the existing pool. Pools are much more
expensive than namespaces in terms of cluster resources though, so this is
viable only if you are going to have a small number of shares.
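For example, assuming your share type is simply named "cephfs" as in the
output above, something like this should set it:

manila type-key cephfs set cephfs:data_isolated=true

and then create the shares with that type as before.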
However, it seems that ceph.dir.layout.pool_namespace is set even if
a separate pool is created:
https://github.com/ceph/ceph/blob/master/src/pybind/ceph_volume_client.py#L472
This seems like overkill to me. It also prevents the above workaround from
working. John, can we stick an else in there and fix up {de,}authorize()?
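Something along these lines, roughly (paraphrasing that code from memory,
so treat it as a sketch rather than a patch):

    if data_isolated:
        # the share already gets its own RADOS pool, so don't set a
        # namespace on top of it; that also lets pre-namespace kernel
        # clients mount such shares
        pass
    else:
        namespace = "{0}{1}".format(self.pool_ns_prefix, volume_path.volume_id)
        self.fs.setxattr(path, 'ceph.dir.layout.pool_namespace', namespace, 0)

with authorize()/deauthorize() adjusted so the OSD caps only mention a
namespace when one was actually set on the share.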
Thanks,
Ilya