Sorry, I was afk. Did you authorize a client against that new CephFS volume? I'm not sure, because I did it slightly differently and mine is an upgraded cluster. But a "permission denied" sounds like nobody is allowed to write into CephFS.
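
If you haven't, something along these lines creates one (just a sketch; adjust the fs name and client id to yours):

ceph fs authorize objstore client.myclient / rw > /etc/ceph/ceph.client.myclient.keyring

That grants mon/mds/osd caps scoped to the objstore filesystem.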

Quoting "Jens Hyllegaard (Soft Design A/S)" <[email protected]>:

I found out how to get the information:

client.nfs.objstore.ceph-storage-3
        key: AQBCRNtfsBY8IhAA4MFTghHMT4rq58AvAsPclw==
        caps: [mon] allow r
        caps: [osd] allow rw pool=objpool namespace=nfs-ns
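
For reference, that is the output of:

ceph auth get client.nfs.objstore.ceph-storage-3

(ceph auth ls lists all clients, in case the name is unknown.)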

Regards

Jens

-----Original Message-----
From: Jens Hyllegaard (Soft Design A/S) <[email protected]>
Sent: 18 December 2020 12:10
To: 'Eugen Block' <[email protected]>; '[email protected]' <[email protected]>
Subject: [ceph-users] Re: Setting up NFS with Octopus

I am sorry, but I am not sure how to do that. We have only just started working with Ceph.

-----Original Message-----
From: Eugen Block <[email protected]>
Sent: 18 December 2020 12:06
To: Jens Hyllegaard (Soft Design A/S) <[email protected]>
Subject: Re: [ceph-users] Re: Setting up NFS with Octopus

Oh, you're right: I just tried that with a new path and it was created for me. Can you share the client keyrings? I have two NFS daemons running, and they have these permissions:

client.nfs.ses7-nfs.host2
         key: AQClNNJf5KHVERAAAzhpp9Mclh5wplrcE9VMkQ==
         caps: [mon] allow r
         caps: [osd] allow rw pool=nfs-test namespace=ganesha
client.nfs.ses7-nfs.host3
         key: AQCqNNJf4rlqBhAARGTMkwXAldeprSYgmPEmJg==
         caps: [mon] allow r
         caps: [osd] allow rw pool=nfs-test namespace=ganesha



Quoting "Jens Hyllegaard (Soft Design A/S)" <[email protected]>:

On the Create NFS export page it says the directory will be created.

Regards

Jens


-----Original Message-----
From: Eugen Block <[email protected]>
Sent: 18 December 2020 11:52
To: [email protected]
Subject: [ceph-users] Re: Setting up NFS with Octopus

Hi,

is the path (/objstore) present within your CephFS? If not, you need to mount the CephFS root first and create the directory so NFS can access it.
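
Roughly like this (a sketch, assuming the kernel client and the admin key; adjust the monitor address):

mount -t ceph <mon-host>:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
mkdir /mnt/cephfs/objstore
umount /mnt/cephfs

If you have more than one filesystem, add mds_namespace=objstore to the mount options.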


Quoting "Jens Hyllegaard (Soft Design A/S)" <[email protected]>:

Hi.

We are completely new to Ceph and are exploring using it as an NFS server first, expanding from there.

However, we have not yet managed to get a working setup.

I have set up a test environment with three physical servers, each with one OSD, following the guide at:
https://docs.ceph.com/en/latest/cephadm/install/
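
In short, we ran roughly these steps from that guide (host names are examples; cephadm's SSH key from /etc/ceph/ceph.pub was copied to the other hosts first):

cephadm bootstrap --mon-ip <ip-of-first-host>
ceph orch host add ceph-storage-2
ceph orch host add ceph-storage-3
ceph orch apply osd --all-available-devices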

I created a new replicated pool:
ceph osd pool create objpool replicated
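
The pool settings can be verified with:

ceph osd pool ls detail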

And then I deployed the gateway:
ceph orch apply nfs objstore objpool nfs-ns
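
The service and its daemons can be checked with:

ceph orch ls
ceph orch ps | grep nfs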

I then created a new CephFS volume:
ceph fs volume create objstore
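
The new filesystem then shows up in:

ceph fs ls
ceph fs status objstore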

So far so good 😊

My problem is when I try to create the NFS export. The settings are as follows:
Cluster: objstore
Daemons: nfs.objstore
Storage Backend: CephFS
CephFS User ID: admin
CephFS Name: objstore
CephFS Path: /objstore
NFS Protocol: NFSV3
Access Type: RW
Squash: all_squash
Transport protocol: both UDP & TCP
Client: Any client can access

However, when I click on Create NFS export, I get:
Failed to create NFS 'objstore:/objstore'

error in mkdirs /objstore: Permission denied [Errno 13]
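
As an aside, the Octopus documentation also describes creating exports from the CLI via the mgr nfs module; I have not tried whether that works here (syntax taken from the docs, untested, and it may depend on the point release):

ceph nfs export create cephfs objstore objstore /objstore

(the arguments are <fsname> <clusterid> <binding>)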

Has anyone got an idea as to why this is not working?

If you need any further information, do not hesitate to ask.


Best regards,

Jens Hyllegaard
Senior consultant
Soft Design
Rosenkaeret 13 | DK-2860 Søborg | Denmark | +45 39 66 02 00 |
softdesign.dk | synchronicer.com






