This is the output from ceph status:
  cluster:
    id:     9d7bc71a-3f88-11eb-bc58-b9cfbaed27d3
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled

  services:
    mon: 3 daemons, quorum ceph-storage-1.softdesign.dk,ceph-storage-2,ceph-storage-3 (age 4d)
    mgr: ceph-storage-1.softdesign.dk.vsrdsm(active, since 4d), standbys: ceph-storage-3.jglzte
    mds: objstore:1 {0=objstore.ceph-storage-1.knaufh=up:active} 1 up:standby
    osd: 3 osds: 3 up (since 3d), 3 in (since 3d)

  task status:
    scrub status:
        mds.objstore.ceph-storage-1.knaufh: idle

  data:
    pools:   4 pools, 97 pgs
    objects: 31 objects, 25 KiB
    usage:   3.1 GiB used, 2.7 TiB / 2.7 TiB avail
    pgs:     97 active+clean

  io:
    client:   170 B/s rd, 0 op/s rd, 0 op/s wr
So everything seems to be OK.
I wonder if anyone could guide me from scratch on how to set up NFS.
I am still not sure whether I need to create two different pools, one for the
NFS daemon and one for the export?
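
My current understanding, pieced together from the Octopus docs, is roughly the
sequence below. This is only a sketch and not verified end to end; the pool,
cluster and namespace names are simply the ones from our setup, and the
"application enable" line is my own addition:

ceph fs volume create objstore                  # creates the CephFS plus its own data/metadata pools
ceph osd pool create objpool replicated         # extra pool, used only for the Ganesha/NFS configuration objects
ceph osd pool application enable objpool nfs    # should also clear the "no application enabled" warning above
ceph orch apply nfs objstore objpool nfs-ns     # deploys the NFS gateway; its config is stored in objpool, namespace nfs-ns

As far as I can tell, the export data itself ends up in the CephFS volume, so a
second pool just for the export should not be needed when the backend is
CephFS, but please correct me if that is wrong.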
Regards
Jens
-----Original Message-----
From: Eugen Block <[email protected]>
Sent: 18. december 2020 16:30
To: [email protected]
Subject: [ceph-users] Re: Setting up NFS with Octopus
What is the cluster status? The permissions seem correct, maybe the OSDs have a
problem?
Quoting "Jens Hyllegaard (Soft Design A/S)" <[email protected]>:
> I have tried mounting the CephFS as two different users.
> I tried creating a user objuser with:
> ceph fs authorize objstore client.objuser / rw
>
> And I tried mounting using the admin user.
>
> The mount works as expected, but neither user is able to create files
> or folders unless I use sudo; with sudo it works for both users.
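>
> My guess is that the root directory of the new file system is owned by root
> with mode 0755, so an unprivileged local user cannot write to it. Presumably
> something like the following, run once on a host where the CephFS root is
> mounted, would fix that (the mount point and user below are only examples):
>
> sudo chown jens:jens /mnt/cephfs
> # or, for a quick test only: sudo chmod 777 /mnt/cephfs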
>
> The client.objuser keyring is:
>
> client.objuser
> key: AQCGodxfuuLxCBAAMjaSNM58JtkkUwO8UqGGYw==
> caps: [mds] allow rw
> caps: [mon] allow r
> caps: [osd] allow rw tag cephfs data=objstore
>
> Regards
>
> Jens
>
> -----Original Message-----
> From: Eugen Block <[email protected]>
> Sent: 18. december 2020 13:25
> To: Jens Hyllegaard (Soft Design A/S) <[email protected]>
> Cc: '[email protected]' <[email protected]>
> Subject: Re: [ceph-users] Re: Setting up NFS with Octopus
>
> Sorry, I was afk. Did you authorize a client against that new CephFS
> volume? I'm not sure, because I did it slightly differently and mine is an
> upgraded cluster. But a "permission denied" error sounds like no one is
> allowed to write into the CephFS.
>
>
> Quoting "Jens Hyllegaard (Soft Design A/S)"
> <[email protected]>:
>
>> I found out how to get the information:
>>
>> client.nfs.objstore.ceph-storage-3
>> key: AQBCRNtfsBY8IhAA4MFTghHMT4rq58AvAsPclw==
>> caps: [mon] allow r
>> caps: [osd] allow rw pool=objpool namespace=nfs-ns
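>>
>> (For reference, the listing above can be shown with ceph auth ls, or with
>> something like ceph auth get client.nfs.objstore.ceph-storage-3 for a
>> single entity.)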
>>
>> Regards
>>
>> Jens
>>
>> -----Original Message-----
>> From: Jens Hyllegaard (Soft Design A/S)
>> <[email protected]>
>> Sent: 18. december 2020 12:10
>> To: 'Eugen Block' <[email protected]>; '[email protected]'
>> <[email protected]>
>> Subject: [ceph-users] Re: Setting up NFS with Octopus
>>
>> I am sorry, but I am not sure how to do that. We have just started
>> working with Ceph.
>>
>> -----Original Message-----
>> From: Eugen Block <[email protected]>
>> Sent: 18. december 2020 12:06
>> To: Jens Hyllegaard (Soft Design A/S) <[email protected]>
>> Subject: Re: [ceph-users] Re: Setting up NFS with Octopus
>>
>> Oh, you're right, it worked for me: I just tried it with a new path
>> and the directory was created.
>> Can you share the client keyrings? I have two NFS daemons running and
>> they have these permissions:
>>
>> client.nfs.ses7-nfs.host2
>> key: AQClNNJf5KHVERAAAzhpp9Mclh5wplrcE9VMkQ==
>> caps: [mon] allow r
>> caps: [osd] allow rw pool=nfs-test namespace=ganesha
>> client.nfs.ses7-nfs.host3
>> key: AQCqNNJf4rlqBhAARGTMkwXAldeprSYgmPEmJg==
>> caps: [mon] allow r
>> caps: [osd] allow rw pool=nfs-test namespace=ganesha
>>
>>
>>
>> Quoting "Jens Hyllegaard (Soft Design A/S)"
>> <[email protected]>:
>>
>>> On the Create NFS export page it says the directory will be created.
>>>
>>> Regards
>>>
>>> Jens
>>>
>>>
>>> -----Original Message-----
>>> From: Eugen Block <[email protected]>
>>> Sent: 18. december 2020 11:52
>>> To: [email protected]
>>> Subject: [ceph-users] Re: Setting up NFS with Octopus
>>>
>>> Hi,
>>>
>>> is the path (/objstore) present within your CephFS? If not, you need
>>> to mount the CephFS root first and create the directory so that NFS
>>> can access it.
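>>>
>>> A rough sketch of that (untested here; substitute your own monitor
>>> address, admin key and mount point):
>>>
>>> sudo mount -t ceph <mon-host>:/ /mnt/cephfs -o name=admin,secret=<admin-key>
>>> sudo mkdir /mnt/cephfs/objstore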
>>>
>>>
>>> Quoting "Jens Hyllegaard (Soft Design A/S)"
>>> <[email protected]>:
>>>
>>>> Hi.
>>>>
>>>> We are completely new to Ceph and are exploring using it as an NFS
>>>> server first, expanding from there.
>>>>
>>>> However, we have not been successful in getting a working solution.
>>>>
>>>> I have set up a test environment with 3 physical servers, each with
>>>> one OSD, using the guide at:
>>>> https://docs.ceph.com/en/latest/cephadm/install/
>>>>
>>>> I created a new replicated pool:
>>>> ceph osd pool create objpool replicated
>>>>
>>>> And then I deployed the gateway:
>>>> ceph orch apply nfs objstore objpool nfs-ns
>>>>
>>>> I then created a new CephFS volume:
>>>> ceph fs volume create objstore
>>>>
>>>> So far so good 😊
>>>>
>>>> My problem is when I try to create the NFS export. The settings are
>>>> as follows:
>>>> Cluster: objstore
>>>> Daemons: nfs.objstore
>>>> Storage Backend: CephFS
>>>> CephFS User ID: admin
>>>> CephFS Name: objstore
>>>> CephFS Path: /objstore
>>>> NFS Protocol: NFSV3
>>>> Access Type: RW
>>>> Squash: all_squash
>>>> Transport protocol: both UDP & TCP
>>>> Client: Any client can access
>>>>
>>>> However, when I click on Create NFS export, I get:
>>>> Failed to create NFS 'objstore:/objstore'
>>>>
>>>> error in mkdirs /objstore: Permission denied [Errno 13]
>>>>
>>>> Has anyone got an idea as to why this is not working?
>>>>
>>>> If you need any further information, do not hesitate to say so.
>>>>
>>>>
>>>> Best regards,
>>>>
>>>> Jens Hyllegaard
>>>> Senior consultant
>>>> Soft Design
>>>> Rosenkaeret 13 | DK-2860 Søborg | Denmark | +45 39 66 02 00 |
>>>> softdesign.dk | synchronicer.com
>>>>
>>>>
>>>
>>>
>>
>>
>>
>
>
>
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]