Hello!

Here is the snippet from cephadm.log:

cephadm ['--image', 
'quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86',
 '--timeout', '895', '_orch', 'deploy', '--fsid', 
'9a7485f3-9530-11f0-8bb9-6c92cff2f5d0']
2025-12-10 09:45:26,706 7e80250e7080 DEBUG Loaded deploy configuration: 
{'fsid': '9a7485f3-9530-11f0-8bb9-6c92cff2f5d0', 'name': 
'nfs.rootfs.0.0.atlas.luxjmr', 'image': '', 'deploy_arguments': [], 'params': 
{}, 'meta': {'service_name': 'nfs.rootfs', 'ports': [], 'ip': None, 'deployed_by': 
['quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86'],
'rank': 0, 'rank_generation': 0, 'extra_container_args': None, 'extra_entrypoint_args': None}, 'config_blobs': {'pool': '.nfs', 'namespace': 'rootfs', 
'userid': 'nfs.rootfs.0.0.atlas.luxjmr', 'extra_args': ['-N', 'NIV_EVENT'], 
'files': {'ganesha.conf': '# This file is generated by cephadm.\nNFS_CORE_PARAM 
{\n        Enable_NLM = false;\n        Enable_RQUOTA = false;\n        
Protocols = 3, 4;\n        mount_path_pseudo = true;\n        Enable_UDP = 
false;\n        NFS_Port = 2049;\n        allow_set_io_flusher_fail = true;\n   
     Monitoring_Port = 9587;\n}\n\nNFSv4 {\n        Delegations = false;\n      
  RecoveryBackend = "rados_cluster";\n        Minor_Versions = 1, 2;\n        
Server_Scope = "9a7485f3-9530-11f0-8bb9-6c92cff2f5d0-rootfs";\n        
IdmapConf = "/etc/ganesha/idmap.conf";\n}\n\nRADOS_KV {\n        UserId = 
"nfs.rootfs.0.0.atlas.luxjmr";\n        nodeid = 0;\n        pool = ".nfs";\n   
     namespace = "rootfs";\n}\n\nRADOS_URLS {\n        UserId = 
"nfs.rootfs.0.0.atlas.luxjmr";\n        watch_url = 
"rados://.nfs/rootfs/conf-nfs.rootfs";\n}\n\nRGW {\n        cluster = "ceph";\n 
       name = "client.nfs.rootfs.0.0.atlas.luxjmr-rgw";\n}\n\n%url    
rados://.nfs/rootfs/conf-nfs.rootfs', 'idmap.conf': ''}, 'config': '# minimal 
ceph.conf for 9a7485f3-9530-11f0-8bb9-6c92cff2f5d0\n[global]\n\tfsid = 
9a7485f3-9530-11f0-8bb9-6c92cff2f5d0\n\tmon_host = 
[v2:134.169.37.21:3300/0,v1:134.169.37.21:6789/0] 
[v2:134.169.37.17:3300/0,v1:134.169.37.17:6789/0] 
[v2:134.169.37.12:3300/0,v1:134.169.37.12:6789/0] 
[v2:134.169.37.15:3300/0,v1:134.169.37.15:6789/0] 
[v2:134.169.37.14:3300/0,v1:134.169.37.14:6789/0]\n', 'keyring': 
'[client.nfs.rootfs.0.0.atlas.luxjmr]\nkey = 
AQAlMzlpAZ3EMBAAXYArqFVcLYC0ND5DjamvWw==\n', 'rgw': {'cluster': 'ceph', 'user': 
'nfs.rootfs.0.0.atlas.luxjmr-rgw', 'keyring': 
'[client.nfs.rootfs.0.0.atlas.luxjmr-rgw]\nkey = 
AQAmMzlpyOP1BxAARjK7MCMV/u1CHCuCnCeqXg==\n'}}}
2025-12-10 09:45:26,706 7e80250e7080 DEBUG Determined image: 
'quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86'
2025-12-10 09:45:26,735 7e80250e7080 INFO Non-zero exit code 1 from 
/usr/bin/docker container inspect --format {{.State.Status}} 
ceph-9a7485f3-9530-11f0-8bb9-6c92cff2f5d0-nfs-rootfs-0-0-atlas-luxjmr
2025-12-10 09:45:26,735 7e80250e7080 INFO /usr/bin/docker: stdout 
2025-12-10 09:45:26,735 7e80250e7080 INFO /usr/bin/docker: stderr Error 
response from daemon: No such container: 
ceph-9a7485f3-9530-11f0-8bb9-6c92cff2f5d0-nfs-rootfs-0-0-atlas-luxjmr
2025-12-10 09:45:26,748 7e80250e7080 INFO Non-zero exit code 1 from 
/usr/bin/docker container inspect --format {{.State.Status}} 
ceph-9a7485f3-9530-11f0-8bb9-6c92cff2f5d0-nfs.rootfs.0.0.atlas.luxjmr
2025-12-10 09:45:26,748 7e80250e7080 INFO /usr/bin/docker: stdout 
2025-12-10 09:45:26,748 7e80250e7080 INFO /usr/bin/docker: stderr Error 
response from daemon: No such container: 
ceph-9a7485f3-9530-11f0-8bb9-6c92cff2f5d0-nfs.rootfs.0.0.atlas.luxjmr
2025-12-10 09:45:26,748 7e80250e7080 INFO Deploy daemon 
nfs.rootfs.0.0.atlas.luxjmr ...
2025-12-10 09:45:26,925 7e80250e7080 INFO Verifying port 0.0.0.0:2049 ...
2025-12-10 09:45:26,927 7e80250e7080 INFO Creating ganesha config...
2025-12-10 09:45:26,927 7e80250e7080 INFO Write file: 
/var/lib/ceph/9a7485f3-9530-11f0-8bb9-6c92cff2f5d0/nfs.rootfs.0.0.atlas.luxjmr/etc/ganesha/ganesha.conf
2025-12-10 09:45:26,927 7e80250e7080 INFO Write file: 
/var/lib/ceph/9a7485f3-9530-11f0-8bb9-6c92cff2f5d0/nfs.rootfs.0.0.atlas.luxjmr/etc/ganesha/idmap.conf
2025-12-10 09:45:27,242 7e80250e7080 DEBUG Non-zero exit code 1 from systemctl 
reset-failed 
ceph-9a7485f3-9530-11f0-8bb9-6c92cff2f5d0@nfs.rootfs.0.0.atlas.luxjmr
2025-12-10 09:45:27,242 7e80250e7080 DEBUG systemctl: stderr Failed to reset 
failed state of unit 
ceph-9a7485f3-9530-11f0-8bb9-6c92cff2f5d0@nfs.rootfs.0.0.atlas.luxjmr.service: 
Unit 
ceph-9a7485f3-9530-11f0-8bb9-6c92cff2f5d0@nfs.rootfs.0.0.atlas.luxjmr.service 
not loaded.
2025-12-10 09:45:27,541 7e80250e7080 DEBUG systemctl: stderr Created symlink 
/etc/systemd/system/ceph-9a7485f3-9530-11f0-8bb9-6c92cff2f5d0.target.wants/ceph-9a7485f3-9530-11f0-8bb9-6c92cff2f5d0@nfs.rootfs.0.0.atlas.luxjmr.service
 → /etc/systemd/system/[email protected].
2025-12-10 09:45:27,572 7e80250e7080 DEBUG firewalld does not appear to be 
present
2025-12-10 09:45:27,573 7e80250e7080 DEBUG Not possible to enable service 
<nfs>. firewalld.service is not available
2025-12-10 09:45:27,573 7e80250e7080 DEBUG firewalld does not appear to be 
present
2025-12-10 09:45:27,573 7e80250e7080 DEBUG Not possible to open ports <[2049]>. 
firewalld.service is not available
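As an aside for anyone reading the log: the two "No such container" probes above are expected, since cephadm first checks for the container name with the dots in the daemon name replaced by dashes, and then for the legacy dotted name. A minimal sketch of that name mangling, assuming the ceph-<fsid>-<daemon-name> convention visible in the log:

```shell
#!/usr/bin/env bash
# Reconstruct the two container names cephadm probed in the log above.
fsid="9a7485f3-9530-11f0-8bb9-6c92cff2f5d0"
daemon="nfs.rootfs.0.0.atlas.luxjmr"

# Current naming scheme: dots in the daemon name become dashes.
echo "ceph-${fsid}-${daemon//./-}"
# Legacy naming scheme: daemon name kept verbatim.
echo "ceph-${fsid}-${daemon}"
```

Both names match the ones docker was asked to inspect, so these two INFO lines are not the actual failure.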


With best regards
Carsten Götze

------------------------------------------------------------------
Carsten Goetze
Computer Graphics          tel:   +49 531 391-2109
TU Braunschweig            fax:   +49 531 391-2103
Muehlenpfordtstr. 23       eMail: [email protected]
D-38106 Braunschweig       http://www.cg.cs.tu-bs.de/people/goetze

> On 03.01.2026 at 18:52, Salatiel Filho <[email protected]> wrote:
> 
> Does nfs deploy in ceph 20.2.0 require firewalld?
> Would you mind sharing some logs that complain about that?
> 
> Atenciosamente/Kind regards,
> Salatiel
> 
> On Tue, Dec 16, 2025 at 9:54 AM Carsten Götze via ceph-users
> <[email protected]> wrote:
>> 
>> Hi!
>> 
>> I'm running a ceph cluster using cephadm and recently upgraded from squid to 
>> tentacle 20.2.0.
>> Everything worked fine until I started the nfs module. The nfs daemons were 
>> reported to be running, but after about 10 minutes all of them were reported 
>> to be dead except one. The NFS service on port 2049 was never available on 
>> any of the nodes, even while the daemons were supposedly running.
>> As I found out later, the nfs daemons were never started at all, because the 
>> setup process required firewalld to be installed on the system, which of 
>> course it wasn't.
>> After some headaches with the newly installed firewalld I decided to roll 
>> back, remove firewalld and postpone the nfs deployment.
>> I then tried to stop the nfs daemons with 'ceph orch daemon stop', which did 
>> nothing, even after waiting some 10 minutes. I had to reissue the command 
>> several times to make the reportedly dead nfs daemons vanish from the 'ceph 
>> orch ps' list. The one daemon that was reported to be still running, 
>> however, would only die after 'ceph orch daemon stop --force', was in an 
>> 'error' state thereafter, and could not be removed from the 'ceph orch ps' 
>> list by any means. So I decided to delete the managing nfs service from the 
>> 'ceph orch ls' list, in the hope that it would also tear down the remaining 
>> nfs daemon.
>> This obviously was a bad idea, since the service is now stuck in a deleting 
>> state. However, it cannot be deleted because there is still the one daemon 
>> in error state, which in turn cannot be removed because it was never 
>> running at all.
>> As a last measure I forcefully removed the docker container on the node with 
>> the cephadm command, but even though there are no traces left of that nfs 
>> daemon, it is still listed when running 'ceph orch ps'.
>> I also noticed that 'ceph orch device ls' is out of sync with reality and 
>> 'ceph orch ps' is still listing OSDs that I have already shut down and 
>> deleted. I therefore suspect that the orchestrator has stopped collecting 
>> state information from the nodes.
>> Is there a way to force the orchestrator to sync its state information with 
>> the nodes?
>> Where do I find meaningful logs for the orchestrator?
>> 
>> With best regards,
>> Carsten Götze
>> _______________________________________________
>> ceph-users mailing list -- [email protected]
>> To unsubscribe send an email to [email protected]