Hello 

Tried creating an EC pool; still facing the same issue…

https://bugzilla.redhat.com/show_bug.cgi?id=2255030

# ceph osd erasure-code-profile set ec-21-profile-isa k=2 m=1 crush-failure-domain=host plugin=isa
load: isa
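For reference, k=2/m=1 keeps one coding chunk per two data chunks, so it tolerates one failure in the crush-failure-domain (here, one host) at 1.5x raw usage. A quick sanity check of that overhead (simple k/(k+m) model; real pools add metadata and BlueStore overheads on top):

```python
# Capacity math for an erasure-code profile with k data chunks and m coding chunks.
# Assumption: simple k/(k+m) model; ignores pool and BlueStore overheads.
k, m = 2, 1
raw_multiplier = (k + m) / k      # raw bytes consumed per logical byte
usable_fraction = k / (k + m)     # fraction of raw capacity that is usable
failures_tolerated = m            # hosts that can fail (crush-failure-domain=host)
print(raw_multiplier, round(usable_fraction, 3), failures_tolerated)  # 1.5 0.667 1
```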
# ceph osd erasure-code-profile ls
default
ec-21-profile-isa
# ceph osd pool create ec-21-pool erasure ec-21-profile-isa
pool 'ec-21-pool' created
# ceph osd pool set ec-21-pool allow_ec_overwrites true
set pool 22 allow_ec_overwrites to true
# ceph fs volume create van2devcephfs
# ceph fs add_data_pool van2devcephfs ec-21-pool

# mount -t ceph :/ /new_cephfs -o name=admin,fs=van2devcephfs
mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized

Regards
Dev


> On Jul 15, 2025, at 3:08 PM, Devender Singh <deven...@netskrt.io> wrote:
> 
> Tried the below too, but nothing works. Any suggestions?
> 
> # mount -t ceph 10.101.1.246,10.101.1.247,10.101.1.248:/ /mnt/cephfs -o name=admin,fs=van2.dev,noatime,_netdev,ms_mode=prefer-crc
> 
> mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
> 
> # mount -t ceph :/ /cephfs/ -o noatime,ms_mode=prefer-crc,_netdev,fs=van2-dev
> mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
> 
> Regards
> Dev
> 
>> On Jul 15, 2025, at 2:17 PM, Devender Singh <deven...@netskrt.io> wrote:
>> 
>> Hello all 
>> 
>> I have CephFS running fine with active MDS daemons, but I am still not able to
>> mount; I keep getting "mount error: no mds (Metadata Server) is up".
>> 
>> Below are details 
>> 
>> 
>> # ceph osd dump | grep min_compat_client
>> require_min_compat_client reef
>> min_compat_client reef
>> 
>> # ceph balancer status
>> {
>>     "active": true,
>>     "last_optimize_duration": "0:00:00.002882",
>>     "last_optimize_started": "Tue Jul 15 21:16:41 2025",
>>     "mode": "upmap-read",
>>     "no_optimization_needed": true,
>>     "optimize_result": "Unable to find further optimization, or pool(s) pg_num is decreasing, or distribution is already perfect",
>>     "plans": []
>> }
>> 
>> root@van2-host06n:~# dpkg -l ceph-common
>> Desired=Unknown/Install/Remove/Purge/Hold
>> | 
>> Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
>> |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
>> ||/ Name           Version       Architecture Description
>> +++-==============-=============-============-==================================================================
>> ii  ceph-common    19.2.2-1jammy amd64        common utilities to mount and interact with a ceph storage cluster
>> 
>> root@van2-host06n:~# ceph fs status
>> van2.dev - 0 clients
>> ========
>> RANK  STATE                  MDS                     ACTIVITY     DNS    INOS   DIRS   CAPS
>>  0    active  van2.dev.van2-host06n.wqttsi  Reqs:    0 /s    10     13     12      0
>>  1    active  van2.dev.van2-host07n.fngvlt  Reqs:    0 /s    10     13     11      0
>>         POOL            TYPE     USED  AVAIL
>> cephfs.van2.dev.meta  metadata   230k  53.0T
>> cephfs.van2.dev.data    data       0   53.0T
>>            STANDBY MDS
>> van2-dev.van2-host07n.jnacmh
>> van2-dev.van2-host06n.wpzshe
>> van2-dev.van2-host08n.slrfej
>> MDS version: ceph version 19.2.2 (0eceb0defba60152a8182f7bd87d164b639885b8) squid (stable)
>> 
>> 
>> Systemd unit file
>> ===========
>> 
>> # cat cephfs.mount
>> #
>> # this mounts on /cephfs
>> 
>> [Unit]
>> Description=mount /cephfs
>> After=network-online.target
>> Requires=network-online.target
>> 
>> [Mount]
>> Where=/cephfs
>> What=10.101.1.246:6789,10.101.1.247:6789,10.101.1.248:6789:/
>> Type=ceph
>> Options=fs=van2.dev,name=admin,noatime,_netdev,x-systemd.automount
>> 
>> [Install]
>> WantedBy=multi-user.target
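>> 
>> One thing I am not sure about: as far as I know, x-systemd.automount is only
>> interpreted by systemd-fstab-generator for /etc/fstab entries, so in a
>> hand-written .mount unit it may be a no-op. The usual pattern would be a
>> separate cephfs.automount unit alongside the .mount, something like this
>> (untested sketch):
>> 
>> [Unit]
>> Description=automount /cephfs
>> 
>> [Automount]
>> Where=/cephfs
>> 
>> [Install]
>> WantedBy=multi-user.target
>> 
>> and then enabling cephfs.automount instead of cephfs.mount.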
>> 
>> 
>> root@van2-host06n:~# journalctl -u cephfs.mount
>> Jul 15 19:31:10 van2-host06n.van2.example.com systemd[1]: Mounting mount 
>> /cephfs...
>> Jul 15 19:32:12 van2-host06n.van2.example.com mount[1809096]: mount error: 
>> no mds (Metadata Server) is up. The cluster might be laggy, or you may not 
>> be authorized
>> Jul 15 19:32:12 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Mount process exited, code=exited, status=32/n/a
>> Jul 15 19:32:12 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Failed with result 'exit-code'.
>> Jul 15 19:32:12 van2-host06n.van2.example.com systemd[1]: Failed to mount 
>> mount /cephfs.
>> Jul 15 19:34:59 van2-host06n.van2.example.com systemd[1]: Mounting mount 
>> /cephfs...
>> Jul 15 19:35:59 van2-host06n.van2.example.com mount[1810584]: mount error: 
>> no mds (Metadata Server) is up. The cluster might be laggy, or you may not 
>> be authorized
>> Jul 15 19:35:59 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Mount process exited, code=exited, status=32/n/a
>> Jul 15 19:35:59 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Failed with result 'exit-code'.
>> Jul 15 19:35:59 van2-host06n.van2.example.com systemd[1]: Failed to mount 
>> mount /cephfs.
>> Jul 15 19:36:16 van2-host06n.van2.example.com systemd[1]: Mounting mount 
>> /cephfs...
>> Jul 15 19:37:17 van2-host06n.van2.example.com mount[1811498]: mount error: 
>> no mds (Metadata Server) is up. The cluster might be laggy, or you may not 
>> be authorized
>> Jul 15 19:37:17 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Mount process exited, code=exited, status=32/n/a
>> Jul 15 19:37:17 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Failed with result 'exit-code'.
>> Jul 15 19:37:17 van2-host06n.van2.example.com systemd[1]: Failed to mount 
>> mount /cephfs.
>> Jul 15 19:49:38 van2-host06n.van2.example.com systemd[1]: Mounting mount 
>> /cephfs...
>> Jul 15 19:50:40 van2-host06n.van2.example.com mount[1817486]: mount error: 
>> no mds (Metadata Server) is up. The cluster might be laggy, or you may not 
>> be authorized
>> Jul 15 19:50:40 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Mount process exited, code=exited, status=32/n/a
>> Jul 15 19:50:40 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Failed with result 'exit-code'.
>> Jul 15 19:50:40 van2-host06n.van2.example.com systemd[1]: Failed to mount 
>> mount /cephfs.
>> Jul 15 19:51:58 van2-host06n.van2.example.com systemd[1]: Mounting mount 
>> /cephfs...
>> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Mount process exited, code=killed, status=15/TERM
>> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Failed with result 'signal'.
>> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Unit 
>> process 1817555 (mount.ceph) remains running after unit stopped.
>> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: Unmounted mount 
>> /cephfs.
>> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Found left-over process 1817555 (mount.ceph) in control group while starting 
>> unit. Ignoring.
>> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: This usually 
>> indicates unclean termination of a previous run, or service implementation 
>> deficiencies.
>> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: Mounting mount 
>> /cephfs...
>> Jul 15 19:52:59 van2-host06n.van2.example.com mount[1817555]: mount error: 
>> no mds (Metadata Server) is up. The cluster might be laggy, or you may not 
>> be authorized
>> Jul 15 19:53:26 van2-host06n.van2.example.com mount[1817572]: mount error: 
>> no mds (Metadata Server) is up. The cluster might be laggy, or you may not 
>> be authorized
>> Jul 15 19:53:26 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Mount process exited, code=exited, status=32/n/a
>> Jul 15 19:53:26 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Failed with result 'exit-code'.
>> Jul 15 19:53:26 van2-host06n.van2.example.com systemd[1]: Failed to mount 
>> mount /cephfs.
>> Jul 15 19:56:29 van2-host06n.van2.example.com systemd[1]: Mounting mount 
>> /cephfs...
>> Jul 15 19:57:29 van2-host06n.van2.example.com mount[1817655]: mount error: 
>> no mds (Metadata Server) is up. The cluster might be laggy, or you may not 
>> be authorized
>> Jul 15 19:57:29 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Mount process exited, code=exited, status=32/n/a
>> Jul 15 19:57:29 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Failed with result 'exit-code'.
>> Jul 15 19:57:29 van2-host06n.van2.example.com systemd[1]: Failed to mount 
>> mount /cephfs.
>> Jul 15 19:58:50 van2-host06n.van2.example.com systemd[1]: Mounting mount 
>> /cephfs...
>> Jul 15 19:59:51 van2-host06n.van2.example.com mount[1818287]: mount error: 
>> no mds (Metadata Server) is up. The cluster might be laggy, or you may not 
>> be authorized
>> Jul 15 19:59:51 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Mount process exited, code=exited, status=32/n/a
>> Jul 15 19:59:51 van2-host06n.van2.example.com systemd[1]: cephfs.mount: 
>> Failed with result 'exit-code'.
>> Jul 15 19:59:51 van2-host06n.van2.example.com systemd[1]: Failed to mount 
>> mount /cephfs.
>> Jul 15 20:05:43 van2-host06n.van2.example.com systemd[1]: Mounting mount 
>> /cephfs...
>> Jul 15 20:06:44 van2-host06n.van2.example.com mount[1818912]: mount error: 
>> no mds (Metadata Server) is up. The cluster might be laggy, or you may not 
>> be authorized
>> 
> 

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
