No, the keys are also fine; if the keys were not fine, ceph-fuse should not
work either.
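
For reference, a ceph-fuse mount of the same filesystem would look roughly
like the sketch below (the exact command used is not shown in this thread;
the client name and mount point are placeholders):

ceph-fuse -n client.proxmoxvolfs --client_fs van2-dev /mnt/cephfs

Since ceph-fuse authenticates with the same keyring, an empty or wrong key
would break it as well.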

Regards
Dev
On Wed, 23 Jul 2025 at 12:02 AM, Patrick Begou <
patrick.be...@univ-grenoble-alpes.fr> wrote:

> Could it be a mismatch with the mount reference ?
> What about trying:
>
> mount -t ceph 10.101.1.246,10.101.1.247,10.101.1.248:/   /mnt/cephfs   -o
> name=proxmoxvolfs,fs=van2-dev,secret=AQBw8X9o8b3GHRAAw58xzwqCP6Zt+WZZbjdb/A==
>
> Did you check your key file content? (I've made this mistake with an empty
> key file generated by a wrong command 🙁)
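>
> For instance (assuming the client in the mount command is client.proxmoxvolfs
> and its keyring sits in the usual place; both names are guesses here), the key
> stored in the cluster and the one on the client can be compared with:
>
> ceph auth get-key client.proxmoxvolfs
> cat /etc/ceph/ceph.client.proxmoxvolfs.keyring
>
> Both should show the same, non-empty key.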
>
> Patrick
>
> On 22/07/2025 at 22:27, Devender Singh wrote:
>
> # mount -t ceph 10.101.1.246,10.101.1.247,10.101.1.248:/volumes/_nogroup/test
>   /mnt/cephfs   -o
> name=proxmoxvolfs,fs=van2-dev,secret=AQBw8X9o8b3GHRAAw58xzwqCP6Zt+WZZbjdb/A==
> mount error: no mds (Metadata Server) is up. The cluster might be laggy,
> or you may not be authorized
>
> No luck
>
> Regards
> Dev
>
> On Jul 22, 2025, at 1:19 PM, Devender Singh <deven...@netskrt.io> wrote:
>
> Thanks Patrick
>
> But still the same issue
>
> # mount -t ceph 
> 10.101.1.246,10.101.1.247,10.101.1.248:/volumes/_nogroup/test/e393e142-977f-4d4a-817d-41bb721cb09c
>   /mnt/cephfs   -o name=proxmoxvolfs,fs=van2-dev
>
> mount error: no mds (Metadata Server) is up. The cluster might be laggy,
> or you may not be authorized
>
> Whereas ceph-fuse is working…
>
> Regards
> Dev
>
>
>
> On Jul 22, 2025, at 2:37 AM, Patrick Begou
> <patrick.be...@univ-grenoble-alpes.fr> wrote:
>
> On 22/07/2025 at 11:15, Patrick Begou wrote:
>
> Hi Devender,
>
> I also ran into some mount trouble with Squid 19.2.2 after adding a second
> cephfs, but I'm new to Ceph.
> I was getting the same message as yours:
>
> mount error: no mds (Metadata Server) is up. The cluster might be laggy,
> or you may not be authorized
>
> I had to add the cephfs volume name to the mount options to avoid a mismatch
> between the client and the volume.
> This is the command I use on the client, with my 5-node ceph cluster, to
> mount the CephFS storage by hand for testing:
>
> sudo mount -t ceph
> 172.30.1.97,172.30.1.98,172.30.1.99,172.30.1.100,172.30.1.101:/ /mnt -o
> name=*sebfolder*,fs=seb_vol -> my keefile is
> /etc/ceph/ceph.client.sebfolder.keyring -> my cephfs volume is seb_vol
>
>
> Should be read:
>
> sudo mount -t ceph
> 172.30.1.97,172.30.1.98,172.30.1.99,172.30.1.100,172.30.1.101:/ /mnt -o
> name=sebfolder,fs=seb_vol
>
> * my keyfile is /etc/ceph/ceph.client.sebfolder.keyring
> * my cephfs volume is seb_vol
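>
> For reference, the caps for such a client can be created in one step with
> something like (same fs name and client id as above):
>
> ceph fs authorize seb_vol client.sebfolder / rw
>
> which prints a keyring with mon/mds/osd caps restricted to that one filesystem.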
>
>
>
> It works
>
> Patrick
>
> On 21/07/2025 at 23:32, Devender Singh wrote:
>
> Hello Dario
>
> Thanks for your reply.
> No luck using those either.
>
> Also, I want to mount cephfs on Proxmox; I am running into a mount issue
> there as well.
>
> # mount -t ceph admin@.van2-dev=/ /cephfs/ -o
> noatime,ms_mode=prefer-crc,_netdev
>
> mount error: no mds (Metadata Server) is up. The cluster might be laggy,
> or you may not be authorized
>
>
> I am using ceph version 19.2.2 and client is also 19.2.2.
>
> Regards
> Dev
>
> On Jul 18, 2025, at 8:11 AM, Dario Graña <dgr...@pic.es> wrote:
>
> Hi Devender,
>
> I think the problem is that you are using the old syntax; try with the new
> one [1]:
>
> mount -t ceph admin@.van2-dev=/ /cephfs/ -o
> noatime,ms_mode=prefer-crc,_netdev
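>
> If the client has no local /etc/ceph/ceph.conf, the cluster fsid and the
> monitors may need to be given explicitly; a rough sketch (the fsid and the
> secret file path are placeholders):
>
> mount -t ceph admin@<cluster-fsid>.van2-dev=/ /cephfs -o mon_addr=10.101.1.246/10.101.1.247/10.101.1.248,secretfile=/etc/ceph/admin.secret,noatime,ms_mode=prefer-crc,_netdev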
>
> Regards.
>
> [1]
> https://docs.ceph.com/en/squid/man/8/mount.ceph/
>
> On Wed, Jul 16, 2025 at 12:11 AM Devender Singh <deven...@netskrt.io>
> wrote:
>
> Tried the below too, but nothing works. Any suggestions?
>
> # mount -t ceph 10.101.1.246,10.101.1.247,10.101.1.248:/ /mnt/cephfs   -o
> name=admin,fs=van2.dev,noatime,_netdev,ms_mode=prefer-crc
>
> mount error: no mds (Metadata Server) is up. The cluster might be laggy,
> or you may not be authorized
>
> # mount -t ceph :/ /cephfs/ -o
> noatime,ms_mode=prefer-crc,_netdev,fs=van2-dev
> mount error: no mds (Metadata Server) is up. The cluster might be laggy,
> or you may not be authorized
>
> Regards
> Dev
>
> On Jul 15, 2025, at 2:17 PM, Devender Singh <deven...@netskrt.io> wrote:
>
> Hello all
>
> I have cephfs working fine with running MDS daemons, but I am still not able
> to mount; I am getting the error:
>
> mount error: no mds (Metadata Server) is up
>
> Below are details
>
>
> # ceph osd dump | grep min_compat_client
> require_min_compat_client reef
> min_compat_client reef
>
> # ceph balancer status
> {
>     "active": true,
>     "last_optimize_duration": "0:00:00.002882",
>     "last_optimize_started": "Tue Jul 15 21:16:41 2025",
>     "mode": "upmap-read",
>     "no_optimization_needed": true,
>     "optimize_result": "Unable to find further optimization, or pool(s)
>
> pg_num is decreasing, or distribution is already perfect",
>
>     "plans": []
> }
>
> root@van2-host06n:~# dpkg -l ceph-common
> Desired=Unknown/Install/Remove/Purge/Hold
> | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
> |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
> ||/ Name           Version       Architecture Description
> +++-==============-=============-============-===========================================================
> ii  ceph-common    19.2.2-1jammy amd64        common utilities to mount and interact with a ceph storage cluster
>
> root@van2-host06n:~# ceph fs status
> van2.dev - 0 clients
> ========
> RANK  STATE             MDS                     ACTIVITY     DNS    INOS   DIRS   CAPS
>  0    active  van2.dev.van2-host06n.wqttsi  Reqs:    0 /s    10     13     12      0
>  1    active  van2.dev.van2-host07n.fngvlt  Reqs:    0 /s    10     13     11      0
>         POOL            TYPE     USED  AVAIL
> cephfs.van2.dev.meta  metadata   230k  53.0T
> cephfs.van2.dev.data    data       0   53.0T
>            STANDBY MDS
> van2-dev.van2-host07n.jnacmh
> van2-dev.van2-host06n.wpzshe
> van2-dev.van2-host08n.slrfej
> MDS version: ceph version 19.2.2 (0eceb0defba60152a8182f7bd87d164b639885b8) squid (stable)
>
>
> Systemd file
> ===========
>
> # cat cephfs.mount
> #
> # this mounts on /cephfs
>
> [Unit]
> Description=mount /cephfs
> After=network-online.target
> Requires=network-online.target
>
> [Mount]
> Where=/cephfs
> What=10.101.1.246:6789,10.101.1.247:6789,10.101.1.248:6789:/
> Type=ceph
> Options=fs=van2.dev,name=admin,noatime,_netdev,x-systemd.automount
>
> [Install]
> WantedBy=multi-user.target
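>
> (For comparison, an equivalent /etc/fstab line with the same options would be
> roughly:
>
> 10.101.1.246:6789,10.101.1.247:6789,10.101.1.248:6789:/  /cephfs  ceph  fs=van2.dev,name=admin,noatime,_netdev  0 0
>
> both paths end up calling the same mount.ceph helper.)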
>
>
> root@van2-host06n:~# journalctl -u cephfs.mount
> Jul 15 19:31:10 van2-host06n.van2.example.com systemd[1]: Mounting mount /cephfs...
> Jul 15 19:32:12 van2-host06n.van2.example.com mount[1809096]: mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
> Jul 15 19:32:12 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Mount process exited, code=exited, status=32/n/a
> Jul 15 19:32:12 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Failed with result 'exit-code'.
> Jul 15 19:32:12 van2-host06n.van2.example.com systemd[1]: Failed to mount mount /cephfs.
> Jul 15 19:34:59 van2-host06n.van2.example.com systemd[1]: Mounting mount /cephfs...
> Jul 15 19:35:59 van2-host06n.van2.example.com mount[1810584]: mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
> Jul 15 19:35:59 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Mount process exited, code=exited, status=32/n/a
> Jul 15 19:35:59 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Failed with result 'exit-code'.
> Jul 15 19:35:59 van2-host06n.van2.example.com systemd[1]: Failed to mount mount /cephfs.
> Jul 15 19:36:16 van2-host06n.van2.example.com systemd[1]: Mounting mount /cephfs...
> Jul 15 19:37:17 van2-host06n.van2.example.com mount[1811498]: mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
> Jul 15 19:37:17 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Mount process exited, code=exited, status=32/n/a
> Jul 15 19:37:17 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Failed with result 'exit-code'.
> Jul 15 19:37:17 van2-host06n.van2.example.com systemd[1]: Failed to mount mount /cephfs.
> Jul 15 19:49:38 van2-host06n.van2.example.com systemd[1]: Mounting mount /cephfs...
> Jul 15 19:50:40 van2-host06n.van2.example.com mount[1817486]: mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
> Jul 15 19:50:40 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Mount process exited, code=exited, status=32/n/a
> Jul 15 19:50:40 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Failed with result 'exit-code'.
> Jul 15 19:50:40 van2-host06n.van2.example.com systemd[1]: Failed to mount mount /cephfs.
> Jul 15 19:51:58 van2-host06n.van2.example.com systemd[1]: Mounting mount /cephfs...
> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Mount process exited, code=killed, status=15/TERM
> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Failed with result 'signal'.
> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Unit process 1817555 (mount.ceph) remains running after unit stopped.
> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: Unmounted mount /cephfs.
> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Found left-over process 1817555 (mount.ceph) in control group while starting unit. Ignoring.
> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
> Jul 15 19:52:24 van2-host06n.van2.example.com systemd[1]: Mounting mount /cephfs...
> Jul 15 19:52:59 van2-host06n.van2.example.com mount[1817555]: mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
> Jul 15 19:53:26 van2-host06n.van2.example.com mount[1817572]: mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
> Jul 15 19:53:26 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Mount process exited, code=exited, status=32/n/a
> Jul 15 19:53:26 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Failed with result 'exit-code'.
> Jul 15 19:53:26 van2-host06n.van2.example.com systemd[1]: Failed to mount mount /cephfs.
> Jul 15 19:56:29 van2-host06n.van2.example.com systemd[1]: Mounting mount /cephfs...
> Jul 15 19:57:29 van2-host06n.van2.example.com mount[1817655]: mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
> Jul 15 19:57:29 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Mount process exited, code=exited, status=32/n/a
> Jul 15 19:57:29 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Failed with result 'exit-code'.
> Jul 15 19:57:29 van2-host06n.van2.example.com systemd[1]: Failed to mount mount /cephfs.
> Jul 15 19:58:50 van2-host06n.van2.example.com systemd[1]: Mounting mount /cephfs...
> Jul 15 19:59:51 van2-host06n.van2.example.com mount[1818287]: mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
> Jul 15 19:59:51 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Mount process exited, code=exited, status=32/n/a
> Jul 15 19:59:51 van2-host06n.van2.example.com systemd[1]: cephfs.mount: Failed with result 'exit-code'.
> Jul 15 19:59:51 van2-host06n.van2.example.com systemd[1]: Failed to mount mount /cephfs.
> Jul 15 20:05:43 van2-host06n.van2.example.com systemd[1]: Mounting mount /cephfs...
> Jul 15 20:06:44 van2-host06n.van2.example.com mount[1818912]: mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
