I would also recommend bringing all PGs into an active+clean state before
you upgrade the cluster.
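
For the single inconsistent PG behind the scrub error, a rough sketch of how
I would handle it (the <pgid> below is a placeholder, take the real id from
your own cluster output):

  ceph health detail                                        # shows which PG carries the scrub error
  rados list-inconsistent-obj <pgid> --format=json-pretty   # lists the affected objects/shards
  ceph pg repair <pgid>                                     # triggers a repair of that PG

Check SMART/dmesg on the OSDs involved before repairing; if one of the disks
is failing, a repair only hides the underlying problem.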





On Sat, Sep 14, 2024 at 14:18, Ex Calibur <[email protected]> wrote:

> Hello,
>
> I'm following this guide to upgrade our Ceph cluster:
> https://ainoniwa.net/pelican/2021-08-11a.html (Proxmox VE 6.4 Ceph upgrade
> Nautilus to Octopus)
> It's a requirement for upgrading our Proxmox environment.
>
> Now I've reached the point in that guide where I have to "Upgrade all
> CephFS MDS daemons"
>
> But before I started this step, I checked the status.
>
> root@pmnode1:~# ceph status
>   cluster:
>     id:     xxxxxxxxxxxxxxx
>     health: HEALTH_ERR
>             noout flag(s) set
>             1 scrub errors
>             Possible data damage: 1 pg inconsistent
>             2 pools have too many placement groups
>
>   services:
>     mon: 3 daemons, quorum pmnode1,pmnode2,pmnode3 (age 19h)
>     mgr: pmnode2(active, since 19h), standbys: pmnode1
>     osd: 15 osds: 12 up (since 12h), 12 in (since 19h)
>          flags noout
>
>   data:
>     pools:   3 pools, 513 pgs
>     objects: 398.46k objects, 1.5 TiB
>     usage:   4.5 TiB used, 83 TiB / 87 TiB avail
>     pgs:     512 active+clean
>              1   active+clean+inconsistent
>
> root@pmnode1:~# ceph mds metadata
> []
>
>
> As you can see, there is no MDS service running.
>
> What can be wrong and how to solve this?
>
> Thank you in advance.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
