The output of 'ceph features' (or the mon session dump) does not
tell you that a client is actually old. The release name shown there
is only inferred from the advertised feature bitmask, and since few
distinguishing feature bits were added after luminous, newer clients
routinely map to "luminous" as well. This has been discussed multiple
times on this list and is not unusual.
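For example, 'ceph features' only groups connections by feature
bitmask; a sketch of the output shape from memory (your groups,
bits and counts will differ):

# ceph features
{
    "mon": [
        {
            "features": "0x3f01cfb8ffedffff",
            "release": "luminous",
            "num": 3
        }
    ],
    ...
}

The "release" field is just the release inferred from those bits, so
an octopus client can legitimately show up as "luminous".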
Regarding the insecure global_id reclaim: does your cluster warn
about it? Can you share these config values?
ceph config get mon mon_warn_on_insecure_global_id_reclaim_allowed
ceph config get mon mon_warn_on_insecure_global_id_reclaim
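If those warnings are enabled (they are on by default), an insecure
setup should surface in the health output, something like this
(wording from memory):

# ceph health detail
HEALTH_WARN mons are allowing insecure global_id reclaim
[WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are allowing insecure global_id reclaim

It is also worth checking the flag itself:

# ceph config get mon auth_allow_insecure_global_id_reclaim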
Quoting Sergio Rabellino <rabell...@di.unito.it>:
Thanks for your prompt reply; the dump follows:
# ceph osd dump |grep require_osd_release
require_osd_release octopus
We are aware of this setting and change it at each release upgrade
(in the last few days we did mimic -> nautilus -> octopus).
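(For reference, the step we run after each upgrade is along these
lines; exact invocation from memory:

# ceph osd require-osd-release octopus
)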
The very strange thing is that we upgraded luminous -> mimic more
than two years ago.
About librados, I get:
# rados -v
ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4)
octopus (stable)
so it seems fine to me.
On 15/05/2025 11:01, Sinan Polat wrote:
What is the output of:
ceph osd dump | grep require_osd_release
Have you upgraded OpenStack (librados) as well?
On Thu, 15 May 2025 at 10:50, Sergio Rabellino
<rabell...@di.unito.it> wrote:
Dear list,
we are upgrading our ceph infrastructure from mimic to octopus
(please be kind, we know that we are working with "old" tools, but
these ceph releases are tied to our openstack installation needs),
and _*all*_ the ceph actors (mon/mgr/osd/rgw; no mds as we do not
serve a filesystem) were upgraded fine, as follows:
> # ceph -v
> ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4)
> octopus (stable)
We're using an ubuntu/juju orchestrator for managing ceph (and
openstack too).
It all seems OK, but if I ask a mon to dump its sessions, we see
that _all of them_ are on luminous:
> "con_features": 4540138292840890367,
> "con_features_hex": "3f01cfb8ffedffff",
> "con_features_release": "luminous",
We found this oddity when we tried to unset the "insecure global_id
reclaim" flag: everything broke, so we had to re-enable the flag.
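(Roughly what we did, option name as we used it:

# ceph config set mon auth_allow_insecure_global_id_reclaim false
... at which point clients immediately lost access, so:
# ceph config set mon auth_allow_insecure_global_id_reclaim true
)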
All ceph network layers are "closed", so we are in no great hurry to
remove the flag, but we would like to understand whether this is a
known problem or a mistake on our side.
Any hints?
Thanks in advance.
--
ing. Sergio Rabellino
Università degli Studi di Torino
Dipartimento di Informatica
Tecnico di Ricerca
Cel +39-342-529-5409 Tel +39-011-670-6701 Fax +39-011-751603
C.so Svizzera, 185 - 10149 - Torino
http://www.di.unito.it
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io