Hello List,
I followed:
https://ceph.io/releases/v15-2-0-octopus-released/
I came from a healthy Nautilus cluster and am now stuck at:
5.) Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts
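(For context, the release notes boil this step down to installing the new packages and then restarting ceph-osd.target. On a Debian/Ubuntu-style host like mine that is roughly the following; exact package manager invocation may differ on other distributions:)

# install the 15.2.0 packages (package names depend on the distro)
apt update && apt install ceph-osd
# restart every OSD daemon on this host
systemctl restart ceph-osd.target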
When I try to start an OSD like this, I get:
/usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser ceph --setgroup ceph
...
2020-03-25T20:11:03.292+0100 7f2762874e00 -1 osd.32 57223 log_to_monitors {default=true}
ceph-osd: /build/ceph-15.2.0/src/osd/PeeringState.cc:109: void PGPool::update(ceph::common::CephContext*, OSDMapRef): Assertion `map->require_osd_release >= ceph_release_t::mimic' failed.
ceph-osd: /build/ceph-15.2.0/src/osd/PeeringState.cc:109: void PGPool::update(ceph::common::CephContext*, OSDMapRef): Assertion `map->require_osd_release >= ceph_release_t::mimic' failed.
*** Caught signal (Aborted) **
 in thread 7f274854f700 thread_name:tp_osd_tp
Aborted
My current status:
root@ceph03:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 60.70999 root default
-2 20.25140 host ceph01
0 hdd 1.71089 osd.0 up 1.00000 1.00000
8 hdd 2.67029 osd.8 up 1.00000 1.00000
11 hdd 1.59999 osd.11 up 1.00000 1.00000
12 hdd 1.59999 osd.12 up 1.00000 1.00000
14 hdd 2.79999 osd.14 up 1.00000 1.00000
18 hdd 1.59999 osd.18 up 1.00000 1.00000
22 hdd 2.79999 osd.22 up 1.00000 1.00000
23 hdd 2.79999 osd.23 up 1.00000 1.00000
26 hdd 2.67029 osd.26 up 1.00000 1.00000
-3 23.05193 host ceph02
2 hdd 2.67029 osd.2 up 1.00000 1.00000
3 hdd 2.00000 osd.3 up 1.00000 1.00000
7 hdd 2.67029 osd.7 up 1.00000 1.00000
9 hdd 2.67029 osd.9 up 1.00000 1.00000
13 hdd 2.00000 osd.13 up 1.00000 1.00000
16 hdd 1.59999 osd.16 up 1.00000 1.00000
19 hdd 2.38409 osd.19 up 1.00000 1.00000
24 hdd 2.67020 osd.24 up 1.00000 1.00000
25 hdd 1.71649 osd.25 up 1.00000 1.00000
28 hdd 2.67029 osd.28 up 1.00000 1.00000
-4 17.40666 host ceph03
5 hdd 1.71660 osd.5 down 1.00000 1.00000
6 hdd 1.71660 osd.6 down 1.00000 1.00000
10 hdd 2.67029 osd.10 down 1.00000 1.00000
15 hdd 2.00000 osd.15 down 1.00000 1.00000
17 hdd 1.20000 osd.17 down 1.00000 1.00000
20 hdd 1.71649 osd.20 down 1.00000 1.00000
21 hdd 2.00000 osd.21 down 1.00000 1.00000
27 hdd 1.71649 osd.27 down 1.00000 1.00000
32 hdd 2.67020 osd.32 down 1.00000 1.00000
root@ceph03:~# ceph osd dump | grep require_osd_release
require_osd_release nautilus
root@ceph03:~# ceph osd versions
{
"ceph version 14.2.8 (88c3b82e8bc76d3444c2d84a30c4a380d6169d46)
nautilus (stable)": 19
}
root@ceph03:~# ceph mon dump | grep min_mon_release
dumped monmap epoch 12
min_mon_release 15 (octopus)
root@ceph03:~# ceph versions
{
    "mon": {
        "ceph version 15.2.0 (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 3
    },
    "mgr": {
        "ceph version 15.2.0 (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 3
    },
    "osd": {
        "ceph version 14.2.8 (88c3b82e8bc76d3444c2d84a30c4a380d6169d46) nautilus (stable)": 19
    },
    "mds": {
        "ceph version 15.2.0 (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 3
    },
    "overall": {
        "ceph version 14.2.8 (88c3b82e8bc76d3444c2d84a30c4a380d6169d46) nautilus (stable)": 19,
        "ceph version 15.2.0 (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 9
    }
}
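So mons, mgrs and MDS are on 15.2.0, while this host's OSDs refuse to come back up. require_osd_release is still "nautilus" because the release notes say the final step,

ceph osd require-osd-release octopus

is only to be run once all OSDs have been upgraded, so I have not touched that yet.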
Why does it complain about map->require_osd_release >= ceph_release_t::mimic?
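In case it helps with debugging: the backtrace says the assertion fires in PGPool::update() while the OSD is processing an osdmap, so I assume it is looking at some map epoch rather than the current cluster state. A rough way to inspect what an individual epoch has recorded (assuming the mons still keep that epoch around; paths below are just examples) would be:

# fetch a specific osdmap epoch (e.g. the 57223 seen in the log above) and inspect its flags
ceph osd getmap 57223 -o /tmp/osdmap.57223
osdmaptool --print /tmp/osdmap.57223 | grep require_osd_release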
Cheers,
Michael
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]