In the logs it says:
2020-03-25T22:10:00.823+0100 7f0bd5320e00 0 <cls>
/build/ceph-15.2.0/src/cls/hello/cls_hello.cc:312: loading cls_hello
2020-03-25T22:10:00.823+0100 7f0bd5320e00 0 osd.32 57223 crush map
has features 288232576282525696, adjusting msgr requires for clients
2020-03-25T22:10:00.823+0100 7f0bd5320e00 0 osd.32 57223 crush map
has features 288232576282525696 was 8705, adjusting msgr requires for
mons
2020-03-25T22:10:00.823+0100 7f0bd5320e00 0 osd.32 57223 crush map
has features 1008808516661821440, adjusting msgr requires for osds
2020-03-25T22:10:00.823+0100 7f0bd5320e00 1 osd.32 57223
check_osdmap_features require_osd_release unknown -> luminous
2020-03-25T22:10:04.695+0100 7f0bd5320e00 0 osd.32 57223 load_pgs
2020-03-25T22:10:10.907+0100 7f0bcc01d700 4 rocksdb:
[db/compaction_job.cc:1332] [default] [JOB 3] Generated table #59886:
2107241 keys, 72886355 bytes
2020-03-25T22:10:10.907+0100 7f0bcc01d700 4 rocksdb: EVENT_LOG_v1
{"time_micros": 1585170610911598, "cf_name": "default", "job": 3,
"event": "table_file_creation", "file_number": 59886, "file_size":
72886355, "table_properties": {"data_size": 67112666, "index_size":
504659, "filter_size": 5268165, "raw_key_size": 38673953,
"raw_average_key_size": 18, "raw_value_size": 35746098,
"raw_average_value_size": 16, "num_data_blocks": 16488, "num_entries":
2107241, "filter_policy_name": "rocksdb.BuiltinBloomFilter"}}
2020-03-25T22:10:13.047+0100 7f0bd5320e00 0 osd.32 57223 load_pgs
opened 230 pgs
2020-03-25T22:10:13.047+0100 7f0bd5320e00 -1 osd.32 57223
log_to_monitors {default=true}
2020-03-25T22:10:13.107+0100 7f0bd5320e00 0 osd.32 57223 done with
init, starting boot process
2020-03-25T22:10:13.107+0100 7f0bd5320e00 1 osd.32 57223 start_boot
Does the line:
check_osdmap_features require_osd_release unknown -> luminous
mean it thinks the local OSD itself is on luminous?
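One way I could double-check what that map epoch actually carries would be to
pull it from the mons and print it (just a sketch; epoch 57223 is the one from
the log above, the /tmp path is arbitrary):

  ceph osd getmap 57223 -o /tmp/osdmap.57223
  osdmaptool --print /tmp/osdmap.57223 | grep require_osd_release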
On Wed, Mar 25, 2020 at 8:12 PM Ml Ml <[email protected]> wrote:
>
> Hello List,
>
> I followed:
> https://ceph.io/releases/v15-2-0-octopus-released/
>
> I came from a healthy Nautilus cluster and am stuck at:
> 5.) Upgrade all OSDs by installing the new packages and restarting
> the ceph-osd daemons on all OSD hosts
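> (For reference, I read that step as roughly the following per host; just a
> sketch, assuming Debian/Ubuntu packages and the stock systemd units:
>
>   apt update && apt install -y ceph-osd
>   systemctl restart ceph-osd.target
> )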
>
> When I try to start an OSD like this, I get:
> /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser ceph --setgroup ceph
> ...
> 2020-03-25T20:11:03.292+0100 7f2762874e00 -1 osd.32 57223
> log_to_monitors {default=true}
> ceph-osd: /build/ceph-15.2.0/src/osd/PeeringState.cc:109: void
> PGPool::update(ceph::common::CephContext*, OSDMapRef): Assertion
> `map->require_osd_release >= ceph_release_t::mimic' failed.
> ceph-osd: /build/ceph-15.2.0/src/osd/PeeringState.cc:109: void
> PGPool::update(ceph::common::CephContext*, OSDMapRef): Assertion
> `map->require_osd_release >= ceph_release_t::mimic' failed.
> *** Caught signal (Aborted) **
> in thread 7f274854f700 thread_name:tp_osd_tp
> Aborted
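> (I can re-run it with more verbose logging if that helps; something like
> this, assuming the debug settings can be passed as command-line overrides:
>
>   /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser ceph --setgroup ceph \
>     --debug_osd 20 --debug_ms 1 2>&1 | tee /tmp/osd.32.log
> )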
>
>
>
> My current status:
>
> root@ceph03:~# ceph osd tree
> ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
> -1 60.70999 root default
> -2 20.25140 host ceph01
> 0 hdd 1.71089 osd.0 up 1.00000 1.00000
> 8 hdd 2.67029 osd.8 up 1.00000 1.00000
> 11 hdd 1.59999 osd.11 up 1.00000 1.00000
> 12 hdd 1.59999 osd.12 up 1.00000 1.00000
> 14 hdd 2.79999 osd.14 up 1.00000 1.00000
> 18 hdd 1.59999 osd.18 up 1.00000 1.00000
> 22 hdd 2.79999 osd.22 up 1.00000 1.00000
> 23 hdd 2.79999 osd.23 up 1.00000 1.00000
> 26 hdd 2.67029 osd.26 up 1.00000 1.00000
> -3 23.05193 host ceph02
> 2 hdd 2.67029 osd.2 up 1.00000 1.00000
> 3 hdd 2.00000 osd.3 up 1.00000 1.00000
> 7 hdd 2.67029 osd.7 up 1.00000 1.00000
> 9 hdd 2.67029 osd.9 up 1.00000 1.00000
> 13 hdd 2.00000 osd.13 up 1.00000 1.00000
> 16 hdd 1.59999 osd.16 up 1.00000 1.00000
> 19 hdd 2.38409 osd.19 up 1.00000 1.00000
> 24 hdd 2.67020 osd.24 up 1.00000 1.00000
> 25 hdd 1.71649 osd.25 up 1.00000 1.00000
> 28 hdd 2.67029 osd.28 up 1.00000 1.00000
> -4 17.40666 host ceph03
> 5 hdd 1.71660 osd.5 down 1.00000 1.00000
> 6 hdd 1.71660 osd.6 down 1.00000 1.00000
> 10 hdd 2.67029 osd.10 down 1.00000 1.00000
> 15 hdd 2.00000 osd.15 down 1.00000 1.00000
> 17 hdd 1.20000 osd.17 down 1.00000 1.00000
> 20 hdd 1.71649 osd.20 down 1.00000 1.00000
> 21 hdd 2.00000 osd.21 down 1.00000 1.00000
> 27 hdd 1.71649 osd.27 down 1.00000 1.00000
> 32 hdd 2.67020 osd.32 down 1.00000 1.00000
>
> root@ceph03:~# ceph osd dump | grep require_osd_release
> require_osd_release nautilus
>
> root@ceph03:~# ceph osd versions
> {
> "ceph version 14.2.8 (88c3b82e8bc76d3444c2d84a30c4a380d6169d46)
> nautilus (stable)": 19
> }
>
> root@ceph03:~# ceph mon dump | grep min_mon_release
> dumped monmap epoch 12
> min_mon_release 15 (octopus)
>
>
> ceph versions
> {
> "mon": {
> "ceph version 15.2.0
> (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 3
> },
> "mgr": {
> "ceph version 15.2.0
> (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 3
> },
> "osd": {
> "ceph version 14.2.8
> (88c3b82e8bc76d3444c2d84a30c4a380d6169d46) nautilus (stable)": 19
> },
> "mds": {
> "ceph version 15.2.0
> (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 3
> },
> "overall": {
> "ceph version 14.2.8
> (88c3b82e8bc76d3444c2d84a30c4a380d6169d46) nautilus (stable)": 19,
> "ceph version 15.2.0
> (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus (rc)": 9
> }
> }
>
>
> Why does it complain about map->require_osd_release >= ceph_release_t::mimic?
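> (I assume the flag could simply be re-asserted on the mons with
>   ceph osd require-osd-release nautilus
> but I have not tried that yet.)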
>
> Cheers,
> Michael