You can't downgrade from Luminous to Kraken... well, not officially at least.

It might somehow work, but you'd need to re-create all the services. For the
mons, for example: delete a mon, re-create it on the old version, let it sync,
and repeat for each one.
Still a bad idea.
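
If you really wanted to try that, the per-mon dance would look roughly like
this. Untested sketch, not a supported procedure: the mon ID cn1 is taken from
the log below, the paths are the defaults, and it assumes the packages on that
node have already been downgraded and the remaining mons keep quorum:

  # stop and remove the Luminous mon, keeping a backup of its store
  systemctl stop ceph-mon@cn1
  ceph mon remove cn1
  mv /var/lib/ceph/mon/ceph-cn1 /var/lib/ceph/mon/ceph-cn1.bak

  # re-create it empty on the old version and let it sync from the others
  ceph mon getmap -o /tmp/monmap
  ceph auth get mon. -o /tmp/mon.keyring
  ceph-mon --mkfs -i cn1 --monmap /tmp/monmap --keyring /tmp/mon.keyring
  chown -R ceph:ceph /var/lib/ceph/mon/ceph-cn1
  systemctl start ceph-mon@cn1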

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Aug 21, 2019 at 1:37 PM nokia ceph <nokiacephus...@gmail.com> wrote:
>
> Hi Team,
>
> One of our old customers had Kraken and is going to upgrade to Luminous. As
> part of the process they are also requesting a downgrade procedure.
> Kraken used leveldb for the ceph-mon data; from Luminous it changed to
> rocksdb. The upgrade works without any issues.
>
> When we downgrade, ceph-mon does not start and the mon kv_backend does not
> move away from rocksdb.
>
> After the downgrade, with kv_backend still set to rocksdb, ceph-mon throws
> the following error while trying to load data from rocksdb:
>
> 2019-08-21 11:22:45.200188 7f1a0406f7c0  4 rocksdb: Recovered from manifest 
> file:/var/lib/ceph/mon/ceph-cn1/store.db/MANIFEST-000716 
> succeeded,manifest_file_number is 716, next_file_number is 718, last_sequence 
> is 311614, log_number is 0,prev_log_number is 0,max_column_family is 0
>
> 2019-08-21 11:22:45.200198 7f1a0406f7c0  4 rocksdb: Column family [default] 
> (ID 0), log number is 715
>
> 2019-08-21 11:22:45.200247 7f1a0406f7c0  4 rocksdb: EVENT_LOG_v1 
> {"time_micros": 1566386565200240, "job": 1, "event": "recovery_started", 
> "log_files": [717]}
> 2019-08-21 11:22:45.200252 7f1a0406f7c0  4 rocksdb: Recovering log #717 mode 2
> 2019-08-21 11:22:45.200282 7f1a0406f7c0  4 rocksdb: Creating manifest 719
>
> 2019-08-21 11:22:45.201222 7f1a0406f7c0  4 rocksdb: EVENT_LOG_v1 
> {"time_micros": 1566386565201218, "job": 1, "event": "recovery_finished"}
> 2019-08-21 11:22:45.202582 7f1a0406f7c0  4 rocksdb: DB pointer 0x55d4dacf0000
> 2019-08-21 11:22:45.202726 7f1a0406f7c0 -1 ERROR: on disk data includes 
> unsupported features: compat={},rocompat={},incompat={9=luminous ondisk 
> layout}
> 2019-08-21 11:22:45.202735 7f1a0406f7c0 -1 error checking features: (1) 
> Operation not permitted
>
> We changed the kv_backend file inside /var/lib/ceph/mon/ceph-cn1 to leveldb,
> and ceph-mon then failed with the following error:
>
> 2019-08-21 11:24:07.922978 7fc5a25de7c0 -1 WARNING: the following dangerous 
> and experimental features are enabled: bluestore,rocksdb
> 2019-08-21 11:24:07.922983 7fc5a25de7c0  0 set uid:gid to 167:167 (ceph:ceph)
> 2019-08-21 11:24:07.923009 7fc5a25de7c0  0 ceph version 11.2.0 
> (f223e27eeb35991352ebc1f67423d4ebc252adb7), process ceph-mon, pid 3509050
> 2019-08-21 11:24:07.923050 7fc5a25de7c0  0 pidfile_write: ignore empty 
> --pid-file
> 2019-08-21 11:24:07.944867 7fc5a25de7c0 -1 WARNING: the following dangerous 
> and experimental features are enabled: bluestore,rocksdb
> 2019-08-21 11:24:07.950304 7fc5a25de7c0  0 load: jerasure load: lrc load: isa
> 2019-08-21 11:24:07.950563 7fc5a25de7c0 -1 error opening mon data directory 
> at '/var/lib/ceph/mon/ceph-cn1': (22) Invalid argument
>
> Is there any way to toggle the ceph-mon DB between leveldb and rocksdb?
> Adding mon_keyvaluedb = leveldb and filestore_omap_backend = leveldb to
> ceph.conf did not work either.
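>
> For reference, the ceph.conf change tried was roughly the following (the
> section placement shown here is only an assumption):
>
>     # sections assumed; option names and values are exactly what was set
>     [mon]
>     mon_keyvaluedb = leveldb
>     [osd]
>     filestore_omap_backend = leveldb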
> thanks,
> Muthu
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
