Hi, Anton.
You need to run the OSD with debug_ms = 1/1 and debug_osd = 20/20 to get
detailed information about where the startup time is being spent.
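For example (the osd id here is only an example), one way is to set the
debug levels in ceph.conf on the OSD host before restarting the daemon:

```
[osd]
debug ms = 1/1
debug osd = 20/20
```

then restart the OSD and watch /var/log/ceph/ceph-osd.N.log. On an
already-running daemon the same levels can be injected without a restart:

```
ceph tell osd.0 injectargs '--debug-ms 1/1 --debug-osd 20/20'
```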

2017-07-17 8:26 GMT+03:00 Anton Dmitriev <[email protected]>:

> Hi, all!
>
> After upgrading from 10.2.7 to 10.2.9 I see that restarting OSDs with
> 'restart ceph-osd id=N' or 'restart ceph-osd-all' takes about 10 minutes
> for an OSD to go from DOWN to UP. The situation is the same on all 208
> OSDs across 7 servers.
>
> OSD startup is also very slow after rebooting the servers.
>
> Before the upgrade it took no more than 2 minutes.
>
> Has anyone else run into the same situation?
>
>
> 2017-07-17 08:07:26.895600 7fac2d656840  0 set uid:gid to 4402:4402
> (ceph:ceph)
> 2017-07-17 08:07:26.895615 7fac2d656840  0 ceph version 10.2.9
> (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0), process ceph-osd, pid 197542
> 2017-07-17 08:07:26.897018 7fac2d656840  0 pidfile_write: ignore empty
> --pid-file
> 2017-07-17 08:07:26.906489 7fac2d656840  0 filestore(/var/lib/ceph/osd/ceph-0)
> backend xfs (magic 0x58465342)
> 2017-07-17 08:07:26.917074 7fac2d656840  0 
> genericfilestorebackend(/var/lib/ceph/osd/ceph-0)
> detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config
> option
> 2017-07-17 08:07:26.917092 7fac2d656840  0 
> genericfilestorebackend(/var/lib/ceph/osd/ceph-0)
> detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data
> hole' config option
> 2017-07-17 08:07:26.917112 7fac2d656840  0 
> genericfilestorebackend(/var/lib/ceph/osd/ceph-0)
> detect_features: splice is supported
> 2017-07-17 08:07:27.037031 7fac2d656840  0 
> genericfilestorebackend(/var/lib/ceph/osd/ceph-0)
> detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
> 2017-07-17 08:07:27.037154 7fac2d656840  0 
> xfsfilestorebackend(/var/lib/ceph/osd/ceph-0)
> detect_feature: extsize is disabled by conf
> 2017-07-17 08:15:17.839072 7fac2d656840  0 filestore(/var/lib/ceph/osd/ceph-0)
> mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
> 2017-07-17 08:15:20.150446 7fac2d656840  0 <cls>
> cls/hello/cls_hello.cc:305: loading cls_hello
> 2017-07-17 08:15:20.152483 7fac2d656840  0 <cls>
> cls/cephfs/cls_cephfs.cc:202: loading cephfs_size_scan
> 2017-07-17 08:15:20.210428 7fac2d656840  0 osd.0 224167 crush map has
> features 2200130813952, adjusting msgr requires for clients
> 2017-07-17 08:15:20.210443 7fac2d656840  0 osd.0 224167 crush map has
> features 2200130813952 was 8705, adjusting msgr requires for mons
> 2017-07-17 08:15:20.210448 7fac2d656840  0 osd.0 224167 crush map has
> features 2200130813952, adjusting msgr requires for osds
> 2017-07-17 08:15:58.902173 7fac2d656840  0 osd.0 224167 load_pgs
> 2017-07-17 08:16:19.083406 7fac2d656840  0 osd.0 224167 load_pgs opened
> 242 pgs
> 2017-07-17 08:16:19.083969 7fac2d656840  0 osd.0 224167 using 0 op queue
> with priority op cut off at 64.
> 2017-07-17 08:16:19.109547 7fac2d656840 -1 osd.0 224167 log_to_monitors
> {default=true}
> 2017-07-17 08:16:19.522448 7fac2d656840  0 osd.0 224167 done with init,
> starting boot process
>
> --
> Dmitriev Anton
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
