Forgive the wall of text; I shortened it a little. Here is the OSD log from
when I attempt to start the OSD:

2018-08-04 03:53:28.917418 7f3102aa87c0  0
xfsfilestorebackend(/var/lib/ceph/osd/ceph-21) detect_feature: extsize is
disabled by conf
2018-08-04 03:53:28.977564 7f3102aa87c0  0
filestore(/var/lib/ceph/osd/ceph-21) mount: WRITEAHEAD journal mode
explicitly enabled in conf
2018-08-04 03:53:29.001967 7f3102aa87c0 -1 journal FileJournal::_open:
disabling aio for non-block journal.  Use journal_force_aio to force use of
aio anyway
2018-08-04 03:53:29.001981 7f3102aa87c0  1 journal _open
/var/lib/ceph/osd/ceph-21/journal fd 21: 2147483648 bytes, block size 4096
bytes, directio = 1, aio = 0
2018-08-04 03:53:29.002030 7f3102aa87c0  1 journal _open
/var/lib/ceph/osd/ceph-21/journal fd 21: 2147483648 bytes, block size 4096
bytes, directio = 1, aio = 0
2018-08-04 03:53:29.255501 7f3102aa87c0  0 <cls>
cls/hello/cls_hello.cc:271: loading cls_hello
2018-08-04 03:53:29.335038 7f3102aa87c0  0 osd.21 19579 crush map has
features 1107558400, adjusting msgr requires for clients
2018-08-04 03:53:29.335058 7f3102aa87c0  0 osd.21 19579 crush map has
features 1107558400, adjusting msgr requires for mons
2018-08-04 03:53:29.335062 7f3102aa87c0  0 osd.21 19579 crush map has
features 1107558400, adjusting msgr requires for osds
2018-08-04 03:53:29.335077 7f3102aa87c0  0 osd.21 19579 load_pgs
2018-08-04 03:54:00.275885 7f3102aa87c0 -1 osd/PG.cc: In function 'static
epoch_t PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&,
ceph::bufferlist*)' thread 7f3102aa87c0 time 2018-08-04 03:54:00.274454
osd/PG.cc: 2577: FAILED assert(values.size() == 1)

 ceph version 0.80.4 (7c241cfaa6c8c068bc9da8578ca00b9f4fc7567f)
 1: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&,
ceph::buffer::list*)+0x578) [0x741a18]
 2: (OSD::load_pgs()+0x1993) [0x655d13]
 3: (OSD::init()+0x1ba1) [0x65fff1]
 4: (main()+0x1ea7) [0x602fd7]
 5: (__libc_start_main()+0xed) [0x7f31008a276d]
 6: /usr/bin/ceph-osd() [0x607119]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.

--- begin dump of recent events ---
 -3406> 2018-08-04 03:53:24.680985 7f3102aa87c0  5 asok(0x3c40230)
register_command perfcounters_dump hook 0x3c1c010
 -3405> 2018-08-04 03:53:24.681040 7f3102aa87c0  5 asok(0x3c40230)
register_command 1 hook 0x3c1c010
 -3404> 2018-08-04 03:53:24.681046 7f3102aa87c0  5 asok(0x3c40230)
register_command perf dump hook 0x3c1c010
 -3403> 2018-08-04 03:53:24.681052 7f3102aa87c0  5 asok(0x3c40230)
register_command perfcounters_schema hook 0x3c1c010
 -3402> 2018-08-04 03:53:24.681055 7f3102aa87c0  5 asok(0x3c40230)
register_command 2 hook 0x3c1c010
 -3401> 2018-08-04 03:53:24.681058 7f3102aa87c0  5 asok(0x3c40230)
register_command perf schema hook 0x3c1c010
 -3400> 2018-08-04 03:53:24.681061 7f3102aa87c0  5 asok(0x3c40230)
register_command config show hook 0x3c1c010
 -3399> 2018-08-04 03:53:24.681064 7f3102aa87c0  5 asok(0x3c40230)
register_command config set hook 0x3c1c010
 -3398> 2018-08-04 03:53:24.681095 7f3102aa87c0  5 asok(0x3c40230)
register_command config get hook 0x3c1c010
 -3397> 2018-08-04 03:53:24.681101 7f3102aa87c0  5 asok(0x3c40230)
register_command log flush hook 0x3c1c010
 -3396> 2018-08-04 03:53:24.681108 7f3102aa87c0  5 asok(0x3c40230)
register_command log dump hook 0x3c1c010
 -3395> 2018-08-04 03:53:24.681116 7f3102aa87c0  5 asok(0x3c40230)
register_command log reopen hook 0x3c1c010
 -3394> 2018-08-04 03:53:24.689976 7f3102aa87c0  0 ceph version 0.80.4
(7c241cfaa6c8c068bc9da8578ca00b9f4fc7567f), process ceph-osd, pid 51827
 -3393> 2018-08-04 03:53:24.727583 7f3102aa87c0  1 -- 192.168.0.4:0/0
learned my addr 192.168.0.4:0/0
 -3392> 2018-08-04 03:53:24.727613 7f3102aa87c0  1 accepter.accepter.bind
my_inst.addr is 192.168.0.4:6801/51827 need_addr=0
 -3391> 2018-08-04 03:53:24.727638 7f3102aa87c0  1 -- 192.168.1.3:0/0
learned my addr 192.168.1.3:0/0
 -3390> 2018-08-04 03:53:24.727652 7f3102aa87c0  1 accepter.accepter.bind
my_inst.addr is 192.168.1.3:6800/51827 need_addr=0
 -3389> 2018-08-04 03:53:24.727676 7f3102aa87c0  1 -- 192.168.1.3:0/0
learned my addr 192.168.1.3:0/0
 -3388> 2018-08-04 03:53:24.727687 7f3102aa87c0  1 accepter.accepter.bind
my_inst.addr is 192.168.1.3:6801/51827 need_addr=0
 -3387> 2018-08-04 03:53:24.727722 7f3102aa87c0  1 -- 192.168.0.4:0/0
learned my addr 192.168.0.4:0/0
 -3386> 2018-08-04 03:53:24.727732 7f3102aa87c0  1 accepter.accepter.bind
my_inst.addr is 192.168.0.4:6810/51827 need_addr=0
 -3385> 2018-08-04 03:53:24.727767 7f3102aa87c0  1 -- 192.168.0.4:0/0
learned my addr 192.168.0.4:0/0
 -3384> 2018-08-04 03:53:24.727777 7f3102aa87c0  1 accepter.accepter.bind
my_inst.addr is 192.168.0.4:6811/51827 need_addr=0
 -3383> 2018-08-04 03:53:24.728871 7f3102aa87c0  1 finished
global_init_daemonize
 -3382> 2018-08-04 03:53:24.761702 7f3102aa87c0  5 asok(0x3c40230) init
/var/run/ceph/ceph-osd.21.asok
 -3381> 2018-08-04 03:53:24.761737 7f3102aa87c0  5 asok(0x3c40230)
bind_and_listen /var/run/ceph/ceph-osd.21.asok
 -3380> 2018-08-04 03:53:24.761923 7f3102aa87c0  5 asok(0x3c40230)
register_command 0 hook 0x3c180b0
 -3379> 2018-08-04 03:53:24.761942 7f3102aa87c0  5 asok(0x3c40230)
register_command version hook 0x3c180b0
 -3378> 2018-08-04 03:53:24.761949 7f3102aa87c0  5 asok(0x3c40230)
register_command git_version hook 0x3c180b0
 -3377> 2018-08-04 03:53:24.761956 7f3102aa87c0  5 asok(0x3c40230)
register_command help hook 0x3c1c0b0
 -3376> 2018-08-04 03:53:24.761964 7f3102aa87c0  5 asok(0x3c40230)
register_command get_command_descriptions hook 0x3c1c150
 -3375> 2018-08-04 03:53:24.762057 7f30fe2b5700  5 asok(0x3c40230) entry
start
 -3374> 2018-08-04 03:53:24.763359 7f3102aa87c0  0
filestore(/var/lib/ceph/osd/ceph-21) mount detected xfs (libxfs)
 -3373> 2018-08-04 03:53:24.763378 7f3102aa87c0  1
filestore(/var/lib/ceph/osd/ceph-21)  disabling 'filestore replica fadvise'
due to known issues with fadvise(DONTNEED) on xfs
 -3372> 2018-08-04 03:53:24.924666 7f3102aa87c0  0
genericfilestorebackend(/var/lib/ceph/osd/ceph-21) detect_features: FIEMAP
ioctl is supported and appears to work
 -3371> 2018-08-04 03:53:24.924691 7f3102aa87c0  0
genericfilestorebackend(/var/lib/ceph/osd/ceph-21) detect_features: FIEMAP
ioctl is disabled via 'filestore fiemap' config option
 -3370> 2018-08-04 03:53:24.941431 7f3102aa87c0  0
genericfilestorebackend(/var/lib/ceph/osd/ceph-21) detect_features:
syncfs(2) syscall fully supported (by glibc and kernel)
 -3369> 2018-08-04 03:53:24.941561 7f3102aa87c0  0
xfsfilestorebackend(/var/lib/ceph/osd/ceph-21) detect_feature: extsize is
disabled by conf
 -3368> 2018-08-04 03:53:25.288820 7f3102aa87c0  0
filestore(/var/lib/ceph/osd/ceph-21) mount: enabling WRITEAHEAD journal
mode: checkpoint is not enabled
 -3367> 2018-08-04 03:53:28.781126 7f3102aa87c0  2 journal open
/var/lib/ceph/osd/ceph-21/journal fsid 8d47e11a-c5c9-4338-bbe2-8c144c2d03ca
fs_op_seq 657871497
 -3366> 2018-08-04 03:53:28.781215 7f3102aa87c0 -1 journal
FileJournal::_open: disabling aio for non-block journal.  Use
journal_force_aio to force use of aio anyway
 -3365> 2018-08-04 03:53:28.781255 7f3102aa87c0  1 journal _open
/var/lib/ceph/osd/ceph-21/journal fd 20: 2147483648 bytes, block size 4096
bytes, directio = 1, aio = 0
 -3364> 2018-08-04 03:53:28.791179 7f3102aa87c0  2 journal No further valid
entries found, journal is most likely valid
 -3363> 2018-08-04 03:53:28.791201 7f3102aa87c0  2 journal No further valid
entries found, journal is most likely valid
 -3362> 2018-08-04 03:53:28.791206 7f3102aa87c0  3 journal journal_replay:
end of journal, done.
 -3361> 2018-08-04 03:53:28.791253 7f3102aa87c0  1 journal _open
/var/lib/ceph/osd/ceph-21/journal fd 20: 2147483648 bytes, block size 4096
bytes, directio = 1, aio = 0
 -3360> 2018-08-04 03:53:28.805854 7f30fa492700  1 FileStore::op_tp worker
finish
 -3359> 2018-08-04 03:53:28.805901 7f30fac93700  1 FileStore::op_tp worker
finish
 -3358> 2018-08-04 03:53:28.806106 7f3102aa87c0  1 journal close
/var/lib/ceph/osd/ceph-21/journal
 -3357> 2018-08-04 03:53:28.808773 7f3102aa87c0 10 monclient(hunting):
build_initial_monmap
 -3356> 2018-08-04 03:53:28.837560 7f3102aa87c0  5 adding auth protocol:
cephx
 -3355> 2018-08-04 03:53:28.837580 7f3102aa87c0  5 adding auth protocol:
cephx
 -3354> 2018-08-04 03:53:28.838016 7f3102aa87c0  1 -- 192.168.0.4:6801/51827
messenger.start
 -3353> 2018-08-04 03:53:28.838080 7f3102aa87c0  1 -- :/0 messenger.start
 -3352> 2018-08-04 03:53:28.838106 7f3102aa87c0  1 -- 192.168.0.4:6810/51827
messenger.start
 -3351> 2018-08-04 03:53:28.838134 7f3102aa87c0  1 -- 192.168.1.3:6801/51827
messenger.start
 -3350> 2018-08-04 03:53:28.838153 7f3102aa87c0  1 -- 192.168.1.3:6800/51827
messenger.start
 -3349> 2018-08-04 03:53:28.838301 7f3102aa87c0  1 -- 192.168.0.4:6811/51827
messenger.start
 -3348> 2018-08-04 03:53:28.838372 7f3102aa87c0  2 osd.21 0 mounting
/var/lib/ceph/osd/ceph-21 /var/lib/ceph/osd/ceph-21/journal
 -3347> 2018-08-04 03:53:28.838425 7f3102aa87c0  0
filestore(/var/lib/ceph/osd/ceph-21) mount detected xfs (libxfs)
 -3346> 2018-08-04 03:53:28.883916 7f3102aa87c0  0
genericfilestorebackend(/var/lib/ceph/osd/ceph-21) detect_features: FIEMAP
ioctl is supported and appears to work
 -3345> 2018-08-04 03:53:28.883941 7f3102aa87c0  0
genericfilestorebackend(/var/lib/ceph/osd/ceph-21) detect_features: FIEMAP
ioctl is disabled via 'filestore fiemap' config option
 -3344> 2018-08-04 03:53:28.917333 7f3102aa87c0  0
genericfilestorebackend(/var/lib/ceph/osd/ceph-21) detect_features:
syncfs(2) syscall fully supported (by glibc and kernel)
 -3343> 2018-08-04 03:53:28.917418 7f3102aa87c0  0
xfsfilestorebackend(/var/lib/ceph/osd/ceph-21) detect_feature: extsize is
disabled by conf
 -3342> 2018-08-04 03:53:28.977564 7f3102aa87c0  0
filestore(/var/lib/ceph/osd/ceph-21) mount: WRITEAHEAD journal mode
explicitly enabled in conf
 -3341> 2018-08-04 03:53:29.001933 7f3102aa87c0  2 journal open
/var/lib/ceph/osd/ceph-21/journal fsid 8d47e11a-c5c9-4338-bbe2-8c144c2d03ca
fs_op_seq 657871497
 -3340> 2018-08-04 03:53:29.001967 7f3102aa87c0 -1 journal
FileJournal::_open: disabling aio for non-block journal.  Use
journal_force_aio to force use of aio anyway
 -3339> 2018-08-04 03:53:29.001981 7f3102aa87c0  1 journal _open
/var/lib/ceph/osd/ceph-21/journal fd 21: 2147483648 bytes, block size 4096
bytes, directio = 1, aio = 0
 -3338> 2018-08-04 03:53:29.002008 7f3102aa87c0  2 journal No further valid
entries found, journal is most likely valid
 -3337> 2018-08-04 03:53:29.002014 7f3102aa87c0  2 journal No further valid
entries found, journal is most likely valid
 -3336> 2018-08-04 03:53:29.002015 7f3102aa87c0  3 journal journal_replay:
end of journal, done.
 -3335> 2018-08-04 03:53:29.002030 7f3102aa87c0  1 journal _open
/var/lib/ceph/osd/ceph-21/journal fd 21: 2147483648 bytes, block size 4096
bytes, directio = 1, aio = 0
 -3334> 2018-08-04 03:53:29.002172 7f3102aa87c0  2 osd.21 0 boot
 -3333> 2018-08-04 03:53:29.150430 7f3102aa87c0  1 <cls>
cls/replica_log/cls_replica_log.cc:141: Loaded replica log class!
 -3332> 2018-08-04 03:53:29.161634 7f3102aa87c0  1 <cls>
cls/user/cls_user.cc:367: Loaded user class!
 -3331> 2018-08-04 03:53:29.204319 7f3102aa87c0  1 <cls>
cls/rgw/cls_rgw.cc:1599: Loaded rgw class!
 -3330> 2018-08-04 03:53:29.205002 7f3102aa87c0  1 <cls>
cls/version/cls_version.cc:227: Loaded version class!
 -3329> 2018-08-04 03:53:29.214192 7f3102aa87c0  1 <cls>
cls/log/cls_log.cc:312: Loaded log class!
 -3328> 2018-08-04 03:53:29.252936 7f3102aa87c0  1 <cls>
cls/refcount/cls_refcount.cc:231: Loaded refcount class!
 -3327> 2018-08-04 03:53:29.253513 7f3102aa87c0  1 <cls>
cls/statelog/cls_statelog.cc:306: Loaded log class!
 -3326> 2018-08-04 03:53:29.255501 7f3102aa87c0  0 <cls>
cls/hello/cls_hello.cc:271: loading cls_hello
 -3325> 2018-08-04 03:53:29.335038 7f3102aa87c0  0 osd.21 19579 crush map
has features 1107558400, adjusting msgr requires for clients
 -3324> 2018-08-04 03:53:29.335058 7f3102aa87c0  0 osd.21 19579 crush map
has features 1107558400, adjusting msgr requires for mons
 -3323> 2018-08-04 03:53:29.335062 7f3102aa87c0  0 osd.21 19579 crush map
has features 1107558400, adjusting msgr requires for osds
 -3322> 2018-08-04 03:53:29.335077 7f3102aa87c0  0 osd.21 19579 load_pgs
 -3321> 2018-08-04 03:53:29.376203 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[0.4(unlocked)] enter Initial
 -3320> 2018-08-04 03:53:29.441416 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[0.4( empty local-les=18266 n=0 ec=1 les/c 18266/18266 18265/18265/8306)
[17,21] r=1 lpr=0 pi=18240-18264/1 crt=0'0 inactive NOTIFY] exit Initial
0.065214 0 0.000000
 -3319> 2018-08-04 03:53:29.441479 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[0.4( empty local-les=18266 n=0 ec=1 les/c 18266/18266 18265/18265/8306)
[17,21] r=1 lpr=0 pi=18240-18264/1 crt=0'0 inactive NOTIFY] enter Reset
 -3318> 2018-08-04 03:53:29.441855 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[0.18(unlocked)] enter Initial
 -3317> 2018-08-04 03:53:29.479030 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[0.18( empty local-les=19579 n=0 ec=1 les/c 19579/19579
19578/19578/11592) [21,13] r=0 lpr=0 crt=0'0 mlcod 0'0 inactive] exit
Initial 0.037176 0 0.000000
 -3316> 2018-08-04 03:53:29.479056 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[0.18( empty local-les=19579 n=0 ec=1 les/c 19579/19579
19578/19578/11592) [21,13] r=0 lpr=0 crt=0'0 mlcod 0'0 inactive] enter Reset
 -3315> 2018-08-04 03:53:29.479247 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[0.1b(unlocked)] enter Initial
#
#
## I removed a bunch of similar-looking statements for PGs to reduce the
## copy/paste size
#
#
    -8> 2018-08-04 03:54:00.003813 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[6.a4( v 19579'2359275 (19579'2356274,19579'2359275] local-les=18267
n=206 ec=5 les/c 18267/18268 18265/18265/18247) [21,26] r=0 lpr=0
crt=2492'1102204 lcod 0'0 mlcod 0'0 inactive] exit Initial 0.062694 0
0.000000
    -7> 2018-08-04 03:54:00.003844 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[6.a4( v 19579'2359275 (19579'2356274,19579'2359275] local-les=18267
n=206 ec=5 les/c 18267/18268 18265/18265/18247) [21,26] r=0 lpr=0
crt=2492'1102204 lcod 0'0 mlcod 0'0 inactive] enter Reset
    -6> 2018-08-04 03:54:00.004063 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[6.aa(unlocked)] enter Initial
    -5> 2018-08-04 03:54:00.195237 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[6.aa( v 19579'25769295 (19579'25766294,19579'25769295] local-les=16453
n=214 ec=5 les/c 16453/16453 15889/16452/16452) [21,16] r=0 lpr=0
crt=16449'19948380 lcod 0'0 mlcod 0'0 inactive] exit Initial 0.191174 0
0.000000
    -4> 2018-08-04 03:54:00.195276 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[6.aa( v 19579'25769295 (19579'25766294,19579'25769295] local-les=16453
n=214 ec=5 les/c 16453/16453 15889/16452/16452) [21,16] r=0 lpr=0
crt=16449'19948380 lcod 0'0 mlcod 0'0 inactive] enter Reset
    -3> 2018-08-04 03:54:00.195526 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[6.ab(unlocked)] enter Initial
    -2> 2018-08-04 03:54:00.254812 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[6.ab( v 19579'1116897 (18464'1113896,19579'1116897] local-les=13378
n=217 ec=5 les/c 13378/13378 13286/13377/13377) [4,21] r=1 lpr=0
pi=12038-13376/4 crt=709'35663 lcod 0'0 inactive NOTIFY] exit Initial
0.059287 0 0.000000
    -1> 2018-08-04 03:54:00.254842 7f3102aa87c0  5 osd.21 pg_epoch: 19579
pg[6.ab( v 19579'1116897 (18464'1113896,19579'1116897] local-les=13378
n=217 ec=5 les/c 13378/13378 13286/13377/13377) [4,21] r=1 lpr=0
pi=12038-13376/4 crt=709'35663 lcod 0'0 inactive NOTIFY] enter Reset
     0> 2018-08-04 03:54:00.275885 7f3102aa87c0 -1 osd/PG.cc: In function
'static epoch_t PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&,
ceph::bufferlist*)' thread 7f3102aa87c0 time 2018-08-04 03:54:00.274454
osd/PG.cc: 2577: FAILED assert(values.size() == 1)

 ceph version 0.80.4 (7c241cfaa6c8c068bc9da8578ca00b9f4fc7567f)
 1: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&,
ceph::buffer::list*)+0x578) [0x741a18]
 2: (OSD::load_pgs()+0x1993) [0x655d13]
 3: (OSD::init()+0x1ba1) [0x65fff1]
 4: (main()+0x1ea7) [0x602fd7]
 5: (__libc_start_main()+0xed) [0x7f31008a276d]
 6: /usr/bin/ceph-osd() [0x607119]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 keyvaluestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.21.log
--- end dump of recent events ---
2018-08-04 03:54:00.314451 7f3102aa87c0 -1 *** Caught signal (Aborted) **
 in thread 7f3102aa87c0

 ceph version 0.80.4 (7c241cfaa6c8c068bc9da8578ca00b9f4fc7567f)
 1: /usr/bin/ceph-osd() [0x98aa3a]
 2: (()+0xfcb0) [0x7f3101cd0cb0]
 3: (gsignal()+0x35) [0x7f31008b70d5]
 4: (abort()+0x17b) [0x7f31008ba83b]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7f310120869d]
 6: (()+0xb5846) [0x7f3101206846]
 7: (()+0xb5873) [0x7f3101206873]
 8: (()+0xb596e) [0x7f310120696e]
 9: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x1df) [0xa6adcf]
 10: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&,
ceph::buffer::list*)+0x578) [0x741a18]
 11: (OSD::load_pgs()+0x1993) [0x655d13]
 12: (OSD::init()+0x1ba1) [0x65fff1]
 13: (main()+0x1ea7) [0x602fd7]
 14: (__libc_start_main()+0xed) [0x7f31008a276d]
 15: /usr/bin/ceph-osd() [0x607119]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.

--- begin dump of recent events ---
     0> 2018-08-04 03:54:00.314451 7f3102aa87c0 -1 *** Caught signal
(Aborted) **
 in thread 7f3102aa87c0

 ceph version 0.80.4 (7c241cfaa6c8c068bc9da8578ca00b9f4fc7567f)
 1: /usr/bin/ceph-osd() [0x98aa3a]
 2: (()+0xfcb0) [0x7f3101cd0cb0]
 3: (gsignal()+0x35) [0x7f31008b70d5]
 4: (abort()+0x17b) [0x7f31008ba83b]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7f310120869d]
 6: (()+0xb5846) [0x7f3101206846]
 7: (()+0xb5873) [0x7f3101206873]
 8: (()+0xb596e) [0x7f310120696e]
 9: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x1df) [0xa6adcf]
 10: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&,
ceph::buffer::list*)+0x578) [0x741a18]
 11: (OSD::load_pgs()+0x1993) [0x655d13]
 12: (OSD::init()+0x1ba1) [0x65fff1]
 13: (main()+0x1ea7) [0x602fd7]
 14: (__libc_start_main()+0xed) [0x7f31008a276d]
 15: /usr/bin/ceph-osd() [0x607119]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 keyvaluestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.21.log
--- end dump of recent events ---
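
As a sanity check on the export/import route you mentioned, below is a rough
sketch of what I had in mind, not a tested procedure. I'm assuming the
objectstore tool is actually available on this firefly build (it may be named
ceph_objectstore_tool or ceph-objectstore-tool depending on the point release,
and may not ship with 0.80.4 at all), that 6.aa from the log above is one of
the PGs worth salvaging, and that the target OSD id and file paths are
placeholders I would need to adjust; both OSD daemons would be stopped while
this runs:

    # on the node holding the down OSD (osd.21), with ceph-osd stopped
    ceph_objectstore_tool --data-path /var/lib/ceph/osd/ceph-21 \
        --journal-path /var/lib/ceph/osd/ceph-21/journal \
        --pgid 6.aa --op export --file /tmp/pg-6.aa.export

    # on a node with a healthy OSD (NN is a placeholder id), also stopped
    ceph_objectstore_tool --data-path /var/lib/ceph/osd/ceph-NN \
        --journal-path /var/lib/ceph/osd/ceph-NN/journal \
        --op import --file /tmp/pg-6.aa.export

The alternative the pg query below hints at (ceph osd lost 21
--yes-i-really-mean-it) looks like it risks losing whatever only lives on
osd.7 and osd.21, so I would rather try the export first if that is feasible.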

Thanks.

Sean Patronis

Information Technology | AutoDataDirect, Inc.

Ph: (850) 877-8804 | Fax: (850) 877-5910

Email: spatro...@add123.com | Web: www.add123.com

On Fri, Aug 3, 2018 at 5:52 PM, Sean Redmond <sean.redmo...@gmail.com>
wrote:

> Hi,
>
> You can export and import PGs using ceph_objectstore_tool, but if the OSD
> won't start you may have trouble exporting a PG.
>
> It may be useful to share the errors you get when trying to start the OSD.
>
> Thanks
>
> On Fri, Aug 3, 2018 at 10:13 PM, Sean Patronis <spatro...@add123.com>
> wrote:
>
>>
>>
>> Hi all.
>>
>> We have an issue with some down+peering PGs (I think); when I try to mount
>> or access data, the requests are blocked:
>>
>> 114891/7509353 objects degraded (1.530%)
>>                  887 stale+active+clean
>>                    1 peering
>>                   54 active+recovery_wait
>>                19609 active+clean
>>                   91 active+remapped+wait_backfill
>>                   10 active+recovering
>>                    1 active+clean+scrubbing+deep
>>                    9 down+peering
>>                   10 active+remapped+backfilling
>> recovery io 67324 kB/s, 10 objects/s
>>
>> When I query one of these down+peering PGs, I can see the following:
>>
>>          "peering_blocked_by": [
>>                 { "osd": 7,
>>                   "current_lost_at": 0,
>>                   "comment": "starting or marking this osd lost may let us 
>> proceed"},
>>                 { "osd": 21,
>>                   "current_lost_at": 0,
>>                   "comment": "starting or marking this osd lost may let us 
>> proceed"}]},
>>         { "name": "Started",
>>           "enter_time": "2018-08-01 07:06:16.806339"}],
>>
>>
>>
>> Both of these OSDs (7 and 21) will not come back up and in with Ceph due
>> to some errors, but I can mount the disks and read data off of them.  Can I
>> manually move/copy these PGs off of these down and out OSDs and put them on
>> a good OSD?
>>
>> This is an older ceph cluster running firefly.
>>
>> Thanks.
>>
>>
>>
>>
>>
>

