Thanks Eugen!
I was looking into running all the commands manually, following the docs for
adding/removing an OSD, but tried ceph-disk first.
I actually made it work by changing the id part in ceph-disk (it was checking
the wrong journal device, which was owned by root:root). The next part was
that I
Hi,
we have a full bluestore cluster and had to deal with read errors on
the SSD for the block.db. Something like this helped us to recreate a
pre-existing OSD without rebalancing, just refilling the PGs. I would
zap the journal device and let it recreate. It's very similar to your
ceph-d
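For what it's worth, the recreate-in-place flow can be sketched roughly like this (a sketch only; the OSD id `21` and the device paths are placeholders, not taken from this thread, and zapping a DB device shared by several OSDs would of course hit all of them):

```shell
# Sketch, assuming osd.21's block.db SSD went bad (placeholders throughout).
ceph osd set noout                      # keep CRUSH from rebalancing while down
systemctl stop ceph-osd@21              # stop the affected OSD
ceph-disk zap /dev/sdX                  # wipe the partition table on the data disk
# Re-prepare with the old OSD's UUID so it comes back as the same osd.21;
# the PGs then just backfill onto it instead of being remapped elsewhere.
ceph-disk prepare --bluestore --osd-uuid "$OSD_UUID" \
    --block.db /dev/sdY /dev/sdX
ceph osd unset noout                    # let the PGs refill
```

These commands need a live cluster, so treat them as an outline rather than a script.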
Hm, you are right. It seems ceph-disk calls ceph-osd with id 0 in main.py.
I'll have a look in my dev cluster and see if it helps things.
/usr/lib/python2.7/dist-packages/ceph_disk/main.py

def check_journal_reqs(args):
    _, _, allows_journal = command([
        'ceph-osd', '--check-allows-journal',
        '-i', '0',
        '--log-file', '$
ceph_disk.main.Error: Error: journal specified but not allowed by osd backend
I faced this issue once before.
The problem is that the function queries osd.0 instead of your osd.21.
In main.py, change

    '-i', '0',

to your OSD number:

    '-i', '21',

and try again.
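A quick way to apply that edit (a throwaway sketch; the path is the one quoted above, back up main.py first, and the substitution assumes the `'-i', '0',` line appears only in that function):

```shell
# Blunt text substitution of the hardcoded OSD id in ceph-disk's
# journal check; the .bak suffix keeps a copy of the original file.
sed -i.bak "s/'-i', '0',/'-i', '21',/" \
    /usr/lib/python2.7/dist-packages/ceph_disk/main.py
```

Remember to revert the change afterwards, since a package upgrade will overwrite it anyway.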
k
Hi!
Trying to replace an OSD on a Jewel cluster (filestore data on HDD + journal
device on SSD).
I've set noout and removed the flapping drive (read errors) and replaced it
with a new one.
I've noted down the OSD's UUID to be able to prepare the new disk with the same
osd.ID. The journal device
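The same-ID replacement can be sketched like this (placeholders only; the OSD id, UUID and device names are illustrative, and `/dev/sdY2` stands for the existing journal partition on the SSD):

```shell
# Sketch: prepare the replacement HDD with the old OSD's UUID so the
# cluster brings it back under the same osd.ID (Jewel, filestore).
OSD_UUID=...   # the UUID noted before removing the old disk
ceph-disk prepare --osd-uuid "$OSD_UUID" /dev/sdX /dev/sdY2
ceph-disk activate /dev/sdX1   # data partition created by prepare
ceph osd unset noout           # set earlier; PGs now backfill
```

Again, this needs a live cluster, so take it as an outline of the steps rather than something to paste verbatim.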