On 29/8/20 2:20 pm, Stuart Longland wrote:
> Step 7.
>> If there are any OSDs in the cluster deployed with ceph-disk (e.g., almost
>> any OSDs that were created before the Mimic release), you need to tell
>> ceph-volume to adopt responsibility for starting the daemons.
So on two of my nodes, which were deployed later than the others, I
*did* have a journal deployed on those, and so the `ceph-volume` step
went without a hitch.
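For anyone following along, my reading of that step (going from the
Nautilus upgrade notes) is that on each OSD host it boils down to
roughly:

  ceph-volume simple scan
  ceph-volume simple activate --all

i.e. the scan is supposed to write a JSON record for each ceph-disk OSD
under /etc/ceph/osd/, and the activate call enables systemd units so
those OSDs come back up by themselves after a reboot -- at least,
that's how I understand it.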
I note though that they don't survive a reboot:
> [2020-09-05 11:05:39,216][ceph_volume][ERROR ] exception caught by decorator
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
>     return f(*a, **kw)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/main.py", line 148, in main
>     terminal.dispatch(self.mapper, subcommand_args)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line 182, in dispatch
>     instance.main()
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/main.py", line 40, in main
>     terminal.dispatch(self.mapper, self.argv)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line 182, in dispatch
>     instance.main()
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 16, in is_root
>     return func(*a, **kw)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/trigger.py", line 70, in main
>     Activate(['--auto-detect-objectstore', osd_id, osd_uuid]).main()
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/activate.py", line 339, in main
>     self.activate(args)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 16, in is_root
>     return func(*a, **kw)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/activate.py", line 249, in activate
>     raise RuntimeError('could not find osd.%s with fsid %s' % (osd_id, osd_fsid))
> RuntimeError: could not find osd.1 with fsid b1e03762-9579-4b3c-bc8d-3bcb95302b31
> root@nitrogen:~# blkid
> /dev/sda1: UUID="b13aa310-13e1-40c8-8661-40cf3ffa93b2" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="c2b97862-2217-4520-8587-6a1fe771425b"
> /dev/sdb1: UUID="5fd17d4c-6161-4ff9-81a2-a301be94273f" TYPE="ext4" PARTUUID="fe1535be-01"
> /dev/sdb5: UUID="e369ff56-f60d-448e-9d38-3545e38d5e10" UUID_SUB="545d46c2-b7c2-4888-a897-ac04a85c8545" TYPE="btrfs" PARTUUID="fe1535be-05"
> /dev/sdb6: UUID="19f3b666-660b-48c0-afb7-018cbc03e976" TYPE="swap" PARTUUID="fe1535be-06"
> /dev/sda2: PARTLABEL="ceph journal" PARTUUID="cf1c0037-058a-423b-99ff-a8f74080df85"
> /dev/sdb7: PARTUUID="fe1535be-07"
> root@nitrogen:~# mount
> …
> /dev/sda1 on /var/lib/ceph/osd/ceph-1 type xfs (rw,relatime,attr2,inode64,noquota)
It would seem *something* recognised and mounted osd.1 in the right
place, but that's apparently not good enough: ceph-volume still can't
"find" the filesystem?
What gives?
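If I'm reading the traceback right, what fires at boot is the *lvm*
trigger (ceph_volume/devices/lvm/trigger.py), which naturally can't
find an LVM-backed osd.1, because this OSD is a plain ceph-disk
partition. My understanding is that an adopted ceph-disk OSD is meant
to be activated via the "simple" path instead, from the JSON record
that `ceph-volume simple scan` leaves under /etc/ceph/osd/. The checks
I'm planning to run next look roughly like this -- treat the exact
command forms with due suspicion, they're from memory and the docs:

  # is there a scan record for osd.1?
  ls /etc/ceph/osd/
  # which ceph-volume trigger units exist on this host -- lvm-* or simple-*?
  systemctl list-units --all 'ceph-volume@*'
  # if the record is missing, re-scan the (mounted) OSD and re-activate it
  ceph-volume simple scan /var/lib/ceph/osd/ceph-1
  ceph-volume simple activate 1 b1e03762-9579-4b3c-bc8d-3bcb95302b31

If anyone can confirm whether that's the right direction, I'd
appreciate it.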
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.