That value is in ceph.conf, but I wouldn't expect that to have helped,
looking at the ceph-disk code (in the module-level function
`activate`):
    ceph_fsid = read_one_line(path, 'ceph_fsid')
    if ceph_fsid is None:
        raise Error('No cluster uuid assigned.')
Maybe there is a thinko there, as ceph_fsid is only used afterwards to
find the cluster name by scanning config files - and that scan succeeds
even when the only config file present is a ceph.conf that does not
contain an fsid, in which case the ceph_fsid value is not used at all.
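
For the archives, here is a condensed paraphrase of that lookup
(find_cluster_by_uuid). This is a sketch rather than the verbatim
ceph-disk source - the real get_fsid shells out to ceph-conf, which I
approximate with configparser here - but it shows the tolerant branch
I mean:

    # Condensed paraphrase of ceph-disk's cluster lookup (10.2.x);
    # get_fsid below is an illustrative stand-in, not the real helper.
    import configparser
    import os

    SYSCONFDIR = '/etc/ceph'

    def get_fsid(cluster):
        # Return 'fsid' from [global] in /etc/ceph/<cluster>.conf,
        # or None when the option is absent.
        cp = configparser.ConfigParser()
        cp.read(os.path.join(SYSCONFDIR, cluster + '.conf'))
        return cp.get('global', 'fsid', fallback=None)

    def find_cluster_by_uuid(_uuid):
        _uuid = _uuid.lower()
        no_fsid = []
        for conf_file in os.listdir(SYSCONFDIR):
            if not conf_file.endswith('.conf'):
                continue
            cluster = conf_file[:-len('.conf')]
            fsid = get_fsid(cluster)
            if fsid is None:
                no_fsid.append(cluster)  # conf present, fsid absent
            elif fsid.lower() == _uuid:
                return cluster
        # Tolerant branch: a lone ceph.conf without an fsid is accepted,
        # so the uuid read from the lockbox is never compared to anything.
        if no_fsid == ['ceph']:
            return 'ceph'
        return None

So whenever that fallback kicks in, activate only needs the ceph_fsid
file to exist, not to be correct - which is exactly why its absence
aborts the whole activation.
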
Cheers,
Jasper
On 25/07/2017 19:22, David Turner wrote:
> Does your ceph.conf file have your cluster uuid listed in it? You should
> be able to see what it is from ceph status and add it to your config if
> it's missing.
>
>
> On Tue, Jul 25, 2017, 7:38 AM Jasper Spaans
> <[email protected]> wrote:
>
> Hi list,
>
> We had some trouble activating our OSDs after upgrading from Ceph
> 10.2.7 to 10.2.9. The error we got was 'No cluster uuid assigned' after
> calling `ceph-disk trigger --sync /dev/sda3`.
>
> Our cluster runs on Ubuntu 16.04, has been deployed using the
> Ceph-ansible roles, and we're using the collocated dmcrypt mode (so, 3
> partitions per drive for data, journal and lockbox, with the first two
> encrypted using dmcrypt).
>
> After some probing (read: diffing the source code) it turned out our
> lockbox directories did not contain a 'ceph_fsid' file, so I just
> bluntly created one in each using something along the lines of:
>
> for fs in $(mount | grep lockbox | cut -d' ' -f3); do
>     mount -o remount,rw "$fs"
>     echo "$our_fs_uuid" > "$fs/ceph_fsid"
>     mount -o remount,ro "$fs"
> done
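>
> (In case anyone repeats this: $our_fs_uuid has to be the cluster fsid.
> A hedged way to fetch it, assuming a reachable monitor and a working
> admin keyring, is to ask the cluster rather than typing it by hand:
>
>     our_fs_uuid=$(ceph fsid)    # prints the cluster fsid
>
> and a quick cat $fs/ceph_fsid on each lockbox afterwards confirms the
> files landed.)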
>
> After doing this on all of our nodes, I was able to upgrade and
> activate the OSDs again, and the fix even survives a reboot.
>
> Looking at the release notes, I couldn't find any mention of this - so
> I'll post it here in the hopes someone may find it useful.
>