On 10/15/20 11:27 AM, Jeff Bailey wrote:
On 10/15/2020 12:07 PM, Michael Thomas wrote:
On 10/15/20 10:19 AM, Jeff Bailey wrote:
On 10/15/2020 10:01 AM, Michael Thomas wrote:
I recreated the storage domain and added rbd_default_features=3 to
ceph.conf. Now I see the new disk being created with (what I think
is) the correct set of features:
# rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
size 100 GiB in 25600 objects
order 22 (4 MiB objects)
create_timestamp: Thu Oct 15 06:53:23 2020
access_timestamp: Thu Oct 15 06:53:23 2020
modify_timestamp: Thu Oct 15 06:53:23 2020
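For reference, the ceph.conf change amounts to the single setting below. A
minimal sketch, assuming it lives under [global]; 3 is the feature bitmask
for layering (1) plus striping (2):

    [global]
    rbd_default_features = 3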
However, I'm still unable to attach the disk to a VM. This time it's a
permissions issue on the oVirt node where the VM is running. It looks like
it can't read the temporary Ceph config file that is sent over from the
engine:
Are you using Octopus? If so, the config file that's generated is missing
the "[global]" section header at the top, and Octopus doesn't like that.
It's been patched upstream.
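To illustrate (the option names and values below are made-up placeholders,
not the engine's actual output), Octopus chokes on a generated file that
starts with bare key/value pairs:

    mon_host = 192.168.1.10
    keyring = /tmp/ceph.client.keyring

but is happy once a "[global]" section header is prepended:

    [global]
    mon_host = 192.168.1.10
    keyring = /tmp/ceph.client.keyring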
Yes, I am using Octopus (15.2.4). Do you have a pointer to the
upstream patch or issue so that I can watch for a release with the fix?
And for anyone playing along at home, I was able to map this back to the
It's a simple fix. I just changed line 100 of
conf_file.writelines(["[global]", "\n", mon_hosts, "\n", keyring, "\n"])
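For context, a minimal sketch of what the patched writer could look like;
apart from the writelines() call itself, the function name, the tempfile
handling, and the assumption that mon_hosts and keyring arrive as
pre-formatted "option = value" strings are all mine:

    import tempfile

    def write_temp_ceph_conf(mon_hosts, keyring):
        # mon_hosts and keyring are assumed to be pre-formatted
        # "option = value" strings supplied by the engine.
        conf_file = tempfile.NamedTemporaryFile(
            mode="w", suffix=".conf", delete=False)
        # The fix: start the file with a [global] section header,
        # since Octopus refuses config files that lack one.
        conf_file.writelines(["[global]", "\n", mon_hosts, "\n", keyring, "\n"])
        conf_file.close()
        return conf_file.name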
After applying this patch, I was finally able to attach my ceph block
device to a running VM. I've now got virtually unlimited data storage
for my VMs. Many thanks to you and Benny for the help!