I have a CephFS filesystem that I can manually mount on my oVirt host:

[root@ovirt121 ~]# mount -t ceph ceph01:/ /mnt -o name=admin,secret=<secret removed for security>
[root@ovirt121 ~]# touch /mnt/test
That works great. I then umount that filesystem and attempt to add a storage domain in the oVirt engine, with these settings:

  Storage type:  POSIX compliant FS
  Host to use:   ovirt121
  Name:          cephdata
  Path:          ceph01:/
  VFS type:      ceph
  Mount options: name=admin,secret=<secret removed for security>

In the oVirt web UI I get a "General Exception" error. On the host I see this in vdsm.log:

2018-10-03 23:47:58,165+0000 INFO  (jsonrpc/6) [vdsm.api] START connectStorageServer(domType=6, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'name=admin,secret=<secret removed for security>', u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'ceph01:/', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'ceph', u'password': '********', u'port': u''}], options=None) from=::ffff:10.78.7.221,54910, flow_id=ba106a2d-e204-4da7-acc7-29b70c3ff759, task_id=b5e0c87c-8838-4b49-8590-11382a02d67f (api:46)
2018-10-03 23:47:58,166+0000 INFO  (jsonrpc/6) [storage.StorageServer.MountConnection] Creating directory u'/rhev/data-center/mnt/ceph01:_' (storageServer:167)
2018-10-03 23:47:58,166+0000 INFO  (jsonrpc/6) [storage.fileUtils] Creating directory: /rhev/data-center/mnt/ceph01:_ mode: None (fileUtils:197)
2018-10-03 23:47:58,166+0000 INFO  (jsonrpc/6) [storage.Mount] mounting ceph01:/ at /rhev/data-center/mnt/ceph01:_ (mount:204)
2018-10-03 23:47:58,218+0000 ERROR (jsonrpc/6) [storage.HSM] Could not connect to storageServer (hsm:2415)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2412, in connectStorageServer
    conObj.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 183, in connect
    self.getMountObj().getRecord().fs_file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 253, in getRecord
    (self.fs_spec, self.fs_file))
OSError: [Errno 2] Mount of `ceph01:/` at `/rhev/data-center/mnt/ceph01:_` does not exist
2018-10-03 23:47:58,218+0000 INFO  (jsonrpc/6) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 100, 'id': u'00000000-0000-0000-0000-000000000000'}]} from=::ffff:10.78.7.221,54910, flow_id=ba106a2d-e204-4da7-acc7-29b70c3ff759, task_id=b5e0c87c-8838-4b49-8590-11382a02d67f (api:52)

However, if I run "mount" or "cat /proc/mounts" I can see that the CephFS was properly mounted; I can even touch files:

[root@ovirt121 ~]# touch /rhev/data-center/mnt/ceph01\:_/ceph_ovirt_test
[root@ovirt121 ~]#
[root@ovirt121 ~]# cat /proc/mounts
<...bunch of mounts...>
10.78.71.1:/ /rhev/data-center/mnt/ceph01:_ ceph rw,relatime,name=admin,secret=<hidden>,acl,wsize=16777216 0 0
[root@ovirt121 ~]#

Any thoughts? The oVirt version is 4.2.6.4-1.el7, running on CentOS 7 18.05.

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VPOUQEIAQIZQ6YOMJRGWE65LMNNPLLVI/
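One observation that may be relevant: the mount clearly succeeds, and the failure comes from vdsm's getRecord() in mount.py, which seems to look the mount up again after mounting. The traceback names `ceph01:/` (what I asked for), while /proc/mounts records the CephFS mount under the resolved monitor address `10.78.71.1:/`. If getRecord matches records by a literal fs_spec comparison, that mismatch would explain the "does not exist" error even though the mount is live. A minimal sketch of that kind of lookup (this is NOT vdsm's actual code; the function name and matching logic are my assumption, purely to illustrate the mismatch):

```python
# Hypothetical sketch of looking up a mount in /proc/mounts by
# (fs_spec, fs_file) -- not vdsm's actual implementation. It shows how a
# literal comparison of `ceph01:/` fails when the kernel records the
# CephFS mount under the resolved monitor IP `10.78.71.1:/`.

def find_record(fs_spec, fs_file, proc_mounts_lines):
    """Return the fields of the matching /proc/mounts record, or None."""
    for line in proc_mounts_lines:
        fields = line.split()
        # /proc/mounts format: fs_spec fs_file fs_vfstype fs_mntopts ...
        if len(fields) >= 2 and fields[0] == fs_spec and fields[1] == fs_file:
            return fields
    return None

# The record as the kernel reports it (from my host, options trimmed):
proc_mounts = [
    "10.78.71.1:/ /rhev/data-center/mnt/ceph01:_ ceph rw,relatime 0 0",
]

# Looking up by the hostname-based spec that was passed to mount misses it:
print(find_record("ceph01:/", "/rhev/data-center/mnt/ceph01:_", proc_mounts))
# -> None

# Looking up by the resolved monitor address finds it:
print(find_record("10.78.71.1:/", "/rhev/data-center/mnt/ceph01:_", proc_mounts))
# -> ['10.78.71.1:/', '/rhev/data-center/mnt/ceph01:_', 'ceph', 'rw,relatime', '0', '0']
```

If that is what's happening, one workaround I could try is entering the monitor's IP address (Path: 10.78.71.1:/) instead of the hostname when creating the storage domain, so the spec vdsm searches for matches what the kernel reports. Has anyone confirmed this?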