[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-08 Thread Mathias Schwenke
> It looks like a configuration issue, you can use plain `rbd` to check 
> connectivity.
Yes, it was a configuration error; I fixed it.
I also had to reconcile the differing rbd feature sets between the oVirt nodes'
kernel clients and the ceph images. Now it seems to work.
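
For anyone hitting the same thing, something like this is enough to check both
points (the client name and pool are just the ones from my setup; the image
name is a placeholder):

  # check connectivity with the same conf file and user the driver uses
  rbd --conf /etc/ceph/ceph.conf --id ovirtcinderlib -p ovirt-volumes ls
  # show which features an image carries
  rbd info ovirt-volumes/volume-<uuid>
  # drop the features the nodes' kernel client does not support
  rbd feature disable ovirt-volumes/volume-<uuid> object-map fast-diff deep-flatten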


[ovirt-users] Re: oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-04 Thread Mathias Schwenke
Thanks for your reply.
Yes, I have some issues: in some cases, starting or migrating a virtual machine
fails.

At the moment it seems that I have misconfigured my ceph connection:
2020-06-04 22:44:07,685+02 ERROR [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] (EE-ManagedThreadFactory-engine-Thread-2771) [6e1b74c4] cinderlib execution failed: Traceback (most recent call last):
  File "./cinderlib-client.py", line 179, in main
    args.command(args)
  File "./cinderlib-client.py", line 232, in connect_volume
    backend = load_backend(args)
  File "./cinderlib-client.py", line 210, in load_backend
    return cl.Backend(**json.loads(args.driver))
  File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line 88, in __init__
    self.driver.check_for_setup_error()
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 295, in check_for_setup_error
    with RADOSClient(self):
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 177, in __init__
    self.cluster, self.ioctx = driver._connect_to_rados(pool)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 353, in _connect_to_rados
    return _do_conn(pool, remote, timeout)
  File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 818, in _wrapper
    return r.call(f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
    raise attempt.get()
  File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 351, in _do_conn
    raise exception.VolumeBackendAPIException(data=msg)
VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Error connecting to ceph cluster.
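
The backend is built purely from the driver options of the storage domain (the
cl.Backend(**json.loads(args.driver)) call above), so this error should mean
those options, or the files they point to, are wrong. A typical RBD option set
looks roughly like this (option names are from the cinder RBD driver; the paths
and names are only examples):

  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf=/etc/ceph/ceph.conf
  rbd_keyring_conf=/etc/ceph/ceph.client.cinder.keyring
  rbd_pool=ovirt-volumes
  rbd_user=cinder

check_for_setup_error() fails like this when the mon hosts in rbd_ceph_conf are
unreachable from the engine or the keyring does not match rbd_user.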


[ovirt-users] oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

2020-06-04 Thread Mathias Schwenke
The cinderlib integration into oVirt is described at
https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html:
Installation (see the command sketch below):
- install centos-release-openstack-pike on the engine and all hosts
- install openstack-cinder and python-pip on the engine
- pip install cinderlib on the engine
- install python2-os-brick on all hosts
- install ceph-common on the engine and on all hosts
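
As plain commands that is roughly (package names as in the list above):

  # engine and all hosts
  yum install -y centos-release-openstack-pike ceph-common
  # engine only
  yum install -y openstack-cinder python-pip
  pip install cinderlib
  # hosts only
  yum install -y python2-os-brick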

Which software versions do you use on CentOS 7 with oVirt 4.3.10?
The package centos-release-openstack-pike, as referenced on the above-mentioned
Managed Block Storage feature page, no longer exists in the CentOS
repositories, so I have to switch to centos-release-openstack-queens or newer
(rocky, stein, train). With that I get (for use with ceph luminous 12):
- openstack-cinder 12.0.10
- cinderlib 1.0.1
- ceph-common 12.2.11
- python2-os-brick 2.3.9


[ovirt-users] Re: Cinderlib managed block storage, ceph jewel

2019-07-22 Thread Mathias Schwenke
> Starting a VM should definitely work, I see in the error message:
> "RBD image feature set mismatch. You can disable features unsupported by
> the kernel with "rbd feature disable"
> Adding "rbd default features = 3" to ceph.conf might help with that.
Thanks! That helped. 

https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
says:
> Also for Ceph backend, a keyring file and ceph.conf file is needed in the
> Engine.
I had to copy them to all the hosts, too, before I could start a virtual
machine with an attached cinderlib ceph block device.
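
For reference, the line that fixed it, so newly created images only get the
features the CentOS 7 kernel client can map (3 is the feature bitmask
layering=1 + striping=2):

  # in /etc/ceph/ceph.conf on the engine and the hosts
  [global]
  rbd default features = 3

Existing images keep their old feature set; those still need the
rbd feature disable treatment.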


[ovirt-users] Cinderlib managed block storage, ceph jewel

2019-07-17 Thread mathias . schwenke
Hi.
I tried to use managed block storage to connect our oVirt cluster (version
4.3.4.3-1.el7) to our ceph storage (version 10.2.11). I used the instructions
from
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

At the moment, in the oVirt administration portal I can create and delete ceph
volumes (oVirt disks) and attach them to virtual machines. If I try to launch a
VM with a connected ceph block storage volume, it fails to start:

2019-07-16 19:39:09,251+02 WARN  [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] (default task-53) [7cada945] Unexpected return value: Status [code=926, message=Managed Volume Helper failed.: ('Error executing helper: Command ['/usr/libexec/vdsm/managedvolume-helper', 'attach'] failed with rc=1 out='' err='
oslo.privsep.daemon: Running privsep helper: ['sudo', 'privsep-helper', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpB6ZBAs/privsep.sock']
oslo.privsep.daemon: Spawned new privsep daemon via rootwrap
oslo.privsep.daemon: privsep daemon starting
oslo.privsep.daemon: privsep process running with uid/gid: 0/0
oslo.privsep.daemon: privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
oslo.privsep.daemon: privsep daemon running as pid 112531
Traceback (most recent call last):
  File "/usr/libexec/vdsm/managedvolume-helper", line 154, in <module>
    sys.exit(main(sys.argv[1:]))
  File "/usr/libexec/vdsm/managedvolume-helper", line 77, in main
    args.command(args)
  File "/usr/libexec/vdsm/managedvolume-helper", line 137, in attach
    attachment = conn.connect_volume(conn_info['data'])
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 96, in connect_volume
    run_as_root=True)
  File "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in _execute
    result = self.__execute(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 169, in execute
    return execute_root(*cmd, **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 205, in _wrap
    return self.channel.remote_call(name, args, kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in remote_call
    raise exc_type(*result[2])
oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
Command: rbd map volume-a57dbd5c-2f66-460f-b37f-5f7dfa95d254 --pool ovirt-volumes --conf /tmp/brickrbd_TLMTkR --id ovirtcinderlib --mon_host 192.168.61.1:6789 --mon_host 192.168.61.2:6789 --mon_host 192.168.61.3:6789
Exit code: 6
Stdout: u'RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.'
Stderr: u'rbd: sysfs write failed
rbd: map failed: (6) No such device or address'
')]
2019-07-16 19:39:09,251+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] (default task-53) [7cada945] Failed in 'AttachManagedBlockStorageVolumeVDS' method
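
The failing map can be reproduced by hand on the host to get the kernel's view
(command taken from the log above; this assumes ceph.conf and the keyring are
in place under /etc/ceph):

  rbd map volume-a57dbd5c-2f66-460f-b37f-5f7dfa95d254 --pool ovirt-volumes --id ovirtcinderlib
  dmesg | tail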
 
After disconnecting the disk, I can delete it (the volume disappears from
ceph), but the disk stays in my oVirt administration portal, because cinderlib
still considers it attached:

2019-07-16 19:42:53,551+02 INFO  [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (EE-ManagedThreadFactory-engine-Thread-487362) [887b4d11-302f-4f8d-a3f9-7443a80a47ba] Running command: RemoveDiskCommand internal: false. Entities affected :  ID: a57dbd5c-2f66-460f-b37f-5f7dfa95d254 Type: DiskAction group DELETE_DISK with role type USER
2019-07-16 19:42:53,559+02 INFO  [org.ovirt.engine.core.bll.storage.disk.managedblock.RemoveManagedBlockStorageDiskCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-8) [] Running command: RemoveManagedBlockStorageDiskCommand internal: true.
2019-07-16 19:42:56,240+02 ERROR [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] (EE-ManagedThreadFactory-commandCoordinator-Thread-8) [] cinderlib execution failed
DBReferenceError: (psycopg2.IntegrityError) update or delete on table "volumes" violates foreign key constraint "volume_attachment_volume_id_fkey" on table "volume_attachment"
2019-07-16 19:42:55,958 - cinderlib-client - INFO - Deleting volume 'a57dbd5c-2f66-460f-b37f-5f7dfa95d254' [887b4d11-302f-4f8d-a3f9-7443a80a47ba]
2019-07-16 19:42:56,099 - cinderlib-client - ERROR - Failure occurred when trying to run command 'delete_volume': (psycopg2.IntegrityError) update or delete on table "volumes" violates foreign key constraint "volume_attachment_volume_id_fkey" on table "volume_attachment"
DETAIL:  Key (id)=(a57dbd5c-2f66-460f-b37f-5f7dfa95d254) is still referenced from table
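
If I read the IntegrityError right, the rbd image is gone but cinderlib's
database still holds a row in volume_attachment pointing at the volume row, so
the volume row cannot be deleted. A manual cleanup would be something along
these lines (a hypothetical sketch against whatever database cinderlib was
configured with; table, column and key names are taken from the error above;
back up first):

  -- remove the stale attachment record, then the volume row
  DELETE FROM volume_attachment WHERE volume_id = 'a57dbd5c-2f66-460f-b37f-5f7dfa95d254';
  DELETE FROM volumes WHERE id = 'a57dbd5c-2f66-460f-b37f-5f7dfa95d254';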