[ovirt-users] Managed Block Storage

2019-07-04 Thread dan.poltawski
I'm testing out the managed storage to connect to ceph and I have a few questions: * Would I be correct in assuming that the hosted engine VM needs connectivity to the storage and not just the underlying hosts themselves? It seems like the cinderlib client runs from the engine? * Does the ceph
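(A minimal connectivity check from the engine VM, assuming the answer is yes because cinderlib runs on the engine host; MON_HOST and the default ceph ports are placeholders, not details from this thread:)
$ nc -zv MON_HOST 3300   # ceph monitor, msgr2
$ nc -zv MON_HOST 6789   # ceph monitor, msgr1
$ ceph -s --conf /etc/ceph/ceph.conf --id CLIENT_ID   # fuller test, if the ceph CLI and a keyring are present on the engine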

[ovirt-users] Re: Managed Block Storage

2019-07-04 Thread dan.poltawski
> Can you provide logs? mainly engine.log and cinderlib.log > (/var/log/ovirt-engine/cinderlib/cinderlib.log) If I create two volumes, the first one succeeds, but the second one hangs. If I look in the process list after creating the second volume, which doesn't succeed, I see the python
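(A sketch of that kind of inspection; the grep pattern and the engine.log path are assumptions, not taken from the thread:)
$ ps -ef | grep [c]inderlib   # look for the helper process left running after the second create
$ tail -f /var/log/ovirt-engine/cinderlib/cinderlib.log   # watch while reproducing
$ tail -f /var/log/ovirt-engine/engine.log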

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Dan Poltawski
Hi, On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote: > Any chance you can setup gdb[1] so we can find out where it's stuck > exactly? Yes, absolutely - but I will need some assistance in getting GDB configured in the engine as I am not very familiar with it - or how to enable the correct rep
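(A rough sketch of attaching gdb with python support on the engine machine, assuming a CentOS/RHEL host and that PID is the stuck python process; exact package names may differ:)
$ yum install -y gdb yum-utils
$ debuginfo-install -y python   # provides the py-* gdb helper commands
$ gdb -p PID
(gdb) py-bt     # python-level backtrace
(gdb) py-list   # source around the current python frame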

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Dan Poltawski
(truncated gdb dump of the eventlet hub state: running=True, debug_blocking=False, one 'read' listener on fd 20, no 'write' listeners, timers_canceled=0, stopping=False, pending timers)

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Dan Poltawski
return listener 60 61 def do_poll(self, seconds): >62 return self.poll.poll(seconds) > Thanks for your help, Dan > On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski < > dan.poltaw...@tnp.net.uk> wrote: > > On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Dan Poltawski
On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote: > I've now managed to successfully create/mount/delete volumes! However, I'm seeing live migrations stay stuck. Is this supported? (gdb) py-list 345 client.conf_set('rados_osd_op_timeo
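(One way to check whether the volume is still held somewhere while the migration is stuck, assuming direct access to the ceph cluster; POOL/VOLUME is a placeholder:)
$ rbd status POOL/VOLUME   # lists clients still watching the image
$ rbd showmapped           # run on each hypervisor to see which images are still mapped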

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Dan Poltawski
On Mon, 2019-07-08 at 18:53 +0300, Benny Zlotnik wrote: > Can you try to create multiple ceph volumes manually via rbd from the > engine machine, so we can simulate what cinderlib does without using > it; this can be done: > $ rbd -c ceph.conf create /vol1 --size 100M > $ rbd -c ceph.conf create /vo
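(The same test in loop form, assuming a pool named 'rbd' and a working ceph.conf/keyring on the engine machine; both are placeholders for whatever the deployment really uses:)
$ for i in 1 2 3; do rbd -c ceph.conf create rbd/vol$i --size 100M; done
$ rbd -c ceph.conf ls rbd   # all three should be listed if none of the creates hang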

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Dan Poltawski
Also, is it possible to host the hosted engine on this storage? Thanks again for your help, Dan > > On Tue, Jul 9, 2019 at 11:01 AM Dan Poltawski < > dan.poltaw...@tnp.net.uk> wrote: > > On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote: > > > I've now m

[ovirt-users] Moving Hosted Engine Storage

2019-07-16 Thread Dan Poltawski
Hello, I've read various posts[1] on this list, but I must confess that I am still not entirely clear on the process for moving a hosted engine to a new storage domain (in my case I want to move from an existing NFS server to a new iSCSI target). Some of what I've read makes me slightly concerned i
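(The commonly documented route is an engine backup followed by a fresh hosted-engine deployment onto the new storage that restores from it; a sketch only, with file names as placeholders:)
$ engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log   # on the current engine VM
$ hosted-engine --deploy --restore-from-file=engine-backup.tar.gz                   # on a host, targeting the new iSCSI storage domain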

[ovirt-users] Hosted Engine on seperate L2 network from nodes?

2019-08-16 Thread Dan Poltawski
For some security requirements, I’ve been asked if it’s possible to segregate the hosted engine from the physical nodes, with specific firewalling for access to do node/storage operations (I’m using managed block storage). Is this an approach others use, or is it better practice to just ensure
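(A partial sketch of two of the flows such a firewall boundary would have to allow, using the standard oVirt defaults of 54321/tcp for VDSM and 443/tcp for the engine; the zone name is a placeholder and further ports are needed in practice:)
$ firewall-cmd --zone=ZONE --add-port=54321/tcp --permanent   # on each hypervisor: engine to VDSM
$ firewall-cmd --zone=ZONE --add-port=443/tcp --permanent     # on the engine: hosts to engine API/web
$ firewall-cmd --reload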

[ovirt-users] Where to configure iscsi initiator name?

2019-08-21 Thread Dan Poltawski
When I added the first node, a 'random' initiator name was generated of the form: # cat /etc/iscsi/initiatorname.iscsi InitiatorName=iqn.1994-05.com.redhat:[RANDOM] When I attempted to add another node, a different initiator name was generated and it can't access the storage. Is there a way to con
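(A sketch of aligning a new node, assuming the target whitelists initiators by IQN; either add each node's generated IQN to the target's ACL, or set the name explicitly as below, where KNOWN-VALUE is whatever the target already allows:)
$ cat /etc/iscsi/initiatorname.iscsi
$ echo "InitiatorName=iqn.1994-05.com.redhat:KNOWN-VALUE" > /etc/iscsi/initiatorname.iscsi
$ systemctl restart iscsid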

[ovirt-users] iSCSI Multipath/multiple gateways

2019-09-03 Thread Dan Poltawski
My iSCSI target (ceph-based) has multiple gateways, and I’d like to ensure my hosted storage is aware of these such that each gateway can be rebooted for maintenance without impacting service. What is the appropriate way to configure this so that each host knows of the multipath configuration? I do no
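(oVirt normally handles the host side once the extra targets are known, but the resulting state can be checked per host roughly like this; gateway addresses are placeholders:)
$ iscsiadm -m discovery -t sendtargets -p GATEWAY1_IP
$ iscsiadm -m discovery -t sendtargets -p GATEWAY2_IP
$ iscsiadm -m node -L all   # log in to all discovered portals
$ multipath -ll             # each LUN should now show one path per gateway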

[ovirt-users] Re: iSCSI Multipath/multiple gateways

2019-09-04 Thread Dan Poltawski
On Tue, 2019-09-03 at 17:27 +0200, Matthias Leopold wrote: > - multipath configuration on hypervisor hosts is then automatically > set up without further intervention. Be sure to add the multipath.conf > snippets from > https://docs.ceph.com/docs/master/rbd/iscsi-initiator-linux/ to your > hyperv
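(After copying the device stanza from that Ceph page into /etc/multipath.conf on each hypervisor, a quick way to confirm it took effect; the LIO-ORG vendor string is how ceph-iscsi gateways usually present themselves:)
$ systemctl reload multipathd
$ multipath -ll   # LUNs backed by LIO-ORG should now show the policy from the snippet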

[ovirt-users] Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-09-12 Thread Dan Poltawski
Yesterday we had a catastrophic hardware failure with one of our nodes using ceph and the experimental cinderlib integration. Unfortunately the ovirt cluster did not recover from the situation well, and it took some manual intervention to resolve. I thought I'd share what happened and how we resolved it in case th

[ovirt-users] Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Dan Poltawski
On ovirt 4.3.5 we are seeing various problems related to the rbd device staying mapped after a guest has been live migrated. This causes problems migrating the guest back, as well as rebooting the guest when it starts back up on the original host. The error returned is ‘rbd: unmap failed: (16)
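(A sketch of manual cleanup on the host where the unmap fails, assuming /dev/rbd0 is the stale device; 'unmap failed: (16)' generally means something still holds the device open:)
$ rbd showmapped                 # find the stale mapping
$ ls /sys/block/rbd0/holders/    # any device-mapper holders still referencing it?
$ lsof /dev/rbd0                 # any process with the device open?
$ rbd unmap /dev/rbd0            # retry once the holders are gone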

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Dan Poltawski
mario multipathd: uevent trigger error Dan > > On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski < > dan.poltaw...@tnp.net.uk> wrote: > > On ovirt 4.3.5 we are seeing various problems related to the rbd > > device staying mapped after a guest has been live migrated. This >