I'm testing out the managed storage to connect to ceph and I have a few
questions:
* Would I be correct in assuming that the hosted engine VM needs connectivity
to the storage and not just the underlying hosts themselves? It seems like the
cinderlib client runs from the engine?
* Does the ceph
> Can you provide logs? Mainly engine.log and cinderlib.log
> (/var/log/ovirt-engine/cinderlib/cinderlib.log)
If I create two volumes, the first one succeeds but the second one hangs. If I
look in the process list after creating the second volume, which doesn't
succeed, I see the python
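(Not from the thread: a minimal check, run from the engine machine, to confirm
it can reach the Ceph cluster at all, since cinderlib runs on the engine. The
conf/keyring paths and pool name below are assumptions.)
$ ceph -c /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.admin.keyring -s
$ rbd -c /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.admin.keyring ls <pool>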
Hi,
On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> Any chance you can set up gdb[1] so we can find out where it's stuck
> exactly?
Yes, absolutely - but I will need some assistance in getting GDB
configured in the engine as I am not very familiar with it - or how to
enable the correct rep
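(Not from the thread - roughly how gdb can be attached to the hung python
process on the engine machine, assuming the python debuginfo packages that
provide the py-bt/py-list gdb helpers are installed; the pid is whichever
cinderlib/python process is stuck:)
$ ps aux | grep cinderlib        # find the hung python process
$ debuginfo-install -y python    # assumed to provide the python gdb extensions
$ gdb -p <pid>
(gdb) py-bt                      # python-level stack of the stuck thread
(gdb) py-list                    # source lines around the current python frame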
[gdb output; the object reprs are garbled in the archive, but the process is
sitting in eventlet's poll hub (running=True, debug_blocking=False,
listeners={'read': {20: ...}, 'write': {}}, timers_canceled=0, closed=[],
stopping=False, timers=[...]) and py-list shows the blocked frame:]
        return listener
 60
 61    def do_poll(self, seconds):
>62        return self.poll.poll(seconds)
Thanks for your help,
Dan
> On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski <
> dan.poltaw...@tnp.net.uk> wrote:
> > On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik
Hi,
On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> Any chance you can set up gdb[1] so we can find out where it's stuck
> exactly?
Yes, absolutely - but I will need some assistance in getting GDB configured in
the engine as I am not very familiar with it - or how to enable the correct
r
On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote:
> I've now managed to successfully create/mount/delete volumes!
However, I'm seeing live migrations stay stuck. Is this supported?
(gdb) py-list
345        client.conf_set('rados_osd_op_timeo
On Mon, 2019-07-08 at 18:53 +0300, Benny Zlotnik wrote:
> Can you try to create multiple ceph volumes manually via rbd from the
> engine machine, so we can simulate what cinderlib does without using
> it, this can be done
> $ rbd -c ceph.conf create <pool>/vol1 --size 100M
> $ rbd -c ceph.conf create <pool>/vo
Also, is it possible to host the hosted engine on this storage?
Thanks again for your help,
Dan
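(Not from the thread - a slightly expanded version of that suggestion, to see
whether a second or third manual creation also hangs outside cinderlib; the
conf path and pool name are assumptions, and the test images should be removed
afterwards:)
$ for i in 1 2 3; do rbd -c /etc/ceph/ceph.conf create <pool>/test-vol$i --size 100M && echo "test-vol$i ok"; done
$ rbd -c /etc/ceph/ceph.conf ls <pool>              # confirm which volumes were created
$ rbd -c /etc/ceph/ceph.conf rm <pool>/test-vol1    # clean up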
>
> On Tue, Jul 9, 2019 at 11:01 AM Dan Poltawski <
> dan.poltaw...@tnp.net.uk> wrote:
> > On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote:
> > > I've now m
Hello,
I've read various posts[1] on this list, but I must confess that I am
still not entirely clear on the process for moving a hosted engine to a
new storage domain (in my case I want to move from an existing NFS
server to a new iSCSI target). Some of what I've read makes me slightly
concerned i
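(Not from the thread - the posts referenced generally describe backing up the
engine and then doing a fresh hosted-engine deployment onto the new storage
domain with a restore from that backup; a rough sketch only, to be checked
against the documentation for your oVirt version:)
$ engine-backup --mode=backup --scope=all --file=engine.bak --log=engine-backup.log
# put the old hosted engine environment into global maintenance, then from a
# clean deployment host restore onto the new (iSCSI) storage domain:
$ hosted-engine --deploy --restore-from-file=engine.bak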
For some security requirements, I’ve been asked if it’s possible to segregate
the hosted engine from the physical nodes, with specific firewalling for access
to do node/storage operations (I’m using managed block storage).
Is this an approach others use, or is it better practice to just ensure
When I added the first node a 'random' initiator name was generated of the form:
# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:[RANDOM]
Having attempted to add another node, this node has another initiator name
generated and can't access the storage. Is there a way to con
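(Not from the thread - on the node side the generated name can be inspected and
replaced with one of your choosing before the host is (re)added; the IQN below
is just an example, and whatever name the node presents has to be allowed in
the target's ACL:)
$ cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:node2-example
# after editing the file, restart iscsid so the new name takes effect
$ systemctl restart iscsid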
My iSCSI target (ceph-based) has multiple gateways, I’d like to ensure my
hosted storage is aware of these such that each gateway can be rebooted for
maintenance without impacting service. What is the appropriate way to configure
this so that each host knows of the multipath configuration? I do no
On Tue, 2019-09-03 at 17:27 +0200, Matthias Leopold wrote:
> - multipath configuration on hypervisor hosts is then automatically set
> up without further intervention. Be sure to add the multipath.conf
> snippets from
> https://docs.ceph.com/docs/master/rbd/iscsi-initiator-linux/ to your
> hyperv
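(Not from the thread - on oVirt hosts /etc/multipath.conf is managed by VDSM,
so one way to carry the Ceph-recommended settings is a drop-in file; the device
section below is only indicative, the authoritative values are on the linked
Ceph page:)
$ cat /etc/multipath/conf.d/ceph-iscsi.conf
devices {
    device {
        vendor                "LIO-ORG"
        hardware_handler      "1 alua"
        path_grouping_policy  "failover"
        path_checker          tur
        prio                  alua
        prio_args             exclusive_pref_bit
        failback              60
        no_path_retry         queue
    }
}
$ multipathd reconfigure    # pick up the new configuration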
Yesterday we had a catastrophic hardware failure with one of our nodes
using ceph and the experimental cinderlib integration.
Unfortunately the ovirt cluster didn't recover from the situation well and it
took some manual intervention to resolve. I thought I'd share what happened
and how we resolved it in case th
On ovirt 4.3.5 we are seeing various problems related to the rbd device staying
mapped after a guest has been live migrated. This causes problems migrating the
guest back, as well as rebooting the guest when it starts back up on the
original host. The error returned is ‘rbd: unmap failed: (16)
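(Not from the thread - a way to see what is still mapped on the source host
after a migration and to clear a stale mapping once nothing is using it; pool,
image and device names are placeholders:)
$ rbd showmapped               # lists device, pool and image for each mapping
$ lsof /dev/rbd0               # check nothing on this host still holds the device
$ rbd unmap /dev/rbd0          # or: rbd unmap <pool>/<image>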
mario multipathd: uevent trigger error
Dan
>
> On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski <
> dan.poltaw...@tnp.net.uk> wrote:
> > On ovirt 4.3.5 we are seeing various problems related to the rbd
> > device staying mapped after a guest has been live migrated. This
>