On Tue, May 7, 2019 at 12:29 AM Brian Kircher
wrote:
> Thanks Simone,
>
>
>
> I’m using the ansible code that is packaged with the rpm packages as this
> is a fully offline development deployment without access to our ansible
> server or the internet in general. This also has the added benefit of
You may use gluster with geo-replication, but that doesn't replicate updates
from the remote side back to the local side, so the second location is your DR
site and nothing else should be running there.
Maybe Simone/Sahina can clarify the ovirt part of the setup.
Best Regards,
Strahil Nikolov
On May 6, 2019 18:15
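For reference, a minimal geo-replication sketch, assuming a volume named
storage_ssd on both sides and a reachable slave node named drsite1 (both
names are hypothetical):
# gluster volume geo-replication storage_ssd drsite1::storage_ssd create push-pem
# gluster volume geo-replication storage_ssd drsite1::storage_ssd start
# gluster volume geo-replication storage_ssd drsite1::storage_ssd status
Replication is one-way (master to slave), which is why the remote side has to
stay passive until a failover.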
I also get these errors in the websocket proxy; it looks like either I messed
up a cert on the main oVirt machine, or there is some additional configuration
needed on the hosts.
May 06 22:33:07 ovirt.domain.com ovirt-websocket[31306]: 2019-05-06
22:33:07,786-0400 ovirt-websocket-proxy: INFO log_message
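As a first check, one could compare the certificate the proxy serves with the
one on disk; the path and port below are the usual engine defaults, but may
differ per setup:
# openssl x509 -in /etc/pki/ovirt-engine/certs/websocket-proxy.cer -noout -subject -dates
# openssl s_client -connect ovirt.domain.com:6100 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
If the two disagree, or the dates are wrong, that would point at the cert
rather than host configuration.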
Thanks Simone,
I’m using the ansible code that is packaged with the rpm packages as this is a
fully offline development deployment without access to our ansible server or
the internet in general. This also has the added benefit of using the exact
ansible roles/plays that were packaged with the
This is what I see in the logs when I try to add RDMA:
[2019-05-06 16:54:50.305297] I [MSGID: 106521]
[glusterd-op-sm.c:2953:glusterd_op_set_volume] 0-management: changing
transport-type for volume storage_ssd to tcp,rdma
[2019-05-06 16:54:50.309122] W [MSGID: 101095]
[xlator.c:180:xlator_volop
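For context, the usual way to switch transports is (a sketch, assuming the
volume can be stopped briefly and glusterfs-rdma is installed on all brick
nodes):
# gluster volume stop storage_ssd
# gluster volume set storage_ssd config.transport tcp,rdma
# gluster volume start storage_ssd
# gluster volume info storage_ssd | grep -i transport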
Hi everybody,
I have 2 geographically separate locations. I want to deploy an oVirt
installation. Every location has 3 hosts with a storage system. What is the
best architecture for this scenario? My problem is that I don't know how to
recover the manager if its storage is down. With a backup/reco
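For the manager itself, the usual building block is engine-backup; a minimal
sketch (file names are just examples):
# engine-backup --mode=backup --file=engine.bck --log=backup.log
On a rebuilt engine at the surviving site:
# engine-backup --mode=restore --file=engine.bck --log=restore.log --provision-db --restore-permissions
For hosted-engine setups the restore is driven by
hosted-engine --deploy --restore-from-file=<backup> instead.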
I have the engine and hosts on 4.3.3 with plain CentOS, updated from
4.3.2 on 2/5.
Since the day after, 3/5, I get the event
Check for available updates on host ov200 was completed successfully with
message 'found updates for packages ovirt-host'.
and also the icon at the side of the host in the web adm
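To see what that check found, one can ask the host directly (the package name
pattern is a guess based on the event text):
# yum check-update 'ovirt-host*'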
Yes! I can confirm that a second deployment succeeded on the second run. Great
:-)
I was also able to delete the Local Engine with virsh. I'd like to point out
for reference that this is only possible if a sasldb user is created like this:
# saslpasswd2 -a libvirt username
Password:
Aft
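A quick way to verify the sasldb user libvirt will see (the passwd.db path is
the usual default from /etc/sasl2/libvirt.conf, but may differ):
# sasldblistusers2 -f /etc/libvirt/passwd.db
# virsh -c qemu:///system list --all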
Hello,
Context :
(probably) After a failed snapshot deletion during a backup (it had
been some time since I last had this problem), one of my hosts has gone
Nonresponsive, and with it, all the VMs that were stored or running on it.
Instead of removing the problem snapshot (usually that does the tric
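If the snapshot is left locked in the engine database, the engine ships the
unlock_entity.sh helper; a sketch (query first, and back up the DB before
changing anything):
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot <snapshot-id>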
> On 1 May 2019, at 15:27, Matthias Leopold
> wrote:
>
> Hi,
>
> do I get it right that hypervisor hosts with Cascade Lake CPU could be used
> in oVirt, but would be recognized as "Skylake"?
Correct, Skylake-server I guess.
> Cascade Lake specific features could only be used in VMs after C
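One way to see which CPU models the host's libvirt actually offers (a quick
sketch; output format varies by libvirt version):
# virsh -r domcapabilities | grep -i -e skylake -e cascade
# vdsm-client Host getCapabilities | grep -i cpu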
> On 2 May 2019, at 09:21, Gianluca Cecchi wrote:
>
> Hello,
> having some environments born before 4.3.0 and Q35 introduction.
> Now on 4.3.3, but of course existing VMs seem not impacted and retain their
> machine type, eg a dumpxml gives:
>
> hvm
the default is still i440fx, so no cha
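To check the machine type an existing VM is actually using (the VM name here
is hypothetical):
# virsh -r dumpxml myvm | grep 'machine='
Existing VMs keep the machine type they were created with; the cluster-level
"Emulated Machine" setting (or a per-VM override) controls what new VMs get.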
Hi, all
I have a 5-host cluster with over 300 VMs in production on Red Hat
Virtualization version 4.2.8 and I am willing
to switch to a pure CentOS + oVirt 4.2 setup.
Management is based on a RHVM hosted-engine in a dedicated iSCSI DataStore and
our hosts are a mix of RHVH nodes (3) and
C
On Fri, May 3, 2019 at 8:14 PM Todd Barton
wrote:
> Simone,
>
> It appears 192.168.122.13 stops routing correctly during the final stage
> of deployment. After a failure of the final stage, I can restart the
> hosted-engine VM from the cockpit and I can ping 192.168.122.13 from the
> host again. If
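When that happens, it can help to inspect the libvirt NAT network the local
bootstrap VM lives on (192.168.122.0/24 is libvirt's default network):
# virsh -r net-dumpxml default
# ip route get 192.168.122.13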
On Sun, May 5, 2019 at 9:13 PM Andreas Elvers <
andreas.elvers+ovirtfo...@solutions.work> wrote:
> Hello today I tried to migrate the hosted engine from our Default
> Datacenter (NFS) to our Ceph Datacenter. The deployment worked with the
> automatic "hosted-engine --deploy --restore-from-file=ba