On February 1, 2020 2:33:57 AM GMT+02:00, matteo fedeli wrote:
Hi all, I have many problems after upgrading my oVirt version. I come from
When I upgraded my three hosts (hyperconverged environment) to 4.3.7, I went through a
few days of instability: HA agent down, gluster problems...
When I rebooted for the umpteenth time (after reinitializing the lockspace, heal,
On February 1, 2020 1:34:30 AM GMT+02:00, Jayme wrote:
I have run into this exact issue before and resolved it by simply syncing
over the missing files and running a heal on the volume (it can take a little
time to correct).
On Fri, Jan 31, 2020 at 7:05 PM Christian Reiss wrote:
> Hey folks,
> in our production setup with 3 nodes (HCI) we took one host down
> (maintenance, stop gluster, poweroff via ssh/oVirt engine). Once it was
> up again, gluster had 2k healing entries that went down in a matter of 10
> minutes to 2.
> Those two give me a headache:
> [root@node03:~] # gluster
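
For anyone hitting the same stuck entries, a minimal sketch of checking and
re-triggering the heal (the volume name "engine" is my assumption; substitute
your own):

  # list entries still pending heal on the volume
  gluster volume heal engine info
  # trigger a full heal across all bricks
  gluster volume heal engine full
  # afterwards, confirm nothing is left in split-brain
  gluster volume heal engine info split-brain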
Hello, sorry to bump the thread. I was able to recover the backup, but it
ended up in an unusable state.
The process was: reinstall a host from the DVD, create a new NFS share for the
new hosted engine, and run the ovirt-hosted-engine-setup procedure. After the new
engine was up, I connected
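
For reference, a sketch of the backup/restore path described above (the file
names here are placeholders of my choosing):

  # take an engine backup (on the old engine, if it is still reachable)
  engine-backup --mode=backup --file=engine.bak --log=engine-backup.log
  # on the freshly reinstalled host, redeploy the hosted engine from that backup
  hosted-engine --deploy --restore-from-file=engine.bak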
Once again bumping my own thread. I came across this link:
So it says that the VM metadata is stored in OVF_STORE, which is awesome.
But in my case right now the VMs are running, so I need to properly
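
(Side note for anyone else digging here: an OVF_STORE volume is a tar archive,
so once located on a host it can be listed directly. A sketch with illustrative
UUID placeholders, which differ on every deployment:)

  # list the per-VM .ovf files packed inside an OVF_STORE volume (path is illustrative)
  tar -tvf /rhev/data-center/mnt/SERVER:_export/SD_UUID/images/IMG_UUID/VOL_UUID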
Happy to help.
A host shouldn't need to be in maintenance mode to add logical networks. I don't
do mine that way, but I am using a second NIC for those and the first NIC is
dedicated to ovirtmgmt.
Let me know if you need anything. I am a network engineer, but have also done
> On 30 Jan 2020, at 00:07, Joseph Goldman wrote:
> There is an important distinction though: if your reboot process involves
> shutting down the VMs cleanly and then rebooting the server, they will not
> auto-start; however, if their operation was interrupted and they are listed
> On 27 Jan 2020, at 19:53, Dirk Streubel wrote:
> I use version 4.4 for testing and I wanted to run an update.
> This is the result:
> LANG=C engine-setup
> [ INFO ] Checking for product updates...
> [ ERROR ] Yum
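
If the failure is only in the product-update check, one workaround sketch
(not the proper fix, which the patch mentioned later in the thread addresses)
is to run setup in offline mode so that step is skipped:

  # skip package/update checks during setup (workaround sketch only)
  LANG=C engine-setup --offline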
Given what you have described, it seems to be either an HAProxy or server config
issue. If the server can reach the internet, that rules out default gateway
issues; if you can reach the server from the LAN, then that rules out any
I would probably do a packet capture at the pfSense
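
Something along these lines on the pfSense shell would be a starting point
(the interface name and port are my assumptions; match them to your HAProxy
frontend):

  # capture traffic hitting the HAProxy frontend port into a file for analysis
  tcpdump -ni em0 port 443 -w /tmp/haproxy-debug.pcap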
Thanks so much for your reply, Robert!
I like your setup a lot; that's where I'm heading too, actually. But right now I
have only one node, and I'm trying to learn a very basic setup with just the one
for the moment (because I have it running after years of trying!). It will take
a few weeks and another
Thanks again, Joseph.
I do have a specific noob question. I'm learning so much with this test
deployment :) Amazing.
I can't get to a test VM / webserver managed by the oVirt Engine from the WAN,
as I can with the Engine and other machines. I suspect that I am missing some
pretty basic setup step
The Virtualization community that is gathering at FOSDEM would like to
share its plans to remember the life and accomplishments of our valued
friend and colleague, Lars Kurth, who has recently passed away.
The remembrance will be at 09:45 Sunday Feb. 2 in the Virtualization
Devroom (H.1309). We
We're testing version 4.3.8 and planning to upgrade to this version
in production, as we're currently still on 4.1.9.
In 4.1.9, users could grant permissions on their created VMs to other
users from within the VM Portal; however, I can't find this option on
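
As a possible workaround, permissions can still be granted through the REST
API; a sketch (the engine FQDN, credentials, and VM/user IDs are placeholders):

  # grant the UserVmManager role on one VM to another user via the REST API
  curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
    -d '<permission><role><name>UserVmManager</name></role><user id="USER_ID"/></permission>' \
    https://engine.example.com/ovirt-engine/api/vms/VM_ID/permissions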
We are actually seeing collisions, which is why I reached out in the first place.
What is strange is that it had not happened until a few weeks ago, and since then I have seen it
For now I am simply going to create a new MAC pool for each of the clusters and
switch to it, hoping it's not
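
For reference, a sketch of creating such a pool over the REST API (the name,
range, and engine FQDN are placeholders):

  # create a new MAC pool; it can then be assigned to a cluster in the UI or API
  curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
    -d '<mac_pool><name>cluster1-pool</name><ranges><range><from>00:1A:4A:20:00:00</from><to>00:1A:4A:20:00:FF</to></range></ranges></mac_pool>' \
    https://engine.example.com/ovirt-engine/api/macpools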
For those of you who are unaware of this, Lars Kurth from the Xen project
has passed away. Lars was one of the people who truly practiced open
source and worked with the oVirt project on running FOSDEM's Virtualization
and IaaS room in the past years.
In Lars' memory there will be a
Is the problem with qemu/dynamic_ownership back?
We have one DC with two clusters: one with vdsm-4.30.38 / libvirt-4.5.0-23,
the other with vdsm-4.30.40 / libvirt-4.5.0-23, LibgfApi enabled, Gluster server
6.5.1, Gluster client 6.7, opVersion 6.
It went well for a long time but now, with getting
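
A quick diagnostic sketch for comparing hosts from the two clusters (assuming
shell access on the hosts):

  # see what libvirt is configured to do with image file ownership on each host
  grep -E '^\s*dynamic_ownership' /etc/libvirt/qemu.conf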
That was it!
Thanks for your help
Systems and Network Engineer
On Wed, Jan 29, 2020 at 7:42 PM Martin Necas wrote:
> this issue was already submitted; I created the patch and have already done
> the backport for it.
> You can put the module