I still don't completely understand the oVirt Node update process and
the RPM packages involved.
We have 4 nodes, all running oVirt Node 4.1.3. Three of them show as
(I don't want to run
Yesterday someone powered off our storage, and all 3 of my hosts lost
their disks. After 2 days of recovery I managed to bring back
everything except the engine VM, which is online but not visible to
I did a new deployment of the VM, restored the backup and started engine setup.
Thank you very much, Mr Leviim!
This made things clear.
Eduardo Mayoral Jimeno (emayo...@arsys.es)
Systems administrator. Platforms department. Arsys internet.
+34 941 620 145 ext. 5153
On 28/08/17 11:14, Shani Leviim wrote:
> Hi Eduardo,
> Welcome aboard!
> First, you may find
On Thu, Aug 31, 2017 at 11:15 AM, Jakub Niedermertl
> Hello Gianluca,
> ultimate source of truth for the engine is line  and possibly
> subsequent update clauses. It contains Broadwell-noTSX for 4.1 as well as
> planned 4.2.
OK, I right-clicked on the storage domain and did Destroy. It got
imported, and the Engine VM too.
Now it seems OK.
Thank you very much.
On Thu, Aug 31, 2017 at 5:11 PM, Misak Khachatryan wrote:
> it's grayed out on web interface, is
Yes, right. What you can do is edit the hosted-engine.conf file; there
is a parameter as shown below, and you replace h2 and h3 with your second
and third storage servers. Then you will need to restart the ovirt-ha-agent and
ovirt-ha-broker services on all the nodes.
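A minimal sketch of what that edit and restart can look like, assuming the default config path and with h2/h3 standing in for your own storage server hostnames:

  # /etc/ovirt-hosted-engine/hosted-engine.conf (default path assumed; adjust to your setup)
  #   mnt_options=backup-volfile-servers=h2.example.com:h3.example.com

  # then, on every node, restart the HA services:
  systemctl restart ovirt-ha-broker
  systemctl restart ovirt-ha-agent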
I don't quite understand how you got that 4.1.6 RC, it's only available in
the pre-release repo, can you paste the yum repos that are enabled on your
On Thu, Aug 31, 2017 at 4:19 PM, Matthias Leopold <
> thanks a
Also, to add to this, I figured all nodes need to be "equal" in terms of
SELinux now, so I went to node 1 and set SELinux to permissive, rebooted,
and then vdsmd wouldn't start, which showed the host as non-responsive in
the engine UI. Upon inspection of the log it was because of the missing sebool
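For anyone hitting the same thing, a rough way to check the SELinux state and why vdsmd refuses to start (standard commands, nothing oVirt-specific; the exact boolean name depends on what your log complains about):

  # current SELinux mode on the node
  getenforce
  # virt-related SELinux booleans and their current state
  getsebool -a | grep virt
  # why vdsmd did not start
  systemctl status vdsmd
  journalctl -u vdsmd --since today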
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.1.6 for testing, as of August 31st, 2017.
Starting from 4.1.5, oVirt supports libgfapi. Using libgfapi provides a
real performance boost for oVirt when using GlusterFS.
Due to a known issue
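For anyone wanting to try libgfapi, my understanding is that it is switched on from the engine side with engine-config; a sketch, with the config key name and cluster version taken as assumptions to double-check against the feature page:

  # on the engine machine (key name assumed; verify before running)
  engine-config -s LibgfApiSupported=true --cver=4.1
  systemctl restart ovirt-engine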
I recently installed an oVirt cluster on 3 nodes and saw that I could only
migrate one way.
Reviewing the logs, I found this:
2017-08-31 09:04:30,685-0400 ERROR (migsrc/1eca84bd) [virt.vm]
(vmId='1eca84bd-2796-469d-a071-6ba2b21d82f4') unsupported configuration:
Unable to find security driver
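That message is libvirt complaining about its security driver, so it may be worth checking whether the hosts agree on their SELinux setup. A quick, generic check (run on each node and compare the output):

  # SELinux mode on this host
  getenforce
  # which security model libvirt actually loaded (look at the <secmodel> section)
  virsh capabilities | grep -A2 secmodel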
Hi Kasturi -
Thanks for the feedback.
> If the cockpit+gdeploy plugin had been used, then it would have
automatically detected the glusterfs replica 3 volume created during Hosted
Engine deployment and this question would not have been asked
Actually, doing hosted-engine --deploy also auto
oVirt Node NG is shipped with a placeholder RPM preinstalled.
The image-update RPMs obsolete the placeholder RPM, so once a new
image-update RPM is published, yum update will pull those packages. So you
have 1 system that was a fresh install and the others were upgrades.
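If you want to see what a given node currently carries, something like the following should show it (package name pattern assumed from the naming above; nodectl ships with oVirt Node):

  # which node image RPMs are installed on this host
  rpm -qa | grep ovirt-node-ng-image
  # layered image status as oVirt Node itself reports it
  nodectl info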
Next, the post
Thanks a lot.
So I understand everything is fine with my nodes and I'll wait until the
update GUI shows the right version to update to (4.1.5 at the moment).
On 2017-08-31 at 14:56, Yuval Turgeman wrote:
> oVirt Node NG is shipped with a placeholder RPM preinstalled.
you can remove the hosted engine storage domain from the engine as
well. It should also be re-imported.
We had cases where destroying the domain ended up with a locked SD,
but removing the SD and re-importing is the proper way here.
PS: Re-adding the mailing list, we should
It's grayed out in the web interface, is there any other way? Trying to
detach gives an error:
VDSM command DetachStorageDomainVDS failed: Storage domain does not
Failed to detach Storage Domain hosted_storage from Data Center
Yes that would do it, thanks for the update :)
On Thu, Aug 31, 2017 at 5:21 PM, Matthias Leopold <
> all of the nodes that already made updates in the past have
I went through the logs in /var/log/ovirt-engine/host-deploy/ and my own
notes and discovered/remembered that this being presented with RC
My engine was configured with the *enp3s0* interface => ovirtmgmt, which shows as
out of sync (earlier I was using a LAN connection).
Now I have moved to wifi, where I have the *wlp2s0* interface. When I click
Host -> Setup Networks, it doesn't offer the wlp2s0 interface to link with ovirt
So I've tested this today and I failed a node. Specifically, I set up a
glusterfs domain and selected "host to use: node1". Set it up and then
failed that VM
However, this did not work and the datacenter went down. My engine stayed
up, but it seems configuring a domain to pin to a host to use
Typo..."Set it up and then failed that **HOST**"
And upon that host going down, the storage domain went down. I only have
the hosted storage domain and this new one - is this why the DC went down and
no SPM could be elected?
I don't recall it working this way in early 4.0 or 3.6.
On Thu, Aug 31,
By "someone", I assume you mean some other process running on the host, or
possibly the engine?
I have several VMs, all thin provisioned, on my small storage (self-hosted
gluster / hyperconverged cluster). I'm now noticing that some of my VMs
(especially my only Windows VM) are using even MORE disk space than
they were allocated.
Example: Windows VM: virtual size created at
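One way to compare virtual size against what a disk really consumes is qemu-img on the image file; the path below only illustrates the usual gluster storage-domain layout, not your actual UUIDs:

  # run on a host that mounts the storage domain; all path components are placeholders
  qemu-img info /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/images/<img-uuid>/<vol-uuid>
  # "virtual size" is what the guest sees, "disk size" is what is actually used on storage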
Sorry to hijack the thread, but I was about to start essentially the same
I have a 3 node cluster, all three are hosts and gluster nodes (replica 2 +
arbiter). I DO have the mnt_options=backup-volfile-servers= set:
During Hosted Engine setup, the question about the glusterfs volume is being
asked because you have set up the volumes yourself. If the cockpit+gdeploy
plugin had been used, then it would have automatically detected the
glusterfs replica 3 volume created during Hosted Engine deployment and this