Beau, have you solved your performance problem already?
Would you share what MTU you have set on the underlying network and in
your VMs?
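(For anyone following along, a quick way to inspect the current MTU; the
interface name is illustrative:)

    # show the configured MTU on an interface (replace eth0 as needed)
    ip link show dev eth0
    # or just the value
    cat /sys/class/net/eth0/mtu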
On Dec 13, 2017 17:41, "Dominik Holler" wrote:
On Tue, 12 Dec 2017 23:12:12 +0200
Yaniv Kaul wrote:
Were you running Gluster under your shared storage? If so, you probably need to
set up NFS-Ganesha yourself.
If not, check your ha-agent logs, make sure it's mounting the storage
properly, and check for errors. Good luck!
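(A minimal way to do both checks, assuming the default hosted-engine log
location on EL7:)

    # follow the HA agent log for mount activity and errors
    tail -f /var/log/ovirt-hosted-engine-ha/agent.log
    # confirm the hosted-engine storage is actually mounted
    mount | grep -i hosted
    # overall HA state as the agent sees it
    hosted-engine --vm-status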
> From: Jayme
> Subject: Re: [ovirt-users] Some major
This is becoming critical for me; does anyone have any ideas or
recommendations on what I can do to recover access to the hosted VM? As of
right now I have three hosts that are fully updated: they have the 4.2 repo
and a full yum update was performed on them, and there are no new updates to
I suppose it is because they're started with "run once".
The same happens when you deploy with Ansible using cloud-init. I
think it is the same with Vagrant.
On Jan 11, 2018 7:24 PM, "Nathanaël Blanchet" wrote:
> Hi all,
> I'm using vagrant ovirt4 plugin to
I'm using the vagrant ovirt4 plugin to provision some VMs, and I noticed
that a wheel (spinner) is still present next to the Up status of those
VMs. What does that mean?
On Thu, Jan 11, 2018 at 1:03 PM, Николаев Алексей <
> Denis, thx for your answer.
> [root@node-gluster203 ~]# gluster volume heal engine info
> Brick node-gluster205:/opt/gluster/engine
> Status: Connected
> Number of entries: 0
On Thu, Jan 11, 2018 at 5:32 PM, Derek Atkins wrote:
> On Thu, January 11, 2018 9:53 am, Yaniv Kaul wrote:
> > No one likes downtime but I suspect this is one of those serious
> > vulnerabilities that you really really must be protected against.
> > That being said,
If you have a recent Supermicro you don't need to update the BIOS;
the Intel microcode you can find here:
On Thu, January 11, 2018 9:53 am, Yaniv Kaul wrote:
> No one likes downtime but I suspect this is one of those serious
> vulnerabilities that you really really must be protected against.
> That being said, before planning downtime, check your HW vendor for
> firmware or Intel for microcode
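(As an aside, before planning that downtime you can see which microcode
revision a host is actually running; these are generic Linux commands, not
something given in the thread:)

    # microcode revision currently loaded on CPU 0
    grep -m1 microcode /proc/cpuinfo
    # whether an early microcode update was applied at boot
    dmesg | grep -i microcode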
On Thu, Jan 11, 2018 at 4:33 PM, Derek Atkins wrote:
> Yaniv Kaul writes:
> > On Mon, Jan 8, 2018 at 7:32 PM, Derek Atkins wrote:
> > Michal Skrivanek writes:
> > > > If there are
Yaniv Kaul writes:
> On Mon, Jan 8, 2018 at 7:32 PM, Derek Atkins wrote:
> Michal Skrivanek writes:
> > > If there are patches necessary, will there also be updates
> > ovirt
Could you please share your experience with snapshotting VMs in
oVirt 4.1 with GlusterFS as storage?
Thanks in advance.
Quick questions about Nodes (in order not to hijack the other thread).
As they don't have much notion of persistence, how can I:
- Take a simple configuration backup so that, if the operating
system disk fails, I can reinstall the Node, restore the config, and
everything comes back up fine?
Hey Colin -
You're correct -- they will persist. However, there's only a plugin for
yum, and not bare RPM. If the packages were installed with "rpm -Uvh ...",
they won't be 'sticky'. I believe this is also part of the documentation.
If you don't remember whether you used yum or not, you can
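(One way to check, using standard yum tooling; this is a guess at where the
truncated sentence was heading:)

    # list the yum transactions that touched a given package
    yum history list <package-name>
    # or grep the yum log directly
    grep <package-name> /var/log/yum.log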
Next-gen node (since 4.0) doesn't have any strong notion of "persistence",
unlike previous versions of oVirt Node.
My guess would be that /exports was not moved across to the new system.
The update process for Node is essentially:
* create a new LV
* unpack the squashfs in the new RPM and put
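(Roughly, those first steps look like the following on disk; the volume and
path names here are illustrative, not taken from the thread:)

    # create a fresh LV for the incoming image (names illustrative)
    lvcreate -L 6G -n ovirt-node-ng-new onn
    mkfs.ext4 /dev/onn/ovirt-node-ng-new
    mount /dev/onn/ovirt-node-ng-new /mnt/new-image
    # unpack the squashfs shipped in the image-update RPM onto it
    unsquashfs -f -d /mnt/new-image /path/to/ovirt-node-ng-image.squashfs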
I haven't tried to build on EL for a long time, but the easiest way to
modify may simply be to unpack the squashfs, chroot inside of it, and
repack it. Have you tried this?
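(In other words, something along these lines; a sketch with illustrative
paths:)

    # unpack the image, change it in a chroot, repack it
    unsquashfs -d image-root ovirt-node-ng-image.squashfs
    chroot image-root /bin/bash   # make the modifications here, then exit
    mksquashfs image-root ovirt-node-ng-image-modified.squashfs -comp xz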
On Wed, Dec 27, 2017 at 11:33 AM, Giuseppe Ragusa <
> Hi all,
> I'm trying to modify
Note that, in the case of Node, the ovirt-release RPM should not be
installed, mostly because it automatically enables a number of per-package
updates (such as vdsm) instead of installing a single image.
Trim the repo files so they look like what is shipped in Node
(IgnorePkgs and OnlyPkgs).
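(For illustration, a trimmed repo stanza in that spirit. I'm using the stock
yum option names, includepkgs/exclude, since I'm not sure of the exact
directives Node ships:)

    # reduce a repo file to just the Node image package
    cat > /etc/yum.repos.d/ovirt-4.2-trimmed.repo <<'EOF'
    [ovirt-4.2]
    name=oVirt 4.2
    baseurl=http://resources.ovirt.org/pub/ovirt-4.2/rpm/el$releasever/
    enabled=1
    gpgcheck=1
    includepkgs=ovirt-node-ng-image-update*
    EOF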
The same here: upgraded 4.1.8 to 4.2, oVirt connected to Active Directory, and the
user cannot see machines in the VM portal. Still playing with permissions.
Denis, thx for your answer.

[root@node-gluster203 ~]# gluster volume heal engine info
Brick node-gluster205:/opt/gluster/engine
Status: Connected
Number of entries: 0

Brick node-gluster203:/opt/gluster/engine
Status: Connected
Number of entries: 0

Brick node-gluster201:/opt/gluster/engine
Status:
On 10/01/18 22:11, Wesley Stewart wrote:
> I would greatly appreciate seeing a script! It would be an excellent chance
> for me to learn a bit about using ovirt from the command line as well!
I'm using something like this with ovirt-shell
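(The script itself was cut off in the digest; for flavor, non-interactive
ovirt-shell calls look roughly like this, with the VM name hypothetical:)

    # connect using the stored config and run a command
    ovirt-shell -c -E "list vms"
    ovirt-shell -c -E "action vm myvm shutdown"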
The oVirt Project is pleased to announce the availability of the oVirt
4.1.9 Release Candidate, as of January 11th, 2018.
This update is the ninth in a series of stabilization updates to the 4.1
series.
This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
On Thu, Jan 11, 2018 at 9:56 AM, Николаев Алексей <
> We have a self-hosted engine test infra with gluster storage replica 3
> arbiter 1 (oVirt 4.1).
Why don't you try 4.2? :-) There are a lot of good changes in that area.
Hi community!

We have a self-hosted engine test infra with gluster storage replica 3
arbiter 1 (oVirt 4.1). Shared storage for the engine VM has the DNS name
"gluster-facility:/engine". DNS "gluster-facility" resolves into 3 A records:
"node-gluster205 10.77.253.205"
"node-gluster203
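(A quick way to verify that round-robin resolution from a host; the hostname
is the one from the thread:)

    # should print all three A records
    dig +short gluster-facility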
You hit one known issue we already have fixes for (4.1 hosts with 4.2
You can try hotfixing it by upgrading the hosted-engine packages to 4.2 or
applying the patches manually and
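(If you go the package route, the upgrade itself is roughly this; the
package names are the standard hosted-engine ones, so verify against the
actual fix:)

    # on each 4.1 host, pull in the 4.2 hosted-engine bits
    yum update ovirt-hosted-engine-ha ovirt-hosted-engine-setup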
I performed the oVirt 4.2 upgrade on a 3-host cluster with NFS shared storage.
The shared storage is mounted from one of the hosts.
I upgraded the hosted engine first: downloading the 4.2 RPM, doing a yum
update, then running engine-setup, which seemed to complete successfully;
at the end it powered down the
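(For anyone retracing this, the engine-side steps were roughly the following;
the release RPM URL is the usual one, verify before use:)

    # on the engine VM
    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
    yum update "ovirt-*-setup*"
    engine-setup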