Just to throw my 2 cents in, one of my clusters is very similar to yours,
& I'm not having any of the issues you complain about. One thing I would
strongly recommend you do, however, is bond your NICs with LACP 802.3ad -
either 2x1Gbit for oVirt & 2x1Gbit for Gluster, or bond all of your
NICs into a single LAGG. The only place we'd get much benefit from
single 10Gbit links would be on our distributed storage layer, although
with 10 nodes, each with 4x1Gbit LAGGs, even that's holding up quite well.
Let's see how the tests go tomorrow...
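For the curious, a rough sketch of the sort of bond I mean, done by hand with
nmcli (NIC names & the hash policy are my assumptions - check what your switch
supports, & note that on oVirt hosts you'd normally build the bond through the
engine's "Setup Host Networks" dialog instead):

  # Hypothetical 2x1Gbit 802.3ad (LACP) bond; eno1/eno2 are placeholders.
  nmcli con add type bond ifname bond0 con-name bond0 \
      bond.options "mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer3+4"
  nmcli con add type ethernet ifname eno1 master bond0
  nmcli con add type ethernet ifname eno2 master bond0
  nmcli con up bond0

The xmit_hash_policy is worth checking in particular - layer3+4 spreads flows
across the slaves far better than the layer2 default when lots of clients are
hitting the same pair of hosts.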
On 05/14/2018 03:03 PM, Doug Ingham wrote:
>> On 14 May 2018 at 15:35, Juan Pablo wrote:
On May 14, 2018, 11:33 PM Chris Adams <c...@cmadams.net> wrote:
>> Once upon a time, Doug Ingham <dou...@gmail.com> said:
>> > Correct!
>> > VM --[single 1Gbit virtual interface]-- Host -- Switch stack
I'd prefer to use LACP (mode 4)
> 2018-05-14 16:20 GMT-03:00 Doug Ingham:
>> On 14 May 2018 at 15:03, Vinícius Ferrão wrote:
>>> You should use better hashing algorithms for LACP.
>>> Take a look at this explanation: https://www.ibm.co
LACP is already set up between the hosts & the switch; what I'm asking about is
setting up LACP between the VM & the host. For reasons of stability, my 4.1
cluster's switch type is currently "Linux Bridge", not "OVS". Ergo my
question, is LACP on the VM possible with that, or will I have to use ALB?
On 14 May 2018 at 15:01, Juan Pablo wrote:
> LACP is not intended for maximizing throughput.
> If you are using iSCSI, you should use multipathd instead.
Umm, maximising the total throughput for multiple concurrent connections is
most definitely one of its intended uses.
My hosts have all of their interfaces bonded via LACP to maximise
throughput, however the VMs are still limited to Gbit virtual interfaces.
Is there a way to configure my VMs to take full advantage of the bonded
links? One way might be adding several VIFs to each VM & bonding them
inside the guest - a rough sketch of the idea below.
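A minimal sketch of the guest side, assuming ALB turns out to be the answer
(interface names are placeholders; balance-alb needs no switch-side support,
which is why I suspect it's the safer bet here):

  # Inside the guest: enslave two VIFs into a balance-alb bond.
  nmcli con add type bond ifname bond0 con-name bond0 \
      bond.options "mode=balance-alb,miimon=100"
  nmcli con add type ethernet ifname eth0 master bond0
  nmcli con add type ethernet ifname eth1 master bond0
  nmcli con up bond0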
The two key errors I'd investigate are these...
2018-05-10 03:24:21,048+02 WARN
> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:
> /gluster/brick/brick1' of volume
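First thing I'd do is cross-check how the bricks & the peers identify
themselves - a quick sanity check along these lines (the volume name is elided
in the log, so it's a placeholder here):

  gluster peer status            # how the peers know each other
  gluster volume info VOLNAME    # how each brick is addressed
  # The brick's address (10.104.0.1 above) needs to match an address the
  # engine actually knows for that host, or the association fails.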
I've plugged this into our monitoring.
When the UPSs are at 50%, it puts the cluster into global
maintenance & then triggers a shutdown action on all of the VMs in the
cluster's service group via the monitoring agent (you could use an SNMP
trap if you use agentless monitoring). Once all of the VMs are down, the
hosts themselves can be powered off.
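In outline, the action it runs is something like the following - the engine
FQDN, credentials & the VM-ID list are placeholders, so treat it as a sketch
rather than my exact script:

  #!/bin/bash
  # Triggered by the monitoring agent at 50% UPS capacity.
  hosted-engine --set-maintenance --mode=global   # run on an HE host
  while read -r vmid; do
      # POST an empty <action/> to the v4 REST API's shutdown endpoint.
      curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
           -u 'admin@internal:PASSWORD' \
           -H 'Content-Type: application/xml' -d '<action/>' \
           "https://engine.example.com/ovirt-engine/api/vms/${vmid}/shutdown"
  done < /etc/ups-shutdown/vm-ids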
Err...by reading the hardware specs in the standard manner? e.g. dmidecode,
lscpu & lsblk.
On 15 April 2018 at 01:28, TomK wrote:
> From within an oVirt (KVM) guest machine, how can I read the guest
> specific definitions such as memory, CPU, disk etc. configuration that the
> host has assigned to it?
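Something along these lines from inside the guest (all standard tools, run as
root):

  dmidecode -t system    # identifies the virtual hardware QEMU presents
  dmidecode -t memory    # memory devices as defined for the guest
  lscpu                  # vCPU count & topology
  lsblk                  # virtual disks & their sizes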
I sent Marek an email with my username last week, offering to do the pt_br
translation, however I've still had no response.
On 22 August 2017 at 09:38, Gianluca Cecchi wrote:
> On Mon, Aug 14, 2017 at 8:37 PM, Jakub Niedermertl wrote:
Just today I noticed that guests can now pass discards to the underlying
storage. Is this supported by all of the main Linux guest OS's running the
virtio drivers?
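For anyone wanting to test it, roughly this inside the guest (assuming the
disk has discard enabled in its oVirt disk settings):

  # Either mount with the discard option for inline discards...
  mount -o discard /dev/vda1 /mnt
  # ...or trim in batches; most distros ship a systemd timer for it:
  fstrim -av
  systemctl enable --now fstrim.timer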
> Only problem I would like to manage is that I have gluster network shared
> with ovirtmgmt one.
> Can I move it now with these updated packages?
Are the gluster peers configured with the same hostnames/IPs as your hosts?
Once they're configured on the same network, separating...
I think it's VDSM that handles the pausing & resuming of the VMs.
An analogous small-scale scenario...the Gluster layer for one of our
smaller oVirt clusters temporarily lost quorum the other week, locking all
I/O for about 30 minutes. The VMs all went into pause & then resumed once
quorum was restored.
On 23 March 2017 at 23:54, Bryan Sockel wrote:
> I am attempting to deploy an appliance to a bonded interface, and I'm
> getting this error when it attempts to setup the bridge:
> [ ERROR ] Failed to execute stage 'Misc configuration': Failed to
Fedora has signed drivers, however whilst I can't speak for Windows 10,
I've still not had any luck getting any of the VirtIO & Spice drivers
working on Windows Server 2016.
The services are *running*, but there doesn't seem to be any actual
communication going on between the hypervisor & guest...
16GB is just the recommended amount of memory. The more items your Engine
has to manage, the more memory it will consume, so whilst it might not be
using that amount of memory at the moment, it will do as you expand your
deployment.
On 20 February 2017 at 16:22, FERNANDO FREDIANI wrote:
On 16 Feb 2017 22:41, "Nir Soffer" <nsof...@redhat.com> wrote:
On Fri, Feb 17, 2017 at 3:16 AM, Doug Ingham <dou...@gmail.com> wrote:
> Well that didn't go so well. I deleted both dom_md/ids & dom_md/leases in
> the cloned volume, and I still can't import the domain.
Again, many thanks!
On 16 February 2017 at 18:53, Doug Ingham <dou...@gmail.com> wrote:
> Hi Nir,
> On 16 February 2017 at 13:55, Nir Soffer <nsof...@redhat.com> wrote:
On 16 February 2017 at 13:55, Nir Soffer <nsof...@redhat.com> wrote:
> On Mon, Feb 13, 2017 at 3:35 PM, Doug Ingham <dou...@gmail.com> wrote:
> > Hi Sahina,
> > On 13 February 2017 at 05:45, Sahina Bose <sab...@redhat.com> wrote:
...although I understand the API calls it uses have been deprecated in 4.1.
On 15 February 2017 at 14:38, Pat Riehecky wrote:
> Has someone got a script to automate scheduling snapshots of a specific
> system (and retaining them for a set period)?
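The scheduling half can be as simple as a cron job hitting the v4 REST API -
a hedged sketch, with the engine URL, credentials & VM ID as placeholders
(retention would need a second call listing the snapshots & DELETEing the
ones older than your cut-off):

  # Nightly snapshot of one VM via the REST API.
  curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
       -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' \
       -d '<snapshot><description>nightly</description></snapshot>' \
       "https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots"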
> On 13 February 2017 at 21:17, Doug Ingham <dou...@gmail.com> wrote:
>> Hey Guys,
>> I've gone through both oVirt's & Red Hat's API docs, but I can only find
>> info on getting the global maintenance state & setting local maintenance
>> on...
I've gone through both oVirt's & Red Hat's API docs, but I can only find
info on getting the global maintenance state & setting local maintenance on
the hosts.
Is it not possible to set global maintenance via the API?
I'm writing up a new script for our engine-backup routine, but I've since
learned that there is a sanlock daemon that
runs with VDSM, independently of the HE, so I'd basically have to bring the
volume down & wait for the leases to expire/delete them* before I can
import the domain.
*I understand removing /dom_md/leases/ should do the job?
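(To see what's actually holding leases on a given host, this is useful:

  # Lists the lockspaces & resources sanlock currently holds on this host.
  sanlock client status

...which should make it clearer when the old leases have actually gone.)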
> On Thu, Feb 9, 2017...
> ...quorum and heal
> operations, but your procedure sounds like a sensible way to operate this.
> On 2017-02-11 18:08, Doug Ingham wrote:
> On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris wrote:
On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris wrote:
> Hello list,
> Just wanted to get your opinion on my ovirt home lab setup. While this is
> not a production setup, I would like it to run relatively reliably, so please
> tell me if the following makes sense.
I currently use dedicated interfaces & hostnames to separate gluster
traffic on my "hyperconverged" hosts.
For example, the first node uses "v0" for its management interface & "s0"
for its gluster interface.
With this setup, I notice that all functions under the "Volumes" tab work...
On 9 February 2017 at 10:08, Gianluca Cecchi wrote:
> On Wed, Feb 8, 2017 at 10:59 AM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>> what is considered the best way to shut down and restart a hypervisor,
>> supposing a plain CentOS 7 host?
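My own routine is roughly the following - the host ID & names are
placeholders, and the first step can equally be done from the UI:

  # 1. Put the host into maintenance so its VMs migrate off / stop cleanly:
  curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
       -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' -d '<action/>' \
       "https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate"
  # 2. Reboot the host itself:
  ssh root@host1.example.com 'shutdown -r now'
  # 3. Once it's back, activate it again (UI, or .../hosts/HOST_ID/activate).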
On 9 February 2017 at 15:48, Yaniv Kaul <yk...@redhat.com> wrote:
> On Thu, Feb 9, 2017 at 6:00 PM, Doug Ingham <dou...@gmail.com> wrote:
>> On 9 February 2017 at 12:03, Dan Yasny <dya...@gmail.com> wrote:
Some interesting output from the vdsm log...
2017-02-09 15:16:24,051 INFO (jsonrpc/1) [storage.StorageDomain] Resource
namespace 01_img_60455567-ad30-42e3-a9df-62fe86c7fd25 already registered
2017-02-09 15:16:24,051 INFO (jsonrpc/1) [storage.StorageDomain] Resource ...
My original HE died & was proving too much of a hassle to restore, so I've
setup a new HE on a new host & now want to import my previous data storage
domain with my VMs.
The problem is when I try to attach the new domain to the datacenter, it
hangs for a minute and then comes back with...
On 9 February 2017 at 12:03, Dan Yasny <dya...@gmail.com> wrote:
> On Thu, Feb 9, 2017 at 9:55 AM, Doug Ingham <dou...@gmail.com> wrote:
>> Hi Dan,
>> On 8 February 2017 at 18:26, Dan Yasny <dya...@gmail.com> wrote:
On 8 February 2017 at 18:26, Dan Yasny wrote:
> But seriously, above all, I'd recommend you backup the engine (it comes
> with a utility) often and well. I do it via cron every hour in production,
> keeping a rotation of hourly and daily backups, just in case. It...
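For reference, the cron half of that can be as small as this - paths are
placeholders & the rotation here is simply "one file per hour of the day":

  # m h dom mon dow command  (in root's crontab on the engine machine)
  0 * * * * engine-backup --mode=backup --scope=all --file=/backup/engine-$(date +\%H).bak --log=/backup/engine-backup.log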
On 8 February 2017 at 18:10, Dan Yasny <dya...@gmail.com> wrote:
> On Wed, Feb 8, 2017 at 4:07 PM, Doug Ingham <dou...@gmail.com> wrote:
>> Hi Guys,
>> My Hosted-Engine has failed & it looks like the easiest solution will be...
My Hosted-Engine has failed & it looks like the easiest solution will be
to install a new one. Now before I try to re-add the old hosts (still
running the guest VMs) & import the storage domain into the new engine, in
case things don't go to plan, I want to make sure I'm able to bring it all
back up again.
On 6 February 2017 at 13:30, Simone Tiraboschi wrote:
> 1. What problems can I expect to have with VMs added/modified
> since the last backup?
> Modified VMs will be reverted to the previous configuration;
> additional VMs should be seen as external VMs.
On 31 January 2017 at 13:27, Gianluca Cecchi wrote:
> This is on plain CentOS 7.3 hosts used as hypervisors and intended for oVirt
> 4.0 and 4.1.
> In particular for performance-related packages such as...
> and the like.
Would anyone be able to tell me the name/location of the gluster client
log when mounting through libgfapi?
> its memory resynchronised one last time
Actually, thinking about it, rather than diffing *all* of the memory on the
first host to resync it at the last moment, the hypervisor probably
simultaneously copies the current state of memory & uses copy-on-write
(COW) to write all new transactions.
My educated guess...When you live migrate a VM, its state in memory is
copied over to the new host, but the VM still remains online during this
period to minimise downtime. Once its state in memory is fully copied to
the new host, the VM is paused on the original host, its memory
resynchronised one last time & then it's resumed on the new host.
Make sure to enable global maintenance mode before doing so!
The Hosted-Engine is just a manager for the underlying hypervisors, which
will keep running the VMs as usual until the engine comes back online.
Disable maintenance mode afterwards.
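i.e., on one of the hosted-engine hosts:

  hosted-engine --set-maintenance --mode=global   # before touching the engine
  # ...do the engine work...
  hosted-engine --set-maintenance --mode=none     # re-enable HA afterwards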
On 26 January 2017 at 13:48, Wout Peeters wrote:
On 24 January 2017 at 15:15, emanuel.santosvar...@mahle.com wrote:
> If I access the UI via "ALIAS" I get the Error-Page "The client is not
> authorized to request an authorization. It's required to access the system
> using FQDN".
> What can I do to get the UI working through the ALIAS as well as the real
> FQDN?
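If memory serves, the engine's SSO can be told to accept extra names - a
hedged sketch (check the exact conf.d filename your version expects before
dropping this in):

  # On the engine machine:
  echo 'SSO_ALTERNATE_ENGINE_FQDNS="alias.example.com"' \
      > /etc/ovirt-engine/engine.conf.d/99-alternate-fqdns.conf
  systemctl restart ovirt-engine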
Just giving this a bump in the hope that someone might be able to advise...
> One of our engines has had a DB failure* & it seems there was an
> unnoticed problem in its backup routine, meaning the last backup I've got
> is a couple of weeks old.
> Luckily, VDSM has kept the underlying VMs running...
One of our engines has had a DB failure* & it seems there was an unnoticed
problem in its backup routine, meaning the last backup I've got is a couple
of weeks old.
Luckily, VDSM has kept the underlying VMs running without any
interruptions, so my objective is to get the HE back online & regain
management of the cluster.
my HE has since borked itself & I'm now in the process of
restoring/redeploying it. I've got access to the logs, but the engine & API
are now offline.
> On Tue, Jan 17, 2017 at 1:52 PM, Doug Ingham <dou...@gmail.com> wrote:
>> Hi Tomas,
...management was unaffected.
On Mon, Jan 9, 2017 at 8:09 PM, Doug Ingham <dou...@gmail.com> wrote:
> Hi all,
> We had some hiccups in our datacenter over the new year which caused some
> problems with our hosted engine.
> I've managed to get everything back up & running...
Each of my hosts/nodes also hosts its own gluster bricks for the storage
domains, and peers over a dedicated FQDN & interface.
For example, the first server is setup like the following...
eth0: v0.dc0.example.com (10.10.10.100)
eth1: s0.dc0.example.com (10.123.123.100)
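The storage names resolve over the dedicated interface & the peers were
probed with those names, i.e. something like:

  # /etc/hosts on every node (or DNS equivalents):
  #   10.10.10.100    v0.dc0.example.com  v0
  #   10.123.123.100  s0.dc0.example.com  s0
  # Peers probed via the storage names, so gluster traffic stays on eth1:
  gluster peer probe s1.dc0.example.com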
As it's a...
We had some hiccups in our datacenter over the new year which caused some
problems with our hosted engine.
I've managed to get everything back up & running, however now one of the
VMs is listed twice in the UI. When I click on the VM, both items are
highlighted & I'm able to configure & manage the VM through either entry.