So we saw a potential bug with the user interface in oVirt 4.2 with
the Network Interfaces screen. Screenshot attached. Basically this
morning we had a problem with our bond; we replaced the cable and the
bond came back up. However, the screen in oVirt that shows you this
information was re
On Fri, Oct 19, 2018 at 11:33 AM wrote:
> Thanks Andrej, do you know a previous version where this functionality
> works fine, mainly in the VM Portal? Quotas in oVirt are a very important
> feature for me.
>
Unfortunately no. The code is in progress if you'd like to follow along:
https://ger
Sorry Greg it was my mistake, I was using a user account that had read-only
admin rights to allow it to create templates in the admin portal.
Regards,
Paul S.
From: Greg Sheremeta
Sent: 18 October 2018 15:45
To: Staniforth, Paul
Cc: users
Subject
On 19 Oct 2018 20:38, René Koch wrote:
> Hi list, I'm happy to announce version 2.0 of check_rhv. check_rhv is a
> monitoring plugin for Icinga/Nagios and its forks, which is used to monitor
> datacenters, clusters, hosts, VMs, VM pools and storage domains of Red Hat
> Enterprise Virtualization (RHEV) and oVirt virtualization environments.
Hi list,
I'm happy to announce version 2.0 of check_rhv.
check_rhv is a monitoring plugin for Icinga/Nagios and its forks,
which is used to monitor datacenters, clusters, hosts, vms, vm pools
and storage domains of Red Hat Enterprise Virtualization (RHEV) and
oVirt virtualization environments.
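For anyone wiring it into their monitoring, Icinga/Nagios plugins all follow the same contract: one status line on stdout plus an exit code (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN). A minimal sketch of that convention follows; this is not check_rhv's actual code, and the metric and thresholds are invented:

```python
# Sketch of the Nagios/Icinga plugin contract (exit codes and status line).
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_metric(value, warn, crit):
    """Return (exit_code, status_line) following the plugin convention.

    value/warn/crit are illustrative; check_rhv itself gets its numbers
    from the RHEV/oVirt REST API."""
    if value >= crit:
        return CRITICAL, f"CRITICAL - value {value} >= {crit}"
    if value >= warn:
        return WARNING, f"WARNING - value {value} >= {warn}"
    return OK, f"OK - value {value}"

# Example: 85% usage with warn=80, crit=90 -> WARNING state
code, line = check_metric(85, warn=80, crit=90)
print(line)
# A real plugin would finish with: sys.exit(code)
```

Icinga/Nagios reads the first output line for the UI and the exit code for the state, which is why the convention is so strict.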
Thanks Andrej, do you know a previous version where this functionality works
fine, mainly in the VM Portal? Quotas in oVirt are a very important feature
for me.
Regards,
Siovel
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email
Another little step:
I found an ovirtmgmt interface active on the host (from a previous failed
deployment).
After shutting down this interface I solved one error, and now the deploy
script waits. It has been waiting for an hour:
[ INFO ] TASK [Wait for ovirt-engine service to start]
[ INFO ] ok: [localh
Hey All,
Is there a newer package of glusterfs-gnfs available for GlusterFS 4.1?
After upgrading to GlusterFS 4.1, all the hosts are now disconnected
from the oVirt engine.
I'm on CentOS 7.5:
---> Package kmod-kvdo.x86_64 0:6.1.0.181-17.el7_5 will be installed
---> Package mokutil.x86_64 0:1
Hi,
I have installed a 4.2 cluster on CentOS 7 nodes, but I followed an (old)
procedure of mine done with 4.0: I disabled NetworkManager before
installing oVirt.
The networks created and validated in the engine UI are:
ovirtmgmt on bond0 (2 slaves), failover mode
storagemanager on bo
I'm wondering how I can best limit the ability of VMs to overrun the load
on hosts. I have a fairly stock 4.2 HCI setup with three well-spec'ed
servers, 10GbE/SSDs, plenty of RAM and CPU, with only a handful of lightly
used VMs. I notice that when the occasional demanding job is run on a VM I'm
seeing l
Hi,
When creating a VM, users should be able to choose which quota they want to
use, because each user can have multiple quotas available.
But we found a bug where there is no way for the user to get a list of
available quotas using the REST API or VM Portal.
There is a related bug[1], which will
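For reference, the admin-level REST API does expose a per-data-center quotas collection (GET /ovirt-engine/api/datacenters/{id}/quotas returns a `<quotas>` XML collection); the bug described above is that a plain user has no working way to list them. A small sketch of pulling quota names out of such a response; the sample payload and quota names here are invented:

```python
import xml.etree.ElementTree as ET

# Invented sample of the XML shape a quotas collection returns
SAMPLE = """<quotas>
  <quota id="123"><name>dev-quota</name></quota>
  <quota id="456"><name>prod-quota</name></quota>
</quotas>"""

def quota_names(xml_text):
    """Extract quota names from a /datacenters/{id}/quotas response body."""
    root = ET.fromstring(xml_text)
    return [q.findtext("name") for q in root.findall("quota")]

print(quota_names(SAMPLE))  # ['dev-quota', 'prod-quota']
```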
I've found some additional info.
The engine waits for the host to be up.
The VDSM log on the host shows "waiting for storage pool to go up", but
"hosted-engine --deploy" (and the web wizard) don't ask for a storage domain.
2018-10-19 13:36:52,206+0200 INFO (vmrecovery) [vds] recovery: waiting
for storage pool to g
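That vmrecovery message repeats until a storage pool attaches, so pulling the timestamps out of vdsm.log shows how long the host has been stuck. A small sketch assuming the line format quoted above (the sample line reconstructs the truncated one):

```python
from datetime import datetime

# Sample vdsm.log line, reconstructed from the message quoted above
line = ("2018-10-19 13:36:52,206+0200 INFO (vmrecovery) [vds] "
        "recovery: waiting for storage pool to go up")

def parse_vdsm_timestamp(log_line):
    """Parse the leading 'YYYY-MM-DD HH:MM:SS,mmm+ZZZZ' timestamp."""
    stamp = " ".join(log_line.split()[:2])
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S,%f%z")

ts = parse_vdsm_timestamp(line)
print(ts.isoformat())  # timezone-aware; subtract two of these to get the stall time
```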
Hi,
Can I just change the VLAN ID of the ovirtmgmt network in the Admin
Portal? In the OS the network is configured and verified for the ovirtmgmt
network to have VLAN ID 21, but in the Admin Portal it shows VLAN ID 20,
which is configured for the VM network.
Can I just change it in the admin po
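Whichever side is wrong has to be fixed so both agree. On an initscripts-based host, a tagged management network typically looks like the following; this is a sketch only, with the tag 21 and bond0 taken from the message above and the rest being illustrative defaults:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0.21  (illustrative)
DEVICE=bond0.21
VLAN=yes            # 802.1Q; tag 21 comes from the ".21" device-name suffix
ONBOOT=yes
BRIDGE=ovirtmgmt    # tagged sub-interface enslaved to the management bridge
```

Note that VDSM normally owns these files on an oVirt host, so correcting the network definition in the Admin Portal and re-syncing the host's networks is the usual route rather than hand-editing.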
Hi Nir,
thank you for this detailed analysis. As I can see, the first VM to shut down
had its lease on the hosted storage domain (probably not the best choice;
maybe a test remained here) and its disk on DATA02. The 3 others (HA VMs) had
a lease on the same domain as their disk (DATA02).
So I suppose this l
> On 19 Oct 2018, at 01:08, Edward Berger wrote:
>
> I'm not sure where to send a request for including the current Aquantia 107
> (10GBase-T NIC) driver in the ovirt-node-ng image. I don't see
> a CentOS RPM for kmod-redhat-atlantic; apparently there's a Scientific Linux
> rpm