[ovirt-users] Re: VMs unexpectedly restarted

2018-10-19 Thread fsoyer

Hi Nir,
thank you for this detailed analysis. As I can see, the first VM to shut down had 
its lease on the hosted-engine storage domain (probably not the best choice, maybe 
left over from a test) and its disk on DATA02. The 3 others (HA VMs) had their 
lease on the same domain as their disk (DATA02).
So I suppose this looks like a Gluster latency issue on DATA02. But what I don't 
understand at this point is:
- if this was a lease problem on DATA02, the VM npi2 should not have been 
impacted... Or, if DATA02 was inaccessible, the messages should have reported a 
storage error (something like "IO error", I suppose)
- if this was also a problem on the hosted-engine storage domain, the engine did 
not restart (if the domain had been down or blocked, it would have, wouldn't it?) 
nor was it ever marked as not responding, even temporarily
- Gluster saw absolutely nothing at the same time, on the engine domain or 
DATA02: the logs of the daemons and bricks show nothing relevant.

Unfortunately, I no longer have the vdsm log file from the time of the problem: 
it is rotated and compressed every 2 hours, and I discovered that if you 
uncompress "vdsm.log.1.xz", for example, the system overwrites it with the 
latest log at the next rotation :(
I'm afraid I will have to wait for another occurrence of the problem to go 
through all the logs again and try to understand what happened...
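
For next time, one option (a sketch; it assumes vdsm's rotation is driven by a 
logrotate snippet under /etc/logrotate.d/ and that the host has disk space for 
more copies) is to raise the number of rotated logs kept, and to copy the logs 
off as soon as an incident happens:

    # find the snippet that actually drives vdsm log rotation, then raise its "rotate" count
    grep -rl vdsm /etc/logrotate.d/
    # right after an incident, copy the vdsm logs somewhere safe before they rotate away
    mkdir -p /root/vdsm-logs-$(date +%F)
    cp -a /var/log/vdsm/vdsm.log* /root/vdsm-logs-$(date +%F)/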
--

Regards,

Frank
 

On Thursday, October 18, 2018 at 23:13 CEST, Nir Soffer wrote:
On Thu, Oct 18, 2018 at 3:43 PM fsoyer wrote:

Hi,
I forgot to look in the /var/log/messages file on the host! What a shame :/
Here is the messages file at the time of the error:
https://gist.github.com/fsoyer/4d1247d4c3007a8727459efd23d89737
At the same time, the second host has no particular messages in its log.
Does anyone have an idea of the source problem?

The problem started when sanlock could not renew storage leases held by some processes:

Oct 16 11:01:46 victor sanlock[904]: 2018-10-16 11:01:46 2945585 [4167]: s3 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/glusterSD/victor.local.systea.fr:_DATA02/ffc53fd8-c5d1-4070-ae51-2e91835cd937/dom_md/ids
Oct 16 11:01:46 victor sanlock[904]: 2018-10-16 11:01:46 2945585 [4167]: s3 renewal error -202 delta_length 25 last_success 2945539

After 80 seconds, the VMs are terminated by sanlock:

Oct 16 11:02:19 victor sanlock[904]: 2018-10-16 11:02:18 2945617 [904]: s1 check_our_lease failed 80
Oct 16 11:02:19 victor sanlock[904]: 2018-10-16 11:02:18 2945617 [904]: s1 kill 13823 sig 15 count 1

But process 13823 cannot be killed, since it is blocked on storage, so sanlock sends many more TERM signals:

Oct 16 11:02:33 victor sanlock[904]: 2018-10-16 11:02:33 2945633 [904]: s1 kill 13823 sig 15 count 17

The VM finally dies after 17 retries:

Oct 16 11:02:33 victor sanlock[904]: 2018-10-16 11:02:33 2945633 [904]: dead 13823 ci 10 count 17

We can see the same flow for other processes (HA VMs?). This allows the system to start the HA VM on another host, which is what we see in the events log in the first message:

Trying to restart VM npi2 on Host victor.local.systea.fr
16 oct. 2018 11:02:33
Highly Available VM npi2 failed. It will be restarted automatically.
16 oct. 2018 11:02:33
VM npi2 is down with error. Exit message: VM has been terminated on the host.

If the VMs were not started successfully on the other hosts, maybe the storage domain used for the VM lease is not accessible? It is recommended to choose the same storage domain used by the other VM disks for the VM lease. Also check that all storage domains are accessible - if they are not, you will have warnings in /var/log/vdsm/vdsm.log.

Nir
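
For reference, a quick way to spot this pattern on a host (a sketch; the log 
paths are the CentOS/oVirt defaults) is to grep for the sanlock renewal failures 
and the matching vdsm warnings around the same timestamps:

    # sanlock renewal failures around the incident
    grep -E 'delta_renew|renewal error|check_our_lease' /var/log/sanlock.log /var/log/messages
    # vdsm-side storage warnings in the same window
    grep -iE 'warn|domain.*(blocked|in problem)|repoStats' /var/log/vdsm/vdsm.log | tail -n 50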

--
Regards,

Frank

On Tuesday, October 16, 2018 at 13:25 CEST, "fsoyer" wrote:
  Hi all,
this morning, some of my VMs were restarted unexpectedly. The events in the GUI 
say:
16 oct. 2018 11:03:50
Trying to restart VM patjoub1 on Host ginger.local.systea.fr
16 oct. 2018 11:03:26
Trying to restart VM op2drugs1 on Host victor.local.systea.fr
16 oct. 2018 11:03:23
Trying to restart VM npi2 on Host ginger.local.systea.fr
16 oct. 2018 11:02:54
Trying to restart VM op2drugs1 on Host victor.local.systea.fr
16 oct. 2018 11:02:54
Trying to restart VM patjoub1 on Host ginger.local.systea.fr
16 oct. 2018 11:02:53
Highly Available VM op2drugs1 failed. It will be restarted automatically.
16 oct. 2018 11:02:53
Failed to restart VM patjoub1 on Host victor.local.systea.fr
16 oct. 2018 11:02:53
VM op2drugs1 is down with error. Exit message: VM has been terminated on the 
host.
16 oct. 2018 11:02:53
VM patjoub1 is down with error. Exit message: Failed to acquire lock: No space 
left on device.
16 oct. 2018 11:02:47
Trying to restart VM npi2 on Host ginger.local.systea.fr
16 oct. 2018 11:02:46
Failed to restart VM npi2 on Host victor.local.systea.fr
16 oct. 2018 11:02:46
VM npi2 is down with error. Exit message: Failed to acquire lock: No space 
left on device.
16 oct. 2018 11:02:38
Trying to restart VM patjoub1 on Host victor.local.systea.fr
16 oct. 2018 11:02:37
Highly Available VM 

[ovirt-users] Re: aquantia 107 (10Gbase-T NIC) driver for oVirt node-ng?

2018-10-19 Thread Andrei Verovski

> On 19 Oct 2018, at 01:08, Edward Berger  wrote:
> 
> I'm not sure where to send a request to include the current Aquantia 107 
> (10GBase-T NIC) driver in the ovirt-node-ng image. I don't see a CentOS RPM 
> for kmod-redhat-atlantic; apparently there's a Scientific Linux RPM available 
> for download.


Use CentOS 7.5 for the node installation, and upgrade the kernel to 4.x from the 
EPEL repository.
It should work if the kernel driver is there.
Note that a stock oVirt Node install wipes out all changes upon upgrade.
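
As a quick check (a sketch; it assumes the mainline module name for the AQC107, 
atlantic, which may not match what the kmod-redhat package ships):

    # is an Aquantia driver available for the currently running kernel?
    modinfo atlantic 2>/dev/null || find /lib/modules/$(uname -r) -name 'atlantic*'
    # once it loads, confirm the NIC is actually bound to it
    lspci -k | grep -A3 -i aquantia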


[ovirt-users] Ovirt 4.2 rpm glusterfs-gnfs w/ gluster 4.1

2018-10-19 Thread TomK

Hey All,

Is there a newer package of glusterfs-gnfs available for GlusterFS 4.1? 
After upgrading to GlusterFS 4.1, all the hosts are now disconnected 
from the oVirt engine.


I'm on CentOS 7.5:

---> Package kmod-kvdo.x86_64 0:6.1.0.181-17.el7_5 will be installed
---> Package mokutil.x86_64 0:12-2.el7 will be installed
---> Package python-imgbased.noarch 0:1.0.24-1.el7 will be installed
---> Package vdo.x86_64 0:6.1.0.168-18 will be installed
--> Finished Dependency Resolution
Error: Package: glusterfs-gnfs-3.12.15-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
           Requires: glusterfs-client-xlators(x86-64) = 3.12.15-1.el7
           Installed: glusterfs-client-xlators-4.1.5-1.el7.x86_64 (@centos-gluster41)
               glusterfs-client-xlators(x86-64) = 4.1.5-1.el7
           Available: glusterfs-client-xlators-3.8.4-53.el7.centos.x86_64 (base)
               glusterfs-client-xlators(x86-64) = 3.8.4-53.el7.centos
           Available: glusterfs-client-xlators-3.8.4-54.15.el7.centos.x86_64 (updates)
               glusterfs-client-xlators(x86-64) = 3.8.4-54.15.el7.centos

--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.


[root@mdskvm-p02 yum.repos.d]# yum info glusterfs-gnfs
Loaded plugins: enabled_repos_upload, fastestmirror, package_upload, 
product-id, search-disabled-repos, subscription-manager, vdsmupgrade
This system is not registered with an entitlement server. You can use 
subscription-manager to register.

Loading mirror speeds from cached hostfile
 * base: mirror.csclub.uwaterloo.ca
 * epel: mirror.csclub.uwaterloo.ca
 * extras: mirror.csclub.uwaterloo.ca
 * ovirt-4.2: mirrors.rit.edu
 * ovirt-4.2-epel: mirror.csclub.uwaterloo.ca
 * updates: mirror.csclub.uwaterloo.ca
Available Packages
Name: glusterfs-gnfs
Arch: x86_64
Version : 3.12.15
Release : 1.el7
Size: 166 k
Repo: ovirt-4.2-centos-gluster312/x86_64
Summary : GlusterFS gNFS server
URL : http://gluster.readthedocs.io/en/latest/
License : GPLv2 or LGPLv3+
Description : GlusterFS is a distributed file-system capable of scaling to several
            : petabytes. It aggregates various storage bricks over Infiniband RDMA
            : or TCP/IP interconnect into one large parallel network file
            : system. GlusterFS is one of the most sophisticated file systems in
            : terms of features and extensibility. It borrows a powerful concept
            : called Translators from GNU Hurd kernel. Much of the code in GlusterFS
            : is in user space and easily manageable.
            :
            : This package provides the glusterfs legacy gNFS server xlator
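
In case it helps while waiting for a 4.1 build, a minimal workaround sketch - 
assuming the legacy gNFS xlator is not actually needed on these hosts and the 
fuse/gfapi client stack is enough - is to keep glusterfs-gnfs out of the 
transaction so it no longer pins glusterfs-client-xlators to 3.12:

    # drop the legacy gNFS xlator package so it stops conflicting with the 4.1 client xlators
    yum remove glusterfs-gnfs
    # or simply keep it out of the transaction while updating everything else
    yum update --exclude='glusterfs-gnfs*'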


[ovirt-users] Re: Node, failed to deploy hosted engine

2018-10-19 Thread Stefano Danzi

another little step:

I found an ovirtmgmt interface active on the host (from a previous failed 
deployment).
After shutting this interface down I solved one error, and now the deploy 
script is waiting. It has been waiting for 1 hour now.


[ INFO  ] TASK [Wait for ovirt-engine service to start]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Detect VLAN ID]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Set Engine public key as authorized key without 
validating the TLS/SSL certificates]

[ INFO  ] changed: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Obtain SSO token using username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Enable GlusterFS at cluster level]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [Set VLAN ID at datacenter level]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Force host-deploy in offline mode]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Add host]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Wait for the host to be up]


On 19/10/2018 at 13:47, Stefano Danzi wrote:

I've found some additional info.
The engine waits for the host to be up.
The VDSM log on the host shows "waiting for storage pool to go up", but 
"hosted-engine --deploy" (and the web wizard) don't ask for a storage domain.




2018-10-19 13:36:52,206+0200 INFO  (vmrecovery) [vds] recovery: 
waiting for storage pool to go up (clientIF:707)
2018-10-19 13:36:52,529+0200 INFO  (jsonrpc/7) [api.host] START 
getStats() from=:::192.168.124.71,44704 (api:46)
2018-10-19 13:36:52,532+0200 INFO  (jsonrpc/7) [vdsm.api] START 
repoStats(domains=()) from=:::192.168.124.71,44704, 
task_id=13a3ce58-4226-4a0d-91e8-8742ffe40222 (api:46)
2018-10-19 13:36:52,533+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
repoStats return={} from=:::192.168.124.71,44704, 
task_id=13a3ce58-4226-4a0d-91e8-8742ffe40222 (api:52)
2018-10-19 13:36:52,534+0200 INFO  (jsonrpc/7) [vdsm.api] START 
multipath_health() from=:::192.168.124.71,44704, 
task_id=fc1f5246-d6c2-41e5-bedc-5f9a7f27e9c5 (api:46)
2018-10-19 13:36:52,535+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
multipath_health return={} from=:::192.168.124.71,44704, 
task_id=fc1f5246-d6c2-41e5-bedc-5f9a7f27e9c5 (api:52)
2018-10-19 13:36:52,561+0200 INFO  (jsonrpc/7) [api.host] FINISH 
getStats return={'status': {'message': 'Done', 'code': 0}, 'info': 
{'cpuStatistics': {'11': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': 
'0.20', 'cpuIdle': '99.67'}, '10': {'cpuUser': '0.73', 'nodeIndex': 0, 
'cpuSys': '0.20', 'cpuIdle': '99.07'}, '1': {'cpuUser': '1.45', 
'nodeIndex': 1, 'cpuSys': '0.86', 'cpuIdle': '97.69'}, '0': 
{'cpuUser': '2.44', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': 
'97.23'}, '3': {'cpuUser': '1.25', 'nodeIndex': 1, 'cpuSys': '0.53', 
'cpuIdle': '98.22'}, '2': {'cpuUser': '1.58', 'nodeIndex': 0, 
'cpuSys': '0.46', 'cpuIdle': '97.96'}, '5': {'cpuUser': '0.13', 
'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.80'}, '4': 
{'cpuUser': '0.40', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': 
'99.47'}, '7': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 
'cpuIdle': '99.80'}, '6': {'cpuUser': '1.19', 'nodeIndex': 0, 
'cpuSys': '0.33', 'cpuIdle': '98.48'}, '9': {'cpuUser': '0.40', 
'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.40'}, '8': 
{'cpuUser': '0.86', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': 
'98.81'}}, 'numaNodeMemFree': {'1': {'memPercent': 30, 'memFree': 
'14483'}, '0': {'memPercent': 35, 'memFree': '13445'}}, 'memShared': 
446, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 1, 
'memUsed': '19', 'storageDomains': {}, 'incomingVmMigrations': 0, 
'network': {'glusternet': {'txErrors': '0', 'state': 'up', 
'sampleTime': 1539949006.318315, 'name': 'glusternet', 'tx': 
'10080290', 'txDropped': '0', 'rx': '6669132', 'rxErrors': '0', 
'speed': '1000', 'rxDropped': '0'}, 'enp3s0f0': {'txErrors': '0', 
'state': 'up', 'sampleTime': 1539949006.318315, 'name': 'enp3s0f0', 
'tx': '66909029', 'txDropped': '0', 'rx': '17996942', 'rxErrors': '0', 
'speed': '1000', 'rxDropped': '0'}, 'bond0': {'txErrors': '0', 
'state': 'up', 'sampleTime': 1539949006.318315, 'name': 'bond0', 'tx': 
'87998708', 'txDropped': '0', 'rx': '46881018', 'rxErrors': '0', 
'speed': '3000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 
'state': 'down', 'sampleTime': 1539949006.318315, 'name': 
';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': 
'0', 'speed': '1000', 'rxDropped': '0'}, 'ovirtmgmt': {'txErrors': 
'0', 'state': 'up', 'sampleTime': 1539949006.318315, 'name': 
'ovirtmgmt', 'tx': '56421549', 'txDropped': '0', 'rx': '25448254', 
'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': 
{'txErrors': '0', 'state': 'up', 'sampleTime': 1539949006.318315, 
'name': 'lo', 'tx': '41578221', 'txDropped': '0', 'rx': '41578221', 
'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'ovs-system': 
{'txErrors': '0', 'state': 'down', 'sampleTime': 1539949006.318315, 
'name': 'ovs-system', 'tx': '0', 'txDropped': '0', 'rx': '0', 
'rxErrors': '0', 'speed': 

[ovirt-users] Re: Problems with quotas

2018-10-19 Thread siovelrm
Thanks Andrej. Do you know a previous version where this functionality works 
fine, mainly in the VM Portal? Quotas in oVirt are a very important 
functionality for me.

Regards,
Siovel


[ovirt-users] oVirt Nagios/Icinga monitoring plugin check_rhv 2.0 released

2018-10-19 Thread René Koch
Hi list,

I'm happy to announce version 2.0 of check_rhv.

check_rhv is a monitoring plugin for Icinga/Nagios and it's forks,
which is used to monitor datacenters, clusters, hosts, vms, vm pools
and storage domains of Red Hat Enterprise Virtualization (RHEV) and
oVirt virtualization environments.

Download this plugin from: https://github.com/rk-it-at/check_rhv/releases/check_rhv-2.0

For further information on how to install this plugin visit:
https://github.com/rk-it-at/check_rhv/wiki/Installation-Documentation

A detailed usage documentation can be found here:
https://github.com/rk-it-at/check_rhv/wiki/Usage-Documentation


Changelog:

New features:
-   Dropped support for RHEV 3
-   Support for RHV 4 and oVirt 4 with REST-APIv4
-   new URL: https://github.com/rk-it-at/check_rhv
-   support for Gluster volume monitoring (#2)
-   support for Gluster brick monitoring (#2)
-   check for available updates of RHV hosts

Bugs fixed:
-   Removed monitoring of Storagedomain status (#3)
-   Unknown interface error check if tmp file is empty (#5)
-   Fix network errors (#6)
-   Check RHEV Host VMs (-l vms) fails with RHEV UNKNOWN: Host ' ' not
found when no vms on host
-   VM Pool Check broken in oVirt 4.1
-   When using '*' and option '-l usage', output is incomplete
-   storagedomain usage calculation broken when specifying multiple
storagedomains


Please note this plugin will only work with RHV 4.0 and newer and oVirt
4.0 and newer. Support for version 3.x was dropped.


If you have any questions or ideas, please drop me an email: rkoch@rk-it.at.

Thank you for using check_rhv.


Regards,
René



[ovirt-users] Re: oVirt Nagios/Icinga monitoring plugin check_rhv 2.0 released

2018-10-19 Thread Karli Sjöberg
On 19 Oct 2018 at 20:38, René Koch wrote:
> Hi list,
> I'm happy to announce version 2.0 of check_rhv.
> [full announcement quoted - see the message above]

Awesome to see you're still maintaining this! Keep up the good work!

/K


[ovirt-users] Re: Problems with quotas

2018-10-19 Thread Greg Sheremeta
On Fri, Oct 19, 2018 at 11:33 AM  wrote:

> Thanks Andrej, Do you know a previous version where this functionality
> work fine mainly in the VM Portal? Quotas in Ovirt is a very important
> functionality for me.
>

Unfortunately no. The code is in progress if you'd like to follow along:
https://gerrit.ovirt.org/#/c/94953/
currently targeted to ovirt 4.2.8.


> Regards,
> Siovel


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme



[ovirt-users] Re: ovirt - in docker

2018-10-19 Thread Staniforth, Paul
Sorry Greg it was my mistake, I was using a user account that had read-only 
admin rights to allow it to create templates in the admin portal.


Regards,

  Paul S.


From: Greg Sheremeta 
Sent: 18 October 2018 15:45
To: Staniforth, Paul
Cc: users
Subject: Re: [ovirt-users] Re: ovirt - in docker

Ok, please open a bug describing your users and groups setup + your rpm versions
https://github.com/oVirt/ovirt-web-ui/issues/new



On Tue, Oct 16, 2018 at 11:41 AM Staniforth, Paul 
<p.stanifo...@leedsbeckett.ac.uk> wrote:

We are using 4.2.6 but the cluster compatibility version is still at 4.1 as we 
have too many VMs to upgrade as it's transactional, I believe there is a fix in 
4.2.7


Regards,

   Paul S.


From: Greg Sheremeta <gsher...@redhat.com>
Sent: 16 October 2018 16:25
To: Staniforth, Paul
Cc: users
Subject: Re: [ovirt-users] Re: ovirt - in docker

Which version of engine are you using?

On Tue, Oct 16, 2018 at 6:49 AM Staniforth, Paul 
<p.stanifo...@leedsbeckett.ac.uk> wrote:

Hello Greg,

   I tried the ovirt-web-ui and it works, but even though I'm 
logging in as a user it displays all the VMs, and it still doesn't give the 
option to sort or filter them in the UI.


Thanks,

   Paul S.


From: Greg Sheremeta <gsher...@redhat.com>
Sent: 13 October 2018 16:33
To: Staniforth, Paul
Cc: users
Subject: Re: [ovirt-users] Re: ovirt - in docker

On Fri, Oct 12, 2018 at 7:32 PM Greg Sheremeta <gsher...@redhat.com> wrote:
I'm in the process of updating that. (It's not related to OP's question.)

Done.
Note the new location, ovirtwebui
docker run --rm -it -e ENGINE_URL=https://[ENGINE.FQDN]/ovirt-engine -p 3000:3000 ovirtwebui/ovirt-web-ui


On Fri, Oct 12, 2018 at 7:40 AM Staniforth, Paul 
<p.stanifo...@leedsbeckett.ac.uk> wrote:

Hello,

  do you know the situation with ovirt-web-ui and docker 
(https://github.com/oVirt/ovirt-web-ui)? The last time I tried the docker 
instructions in the quick run section, the latest released version was an old 
version and the most recent image was even older.


Thanks,

Paul S.


From: Sandro Bonazzola <sbona...@redhat.com>
Sent: 12 October 2018 09:18
To: Roman Mohr; Martin Perina
Cc: re.search.it@gmail.com; users
Subject: [ovirt-users] Re: ovirt - in docker



On Fri, 12 Oct 2018 at 09:06, Roman Mohr <rm...@redhat.com> wrote:
On Tue, Oct 9, 2018 at 11:16 AM ReSearchIT Eng
<re.search.it@gmail.com> wrote:
>
> Hello!
> I am interested to run ovirt in docker container.
> It was noticed that there is an official repo for it:
> https://github.com/oVirt/ovirt-container-engine

Yaniv Bronheim mostly worked on it when the repo was moved to oVirt.

Sandro,  Simone, since he is now working on other things, do you guys
know anything about plans for updating the repo?

No. Martin?



Best Regards,

Roman

> Unfortunately it did not get an update for 2 years (4.1).
>
> Can anyone help with the required answers/entrypoint/patch files for
> the new 4.2 ?
>
> Thanks!


--

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbona...@redhat.com




--

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme


[ovirt-users] Network interfaces refresh bug.

2018-10-19 Thread Jacob Green
So we saw a potential bug with the user interface in oVirt 4.2, on the 
Network Interfaces screen. Screenshot attached. Basically, this morning we had 
a problem with our bond; we replaced the cable and the bond came back up. 
However, the screen in oVirt that shows this information reported that half 
the bond was down all day, even though at the OS level it was definitely up. 
It did not show as up again until we refreshed the host capabilities.
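
For comparison next time, a quick way to line up the OS view with what the 
engine reports (a sketch; the REST refresh action, credentials and ids below 
are assumptions about your setup):

    # OS-level view of the bond and its slaves
    cat /proc/net/bonding/bond0
    # ask the engine to re-read host capabilities without going through the UI
    curl -s -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' \
         -d '<action/>' https://ENGINE.FQDN/ovirt-engine/api/hosts/HOST_ID/refresh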


We are currently running oVirt Open Virtualization Manager Software 
Version 4.2.6.4-1.el7


Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection x 2 on 
the host.



Note: To be clear, the screenshot I attached is from after we refreshed host 
capabilities, so it shows the bond up at 20Gbit.



Thank you.



--
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690



[ovirt-users] high load on hosts

2018-10-19 Thread Jayme
I'm wondering how I can best limit the ability of VMs to overrun the load
on hosts. I have a fairly stock 4.2 HCI setup with three well-spec'ed
servers, 10GbE/SSDs, and plenty of RAM and CPU, with only a handful of
light-use VMs. I notice that when the occasional demanding job is run on a VM,
the load average on the host node shoots up into the 20-30s. How can a
single "medium" VM cause host load to rise so high?


[ovirt-users] re-enabling networkmanager

2018-10-19 Thread fsoyer

Hi,
I have installed a 4.2 cluster on CentOS 7 nodes, but I followed an (old) 
procedure of mine written for 4.0: so I disabled NetworkManager before 
installing oVirt.
The networks created and validated in the engine UI are:
ovirtmgmt on bond0 (2 slaves), failover mode
storagemanager on bond1 (2 slaves), jumbo frames, aggregation mode, serving 
Gluster.
Today, I installed Cockpit on the nodes to have the node consoles, but it says 
that it cannot manage the network without NM.
So my question is: is there any risk in re-enabling NM on the nodes? Can it 
break anything done by the UI?
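
If it helps, a small check before flipping the switch (a sketch; it assumes the 
vdsm-written ifcfg files carry NM_CONTROLLED=no, which is what normally keeps 
NetworkManager's hands off them):

    # confirm the ifcfg files owned by vdsm tell NetworkManager to leave them alone
    grep -H NM_CONTROLLED /etc/sysconfig/network-scripts/ifcfg-*
    # only once the above looks right on every node
    systemctl enable --now NetworkManager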

--

Regards,

Frank Soyer
Mob. 06 72 28 38 53 - Fix. 05 49 50 52 34
Systea IG
Systems, network and database administration
www.systea.net
Member of the Les Professionnels du Numérique network
KoGite
Local hosting
www.kogite.fr
 


[ovirt-users] Re: Node, failed to deploy hosted engine

2018-10-19 Thread Stefano Danzi

I've found some additional info.
The engine waits for the host to be up.
The VDSM log on the host shows "waiting for storage pool to go up", but 
"hosted-engine --deploy" (and the web wizard) don't ask for a storage domain.




2018-10-19 13:36:52,206+0200 INFO  (vmrecovery) [vds] recovery: waiting 
for storage pool to go up (clientIF:707)
2018-10-19 13:36:52,529+0200 INFO  (jsonrpc/7) [api.host] START 
getStats() from=:::192.168.124.71,44704 (api:46)
2018-10-19 13:36:52,532+0200 INFO  (jsonrpc/7) [vdsm.api] START 
repoStats(domains=()) from=:::192.168.124.71,44704, 
task_id=13a3ce58-4226-4a0d-91e8-8742ffe40222 (api:46)
2018-10-19 13:36:52,533+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
repoStats return={} from=:::192.168.124.71,44704, 
task_id=13a3ce58-4226-4a0d-91e8-8742ffe40222 (api:52)
2018-10-19 13:36:52,534+0200 INFO  (jsonrpc/7) [vdsm.api] START 
multipath_health() from=:::192.168.124.71,44704, 
task_id=fc1f5246-d6c2-41e5-bedc-5f9a7f27e9c5 (api:46)
2018-10-19 13:36:52,535+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
multipath_health return={} from=:::192.168.124.71,44704, 
task_id=fc1f5246-d6c2-41e5-bedc-5f9a7f27e9c5 (api:52)
2018-10-19 13:36:52,561+0200 INFO  (jsonrpc/7) [api.host] FINISH 
getStats return={'status': {'message': 'Done', 'code': 0}, 'info': 
{'cpuStatistics': {'11': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': 
'0.20', 'cpuIdle': '99.67'}, '10': {'cpuUser': '0.73', 'nodeIndex': 0, 
'cpuSys': '0.20', 'cpuIdle': '99.07'}, '1': {'cpuUser': '1.45', 
'nodeIndex': 1, 'cpuSys': '0.86', 'cpuIdle': '97.69'}, '0': {'cpuUser': 
'2.44', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '97.23'}, '3': 
{'cpuUser': '1.25', 'nodeIndex': 1, 'cpuSys': '0.53', 'cpuIdle': 
'98.22'}, '2': {'cpuUser': '1.58', 'nodeIndex': 0, 'cpuSys': '0.46', 
'cpuIdle': '97.96'}, '5': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': 
'0.07', 'cpuIdle': '99.80'}, '4': {'cpuUser': '0.40', 'nodeIndex': 0, 
'cpuSys': '0.13', 'cpuIdle': '99.47'}, '7': {'cpuUser': '0.07', 
'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '6': {'cpuUser': 
'1.19', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '98.48'}, '9': 
{'cpuUser': '0.40', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': 
'99.40'}, '8': {'cpuUser': '0.86', 'nodeIndex': 0, 'cpuSys': '0.33', 
'cpuIdle': '98.81'}}, 'numaNodeMemFree': {'1': {'memPercent': 30, 
'memFree': '14483'}, '0': {'memPercent': 35, 'memFree': '13445'}}, 
'memShared': 446, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 
'vmCount': 1, 'memUsed': '19', 'storageDomains': {}, 
'incomingVmMigrations': 0, 'network': {'glusternet': {'txErrors': '0', 
'state': 'up', 'sampleTime': 1539949006.318315, 'name': 'glusternet', 
'tx': '10080290', 'txDropped': '0', 'rx': '6669132', 'rxErrors': '0', 
'speed': '1000', 'rxDropped': '0'}, 'enp3s0f0': {'txErrors': '0', 
'state': 'up', 'sampleTime': 1539949006.318315, 'name': 'enp3s0f0', 
'tx': '66909029', 'txDropped': '0', 'rx': '17996942', 'rxErrors': '0', 
'speed': '1000', 'rxDropped': '0'}, 'bond0': {'txErrors': '0', 'state': 
'up', 'sampleTime': 1539949006.318315, 'name': 'bond0', 'tx': 
'87998708', 'txDropped': '0', 'rx': '46881018', 'rxErrors': '0', 
'speed': '3000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 
'state': 'down', 'sampleTime': 1539949006.318315, 'name': ';vdsmdummy;', 
'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': 
'1000', 'rxDropped': '0'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 
'sampleTime': 1539949006.318315, 'name': 'ovirtmgmt', 'tx': '56421549', 
'txDropped': '0', 'rx': '25448254', 'rxErrors': '0', 'speed': '1000', 
'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 
1539949006.318315, 'name': 'lo', 'tx': '41578221', 'txDropped': '0', 
'rx': '41578221', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 
'ovs-system': {'txErrors': '0', 'state': 'down', 'sampleTime': 
1539949006.318315, 'name': 'ovs-system', 'tx': '0', 'txDropped': '0', 
'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 
'bond0.127': {'txErrors': '0', 'state': 'up', 'sampleTime': 
1539949006.318315, 'name': 'bond0.127', 'tx': '10080290', 'txDropped': 
'0', 'rx': '6670098', 'rxErrors': '0', 'speed': '1000', 'rxDropped': 
'0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 
1539949006.318315, 'name': 'enp3s0f1', 'tx': '14732876', 'txDropped': 
'0', 'rx': '12258325', 'rxErrors': '0', 'speed': '1000', 'rxDropped': 
'0'}, 'bond0.1': {'txErrors': '0', 'state': 'up', 'sampleTime': 
1539949006.318315, 'name': 'bond0.1', 'tx': '56421549', 'txDropped': 
'0', 'rx': '25458727', 'rxErrors': '0', 'speed': '1000', 'rxDropped': 
'0'}, 'br-int': {'txErrors': '0', 'state': 'down', 'sampleTime': 
1539949006.318315, 'name': 'br-int', 'tx': '0', 'txDropped': '0', 'rx': 
'0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '2'}, 'enp65s0f0': 
{'txErrors': '0', 'state': 'down', 'sampleTime': 1539949006.318315, 
'name': 'enp65s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': 
'0', 'speed': '1000', 'rxDropped': '0'}, 

[ovirt-users] Re: Problems with quotas

2018-10-19 Thread Andrej Krejcir
Hi,

When creating a VM, users should be able to choose which quota they want to
use, because each user can have multiple quotas available.
But we found a bug where there is no way for the user to get a list of
available quotas using the REST API or VM Portal.

There is a related bug [1], which will change the quota behavior so that, if
the user does not specify a quota in the REST request, one of the available
quotas will be chosen automatically. That would solve the problem.

I'm not sure if there is any workaround for this without these bugs fixed.


Andrej

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1619154
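
Until those are fixed, one possible workaround sketch (assuming admin access to 
list the quotas, and that the ids can then be handed to users; the names, ids 
and credentials below are placeholders) is to pass the quota explicitly when 
creating the VM through the REST API:

    # list the quotas defined on the data center (the ids come from here)
    curl -s -k -u admin@internal:PASSWORD \
         https://ENGINE.FQDN/ovirt-engine/api/datacenters/DC_ID/quotas
    # create the VM from a template with an explicit quota id
    curl -s -k -u user@mydomain:PASSWORD -X POST -H 'Content-Type: application/xml' \
         -d '<vm><name>myvm</name><cluster><name>Default</name></cluster><template><name>mytemplate</name></template><quota id="QUOTA_ID"/></vm>' \
         https://ENGINE.FQDN/ovirt-engine/api/vms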

On Thu, 18 Oct 2018 at 17:12,  wrote:

> Hi, I need to use Ovirt's quotas system. I have approximately 50 users
> that use my Ovirt and they can create virtual machines only from templates,
> in total there are 20 templates. I have assigned a quota for each user,
> until here there are no problems. The problem is that in order for the
> user's quota to be used for each template, it must be linked to a specific
> quota, but only one. The only solution I see would be to have 20 templates
> for each user so it would be a total of 50X20 templates = 1000 templates
> and if a new user arrives 20 templates more and so on, which would not be a
> good solution. I think evidently or that I have not understood well how
> Ovirt's system of quotas works or maybe my problem can be solved in another
> way. Please I need your help


[ovirt-users] Wrong VLAN ID for Management Network

2018-10-19 Thread Sakhi Hadebe
Hi,

Can I just change the VLAN ID of the ovirtmgmt network in the Admin
Portal? In the OS, the network is configured and verified: the ovirtmgmt
network has VLAN ID 21, but in the Admin Portal it shows VLAN ID 20,
which is configured for the VM network.

Can I just change it in the Admin Portal? Will the cluster be happy about
the change?
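
For what it's worth, a sketch of how the retag could be done through the REST 
API once you decide to change it (the network id, credentials and FQDN below 
are placeholders, and you would normally put the hosts in maintenance first - 
retagging the management network can cut the engine's connection to the hosts):

    # retag the logical network to VLAN 21 (look up NETWORK_ID via GET /ovirt-engine/api/networks)
    curl -s -k -u admin@internal:PASSWORD -X PUT -H 'Content-Type: application/xml' \
         -d '<network><vlan id="21"/></network>' \
         https://ENGINE.FQDN/ovirt-engine/api/networks/NETWORK_ID

Either way, the switch ports and the host ifcfg files have to agree with the 
new tag before the hosts will come back up.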

-- 
Regards,
Sakhi Hadebe