[ovirt-users] Re: nfs

2019-09-10 Thread mailing-ovirt
So I think I fixed it. I created a sanlock user (uid 179) on the NFS server;
the share's export options also needed no_root_squash.
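For anyone hitting the same thing, the server-side change amounted to
something like this (a sketch; the export path and client network below are
placeholders, not my actual values):

    # create a sanlock user matching the uid it has on the oVirt host
    groupadd -g 179 sanlock
    useradd -u 179 -g 179 -s /sbin/nologin sanlock

    # /etc/exports -- note no_root_squash
    /exports/data  192.168.20.0/24(rw,sync,no_root_squash)

    # re-export
    exportfs -ra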

Simon

-Original Message-
From: Vojtech Juranek  
Sent: September 9, 2019 2:55 PM
To: users@ovirt.org
Subject: [ovirt-users] Re: nfs

Hi,

> I'm trying to mount an NFS share.
> 
> If I manually mount it from an SSH session, I can access it without
> issues.
> 
> However, when I do it from the web UI, it keeps failing:

the error means sanlock cannot write a resource on your device. Please check
that you have proper permissions on your NFS share (it has to be writable by
uid:gid 36:36) and also check /var/log/sanlock.log for any details.
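
For example, a quick way to verify from the host side (a sketch; the server
name and paths are placeholders):

    # mount the export by hand and check that vdsm:kvm (36:36) can write
    mkdir -p /mnt/test
    mount -t nfs nfs-server:/exports/data /mnt/test
    sudo -u vdsm touch /mnt/test/probe && echo writable
    ls -ln /mnt/test    # owner/group should show up as 36 36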

> 
> Not sure how to solve that.
> 
> Thanks
> 
> Simon
> 
> [vdsm log snipped; identical to the log in the original message below]

[ovirt-users] nfs

2019-09-09 Thread mailing-ovirt
I'm trying to mount an NFS share.

If I manually mount it from an SSH session, I can access it without issues.

However, when I do it from the web UI, it keeps failing:

 

Not sure how to solve that.

 

Thanks

 

Simon

 

2019-09-09 09:08:47,601-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] START
repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4']) from=::1,42394,
task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2 (api:48)

2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH repoStats
return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual': True,
'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck': '0.4',
'valid': True}} from=::1,42394, task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2
(api:54)

2019-09-09 09:08:47,611-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:47,839-0400 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:47,846-0400 INFO  (jsonrpc/2) [api.host] START
getCapabilities() from=::1,42394 (api:48)

2019-09-09 09:08:48,149-0400 INFO  (jsonrpc/2) [root] managedvolume not
supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot
import os_brick',) (caps:152)

2019-09-09 09:08:49,212-0400 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.01 seconds (__init__:312)

2019-09-09 09:08:49,263-0400 INFO  (jsonrpc/2) [root]
/usr/libexec/vdsm/hooks/after_get_caps/50_openstacknet: rc=0 err=
(hooks:114)

2019-09-09 09:08:49,497-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:49,657-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:49,665-0400 INFO  (jsonrpc/7) [vdsm.api] START
repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4']) from=::1,42394,
task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7 (api:48)

2019-09-09 09:08:49,666-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH repoStats
return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual': True,
'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck': '2.4',
'valid': True}} from=::1,42394, task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7
(api:54)

2019-09-09 09:08:49,667-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:49,800-0400 INFO  (jsonrpc/2) [root]
/usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err=
(hooks:114)

2019-09-09 09:08:50,464-0400 INFO  (jsonrpc/2) [root]
/usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=
(hooks:114)

2019-09-09 09:08:50,467-0400 INFO  (jsonrpc/2) [api.host] FINISH
getCapabilities return={'status': {'message': 'Done', 'code': 0}, 'info':
{u'HBAInventory': {u'iSCSI': [{u'InitiatorName':
u'iqn.1994-05.com.redhat:d8c85fc0ab85'}], u'FC': []}, u'packages2':
{u'kernel': {u'release': u'957.27.2.el7.x86_64', u'version': u'3.10.0'},
u'spice-server': {u'release': u'6.el7_6.1', u'version': u'0.14.0'},
u'librbd1': {u'release': u'4.el7', u'version': u'10.2.5'}, u'vdsm':
{u'release': u'1.el7', u'version': u'4.30.24'}, u'qemu-kvm': {u'release':
u'18.el7_6.7.1', u'version': u'2.12.0'}, u'openvswitch': {u'release':
u'4.el7', u'version': u'2.11.0'}, u'libvirt': {u'release': u'10.el7_6.12',
u'version': u'4.5.0'}, u'ovirt-hosted-engine-ha': {u'release': u'1.el7',
u'version': u'2.3.3'}, u'qemu-img': {u'release': u'18.el7_6.7.1',
u'version': u'2.12.0'}, u'mom': {u'release': u'1.el7.centos', u'version':
u'0.5.12'}, u'glusterfs-cli': {u'release': u'1.el7', u'version': u'6.5'}},
u'numaNodeDistance': {u'1': [20, 10], u'0': [10, 20]}, u'cpuModel':
u'Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz', u'nestedVirtualization':
False, u'liveMerge': u'true', u'hooks': {u'before_vm_start':
{u'50_hostedengine': {u'md5': u'95c810cdcfe4195302a59574a5148289'},
u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}},
u'after_network_setup': {u'30_ethtool_options': {u'md5':
u'ce1fbad7aa0389e3b06231219140bf0d'}}, u'after_vm_destroy':
{u'delete_vhostuserclient_hook': {u'md5':
u'c2f279cc9483a3f842f6c29df13994c1'}, u'50_vhostmd': {u'md5':
u'bdf4802c0521cf1bae08f2b90a9559cf'}}, u'after_vm_start':
{u'openstacknet_utils.py': {u'md5': u'1ed38ddf30f8a9c7574589e77e2c0b1f'},
u'50_openstacknet': {u'md5': u'ea0a5a715da8c1badbcda28e8b8fa00e'}},
u'after_device_migrate_destination': {u'openstacknet_utils.py': {u'md5':
u'1ed38ddf30f8a9c7574589e77e2c0b1f'}, u'50_openstacknet': {u'md5':
u'6226fbc4d1602994828a3904fc1b875d'}}, u'before_device_migrate_destination':
{u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}},
u'after_device_create': {u'openstacknet_utils.py': {u'md5':

[ovirt-users] gluster

2019-09-09 Thread mailing-ovirt
Hi, 

 

I see options to deploy oVirt with Gluster during the initial rollout;
however, I can't seem to find information on how to add it after a
non-Gluster initial setup:

 

GlusterFS Version:

[N/A]

 

Thanks

 

Simon

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNRQN76L3XGNLYXYEIRYVDCJGU5BBGL5/


[ovirt-users] Re: network config change

2019-09-06 Thread mailing-ovirt
So in this case, the work was already started outside of the oVirt Engine
UI.

I inherited the problem. So far I've enslaved em1 and em3 to bond0, confirmed
it was working (mode 4, LACP), and attached bond0 to the ovirtmgmt bridge.
The current setup uses DHCP and the IP address is successfully assigned.

One of the issues I'm seeing right now is that the hosted engine VM will not
start. So I started looking a bit further, and I can see that the routing
table has some issues:

192.168.20.0/24 dev em1 proto kernel scope link src 192.168.20.108 
192.168.20.0/24 dev ovirtmgmt proto kernel scope link src 192.168.20.108

The first line prevents the host from talking to its default gateway
(192.168.20.1), because traffic goes out em1, which has no IP (it's part of
bond0). If I remove that line from the routing table, I can successfully ping
the default gateway through the ovirtmgmt interface.

I can't figure out how to prevent the system from recreating the first line 
during a reboot.
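
My guess is that em1 still has leftover addressing config. A pure bond-slave
ifcfg (a sketch, assuming the usual RHEL/CentOS 7 network-scripts layout;
this may not match what vdsm generated) would look like:

    # /etc/sysconfig/network-scripts/ifcfg-em1
    DEVICE=em1
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none    # no dhcp or static address on the slave itself

With BOOTPROTO=none and no IPADDR, the kernel should stop adding that route
for em1 at boot; on the running system, "ip route del 192.168.20.0/24 dev
em1" clears the stray entry until the next reboot.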

I don't know whether this is one of the reasons the engine fails, but since
the oVirt startup process seems to validate connectivity with a ping, the
services will fail when they start at boot time because that route is
re-added.

Thanks again

Simon 

-Original Message-
From: Dominik Holler  
Sent: September 6, 2019 2:07 AM
To: mailing-ov...@qic.ca
Cc: users@ovirt.org
Subject: [ovirt-users] Re: network config change

On Thu, 5 Sep 2019 08:27:07 -0400
 wrote:

> Greetings,
>
> So we have a single hosted-engine oVirt box deployed.
>
> When this was deployed, the network requirements were poorly determined
> and there was no port teaming or any VLAN configuration.
>
> Now we need to change that.
>
> I have searched all over for any form of procedure or guide as to how
> we do this.
>
> In summary:
>
> I need to go from a setup that uses em1 as its default device and
> gateway to a setup where em1 and em3 are bonded into bond0, which is
> used instead.
>

All modifications should be done via oVirt Engine, e.g. in the Administration
Portal via Compute > Hosts > hostname > Network Interfaces > Setup Host
Networks, because this way the change will be rolled back automatically on
failure.

Because this change modifies many aspects of the management network, I would
recommend adding a temporary second host, migrating the hosted engine to that
second host, putting the first host into maintenance, and migrating back to
the first host after the change has succeeded.
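
If you want to drive the hosted-engine part from a shell, the matching
commands (a sketch; they assume the standard hosted-engine CLI on the hosts)
are:

    # check where the engine VM runs and the HA state
    hosted-engine --vm-status

    # on the host being freed up: local maintenance makes the HA agents
    # migrate the engine VM away from it
    hosted-engine --set-maintenance --mode=local

    # after the network change succeeded, re-enable the host
    hosted-engine --set-maintenance --mode=none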

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FBWLN2RGHSQO7NJPSQR7RMEE62RAFDU2/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CWD33ZQLGAIKQPTETZXHQVYA7LSFMYEP/


[ovirt-users] network config change

2019-09-05 Thread mailing-ovirt
Greetings,

So we have a single hosted-engine oVirt box deployed.

When this was deployed, the network requirements were poorly determined and
there was no port teaming or any VLAN configuration.

Now we need to change that.

I have searched all over for any form of procedure or guide as to how we do
this.

In summary:

I need to go from a setup that uses em1 as its default device and gateway to
a setup where em1 and em3 are bonded into bond0, which is used instead.

As for the VLANs, we could live with keeping the native VLAN configured on
the Cisco side for management and only adding extra ones for VMs.

Any assistance as to what I need to go through from a bash shell would be
appreciated.

Thanks

Simon

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G53XTI7F3MN5ZHS42OGSANTUKQAHNEA6/