[ovirt-users] Re: hyperconverged single node with SSD cache fails gluster creation

2019-09-09 Thread thomas
Thanks a ton!

On one hand I'm glad it's a known bug that has been fixed; on the other hand I'm 
more scared than ever that oVirt is too raw to upgrade without intensive QA.

I'll try both the manual approach and the new Ansible scripts once I've 
overcome a new problem that is keeping me busy (that will be a separate post).

So when would the change flow into the current oVirt release? 4.3.6 or 4.4?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BMYVSKBKQZWOPT2FL67HIW6NXOJ4YQZP/


[ovirt-users] Re: gluster

2019-09-09 Thread Kaustav Majumder
Hi,
You can try this.
Engine-UI -> Compute -> Clusters -> New -> (Check) Enable Gluster Service
-> (Check) Import existing gluster configuration -> Enter details of one
gluster host.
The Engine will add all hosts in the gluster peer group under the new cluster.
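
If you want to sanity-check the existing gluster setup before importing it, a
rough sketch run from any one of the gluster hosts would be (volume names are
placeholders, use your own):

gluster peer status            # every host should show up as a connected peer
gluster volume list            # volumes the Engine will pick up on import
gluster volume info VOLNAME    # VOLNAME stands for one of your volumes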


On Mon, Sep 9, 2019 at 5:31 PM  wrote:

> Hi,
>
>
>
> I see options to deploy oVirt with gluster during the initial rollout;
> however, I can't seem to find information on how to add it after a
> non-gluster initial setup:
>
>
>
> GlusterFS Version:
>
> [N/A]
>
>
>
> Thanks
>
>
>
> Simon
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNRQN76L3XGNLYXYEIRYVDCJGU5BBGL5/
>


-- 

Thanks,

Kaustav Majumder
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IZMCLNFCUU3A4BEV2UXULSNCPSKKTUFA/


[ovirt-users] Re: network config change

2019-09-09 Thread simon
For anyone following, 

I reverted the changes back to using em1 only and will wait for the 3 node 
cluster to be functional before making changes as recommended.

Thanks

Simon

-Original Message-
From: si...@qic.ca  On Behalf Of mailing-ov...@qic.ca
Sent: September 6, 2019 1:02 PM
To: 'Dominik Holler' ; mailing-ov...@qic.ca
Cc: users@ovirt.org
Subject: RE: [ovirt-users] Re: network config change

So in this case, the work was already started outside of the oVirt Engine UI.

I inherited the problem. So far I've enslaved em1 and em3 to bond0, confirmed it 
was working (mode 4, LACP), and attached bond0 to the ovirtmgmt bridge. The 
current setup uses DHCP and the IP address is successfully assigned.

One of the issues I'm seeing right now is that the hosted engine VM will not 
start. Looking a bit further, I can see that the routing table has some
issues:

192.168.20.0/24 dev em1 proto kernel scope link src 192.168.20.108
192.168.20.0/24 dev ovirtmgmt proto kernel scope link src 192.168.20.108

The first line prevents the host from talking to its default gateway (20.1) 
because it's using em1, which has no IP (it's part of bond0). If I remove that 
line from the routing table, I can successfully ping the default gateway through 
the ovirtmgmt interface.

I can't figure out how to prevent the system from recreating the first line 
during a reboot.
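
My best guess is that em1 still has its own ifcfg file carrying DHCP/address
settings, which would explain the route coming back at boot. Roughly what I
plan to check next (assuming the usual EL7 network-scripts layout):

cat /etc/sysconfig/network-scripts/ifcfg-em1   # should be a plain bond slave
# expected slave config, more or less:
#   BOOTPROTO=none
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes
ip route del 192.168.20.0/24 dev em1           # clears the stale route until next boot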

I don't know if this is one of the reasons the engine fails to start, but 
since the oVirt startup process seems to validate connectivity with ping, the 
services starting at boot time will fail because that route is re-added.

Thanks again

Simon

-Original Message-
From: Dominik Holler 
Sent: September 6, 2019 2:07 AM
To: mailing-ov...@qic.ca
Cc: users@ovirt.org
Subject: [ovirt-users] Re: network config change

On Thu, 5 Sep 2019 08:27:07 -0400
 wrote:

> Greetings,
>
> So we have a single hosted engine ovirt box deployed.
>
> When this was deployed, the network requirements were poorly 
> determined and there was no port teaming or any vlan configuration.
>
> now we need to change that.
>
> I have searched all over for any form of procedure or guide as to how 
> we do this.
>
> in summary:
>
> I need to go from a setup that uses em1 as its default device and 
> gateway to a setup where em1 and em3 are bonded into bond0 and now use 
> that instead.
>
>

All modifications should be done via oVirt Engine, e.g. in the Administration 
Portal via Compute > Hosts > hostname > Network Interfaces > Setup Host 
Networks, because this way the change will be rolled back automatically on 
failure.

Because this change modifies many aspects of the management network, I would 
recommend adding a temporary second host, migrating the hosted engine to that 
second host, putting the first host into maintenance, and migrating back to the 
first host after the change has succeeded.
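
For the hosted engine part, the shell-side flow is roughly the following
(sketch only; the actual migration is handled by the HA agent / Engine):

hosted-engine --vm-status                      # check where the engine VM currently runs
hosted-engine --set-maintenance --mode=local   # on the first host: engine VM is migrated away
# ... change the network on the first host via Setup Host Networks ...
hosted-engine --set-maintenance --mode=none    # re-enable the first host afterwards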

>
> As for the VLAN, we could live with keeping the native VLAN configured 
> on the Cisco side for management and only adding extra ones for VMs.
>
> any assistance as to what I need to go through from a bash shell would 
> be appreciated.
>
> thanks
>
> Simon
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: 
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FBWLN2RGHSQO7NJPSQR7RMEE62RAFDU2/




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TYB2NNC4PEKV7WJXR23Q5TU5L3E3SIIR/


[ovirt-users] gluster

2019-09-09 Thread mailing-ovirt
Hi, 

 

I see options to deploy oVirt with gluster during the initial rollout;
however, I can't seem to find information on how to add it after a
non-gluster initial setup:

 

GlusterFS Version:

[N/A]

 

Thanks

 

Simon

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNRQN76L3XGNLYXYEIRYVDCJGU5BBGL5/


[ovirt-users] Re: nfs

2019-09-09 Thread Vojtech Juranek
Hi,

> I`m trying to mount a nfs share.
> 
> 
> 
> if I manually mount it from ssh, I can access it without issues.
> 
> 
> 
> However when I do it from the web config, it keeps failing:

the error means sanlock cannot write a resource on your device. Please check 
that you have proper permissions on your NFS export (it has to be writable by 
uid:gid 36:36) and also check /var/log/sanlock.log for any details.
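
A quick way to check, roughly (assuming the export is server:/exports/ovirt, 
adjust to yours):

# on the NFS server: the export must be owned by vdsm:kvm (36:36)
chown 36:36 /exports/ovirt
chmod 0755 /exports/ovirt
# on the oVirt host: verify the vdsm user can actually write through the mount
mkdir -p /mnt/nfstest
mount -t nfs server:/exports/ovirt /mnt/nfstest
sudo -u vdsm touch /mnt/nfstest/write-probe && echo "vdsm can write"
umount /mnt/nfstest
tail -n 50 /var/log/sanlock.log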

> 
> 
> Not sure how to solve that.
> 
> 
> 
> Thanks
> 
> 
> 
> Simon
> 
> 
> 
> 2019-09-09 09:08:47,601-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] START
> repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4']) from=::1,42394,
> task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2 (api:48)
> 
> 2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH repoStats
> return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual': True,
> 'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck': '0.4',
> 'valid': True}} from=::1,42394,
> task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2 (api:54)
> 
> 2019-09-09 09:08:47,611-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
> call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:47,839-0400 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:47,846-0400 INFO  (jsonrpc/2) [api.host] START
> getCapabilities() from=::1,42394 (api:48)
> 
> 2019-09-09 09:08:48,149-0400 INFO  (jsonrpc/2) [root] managedvolume not
> supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot
> import os_brick',) (caps:152)
> 
> 2019-09-09 09:08:49,212-0400 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.01 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,263-0400 INFO  (jsonrpc/2) [root]
> /usr/libexec/vdsm/hooks/after_get_caps/50_openstacknet: rc=0 err=
> (hooks:114)
> 
> 2019-09-09 09:08:49,497-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,657-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,665-0400 INFO  (jsonrpc/7) [vdsm.api] START
> repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4']) from=::1,42394,
> task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7 (api:48)
> 
> 2019-09-09 09:08:49,666-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH repoStats
> return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual': True,
> 'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck': '2.4',
> 'valid': True}} from=::1,42394,
> task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7 (api:54)
> 
> 2019-09-09 09:08:49,667-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,800-0400 INFO  (jsonrpc/2) [root]
> /usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err=
> (hooks:114)
> 
> 2019-09-09 09:08:50,464-0400 INFO  (jsonrpc/2) [root]
> /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=
> (hooks:114)
> 
> 2019-09-09 09:08:50,467-0400 INFO  (jsonrpc/2) [api.host] FINISH
> getCapabilities return={'status': {'message': 'Done', 'code': 0}, 'info':
> {u'HBAInventory': {u'iSCSI': [{u'InitiatorName':
> u'iqn.1994-05.com.redhat:d8c85fc0ab85'}], u'FC': []}, u'packages2':
> {u'kernel': {u'release': u'957.27.2.el7.x86_64', u'version': u'3.10.0'},
> u'spice-server': {u'release': u'6.el7_6.1', u'version': u'0.14.0'},
> u'librbd1': {u'release': u'4.el7', u'version': u'10.2.5'}, u'vdsm':
> {u'release': u'1.el7', u'version': u'4.30.24'}, u'qemu-kvm': {u'release':
> u'18.el7_6.7.1', u'version': u'2.12.0'}, u'openvswitch': {u'release':
> u'4.el7', u'version': u'2.11.0'}, u'libvirt': {u'release': u'10.el7_6.12',
> u'version': u'4.5.0'}, u'ovirt-hosted-engine-ha': {u'release': u'1.el7',
> u'version': u'2.3.3'}, u'qemu-img': {u'release': u'18.el7_6.7.1',
> u'version': u'2.12.0'}, u'mom': {u'release': u'1.el7.centos', u'version':
> u'0.5.12'}, u'glusterfs-cli': {u'release': u'1.el7', u'version': u'6.5'}},
> u'numaNodeDistance': {u'1': [20, 10], u'0': [10, 20]}, u'cpuModel':
> u'Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz', u'nestedVirtualization':
> False, u'liveMerge': u'true', u'hooks': {u'before_vm_start':
> {u'50_hostedengine': {u'md5': u'95c810cdcfe4195302a59574a5148289'},
> u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}},
> u'after_network_setup': {u'30_ethtool_options': {u'md5':
> u'ce1fbad7aa0389e3b06231219140bf0d'}}, u'after_vm_destroy':
> {u'delete_vhostuserclient_hook': {u'md5':
> u'c2f279cc9483a3f842f6c29df13994c1'}, u'50_vhostmd': {u'md5':
> u'bdf4802c0521cf1bae08f2b90a9559cf'}}, u'after_vm_start':
> {u'openstacknet_utils.py': {u'md5': u'1ed38ddf30f8a9c7574589e77e2c0b1f'},

[ovirt-users] Re: gluster

2019-09-09 Thread Kaustav Majumder
Well, almost. Create a new cluster and check Enable Gluster Service.
Upon adding new hosts to this cluster (via the UI), gluster will be
automatically configured on them.
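
If you prefer to prepare the hosts by hand first, a rough sketch of what is
needed on each host (package names as on EL7 / oVirt 4.3; adjust to your setup):

yum install -y glusterfs-server vdsm-gluster
systemctl enable --now glusterd
# once the hosts are in the gluster-enabled cluster, verify from one of them:
gluster peer status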


On Mon, Sep 9, 2019 at 6:56 PM  wrote:

> So doing this seems to assume that gluster is already configured on the
> hosts. What if it's not configured yet? Can I use the web UI to do this,
> or does it have to be done separately from oVirt?
>
>
>
> Thanks again
>
>
>
> Simon
>
>
>
> *From:* Kaustav Majumder 
> *Sent:* September 9, 2019 8:08 AM
> *To:* mailing-ov...@qic.ca
> *Cc:* users@ovirt.org
> *Subject:* [ovirt-users] Re: gluster
>
>
>
> Hi,
> You can try this.
> Engine-UI -> Compute -> Clusters -> New -> (Check) Enable Gluster Service
> -> (Check) Import existing gluster configuration -> Enter details of one
> gluster host.
> The Engine will add all hosts in the gluster peer group under the new cluster.
>
>
>
> On Mon, Sep 9, 2019 at 5:31 PM  wrote:
>
> Hi,
>
>
>
> I see options to deploy oVirt with gluster during the initial rollout;
> however, I can't seem to find information on how to add it after a
> non-gluster initial setup:
>
>
>
> GlusterFS Version:
>
> [N/A]
>
>
>
> Thanks
>
>
>
> Simon
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNRQN76L3XGNLYXYEIRYVDCJGU5BBGL5/
>
>
>
>
> --
>
> *Thanks,*
>
> *Kaustav Majumder*
>


-- 

Thanks,

Kaustav Majumder
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GR4XHNGO57K57N6JKB4XE5P3HX7RRMY5/


[ovirt-users] nfs

2019-09-09 Thread mailing-ovirt
I`m trying to mount a nfs share.

 

if I manually mount it from ssh, I can access it without issues.

 

However when I do it from the web config, it keeps failing: 

 

Not sure how to solve that.

 

Thanks

 

Simon

 

2019-09-09 09:08:47,601-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] START
repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4']) from=::1,42394,
task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2 (api:48)

2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH repoStats
return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual': True,
'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck': '0.4',
'valid': True}} from=::1,42394, task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2
(api:54)

2019-09-09 09:08:47,611-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:47,839-0400 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:47,846-0400 INFO  (jsonrpc/2) [api.host] START
getCapabilities() from=::1,42394 (api:48)

2019-09-09 09:08:48,149-0400 INFO  (jsonrpc/2) [root] managedvolume not
supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot
import os_brick',) (caps:152)

2019-09-09 09:08:49,212-0400 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.01 seconds (__init__:312)

2019-09-09 09:08:49,263-0400 INFO  (jsonrpc/2) [root]
/usr/libexec/vdsm/hooks/after_get_caps/50_openstacknet: rc=0 err=
(hooks:114)

2019-09-09 09:08:49,497-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:49,657-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:49,665-0400 INFO  (jsonrpc/7) [vdsm.api] START
repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4']) from=::1,42394,
task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7 (api:48)

2019-09-09 09:08:49,666-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH repoStats
return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual': True,
'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck': '2.4',
'valid': True}} from=::1,42394, task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7
(api:54)

2019-09-09 09:08:49,667-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:49,800-0400 INFO  (jsonrpc/2) [root]
/usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err=
(hooks:114)

2019-09-09 09:08:50,464-0400 INFO  (jsonrpc/2) [root]
/usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=
(hooks:114)

2019-09-09 09:08:50,467-0400 INFO  (jsonrpc/2) [api.host] FINISH
getCapabilities return={'status': {'message': 'Done', 'code': 0}, 'info':
{u'HBAInventory': {u'iSCSI': [{u'InitiatorName':
u'iqn.1994-05.com.redhat:d8c85fc0ab85'}], u'FC': []}, u'packages2':
{u'kernel': {u'release': u'957.27.2.el7.x86_64', u'version': u'3.10.0'},
u'spice-server': {u'release': u'6.el7_6.1', u'version': u'0.14.0'},
u'librbd1': {u'release': u'4.el7', u'version': u'10.2.5'}, u'vdsm':
{u'release': u'1.el7', u'version': u'4.30.24'}, u'qemu-kvm': {u'release':
u'18.el7_6.7.1', u'version': u'2.12.0'}, u'openvswitch': {u'release':
u'4.el7', u'version': u'2.11.0'}, u'libvirt': {u'release': u'10.el7_6.12',
u'version': u'4.5.0'}, u'ovirt-hosted-engine-ha': {u'release': u'1.el7',
u'version': u'2.3.3'}, u'qemu-img': {u'release': u'18.el7_6.7.1',
u'version': u'2.12.0'}, u'mom': {u'release': u'1.el7.centos', u'version':
u'0.5.12'}, u'glusterfs-cli': {u'release': u'1.el7', u'version': u'6.5'}},
u'numaNodeDistance': {u'1': [20, 10], u'0': [10, 20]}, u'cpuModel':
u'Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz', u'nestedVirtualization':
False, u'liveMerge': u'true', u'hooks': {u'before_vm_start':
{u'50_hostedengine': {u'md5': u'95c810cdcfe4195302a59574a5148289'},
u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}},
u'after_network_setup': {u'30_ethtool_options': {u'md5':
u'ce1fbad7aa0389e3b06231219140bf0d'}}, u'after_vm_destroy':
{u'delete_vhostuserclient_hook': {u'md5':
u'c2f279cc9483a3f842f6c29df13994c1'}, u'50_vhostmd': {u'md5':
u'bdf4802c0521cf1bae08f2b90a9559cf'}}, u'after_vm_start':
{u'openstacknet_utils.py': {u'md5': u'1ed38ddf30f8a9c7574589e77e2c0b1f'},
u'50_openstacknet': {u'md5': u'ea0a5a715da8c1badbcda28e8b8fa00e'}},
u'after_device_migrate_destination': {u'openstacknet_utils.py': {u'md5':
u'1ed38ddf30f8a9c7574589e77e2c0b1f'}, u'50_openstacknet': {u'md5':
u'6226fbc4d1602994828a3904fc1b875d'}}, u'before_device_migrate_destination':
{u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}},
u'after_device_create': {u'openstacknet_utils.py': {u'md5':

[ovirt-users] Re: gluster

2019-09-09 Thread simon
Thank you.

 

From: Kaustav Majumder  
Sent: September 9, 2019 8:08 AM
To: mailing-ov...@qic.ca
Cc: users@ovirt.org
Subject: [ovirt-users] Re: gluster

 

Hi,
You can try this.
Engine-UI -> Compute -> Clusters -> New -> (Check) Enable Gluster Service -> 
(Check) Import existing gluster configuration -> Enter details of one gluster 
host.
The Engine will add all hosts in the gluster peer group under the new cluster.

 

On Mon, Sep 9, 2019 at 5:31 PM <mailing-ov...@qic.ca> wrote:

Hi, 

 

I see options to deploy oVirt with gluster during the initial rollout; however, 
I can't seem to find information on how to add it after a non-gluster 
initial setup:

 

GlusterFS Version:

[N/A]

 

Thanks

 

Simon

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNRQN76L3XGNLYXYEIRYVDCJGU5BBGL5/




 

-- 

Thanks,

Kaustav Majumder

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I5LI665B7ROV3R2FPEQ525TS5SP2AKKD/


[ovirt-users] Re: ovirt-engine-extension-aaa-ldap-setup

2019-09-09 Thread Rick A
I finally got this to work so I'm posting what I did in case it may help 
someone else in the future. Hopefully the format of this site won't make it 
hard to read.

- Thanks to Edward Berger who got me to the right direction and providing this 
link:

https://github.com/oVirt/ovirt-engine-extension-aaa-ldap/blob/master/profiles/openldap.properties

- Also Thanks to Ondra Machacek for advising to use the 
ovirt-engine-extensions-tool 

All changes are made on /etc/ovirt-engine/aaa/MYDOMAIN.com.properties

- Once I added this line: 

sequence.openldap-init-vars.040.var-set.value = 
(objectClass=Person)(${seq:simple_attrsUserName}=*)

- I was getting this error:

-->Authz.InvokeCommands.FETCH_PRINCIPAL_RECORD principal='null'
2019-09-06 10:50:18,837-04 SEVERE  Cannot locate principal 'null'

- So then I changed the Principal map from "uid" to "cn" by adding this line:

attrmap.map-principal-record.attr.PrincipalRecord_PRINCIPAL.map = cn

- After that, it pulled the user principal name, but then when trying to add a 
user in the web interface, it would fail with this error:

ERROR: null value in column "external_id" violates not-null constraint

- So I mapped the PrincipalRecord_ID to the user mail attribute, figuring that 
would be fine since emails are mostly unique anyway, by adding the following 
line:

attrmap.map-principal-record.attr.PrincipalRecord_ID.map = mail



My configuration: /etc/ovirt-engine/aaa/MYDOMAIN.com.properties

include = 

vars.server = SERVERNAME.MYDOMAIN.com
vars.user = ldapu...@mydomain.com
vars.password = USER PASSWORD

pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}
pool.default.serverset.type = single
pool.default.serverset.single.server = ${global:vars.server}



attrmap.map-principal-record.attr.PrincipalRecord_PRINCIPAL.map = cn
attrmap.map-principal-record.attr.PrincipalRecord_ID.map = mail

sequence.openldap-init-vars.010.description = set base dn
sequence.openldap-init-vars.010.type = var-set
sequence.openldap-init-vars.010.var-set.variable = simple_attrsBaseDN
sequence.openldap-init-vars.010.var-set.value = DC=MYDOMAIN,DC=com
sequence.openldap-init-vars.020.var-set.value = cn
sequence.openldap-init-vars.040.var-set.value = 
(objectClass=Person)(${seq:simple_attrsUserName}=*)
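
To test the profile outside the web UI, the extensions tool can be run directly, 
roughly like this (sketch; substitute your profile name, the generated extension 
names, and a real user):

ovirt-engine-extensions-tool aaa login-user --profile=MYDOMAIN.com --user-name=testuser
ovirt-engine-extensions-tool aaa search --extension-name=MYDOMAIN.com-authz --entity=principal --entity-name=testuser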
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H3SB6QRWEZETO6YJEDO7SMAVEMH4PPHZ/


[ovirt-users] Re: gluster

2019-09-09 Thread simon
So doing this seems to assume that gluster is already configured on the hosts. 
What if it's not configured yet? Can I use the web UI to do this, or does it 
have to be done separately from oVirt?

 

Thanks again

 

Simon

 

From: Kaustav Majumder  
Sent: September 9, 2019 8:08 AM
To: mailing-ov...@qic.ca
Cc: users@ovirt.org
Subject: [ovirt-users] Re: gluster

 

Hi,
You can try this.
Engine-UI -> Compute -> Clusters -> New -> (Check) Enable Gluster Service -> 
(Check) Import existing gluster configuration -> Enter details of one gluster 
host.
The Engine will add all hosts in the gluster peer group under the new cluster.

 

On Mon, Sep 9, 2019 at 5:31 PM <mailing-ov...@qic.ca> wrote:

Hi, 

 

I see options to deploy oVirt with gluster during the initial rollout; however, 
I can't seem to find information on how to add it after a non-gluster 
initial setup:

 

GlusterFS Version:

[N/A]

 

Thanks

 

Simon

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNRQN76L3XGNLYXYEIRYVDCJGU5BBGL5/




 

-- 

Thanks,

Kaustav Majumder

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5FOKIXQBL6MHDQSJRQUCPI3JMQOPUHOI/


[ovirt-users] How do I make delta clones ? OVM (aka Xen) vs Ovirt

2019-09-09 Thread Tim Tuck

Hi all,

I'm new to oVirt, coming from an OVM (aka Xen) environment, and I'm trying to 
work out how to do similar things.


In my OVM environment I have VMs that are 100GB in size. These VMs are 
templates used for running customer workshops, so we create and destroy them 
quite often.


In OVM I can clone 10 x 100GB VMs in about 1 minute; the cloning process just 
creates a "delta" VM rather than copying the entire 100GB. Those VMs can then 
be run, and the on-disk size of the delta VM grows as it runs and deviates 
from the master.
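
(To be clear about what I mean by a "delta": essentially a copy-on-write 
overlay on top of the master image, like what qemu-img does with a backing 
file; just an illustration, not necessarily how OVM stores it internally.)

# overlay that references master.qcow2 and only stores the blocks that change
qemu-img create -f qcow2 -o backing_file=master.qcow2,backing_fmt=qcow2 delta01.qcow2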


The same goes for templates: creating a 100GB VM from a template takes only a 
few seconds, and creating a template from a VM also takes only a few seconds.


My problem is that I can't seem to do any of this using oVirt.

It appears that there is no way to tell the cloning process "I want 10" like I 
can in OVM. Am I missing something here?


In my tests I find that even if a "snapshot" is made, the entire VM is 
copied rather than creating a delta.


Creating a VM from a template is no different in that it copies the 
entire VM.


So...

Is it possible for oVirt to create multiple clones on request?

Can oVirt do "delta" copies rather than copying the entire VM?

If oVirt can do these things, how do I set it up to do that?

Any help appreciated.

thanks

Tim






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C23QFIU6A65SKHQR7HATD4IXXVLQPILD/