[ovirt-users] Error creating a storage domain (On Cisco UCS Only)

2019-01-27 Thread nico.kruger
Hi Guys,

I am trying to install a new cluster. I currently have a 9-node and two
6-node oVirt clusters (these were installed on 4.1 and upgraded to 4.2).

The new cluster installs fine on the HP notebook I use for testing (using a
single-node Gluster deployment).

But when I try to install it on my production servers, which are Cisco UCS
servers, I keep getting this error:

[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Error 
creating a storage domain]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
reason is \"Operation Failed\". Fault detail is \"[Error creating a storage 
domain]\". HTTP response code is 400."}


This happens during storage domain creation, after the hosted engine is
built and after Gluster has been deployed (the error occurs for both single
and replica-3 deployments).

I just can't see how an install can succeed on one type of server but not on
the UCS servers (which my other oVirt clusters are running on).

BTW, I don't think the issue is related to the Gluster storage creation
itself, as I tried NFS and local storage and got the same error (on the UCS
servers only).

I am using the ovirt-node-ng-installer-4.2.0-2019011406.el7.iso install ISO.


Below is a tail of the ovirt-hosted-engine-setup-ansible-create_storage_domain
log file:
2019-01-27 11:09:49,754+0400 INFO ansible ok {'status': 'OK', 'ansible_task': 
u'Fetch Datacenter name', 'ansible_host': u'localhost', 'ansible_playbook': 
u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml', 
'ansible_type': 'task'}
2019-01-27 11:09:49,754+0400 DEBUG ansible on_any args 
 kwargs 
2019-01-27 11:09:50,478+0400 INFO ansible task start {'status': 'OK', 
'ansible_task': u'Add NFS storage domain', 'ansible_playbook': 
u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml', 
'ansible_type': 'task'}
2019-01-27 11:09:50,479+0400 DEBUG ansible on_any args TASK: Add NFS storage 
domain kwargs is_conditional:False 
2019-01-27 11:09:51,151+0400 DEBUG var changed: host "localhost" var 
"otopi_storage_domain_details_nfs" type "" value: "{
"changed": false, 
"skip_reason": "Conditional result was False", 
"skipped": true
}"
2019-01-27 11:09:51,151+0400 INFO ansible skipped {'status': 'SKIPPED', 
'ansible_task': u'Add NFS storage domain', 'ansible_host': u'localhost', 
'ansible_playbook': 
u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml', 
'ansible_type': 'task'}
2019-01-27 11:09:51,151+0400 DEBUG ansible on_any args 
 kwargs 
2019-01-27 11:09:51,820+0400 INFO ansible task start {'status': 'OK', 
'ansible_task': u'Add glusterfs storage domain', 'ansible_playbook': 
u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml', 
'ansible_type': 'task'}
2019-01-27 11:09:51,821+0400 DEBUG ansible on_any args TASK: Add glusterfs 
storage domain kwargs is_conditional:False 
2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var 
"otopi_storage_domain_details_gluster" type "" value: "{
"changed": false, 
"exception": "Traceback (most recent call last):\n  File 
\"/tmp/ansible_ovirt_storage_domain_payload_Xous24/__main__.py\", line 682, in 
main\nret = storage_domains_module.create()\n  File 
\"/tmp/ansible_ovirt_storage_domain_payload_Xous24/ansible_ovirt_storage_domain_payload.zip/ansible/module_utils/ovirt.py\",
 line 587, in create\n**kwargs\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 24225, in 
add\nreturn self._internal_add(storage_domain, headers, query, wait)\n  
File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in 
_internal_add\nreturn future.wait() if wait else future\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n 
   return self._code(response)\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in 
callback\nself._check_fault(response)\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in 
_check_fault\n
 self._raise_error(response, body)\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in 
_raise_error\nraise error\nError: Fault reason is \"Operation Failed\". 
Fault detail is \"[Error creating a storage domain]\". HTTP response code is 
400.\n", 
"failed": true, 
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Error 
creating a storage domain]\". HTTP response code is 400."
}"
2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var 
"ansible_play_hosts" type "" value: "[]"
2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var 
"play_hosts" type "" value: "[]"
2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var 
"ansible_play_batch" type "" value: "[]"
2019-01-27 11:10:02,046+0400 ERROR ansible failed {'status': 'FAILED', 
'ansible_type': 'task', 'ansible_task': u'Add gluste
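
The traceback above shows the failure comes from the ovirtsdk4
storage_domains add() call. A minimal sketch that issues the same call
outside the installer, so the failure can be correlated with
/var/log/ovirt-engine/engine.log on the engine VM and /var/log/vdsm/vdsm.log
on the host (the URL, credentials, host name and Gluster path below are
placeholders, not values from this thread):

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Connect to the (hosted) engine API -- placeholder values.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    sds_service = connection.system_service().storage_domains_service()

    # The same kind of add() call the create_storage_domain playbook makes.
    sds_service.add(
        types.StorageDomain(
            name='hosted_storage',
            type=types.StorageDomainType.DATA,
            host=types.Host(name='host1.example.com'),
            storage=types.HostStorage(
                type=types.StorageType.GLUSTERFS,
                path='host1.example.com:/engine',
            ),
        ),
    )

    connection.close()

If this fails with the same HTTP 400, the matching engine.log entry usually
names the underlying cause (for example a VDSM error reported from the host).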

[ovirt-users] Re: Deploying single instance - error

2019-01-27 Thread Leo David
Hi,
It seems that I had to manually add these sections to make the script
work:
[diskcount]
12
[stripesize]
256

It looks like Ansible is still searching for these sections even though
I have configured "jbod" in the wizard...
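
For anyone hitting the same thing, a minimal sketch of that workaround as a
small script (the config path is a placeholder -- point it at wherever the
cockpit-gluster plugin wrote the generated gdeploy file; the values are the
ones that worked above):

    # Append the [diskcount] and [stripesize] sections if the generated
    # gdeploy config does not already contain them.
    path = "/path/to/generated-gdeploy.conf"  # placeholder

    with open(path) as f:
        text = f.read()

    extra = ""
    if "[diskcount]" not in text:
        extra += "\n[diskcount]\n12\n"
    if "[stripesize]" not in text:
        extra += "\n[stripesize]\n256\n"

    if extra:
        with open(path, "a") as f:
            f.write(extra)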

Thanks,

Leo



On Sun, Jan 27, 2019 at 10:49 AM Leo David wrote:

> Hello Everyone,
> Using version 4.2.8 (ovirt-node-ng-installer-4.2.0-2019012606.el7.iso) for
> deploying a one-node instance by following the wizard from within the
> Cockpit UI does not seem to be possible.
> Here's the generated inventory (I've specified "jbod" in the wizard):
>
> #gdeploy configuration generated by cockpit-gluster plugin
> [hosts]
> 192.168.80.191
>
> [script1:192.168.80.191]
> action=execute
> ignore_script_errors=no
> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 192.168.80.191
> [disktype]
> jbod
> [service1]
> action=enable
> service=chronyd
> [service2]
> action=restart
> service=chronyd
> [shell2]
> action=execute
> command=vdsm-tool configure --force
> [script3]
> action=execute
> file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
> ignore_script_errors=no
> [pv1:192.168.80.191]
> action=create
> devices=sdb
> ignore_pv_errors=no
> [vg1:192.168.80.191]
> action=create
> vgname=gluster_vg_sdb
> pvname=sdb
> ignore_vg_errors=no
> [lv1:192.168.80.191]
> action=create
> lvname=gluster_lv_engine
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/engine
> size=230GB
> lvtype=thick
> [selinux]
> yes
> [service3]
> action=restart
> service=glusterd
> slice_setup=yes
> [firewalld]
> action=add
>
> ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
> services=glusterfs
> [script2]
> action=execute
> file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
> [shell3]
> action=execute
> command=usermod -a -G gluster qemu
> [volume1]
> action=create
> volname=engine
> transport=tcp
>
> key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
> value=36,36,on,32,on,off,30,off,on,off,off,off,enable
> brick_dirs=192.168.80.191:/gluster_bricks/engine/engine
> ignore_volume_errors=no
>
> It does not get to finish, throwing the following error:
>
> PLAY [gluster_servers]
> *
> TASK [Create volume group on the disks]
> 
> changed: [192.168.80.191] => (item={u'brick': u'/dev/sdb', u'vg':
> u'gluster_vg_sdb'})
> PLAY RECAP
> *
> 192.168.80.191 : ok=1    changed=1    unreachable=0    failed=0
> *Error: Section diskcount not found in the configuration file*
>
> Any thoughts?
>
>
>
>
>
>
> --
> Best regards, Leo David
>


-- 
Best regards, Leo David


[ovirt-users] Deploying single instance - error

2019-01-27 Thread Leo David
Hello Everyone,
Using version 4.2.8 (ovirt-node-ng-installer-4.2.0-2019012606.el7.iso) for
deploying a one-node instance by following the wizard from within the
Cockpit UI does not seem to be possible.
Here's the generated inventory (I've specified "jbod" in the wizard):

#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
192.168.80.191

[script1:192.168.80.191]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 192.168.80.191
[disktype]
jbod
[service1]
action=enable
service=chronyd
[service2]
action=restart
service=chronyd
[shell2]
action=execute
command=vdsm-tool configure --force
[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no
[pv1:192.168.80.191]
action=create
devices=sdb
ignore_pv_errors=no
[vg1:192.168.80.191]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[lv1:192.168.80.191]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=230GB
lvtype=thick
[selinux]
yes
[service3]
action=restart
service=glusterd
slice_setup=yes
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs
[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
[shell3]
action=execute
command=usermod -a -G gluster qemu
[volume1]
action=create
volname=engine
transport=tcp
key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
value=36,36,on,32,on,off,30,off,on,off,off,off,enable
brick_dirs=192.168.80.191:/gluster_bricks/engine/engine
ignore_volume_errors=no

It does not get to finish, throwing the following error:

PLAY [gluster_servers]
*
TASK [Create volume group on the disks]

changed: [192.168.80.191] => (item={u'brick': u'/dev/sdb', u'vg':
u'gluster_vg_sdb'})
PLAY RECAP
*
192.168.80.191 : ok=1    changed=1    unreachable=0    failed=0
*Error: Section diskcount not found in the configuration file*

Any thoughts?
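
For background, and only as an assumption (based on the usual Gluster brick
tuning guidance, not verified against this gdeploy version): diskcount and
stripesize are normally RAID parameters used to compute the PV data
alignment, which is why a pure JBOD deployment arguably should not need
them. A small illustration of that calculation:

    def pv_dataalignment_kib(disktype, diskcount=None, stripesize_kib=None):
        """Illustrative only: typical --dataalignment choice for bricks.

        RAID: stripe unit size * diskcount (number of data disks).
        JBOD: a fixed 256 KiB is the common recommendation.
        """
        if disktype in ("raid6", "raid10"):
            return diskcount * stripesize_kib
        return 256

    # With the values from the workaround earlier in this thread:
    print(pv_dataalignment_kib("raid6", diskcount=12, stripesize_kib=256))  # 3072
    print(pv_dataalignment_kib("jbod"))                                     # 256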






-- 
Best regards, Leo David