Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-11-14 Thread Open tech
Hi Kasturi,
   Thanks a lot for taking a look at this. I think it's
"grafton-sanity-check.sh". The following is the complete output from the
install attempt. The Ansible version is 2.4; gdeploy is 2.0.2.

Do you have a tested step-by-step guide for 4.1.6/7? It would be great if you
could share it.


PLAY [gluster_servers]
*

TASK [Run a shell script]
**
changed: [ovirt2] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sda1 -h
ovirt1,ovirt2,ovirt3)
changed: [ovirt3] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sda1 -h
ovirt1,ovirt2,ovirt3)
changed: [ovirt1] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sda1 -h
ovirt1,ovirt2,ovirt3)

PLAY RECAP
*
ovirt1 : ok=1 changed=1 unreachable=0 failed=0
ovirt2 : ok=1 changed=1 unreachable=0 failed=0
ovirt3 : ok=1 changed=1 unreachable=0 failed=0



PLAY [gluster_servers]
*

TASK [Enable or disable services]
**
ok: [ovirt1] => (item=chronyd)
ok: [ovirt3] => (item=chronyd)
ok: [ovirt2] => (item=chronyd)

PLAY RECAP
*
ovirt1 : ok=1 changed=0 unreachable=0 failed=0
ovirt2 : ok=1 changed=0 unreachable=0 failed=0
ovirt3 : ok=1 changed=0 unreachable=0 failed=0



PLAY [gluster_servers]
*

TASK [start/stop/restart/reload services]
**
changed: [ovirt3] => (item=chronyd)
changed: [ovirt1] => (item=chronyd)
changed: [ovirt2] => (item=chronyd)

PLAY RECAP
*
ovirt1 : ok=1 changed=1 unreachable=0 failed=0
ovirt2 : ok=1 changed=1 unreachable=0 failed=0
ovirt3 : ok=1 changed=1 unreachable=0 failed=0



PLAY [gluster_servers]
*

TASK [Run a command in the shell]
**
changed: [ovirt2] => (item=vdsm-tool configure --force)
changed: [ovirt3] => (item=vdsm-tool configure --force)
changed: [ovirt1] => (item=vdsm-tool configure --force)

PLAY RECAP
*
ovirt1 : ok=1 changed=1 unreachable=0 failed=0
ovirt2 : ok=1 changed=1 unreachable=0 failed=0
ovirt3 : ok=1 changed=1 unreachable=0 failed=0



PLAY [gluster_servers]
*

TASK [Run a shell script]
**
fatal: [ovirt2]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt3]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt1]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpEkEkpR/run-script.retry

PLAY RECAP
*
ovirt1 : ok=0 changed=0 unreachable=0 failed=1
ovirt2 : ok=0 changed=0 unreachable=0 failed=1
ovirt3 : ok=0 changed=0 unreachable=0 failed=1


Error: Ansible(>= 2.2) is not installed.
Some of the features might not work if not installed.


[root@ovirt2 ~]# yum info ansible

Loaded plugins: fastestmirror, imgbased-persist

Loading mirror speeds from cached hostfile

 * epel: mirror01.idc.hinet.net

 * ovirt-4.1: ftp.nluug.nl

 * ovirt-4.1-epel: mirror01.idc.hinet.net

Installed Packages

Name: ansible

Arch: noarch

Version : 2.4.0.0

Release : 5.el7

Size: 38 M

Repo: installed

Summary : SSH-based configuration management, deployment, and task
execution system

URL : http://ansible.com

License : GPLv3+

Description :

: Ansible is a radically simple model-driven configuration
management,

: multi-node deployment, and remote task execution system.
Ansible works

: over SSH and does not require any software or daemons to be
installed

: on remote nodes. Extension modules can 
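
Since the yum output above shows ansible 2.4.0.0 actually installed, while the
deploy log still prints "Ansible(>= 2.2) is not installed", a quick sanity
check of what the tooling really sees (a minimal sketch, only standard
commands, nothing oVirt-specific assumed):

ansible --version        # version and config file ansible itself reports
rpm -q ansible gdeploy   # confirm both packages are present from the repos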

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-11-13 Thread Kasturi Narra
Hello,

   From the output you have pasted, it looks like grafton-sanity-check.sh is
passing and the disable-multipath.sh script is failing, if I understand
correctly. Can you please paste the file path and the content of that file?
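
For reference, a minimal sketch of how to gather that (assuming the script was
installed by the gdeploy package under /usr/share/gdeploy/scripts/ - the exact
path may differ on your hosts):

ssh root@ovirt1 'ls -l /usr/share/gdeploy/scripts/disable-multipath.sh'
ssh root@ovirt1 'cat /usr/share/gdeploy/scripts/disable-multipath.sh'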

Thanks
kasturi

On Mon, Nov 13, 2017 at 4:11 PM, Open tech  wrote:

> Hi Kasturi,
>Thanks a lot for taking a look at this. I think its
> "grafton-sanity-check.sh" . Following is the complete output from the
> install attempt. Ansible ver is 2.4. Gdeploy is 2.0.2.
>
> Do you have a tested step by step for 4.1.6/7 ?. Would be great if you can
> share it.
>
>
> PLAY [gluster_servers] **
> ***
>
> TASK [Run a shell script] **
> 
> changed: [ovirt2] => (item=/usr/share/gdeploy/
> scripts/grafton-sanity-check.sh -d sda1 -h ovirt1,ovirt2,ovirt3)
> changed: [ovirt3] => (item=/usr/share/gdeploy/
> scripts/grafton-sanity-check.sh -d sda1 -h ovirt1,ovirt2,ovirt3)
> changed: [ovirt1] => (item=/usr/share/gdeploy/
> scripts/grafton-sanity-check.sh -d sda1 -h ovirt1,ovirt2,ovirt3)
>
> PLAY RECAP 
> *
> ovirt1 : ok=1 changed=1 unreachable=0 failed=0
> ovirt2 : ok=1 changed=1 unreachable=0 failed=0
> ovirt3 : ok=1 changed=1 unreachable=0 failed=0
>
>
> PLAY [gluster_servers] **
> ***
>
> TASK [Enable or disable services] **
> 
> ok: [ovirt1] => (item=chronyd)
> ok: [ovirt3] => (item=chronyd)
> ok: [ovirt2] => (item=chronyd)
>
> PLAY RECAP 
> *
> ovirt1 : ok=1 changed=0 unreachable=0 failed=0
> ovirt2 : ok=1 changed=0 unreachable=0 failed=0
> ovirt3 : ok=1 changed=0 unreachable=0 failed=0
>
>
> PLAY [gluster_servers] **
> ***
>
> TASK [start/stop/restart/reload services] **
> 
> changed: [ovirt3] => (item=chronyd)
> changed: [ovirt1] => (item=chronyd)
> changed: [ovirt2] => (item=chronyd)
>
> PLAY RECAP 
> *
> ovirt1 : ok=1 changed=1 unreachable=0 failed=0
> ovirt2 : ok=1 changed=1 unreachable=0 failed=0
> ovirt3 : ok=1 changed=1 unreachable=0 failed=0
>
>
> PLAY [gluster_servers] **
> ***
>
> TASK [Run a command in the shell] **
> 
> changed: [ovirt2] => (item=vdsm-tool configure --force)
> changed: [ovirt3] => (item=vdsm-tool configure --force)
> changed: [ovirt1] => (item=vdsm-tool configure --force)
>
> PLAY RECAP 
> *
> ovirt1 : ok=1 changed=1 unreachable=0 failed=0
> ovirt2 : ok=1 changed=1 unreachable=0 failed=0
> ovirt3 : ok=1 changed=1 unreachable=0 failed=0
>
>
> PLAY [gluster_servers] **
> ***
>
> TASK [Run a shell script] **
> 
> fatal: [ovirt2]: FAILED! => {"failed": true, "msg": "The conditional check
> 'result.rc != 0' failed. The error was: error while evaluating conditional
> (result.rc != 0): 'dict object' has no attribute 'rc'"}
> fatal: [ovirt3]: FAILED! => {"failed": true, "msg": "The conditional check
> 'result.rc != 0' failed. The error was: error while evaluating conditional
> (result.rc != 0): 'dict object' has no attribute 'rc'"}
> fatal: [ovirt1]: FAILED! => {"failed": true, "msg": "The conditional check
> 'result.rc != 0' failed. The error was: error while evaluating conditional
> (result.rc != 0): 'dict object' has no attribute 'rc'"}
> to retry, use: --limit @/tmp/tmpEkEkpR/run-script.retry
>
> PLAY RECAP 
> *
> ovirt1 : ok=0 changed=0 unreachable=0 failed=1
> ovirt2 : ok=0 changed=0 unreachable=0 failed=1
> ovirt3 : ok=0 changed=0 unreachable=0 failed=1
>
> Error: Ansible(>= 2.2) is not installed.
> Some of the features might not work if not installed.
>
>
> [root@ovirt2 ~]# yum info ansible
>
> Loaded plugins: fastestmirror, imgbased-persist
>
> Loading mirror speeds from cached hostfile
>
>  * epel: mirror01.idc.hinet.net
>
>  * ovirt-4.1: ftp.nluug.nl
>
>  * ovirt-4.1-epel: mirror01.idc.hinet.net
>
> Installed Packages
>
> Name: *ansible*
>
> Arch: noarch
>
> Version : 2.4.0.0
>
> 

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-11-13 Thread Kasturi Narra
Hello,

Can you please let me know which script is failing, and your ansible
and gdeploy versions?

Thanks
kasturi

On Mon, Nov 13, 2017 at 2:54 PM, Open tech  wrote:

> Hi All,
>I am new to Ovirt. I am hitting the exact same error while trying a new
> install in a nested virtualization setup on esxi 6.5.
> I am following this tutorial as well. Have three nodes on esxi with dual
> networks & passwordless ssh enabled.
> https://www.ovirt.org/blog/2017/04/up-and-running-with-
> ovirt-4.1-and-gluster-storage/
>
> Node install goes through without issue. Run into this error when i hit
> deploy.
>
> TASK [Run a shell script] **
> 
> fatal: [ovirt3]: FAILED! => {"failed": true, "msg": "The conditional check
> 'result.rc != 0' failed. The error was: error while evaluating conditional
> (result.rc != 0): 'dict object' has no attribute 'rc'"}
> fatal: [ovirt1]: FAILED! => {"failed": true, "msg": "The conditional check
> 'result.rc != 0' failed. The error was: error while evaluating conditional
> (result.rc != 0): 'dict object' has no attribute 'rc'"}
> fatal: [ovirt2]: FAILED! => {"failed": true, "msg": "The conditional check
> 'result.rc != 0' failed. The error was: error while evaluating conditional
> (result.rc != 0): 'dict object' has no attribute 'rc'"}
> to retry, use: --limit @/tmp/tmpbDBjAt/run-script.retry
>
>
> @Simone Marchioni were you able to find a solution ???.
>
> Thanks
> hk
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-11-13 Thread Open tech
Hi All,
   I am new to oVirt. I am hitting the exact same error while trying a new
install in a nested virtualization setup on ESXi 6.5.
I am following this tutorial as well, and have three nodes on ESXi with dual
networks and passwordless SSH enabled:
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/

The node install goes through without issue; I run into this error when I hit
deploy.

TASK [Run a shell script]
**
fatal: [ovirt3]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt1]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt2]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpbDBjAt/run-script.retry


@Simone Marchioni, were you able to find a solution?

Thanks
hk
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-13 Thread knarra

On 07/13/2017 04:30 PM, Simone Marchioni wrote:

On 12/07/2017 10:59, knarra wrote:

On 07/12/2017 01:43 PM, Simone Marchioni wrote:

On 11/07/2017 11:23, knarra wrote:

Hi,

reply here to both Gianluca and Kasturi.

Gianluca: I had ovirt-4.1-dependencies.repo enabled, and gluster 3.8 
packages, but glusterfs-server was missing in my "yum install" 
command, so added glusterfs-server to my installation.


Kasturi: packages ovirt-hosted-engine-setup, gdeploy and 
cockpit-ovirt-dashboard already installed and updated. vdsm-gluster 
was missing, so added to my installation.

okay, cool.


:-)



I reran the deployment and IT WORKED! I can read the message "Successfully 
deployed Gluster" with the blue button "Continue to Hosted Engine 
Deployment". There's a minor glitch in the window: the green "V" in 
the circle is missing, as if there's a missing image (or a wrong 
path, since I had to remove "ansible" from the grafton-sanity-check.sh 
path...)
There is a bug for this and it will be fixed soon. Here is the bug id 
for your reference. https://bugzilla.redhat.com/show_bug.cgi?id=1462082


Ok, thank you!



Although the deployment worked, and the firewalld and glusterfs 
errors are gone, a couple of errors remain:



AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:

PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry
Maybe you missed changing the path of the script 
"/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh". That 
is why this failure happens.


You're right: changed the path and now it's ok.
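
As an aside, a quick way to spot every script path the generated config still
points at (the config file name below is hypothetical - use whatever file
cockpit generated for you):

grep -n '^file=' /path/to/generated-gdeploy.conf   # lists all [script*] entries to check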



PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Run a command in the shell] 
**
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}

to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1

This error can be safely ignored.


Ok




Are these a problem for my installation, or can I ignore them?
You can just manually run the script to disable the hooks on all the 
nodes. The other error you can ignore.


Done it
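
For anyone following along, a minimal sketch of doing that by hand - the host
names come from this thread and the script location assumes the gdeploy
package layout discussed above:

for h in ha1.domain.it ha2.domain.it ha3.domain.it; do
  ssh root@"$h" 'sh /usr/share/gdeploy/scripts/disable-gluster-hooks.sh'
done
# optional: the ignorable usermod error above also disappears if the group exists
# groupadd gluster && usermod -a -G gluster qemu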



By the way, I'm writing and documenting this process and can prepare 
a tutorial if someone is interested.


Thank you again for your support: now I'll proceed with the Hosted 
Engine Deployment.

Good to know that you can now start with Hosted Engine Deployment.



Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-13 Thread Simone Marchioni

On 12/07/2017 10:59, knarra wrote:

On 07/12/2017 01:43 PM, Simone Marchioni wrote:

On 11/07/2017 11:23, knarra wrote:

Hi,

reply here to both Gianluca and Kasturi.

Gianluca: I had ovirt-4.1-dependencies.repo enabled, and gluster 3.8 
packages, but glusterfs-server was missing in my "yum install" 
command, so added glusterfs-server to my installation.


Kasturi: packages ovirt-hosted-engine-setup, gdeploy and 
cockpit-ovirt-dashboard already installed and updated. vdsm-gluster 
was missing, so added to my installation.

okay, cool.


:-)



I reran the deployment and IT WORKED! I can read the message "Successfully 
deployed Gluster" with the blue button "Continue to Hosted Engine 
Deployment". There's a minor glitch in the window: the green "V" in 
the circle is missing, as if there's a missing image (or a wrong path, 
since I had to remove "ansible" from the grafton-sanity-check.sh path...)
There is a bug for this and it will be fixed soon. Here is the bug id 
for your reference. https://bugzilla.redhat.com/show_bug.cgi?id=1462082


Ok, thank you!



Although the deployment worked, and the firewalld and glusterfs errors 
are gone, a couple of errors remain:



AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:

PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry
Maybe you missed changing the path of the script 
"/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh". That 
is why this failure happens.


You're right: changed the path and now it's ok.



PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Run a command in the shell] 
**
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}

to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1

This error can be safely ignored.


Ok




Are these a problem for my installation, or can I ignore them?
You can just manually run the script to disable the hooks on all the 
nodes. The other error you can ignore.


Done it



By the way, I'm writing and documenting this process and can prepare 
a tutorial if someone is interested.


Thank you again for your support: now I'll proceed with the Hosted 
Engine Deployment.

Good to know that you can now start with Hosted Engine Deployment.


Started the Hosted Engine Deployment, but I have a 

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-12 Thread knarra

On 07/12/2017 01:43 PM, Simone Marchioni wrote:

On 11/07/2017 11:23, knarra wrote:

On 07/11/2017 01:32 PM, Simone Marchioni wrote:

On 11/07/2017 07:59, knarra wrote:

Hi,

removed partition signatures with wipefs and run deploy again: this 
time the creation of VG and LV worked correctly. The deployment 
proceeded until some new errors... :-/



PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}

to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Start firewalld if not already started] 
**

ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Add/Delete services to firewalld rules] 
**
failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}
failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}
failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}

to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry

PLAY RECAP 
*

ha1.domain.it: ok=1 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=1 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=1 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Start firewalld if not already started] 
**

ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Open/Close firewalld ports] 
**

changed: [ha1.domain.it] => (item=111/tcp)
changed: [ha2.domain.it] => (item=111/tcp)
changed: [ha3.domain.it] => (item=111/tcp)
changed: [ha1.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=2049/tcp)
changed: [ha1.domain.it] => (item=54321/tcp)
changed: [ha3.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=54321/tcp)
changed: [ha1.domain.it] => (item=5900/tcp)
changed: [ha3.domain.it] => (item=54321/tcp)
changed: [ha2.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=5900-6923/tcp)
changed: [ha3.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5666/tcp)
changed: [ha2.domain.it] => (item=5666/tcp)
changed: [ha1.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5666/tcp)
changed: [ha3.domain.it] => (item=16514/tcp)

TASK [Reloads the firewall] 


changed: [ha1.domain.it]
changed: [ha2.domain.it]
changed: [ha3.domain.it]

PLAY RECAP 
*

ha1.domain.it: ok=3 changed=2 unreachable=0 failed=0
ha2.domain.it: ok=3 changed=2 unreachable=0 failed=0
ha3.domain.it: ok=3 changed=2 unreachable=0 failed=0


PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating 

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-12 Thread Simone Marchioni

On 11/07/2017 11:23, knarra wrote:

On 07/11/2017 01:32 PM, Simone Marchioni wrote:

On 11/07/2017 07:59, knarra wrote:

Hi,

removed partition signatures with wipefs and run deploy again: this 
time the creation of VG and LV worked correctly. The deployment 
proceeded until some new errors... :-/



PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}

to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Start firewalld if not already started] 
**

ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Add/Delete services to firewalld rules] 
**
failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}
failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}
failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}

to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry

PLAY RECAP 
*

ha1.domain.it: ok=1 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=1 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=1 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Start firewalld if not already started] 
**

ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Open/Close firewalld ports] 
**

changed: [ha1.domain.it] => (item=111/tcp)
changed: [ha2.domain.it] => (item=111/tcp)
changed: [ha3.domain.it] => (item=111/tcp)
changed: [ha1.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=2049/tcp)
changed: [ha1.domain.it] => (item=54321/tcp)
changed: [ha3.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=54321/tcp)
changed: [ha1.domain.it] => (item=5900/tcp)
changed: [ha3.domain.it] => (item=54321/tcp)
changed: [ha2.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=5900-6923/tcp)
changed: [ha3.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5666/tcp)
changed: [ha2.domain.it] => (item=5666/tcp)
changed: [ha1.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5666/tcp)
changed: [ha3.domain.it] => (item=16514/tcp)

TASK [Reloads the firewall] 


changed: [ha1.domain.it]
changed: [ha2.domain.it]
changed: [ha3.domain.it]

PLAY RECAP 
*

ha1.domain.it: ok=3 changed=2 unreachable=0 failed=0
ha2.domain.it: ok=3 changed=2 unreachable=0 failed=0
ha3.domain.it: ok=3 changed=2 unreachable=0 failed=0


PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-11 Thread knarra

On 07/11/2017 01:32 PM, Simone Marchioni wrote:

On 11/07/2017 07:59, knarra wrote:

On 07/10/2017 07:18 PM, Simone Marchioni wrote:

Hi Kasturi,

you're right: the file 
/usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I 
updated the path in the gdeploy config file and run Deploy again.

The situation is much better but the Deployment failed again... :-(

Here are the errors:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Clean up filesystem signature] 
***

skipping: [ha2.domain.it] => (item=/dev/md128)
skipping: [ha1.domain.it] => (item=/dev/md128)
skipping: [ha3.domain.it] => (item=/dev/md128)

TASK [Create Physical Volume] 
**
failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}

to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1

Ignoring errors...



Any clue?

Hi,

I see that there are some signatures left on your device, which is why 
the script is failing and creating the physical volume also fails. 
Can you try filling the first 512 MB or 1 GB of the disk with zeros and 
try again?


dd if=/dev/zero of=

Before running the script again, try to do a pvcreate and see if 
that works. If it works, just do a pvdelete and run the script. 
Everything should work fine.
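
A concrete sketch of the above, using /dev/md128 from the earlier paste as a
stand-in for the brick device (destructive - double-check the device first):

dd if=/dev/zero of=/dev/md128 bs=1M count=1024 conv=fsync   # zero the first 1 GB
pvcreate /dev/md128    # verify that PV creation now succeeds
pvremove /dev/md128    # drop it again (the "pvdelete" step) so the deploy can recreate it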


Thanks
kasturi


Thanks for your time.
Simone
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

removed partition signatures with wipefs and run deploy again: this 
time the creation of VG and LV worked correctly. The deployment 
proceeded until some new errors... :-/



PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}

to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-11 Thread Gianluca Cecchi
On Tue, Jul 11, 2017 at 10:02 AM, Simone Marchioni 
wrote:

>
> Hi,
>
> removed partition signatures with wipefs and run deploy again: this time
> the creation of VG and LV worked correctly. The deployment proceeded until
> some new errors... :-/
>
>
> PLAY [gluster_servers] **
> ***
>
> TASK [start/stop/restart/reload services] **
> 
> failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item":
> "glusterd", "msg": "Could not find the requested service glusterd: host"}
> failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item":
> "glusterd", "msg": "Could not find the requested service glusterd: host"}
> failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item":
> "glusterd", "msg": "Could not find the requested service glusterd: host"}
> to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
>
>
>
[snip]


> In start/stop/restart/reload services it complain about "Could not find
> the requested service glusterd: host". GlusterFS must be preinstalled or
> not? I simply installed the rpm packages manually BEFORE the deployment:
>
> yum install glusterfs glusterfs-cli glusterfs-libs
> glusterfs-client-xlators glusterfs-api glusterfs-fuse
>
> but never configured anything.
>
> For firewalld problem "ERROR: Exception caught:
> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not
> among existing services Services are defined by port/tcp relationship and
> named as they are in /etc/services (on most systems)" I haven't touched
> anything... it's an "out of the box" installation of CentOS 7.3.
>
> Don't know if the following problems - "Run a shell script" and "usermod:
> group 'gluster' does not exist" - are related to these... maybe the usermod
> problem.
>
> Thank you again.
>
> Simone
>
>
You definitely have to install the glusterfs-server package as well; it
provides glusterd.
Probably you were misled by the fact that the base CentOS packages
apparently do not provide it?
But if you have ovirt-4.1-dependencies.repo enabled, you should have the
gluster 3.8 packages, and you do have to install glusterfs-server:

[ovirt-4.1-centos-gluster38]
name=CentOS-7 - Gluster 3.8
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.8/
gpgcheck=1
enabled=1
gpgkey=
https://raw.githubusercontent.com/CentOS-Storage-SIG/centos-release-storage-common/master/RPM-GPG-KEY-CentOS-SIG-Storage

eg


[g.cecchi@ov300 ~]$ sudo yum install glusterfs-server
Loaded plugins: fastestmirror, langpacks
base
  | 3.6 kB  00:00:00
centos-opstools-release
 | 2.9 kB  00:00:00
epel-util/x86_64/metalink
 |  25 kB  00:00:00
extras
  | 3.4 kB  00:00:00
ovirt-4.1
 | 3.0 kB  00:00:00
ovirt-4.1-centos-gluster38
  | 2.9 kB  00:00:00
ovirt-4.1-epel/x86_64/metalink
  |  25 kB  00:00:00
ovirt-4.1-patternfly1-noarch-epel
 | 3.0 kB  00:00:00
ovirt-centos-ovirt41
  | 2.9 kB  00:00:00
rnachimu-gdeploy
  | 3.0 kB  00:00:00
updates
 | 3.4 kB  00:00:00
virtio-win-stable
 | 3.0 kB  00:00:00
Loading mirror speeds from cached hostfile
 * base: ba.mirror.garr.it
 * epel-util: epel.besthosting.ua
 * extras: ba.mirror.garr.it
 * ovirt-4.1: ftp.nluug.nl
 * ovirt-4.1-epel: epel.besthosting.ua
 * updates: ba.mirror.garr.it
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-server.x86_64 0:3.8.13-1.el7 will be installed
--> Processing Dependency: glusterfs-libs(x86-64) = 3.8.13-1.el7 for
package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-fuse(x86-64) = 3.8.13-1.el7 for
package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-client-xlators(x86-64) = 3.8.13-1.el7
for package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-cli(x86-64) = 3.8.13-1.el7 for
package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-api(x86-64) = 3.8.13-1.el7 for
package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs(x86-64) = 3.8.13-1.el7 for package:
glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: liburcu-cds.so.1()(64bit) for package:
glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: liburcu-bp.so.1()(64bit) for package:
glusterfs-server-3.8.13-1.el7.x86_64
--> Running transaction check
---> Package glusterfs.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-api.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-api.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-cli.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-cli.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-client-xlators.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-client-xlators.x86_64 0:3.8.13-1.el7 will be an
update
---> Package glusterfs-fuse.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-fuse.x86_64 
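
If I remember correctly, the firewalld 'glusterfs' service definition is also
shipped by the glusterfs-server package, so the earlier INVALID_SERVICE error
should go away once it is installed - worth verifying on your build:

firewall-cmd --get-services | tr ' ' '\n' | grep -x glusterfs   # 'glusterfs' should now be listed
firewall-cmd --permanent --add-service=glusterfs && firewall-cmd --reload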

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-11 Thread Simone Marchioni

On 11/07/2017 07:59, knarra wrote:

On 07/10/2017 07:18 PM, Simone Marchioni wrote:

Hi Kasturi,

you're right: the file 
/usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I 
updated the path in the gdeploy config file and run Deploy again.

The situation is much better but the Deployment failed again... :-(

Here are the errors:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Clean up filesystem signature] 
***

skipping: [ha2.domain.it] => (item=/dev/md128)
skipping: [ha1.domain.it] => (item=/dev/md128)
skipping: [ha3.domain.it] => (item=/dev/md128)

TASK [Create Physical Volume] 
**
failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}

to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1

Ignoring errors...



Any clue?

Hi,

I see that there are some signatures left on your device due to 
which the script is failing and creating physical volume also fails. 
Can you try to do fill zeros in the disk for 512MB or 1GB and try again ?


dd if=/dev/zero of=

Before running the script again try to do pvcreate and see if that 
works. If it works, just do pvdelete and run the script. Everything 
should work fine.


Thanks
kasturi


Thanks for your time.
Simone
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

I removed the partition signatures with wipefs and ran the deploy again: this time 
the creation of the VG and LV worked correctly. The deployment proceeded 
until some new errors... :-/
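
A minimal sketch of that cleanup, with /dev/md128 from the earlier output as a
stand-in for the brick device:

wipefs -a /dev/md128   # erase all filesystem/RAID signatures on the device
wipefs /dev/md128      # re-read; prints nothing once the device is clean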



PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}

to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Start 

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread knarra

On 07/10/2017 07:18 PM, Simone Marchioni wrote:

On 10/07/2017 13:49, knarra wrote:

On 07/10/2017 04:18 PM, Simone Marchioni wrote:

On 10/07/2017 09:08, knarra wrote:

Hi Simone,

Can you please let me know what the versions of gdeploy and 
ansible are on your system? Can you check if the path 
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh exists? 
If not, can you edit the generated config file, change the path 
to "/usr/share/gdeploy/scripts/grafton-sanity-check.sh", and see if 
that works?


You can check the logs in /var/log/messages, or set 
log_path in the /etc/ansible/ansible.cfg file.
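
A minimal sketch of enabling that log, assuming no /etc/ansible/ansible.cfg
exists yet on the host:

mkdir -p /etc/ansible
printf '[defaults]\nlog_path = /var/log/ansible.log\n' >> /etc/ansible/ansible.cfg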


Thanks

kasturi.



Hi Kasturi,

thank you for your reply. Here are my versions:

gdeploy-2.0.2-7.noarch
ansible-2.3.0.0-3.el7.noarch

The file /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh 
is missing. For the sake of completeness, the entire directory 
ansible is missing under /usr/share.


In /var/log/messages there is no error message, and I have no 
/etc/ansible/ansible.cfg config file...


I'm starting to think there are some missing pieces in my 
installation. I installed the following packages:


yum install ovirt-engine
yum install ovirt-hosted-engine-setup
yum install ovirt-engine-setup-plugin-live ovirt-live-artwork-gnome 
libgovirt ovirt-live-artwork ovirt-log-collector gdeploy 
cockpit-ovirt-dashboard


and relative dependencies.

Any idea?
Can you check if "/usr/share/gdeploy/scripts/grafton-sanity-check.sh" 
is present ? If yes, can you change the path in your generated 
gdeploy config file and run again ?


Hi Kasturi,

you're right: the file 
/usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I 
updated the path in the gdeploy config file and run Deploy again.

The situation is much better but the Deployment failed again... :-(

Here are the errors:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP 
*

ha1.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha2.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha3.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Clean up filesystem signature] 
***

skipping: [ha2.lynx2000.it] => (item=/dev/md128)
skipping: [ha1.lynx2000.it] => (item=/dev/md128)
skipping: [ha3.lynx2000.it] => (item=/dev/md128)

TASK [Create Physical Volume] 
**
failed: [ha2.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}
failed: [ha1.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}
failed: [ha3.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}

to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP 
*

ha1.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha2.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha3.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1

Ignoring errors...



Any clue?

Hi,

I see that there are some signatures left on your device due to 
which the script is failing and creating physical volume also fails. Can 
you try to do fill zeros in the disk for 512MB or 1GB and try again ?


dd if=/dev/zero of=

Before running the script again try to do pvcreate and 

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Simone Marchioni

On 10/07/2017 13:49, knarra wrote:

On 07/10/2017 04:18 PM, Simone Marchioni wrote:

On 10/07/2017 09:08, knarra wrote:

Hi Simone,

Can you please  let me know what is the version of gdeploy and 
ansible on your system? Can you check if the path 
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh exist ? 
If not, can you edit the generated config file and change the path 
to "/usr/share/gdeploy/scripts/grafton-sanity-check.sh  and see if 
that works ?


You can check the logs in /var/log/messages, or set 
log_path in the /etc/ansible/ansible.cfg file.


Thanks

kasturi.



Hi Kasturi,

thank you for your reply. Here are my versions:

gdeploy-2.0.2-7.noarch
ansible-2.3.0.0-3.el7.noarch

The file /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh 
is missing. For the sake of completeness, the entire directory 
ansible is missing under /usr/share.


In /var/log/messages there is no error message, and I have no 
/etc/ansible/ansible.cfg config file...


I'm starting to think there are some missing pieces in my 
installation. I installed the following packages:


yum install ovirt-engine
yum install ovirt-hosted-engine-setup
yum install ovirt-engine-setup-plugin-live ovirt-live-artwork-gnome 
libgovirt ovirt-live-artwork ovirt-log-collector gdeploy 
cockpit-ovirt-dashboard


and relative dependencies.

Any idea?
Can you check if "/usr/share/gdeploy/scripts/grafton-sanity-check.sh" 
is present ? If yes, can you change the path in your generated gdeploy 
config file and run again ?


Hi Kasturi,

you're right: the file 
/usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I updated 
the path in the gdeploy config file and run Deploy again.

The situation is much better but the Deployment failed again... :-(

Here are the errors:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no attribute 
'rc'"}
fatal: [ha3.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no attribute 
'rc'"}
fatal: [ha2.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no attribute 
'rc'"}

to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP 
*

ha1.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha2.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha3.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Clean up filesystem signature] 
***

skipping: [ha2.lynx2000.it] => (item=/dev/md128)
skipping: [ha1.lynx2000.it] => (item=/dev/md128)
skipping: [ha3.lynx2000.it] => (item=/dev/md128)

TASK [Create Physical Volume] 
**
failed: [ha2.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}
failed: [ha1.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}
failed: [ha3.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}

to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP 
*

ha1.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha2.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha3.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1

Ignoring errors...



Any clue?

Thanks for your time.
Simone
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Simone Marchioni

On 10/07/2017 13:06, Gianluca Cecchi wrote:
On Mon, Jul 10, 2017 at 12:57 PM, Simone Marchioni wrote:



Hi Gianluca,

I recently discovered that the file:

/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

is missing from the system, and probably is the root cause of my
problem.
Searched with

yum provides

but I can't find any package with the script inside... any clue?

Thank you
Simone

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users



Hi,
but are your nodes ovirt-ng nodes or plain CentOS 7.3 where you 
manually installed packages?
Because the original web link covered the case of ovirt-ng nodes, not 
plain CentOS 7.3.
Possibly you are missing some package that is instead installed inside 
an ovirt-ng node by default?





Hi Gianluca,

I used plain CentOS 7.3 where I manually installed the necessary packages.
I know the original tutorial used oVirt Node, but I thought the two were 
almost the same, with the latter being an "out of the box" solution with 
the same features.


That said, I discovered the problem: there is no missing package; the 
path of the script is wrong. In the tutorial it says:


/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

while the installed script is in:

/usr/share/gdeploy/scripts/grafton-sanity-check.sh

and is (correctly) part of the gdeploy package.
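
A one-liner sketch of the fix, with a hypothetical name for the generated
config file (use whatever file cockpit produced for you):

sed -i 's|/usr/share/ansible/gdeploy/scripts/|/usr/share/gdeploy/scripts/|g' /path/to/generated-gdeploy.conf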

I updated the gdeploy config and executed Deploy again. The situation is 
much better now, but it still says "Deployment Failed". Here's the output:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
changed: [ha3.domain.it] => 
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h 
ha1.domain.it,ha2.domain.it,ha3.domain.it)
changed: [ha2.domain.it] => 
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h 
ha1.domain.it,ha2.domain.it,ha3.domain.it)
changed: [ha1.domain.it] => 
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h 
ha1.domain.it,ha2.domain.it,ha3.domain.it)


PLAY RECAP 
*

ha1.domain.it: ok=1 changed=1 unreachable=0 failed=0
ha2.domain.it: ok=1 changed=1 unreachable=0 failed=0
ha3.domain.it: ok=1 changed=1 unreachable=0 failed=0


PLAY [gluster_servers] 
*


TASK [Enable or disable services] 
**

ok: [ha1.domain.it] => (item=chronyd)
ok: [ha3.domain.it] => (item=chronyd)
ok: [ha2.domain.it] => (item=chronyd)

PLAY RECAP 
*

ha1.lynx2000.it: ok=1 changed=0 unreachable=0 failed=0
ha2.lynx2000.it: ok=1 changed=0 unreachable=0 failed=0
ha3.lynx2000.it: ok=1 changed=0 unreachable=0 failed=0


PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**

changed: [ha1.domain.it] => (item=chronyd)
changed: [ha2.domain.it] => (item=chronyd)
changed: [ha3.domain.it] => (item=chronyd)

PLAY RECAP 
*

ha1.domain.it: ok=1 changed=1 unreachable=0 failed=0
ha2.domain.it: ok=1 changed=1 unreachable=0 failed=0
ha3.domain.it: ok=1 changed=1 unreachable=0 failed=0


PLAY [gluster_servers] 
*


TASK [Run a command in the shell] 
**

changed: [ha1.domain.it] => (item=vdsm-tool configure --force)
changed: [ha3.domain.it] => (item=vdsm-tool configure --force)
changed: [ha2.domain.it] => (item=vdsm-tool configure --force)

PLAY RECAP 
*

ha1.lynx2000.it: ok=1 changed=1 unreachable=0 failed=0
ha2.lynx2000.it: ok=1 changed=1 unreachable=0 failed=0
ha3.lynx2000.it: ok=1 changed=1 unreachable=0 failed=0


PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no attribute 
'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread knarra

On 07/10/2017 04:18 PM, Simone Marchioni wrote:

On 10/07/2017 09:08, knarra wrote:

On 07/07/2017 10:01 PM, Simone Marchioni wrote:

Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a 
separate server. I wanted to test the latest oVirt 4.1 with Gluster 
Storage and Hosted Engine.


Followed the following tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ 



I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the 
oVirt 4.1 repo and all required packages, and configured passwordless 
SSH as stated.
Then I logged in to the Cockpit web interface, selected "Hosted Engine with 
Gluster", hit the Start button, and configured the parameters as 
shown in the tutorial.


In the last step (5), this is the generated gdeploy configuration (note: 
I replaced the real domain with "domain.it"):


#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ha1.domain.it
ha2.domain.it
ha3.domain.it

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it


[disktype]
raid6

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh

[pv1]
action=create
devices=sdb
ignore_pv_errors=no

[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:{ha1.domain.it,ha2.domain.it}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=110GB
poolmetadatasize=1GB

[lv2:ha3.domain.it]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=80GB
poolmetadatasize=1GB

[lv3:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=50GB

[lv4:ha3.domain.it]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv5:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv6:ha3.domain.it]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv7:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv8:ha3.domain.it]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv9:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv10:ha3.domain.it]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no
arbiter_count=1

[volume3]
action=create
volname=export
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Gianluca Cecchi
On Mon, Jul 10, 2017 at 12:57 PM, Simone Marchioni 
wrote:

>
> Hi Gianluca,
>
> I recently discovered that the file:
>
> /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
>
> is missing from the system, and probably is the root cause of my problem.
> Searched with
>
> yum provides
>
> but I can't find any package with the script inside... any clue?
>
> Thank you
> Simone
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
Hi,
are your nodes oVirt Node (ovirt-ng) installations, or plain CentOS 7.3
where you installed the packages manually?
Because the original web link covers the case of oVirt Node, not a plain
CentOS 7.3 OS.
Possibly you are missing a package that is installed by default inside
oVirt Node?
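
A quick way to check that from the shell is to ask rpm whether the
pieces that normally ship the gdeploy bits are installed at all; a
minimal sketch, assuming the usual oVirt 4.1 package names apply here:

# on the plain CentOS hosts (package names are an assumption for this release)
rpm -q gdeploy cockpit-ovirt-dashboard ovirt-hosted-engine-setup
# if gdeploy is installed, list the scripts it actually provides
rpm -ql gdeploy | grep -i sanity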
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Simone Marchioni

On 10/07/2017 12:48, Gianluca Cecchi wrote:
On Mon, Jul 10, 2017 at 12:26 PM, Simone Marchioni wrote:



Hi Gianluca,

thanks for your reply.
I didn't do any previous step: the 3 servers are freshly installed.
The disk was wrong: I had to use /dev/md128. Replaced sdb with the
correct one and redeployed, but the error was exactly the same.
The disks are already initialized because I created the XFS
filesystem on /dev/md128 before the deploy.

In /var/log/messages there is no error.

Hi,
Simone



You could try to run the script from the command line of node 1, e.g.
something like this:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h
ha1.domain.it,ha2.domain.it,ha3.domain.it

and see what kind of output it gives...

Just a guess



Hi Gianluca,

I recently discovered that the file:

/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

is missing from the system, and probably is the root cause of my problem.
Searched with

yum provides

but I can't find any package with the script inside... any clue?
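
For reference, yum can also be queried with a full path or a glob; a
minimal sketch (the glob form, and repoquery from yum-utils, are only
suggestions here):

yum provides '*/grafton-sanity-check.sh'
# or, with yum-utils installed, list what a candidate package would ship
repoquery -l gdeploy | grep -i grafton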

Thank you
Simone
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Simone Marchioni

On 10/07/2017 09:08, knarra wrote:

On 07/07/2017 10:01 PM, Simone Marchioni wrote:

Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a
separate server. I wanted to test the latest oVirt 4.1 with Gluster
Storage and Hosted Engine.


I followed this tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/


I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the
oVirt 4.1 repo and all required packages, and configured passwordless
ssh as stated.
Then I logged into the Cockpit web interface, selected "Hosted Engine
with Gluster", hit the Start button and configured the parameters as
shown in the tutorial.


In the last step (5) this is the generated gdeploy configuration (note:
I replaced the real domain with "domain.it"):


#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ha1.domain.it
ha2.domain.it
ha3.domain.it

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it


[disktype]
raid6

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh

[pv1]
action=create
devices=sdb
ignore_pv_errors=no

[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:{ha1.domain.it,ha2.domain.it}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=110GB
poolmetadatasize=1GB

[lv2:ha3.domain.it]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=80GB
poolmetadatasize=1GB

[lv3:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=50GB

[lv4:ha3.domain.it]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv5:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv6:ha3.domain.it]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv7:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv8:ha3.domain.it]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv9:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv10:ha3.domain.it]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no
arbiter_count=1

[volume3]
action=create
volname=export
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Gianluca Cecchi
On Mon, Jul 10, 2017 at 12:26 PM, Simone Marchioni 
wrote:

>
> Hi Gianluca,
>
> thanks for your reply.
> I didn't do any previous step: the 3 servers are freshly installed.
> The disk was wrong: I had to use /dev/md128. Replaced sdb with the correct
> one and redeployed, but the error was exactly the same. The disks are
> already initialized because I created the XFS filesystem on /dev/md128
> before the deploy.
>
> In /var/log/messages there is no error.
>
> Hi,
> Simone
>
>

You could try to run the script from the command line of node 1, e.g.
something like this:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h
ha1.domain.it,ha2.domain.it,ha3.domain.it
and see what kind of output it gives...

Just a guess
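
A minimal sketch of that check, assuming the script really is present
at that path on the host:

sh -x /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h \
    ha1.domain.it,ha2.domain.it,ha3.domain.it
echo $?   # the gdeploy conditional 'result.rc != 0' expects this exit code to exist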
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Simone Marchioni

On 07/07/2017 23:21, Gianluca Cecchi wrote:
On 07/Jul/2017 18:38, "Simone Marchioni" wrote:


Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a
separate server. I wanted to test the latest oVirt 4.1 with Gluster
Storage and Hosted Engine.

I followed this tutorial:


http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/






... snip ...


[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d
sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it

... snip ...




When I hit "Deploy" button the Deployment fails with the following
error:

PLAY [gluster_servers]
*

TASK [Run a shell script]
**
fatal: [ha1.domain.it]: FAILED! =>
{"failed": true, "msg": "The conditional check 'result.rc != 0'
failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! =>
{"failed": true, "msg": "The conditional check 'result.rc != 0'
failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! =>
{"failed": true, "msg": "The conditional check 'result.rc != 0'
failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry

PLAY RECAP
*
ha1.domain.it : ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it : ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it : ok=0  changed=0  unreachable=0  failed=1


What am I doing wrong? Maybe I need to initialize glusterfs in some
way...
Which logs record the status of this deployment, so I can check the
errors?
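
One way to see the errors directly, outside the Cockpit wizard, is to
save the generated configuration to a file and run gdeploy against it
from a shell; a rough sketch (the config path below is only a
placeholder):

# re-run the same deployment by hand and watch the Ansible output in the terminal
gdeploy -c /root/gdeployConfig.conf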

Thanks in advance!
Simone
___



Gdeploy uses Ansible, which seems to fail at its first step when
executing its shell module:


http://docs.ansible.com/ansible/shell_module.html

In practice, in my opinion the shell script defined by [script1]
(grafton-sanity-check.sh) above doesn't exit with a return code (rc)
for some reason...
Perhaps you have already done a partial step previously? Or do your
disks already contain a label?

Is sdb the correct target disk for your Gluster configuration?
I would try to reinitialize the disks, for example:

dd if=/dev/zero of=/dev/sdb bs=1024k count=1

but ONLY if sdb really is the disk to format for the brick
filesystem.

Hih,
Gianluca


Hi Gianluca,

thanks for your reply.
I didn't do any previous step: the 3 servers are freshly installed.
The disk was wrong: I had to use /dev/md128. Replaced sdb with the 
correct one and redeployed, but the error was exactly the same. The 
disks are already initialized because I created the XFS filesystem on 
/dev/md128 before the deploy.


In /var/log/messages there is no error.

Hi,
Simone
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread knarra

On 07/07/2017 10:01 PM, Simone Marchioni wrote:

Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a separate
server. I wanted to test the latest oVirt 4.1 with Gluster Storage and
Hosted Engine.


I followed this tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/


I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the
oVirt 4.1 repo and all required packages, and configured passwordless
ssh as stated.
Then I logged into the Cockpit web interface, selected "Hosted Engine
with Gluster", hit the Start button and configured the parameters as
shown in the tutorial.


In the last step (5) this is the generated gdeploy configuration (note:
I replaced the real domain with "domain.it"):


#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ha1.domain.it
ha2.domain.it
ha3.domain.it

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb 
-h ha1.domain.it,ha2.domain.it,ha3.domain.it


[disktype]
raid6

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh

[pv1]
action=create
devices=sdb
ignore_pv_errors=no

[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:{ha1.domain.it,ha2.domain.it}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=110GB
poolmetadatasize=1GB

[lv2:ha3.domain.it]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=80GB
poolmetadatasize=1GB

[lv3:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=50GB

[lv4:ha3.domain.it]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv5:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv6:ha3.domain.it]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv7:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv8:ha3.domain.it]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv9:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv10:ha3.domain.it]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no
arbiter_count=1

[volume3]
action=create
volname=export
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export



Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-07 Thread Gianluca Cecchi
On 07/Jul/2017 18:38, "Simone Marchioni" wrote:

Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a separate
server. I wanted to test the latest oVirt 4.1 with Gluster Storage and
Hosted Engine.

I followed this tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/




... snip ...


[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
ha1.domain.it,ha2.domain.it,ha3.domain.it

... snip ...




When I hit "Deploy" button the Deployment fails with the following error:

PLAY [gluster_servers] **
***

TASK [Run a shell script] **

fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional
check 'result.rc != 0' failed. The error was: error while evaluating
conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional
check 'result.rc != 0' failed. The error was: error while evaluating
conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional
check 'result.rc != 0' failed. The error was: error while evaluating
conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry

PLAY RECAP 
*
ha1.domain.it : ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it : ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it : ok=0  changed=0  unreachable=0  failed=1

What am I doing wrong? Maybe I need to initialize glusterfs in some way...
Which logs record the status of this deployment, so I can check the
errors?
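
While the deployment runs, one simple way to follow what is happening
is to tail the system log on each host in a second terminal; a minimal
sketch (these are the generic system logs, not gdeploy-specific files):

tail -f /var/log/messages
# or follow the journal instead
journalctl -f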

Thanks in advance!
Simone
___



Gdeploy uses Ansible, which seems to fail at its first step when executing
its shell module:

http://docs.ansible.com/ansible/shell_module.html

In practice, in my opinion the shell script defined by [script1]
(grafton-sanity-check.sh) above doesn't exit with a return code (rc) for
some reason...
Perhaps you have already done a partial step previously? Or do your disks
already contain a label?
Is sdb the correct target disk for your Gluster configuration?
I would try to reinitialize the disks, for example:

dd if=/dev/zero of=/dev/sdb bs=1024k count=1

but ONLY if sdb really is the disk to format for the brick filesystem.
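
A gentler alternative sketch, again ONLY if sdb really is the brick
disk (wipefs ships with util-linux), is to clear just the existing
signatures instead of zeroing the start of the device:

# remove filesystem/RAID/LVM signatures that could make the sanity check or pvcreate bail out
wipefs -a /dev/sdb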
Hih,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-07 Thread Simone Marchioni

Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a separate
server. I wanted to test the latest oVirt 4.1 with Gluster Storage and
Hosted Engine.


I followed this tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/

I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the oVirt
4.1 repo and all required packages, and configured passwordless ssh as
stated.
Then I logged into the Cockpit web interface, selected "Hosted Engine with
Gluster", hit the Start button and configured the parameters as shown in
the tutorial.


In the last step (5) this is the generated gdeploy configuration (note:
I replaced the real domain with "domain.it"):


#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ha1.domain.it
ha2.domain.it
ha3.domain.it

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb 
-h ha1.domain.it,ha2.domain.it,ha3.domain.it


[disktype]
raid6

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh

[pv1]
action=create
devices=sdb
ignore_pv_errors=no

[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:{ha1.domain.it,ha2.domain.it}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=110GB
poolmetadatasize=1GB

[lv2:ha3.domain.it]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=80GB
poolmetadatasize=1GB

[lv3:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=50GB

[lv4:ha3.domain.it]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv5:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv6:ha3.domain.it]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv7:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv8:ha3.domain.it]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv9:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv10:ha3.domain.it]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no
arbiter_count=1

[volume3]
action=create
volname=export
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export
ignore_volume_errors=no
arbiter_count=1

[volume4]
action=create
volname=iso