Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-13 Thread Simone Marchioni

On 12/07/2017 10:59, knarra wrote:

On 07/12/2017 01:43 PM, Simone Marchioni wrote:

On 11/07/2017 11:23, knarra wrote:

Hi,

reply here to both Gianluca and Kasturi.

Gianluca: I had ovirt-4.1-dependencies.repo enabled, and the gluster 3.8 
packages, but glusterfs-server was missing from my "yum install" 
command, so I added glusterfs-server to my installation.


Kasturi: packages ovirt-hosted-engine-setup, gdeploy and 
cockpit-ovirt-dashboard were already installed and updated. vdsm-gluster 
was missing, so I added it to my installation.

okay, cool.


:-)
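
For completeness, a minimal sketch of pulling in the two packages mentioned above on a plain CentOS 7.3 host (package names taken from this exchange; the oVirt/Gluster repositories are assumed to be already configured):

# install the Gluster server bits and the VDSM gluster plugin
yum install -y glusterfs-server vdsm-gluster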



I reran the deployment and IT WORKED! I can read the message "Successfully 
deployed Gluster" with the blue button "Continue to Hosted Engine 
Deployment". There's a minor glitch in the window: the green "V" in 
the circle is missing, as if an image is missing (or has a wrong path, 
since I had to remove "ansible" from the grafton-sanity-check.sh path...)
There is a bug for this and it will be fixed soon. Here is the bug id 
for your reference. https://bugzilla.redhat.com/show_bug.cgi?id=1462082


Ok, thank you!



Although the deployment worked, and the firewalld and glusterfs errors 
are gone, a couple of errors remain:



AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:

PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry
Maybe you missed changing the path of the script 
"/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh". That 
is why it failed.


You're right: I changed the path and now it's OK.
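
For reference, a hedged sketch of what the corrected [script2] stanza presumably looks like after the path change (same stanza as in the generated config quoted later in this thread, with only the path adjusted):

[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh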



PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1


PLAY [gluster_servers] 
*


TASK [Run a command in the shell] 
**
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}

to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry

PLAY RECAP 
**
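
A hedged manual workaround for the usermod failure above, assuming the only issue is that the 'gluster' group has not been created on the hosts yet:

# create the missing group on each host, then retry the original command
groupadd gluster
usermod -a -G gluster qemu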

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-12 Thread Simone Marchioni

On 11/07/2017 11:23, knarra wrote:

On 07/11/2017 01:32 PM, Simone Marchioni wrote:

On 11/07/2017 07:59, knarra wrote:

Hi,

I removed the partition signatures with wipefs and ran the deploy again: this 
time the creation of the VG and LVs worked correctly. The deployment 
proceeded until some new errors... :-/



PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}

to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1


PLAY [gluster_servers] 
*


TASK [Start firewalld if not already started] 
**

ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Add/Delete services to firewalld rules] 
**
failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}
failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}
failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}

to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry

PLAY RECAP 
*

ha1.domain.it: ok=1  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=1  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=1  changed=0  unreachable=0  failed=1
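
As a hedged check for the INVALID_SERVICE error above: the 'glusterfs' firewalld service is defined by an XML file shipped with the gluster server packages, so one way to see whether it is available on a host is:

# list the service definitions firewalld knows about
firewall-cmd --get-services | tr ' ' '\n' | grep -i gluster
# the definition, when installed, normally lives here
ls /usr/lib/firewalld/services/glusterfs.xml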


PLAY [gluster_servers] 
*


TASK [Start firewalld if not already started] 
**

ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Open/Close firewalld ports] 
**

changed: [ha1.domain.it] => (item=111/tcp)
changed: [ha2.domain.it] => (item=111/tcp)
changed: [ha3.domain.it] => (item=111/tcp)
changed: [ha1.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=2049/tcp)
changed: [ha1.domain.it] => (item=54321/tcp)
changed: [ha3.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=54321/tcp)
changed: [ha1.domain.it] => (item=5900/tcp)
changed: [ha3.domain.it] => (item=54321/tcp)
changed: [ha2.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=5900-6923/tcp)
changed: [ha3.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5666/tcp)
changed: [ha2.domain.it] => (item=5666/tcp)
changed: [ha1.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5666/tcp)
changed: [ha3.domain.it] => (item=16514/tcp)

TASK [Reloads the firewall] 


changed: [ha1.domain.it]
changed: [ha2.domain.it]
changed: [ha3.domain.it]

PLAY RECAP 
*

ha1.domain.it: ok=3  changed=2  unreachable=0  failed=0
ha2.domain.it: ok=3  changed=2  unreachable=0  failed=0
ha3.domain.it: ok=3  changed=2  unre

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-11 Thread Simone Marchioni

On 11/07/2017 07:59, knarra wrote:

On 07/10/2017 07:18 PM, Simone Marchioni wrote:

Hi Kasturi,

you're right: the file 
/usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I 
updated the path in the gdeploy config file and ran Deploy again.

The situation is much better but the Deployment failed again... :-(

Here are the errors:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1


PLAY [gluster_servers] 
*


TASK [Clean up filesystem signature] 
***

skipping: [ha2.domain.it] => (item=/dev/md128)
skipping: [ha1.domain.it] => (item=/dev/md128)
skipping: [ha3.domain.it] => (item=/dev/md128)

TASK [Create Physical Volume] 
**
failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}

to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1

Ignoring errors...



Any clue?

Hi,

I see that there are some signatures left on your device, which is 
why the script is failing and creating the physical volume also fails. 
Can you try filling the disk with zeros for 512MB or 1GB and try again?


dd if=/dev/zero of=

Before running the script again, try to do pvcreate and see if that 
works. If it works, just do pvdelete and run the script. Everything 
should work fine.
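
A hedged sketch of the sequence suggested above, assuming /dev/md128 (the device from the earlier logs) is the target and that "pvdelete" refers to LVM's pvremove:

# zero the first 1GB to clear stale signatures (destructive!)
dd if=/dev/zero of=/dev/md128 bs=1M count=1024
# verify LVM can create a PV, then remove it again before rerunning gdeploy
pvcreate /dev/md128
pvremove /dev/md128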


Thanks
kasturi


Thanks for your time.
Simone


Hi,

I removed the partition signatures with wipefs and ran the deploy again: this time 
the creation of the VG and LVs worked correctly. The deployment proceeded 
until some new errors... :-/
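
For reference, a minimal sketch of that wipefs step (device name assumed from the earlier errors in this thread; this erases all filesystem and RAID signatures on the device):

# list the signatures first
wipefs /dev/md128
# then wipe them all
wipefs -a /dev/md128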



PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Simone Marchioni

On 10/07/2017 13:49, knarra wrote:

On 07/10/2017 04:18 PM, Simone Marchioni wrote:

On 10/07/2017 09:08, knarra wrote:

Hi Simone,

Can you please let me know the versions of gdeploy and 
ansible on your system? Can you check if the path 
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh exists? 
If not, can you edit the generated config file, change the path 
to "/usr/share/gdeploy/scripts/grafton-sanity-check.sh" and see if 
that works?


You can check the logs in /var/log/messages, or set 
log_path in the /etc/ansible/ansible.cfg file.
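
A hedged example of enabling an Ansible log file via the config file mentioned above:

# /etc/ansible/ansible.cfg
[defaults]
log_path = /var/log/ansible.log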


Thanks

kasturi.



Hi Kasturi,

thank you for your reply. Here are my versions:

gdeploy-2.0.2-7.noarch
ansible-2.3.0.0-3.el7.noarch

The file /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh 
is missing. For the sake of completeness, the entire ansible 
directory is missing under /usr/share.


In /var/log/messages there is no error message, and I have no 
/etc/ansible/ansible.cfg config file...


I'm starting to think there are some missing pieces in my 
installation. I installed the following packages:


yum install ovirt-engine
yum install ovirt-hosted-engine-setup
yum install ovirt-engine-setup-plugin-live ovirt-live-artwork-gnome 
libgovirt ovirt-live-artwork ovirt-log-collector gdeploy 
cockpit-ovirt-dashboard


and their dependencies.

Any idea?
Can you check if "/usr/share/gdeploy/scripts/grafton-sanity-check.sh" 
is present ? If yes, can you change the path in your generated gdeploy 
config file and run again ?


Hi Kasturi,

you're right: the file 
/usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I updated 
the path in the gdeploy config file and ran Deploy again.

The situation is much better but the Deployment failed again... :-(

Here are the errors:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no attribute 
'rc'"}
fatal: [ha3.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no attribute 
'rc'"}
fatal: [ha2.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no attribute 
'rc'"}

to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP 
*

ha1.lynx2000.it: ok=0  changed=0  unreachable=0  failed=1
ha2.lynx2000.it: ok=0  changed=0  unreachable=0  failed=1
ha3.lynx2000.it: ok=0  changed=0  unreachable=0  failed=1


PLAY [gluster_servers] 
*


TASK [Clean up filesystem signature] 
***

skipping: [ha2.lynx2000.it] => (item=/dev/md128)
skipping: [ha1.lynx2000.it] => (item=/dev/md128)
skipping: [ha3.lynx2000.it] => (item=/dev/md128)

TASK [Create Physical Volume] 
**
failed: [ha2.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}
failed: [ha1.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}
failed: [ha3.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}

to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP 
*

ha1.lynx2000.it: ok=0  changed=0  unreachable=0  failed=1
ha2.lynx2000.it: ok=0  changed=0  unreachable=0  failed=1
ha3.lynx2000.it: ok=0  changed=0  unreachable=0  failed=1

Ignoring errors...



Any clue?

Thanks for your time.
Simone


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Simone Marchioni

On 10/07/2017 13:06, Gianluca Cecchi wrote:
On Mon, Jul 10, 2017 at 12:57 PM, Simone Marchioni 
<s.marchi...@lynx2000.it> wrote:



Hi Gianluca,

I recently discovered that the file:

/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

is missing from the system, and is probably the root cause of my
problem.
I searched with

yum provides

but I can't find any package with the script inside... any clue?

Thank you
Simone



Hi,
but are your nodes ovirt-ng nodes or plain CentOS 7.3 where you 
manually installed packages?
Because the original web link covered the case of ovirt-ng nodes, not 
plain CentOS 7.3.
Possibly you are missing some package that is instead installed inside 
an ovirt-ng node by default?





Hi Gianluca,

I used plain CentOS 7.3 where I manually installed the necessary packages.
I know the original tutorial used oVirt Node, but I thought the two were 
almost the same, with the latter being an "out of the box" solution with 
the same features.


That said, I discovered the problem: there is no missing package. The 
path of the script is wrong. The tutorial says:


/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

while the installed script is in:

/usr/share/gdeploy/scripts/grafton-sanity-check.sh

and is (correctly) part of the gdeploy package.
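
A quick, hedged way to confirm which package owns the script at its actual location:

rpm -qf /usr/share/gdeploy/scripts/grafton-sanity-check.sh
rpm -ql gdeploy | grep grafton-sanity-check.sh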

I updated the gdeploy config and executed Deploy again. The situation is 
much better now, but it still says "Deployment Failed". Here's the output:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
changed: [ha3.domain.it] => 
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h 
ha1.domain.it,ha2.domain.it,ha3.domain.it)
changed: [ha2.domain.it] => 
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h 
ha1.domain.it,ha2.domain.it,ha3.domain.it)
changed: [ha1.domain.it] => 
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h 
ha1.domain.it,ha2.domain.it,ha3.domain.it)


PLAY RECAP 
*

ha1.domain.it: ok=1  changed=1  unreachable=0  failed=0
ha2.domain.it: ok=1  changed=1  unreachable=0  failed=0
ha3.domain.it: ok=1  changed=1  unreachable=0  failed=0


PLAY [gluster_servers] 
*


TASK [Enable or disable services] 
**

ok: [ha1.domain.it] => (item=chronyd)
ok: [ha3.domain.it] => (item=chronyd)
ok: [ha2.domain.it] => (item=chronyd)

PLAY RECAP 
*

ha1.lynx2000.it: ok=1  changed=0  unreachable=0  failed=0
ha2.lynx2000.it: ok=1  changed=0  unreachable=0  failed=0
ha3.lynx2000.it: ok=1  changed=0  unreachable=0  failed=0


PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**

changed: [ha1.domain.it] => (item=chronyd)
changed: [ha2.domain.it] => (item=chronyd)
changed: [ha3.domain.it] => (item=chronyd)

PLAY RECAP 
*

ha1.domain.it: ok=1  changed=1  unreachable=0  failed=0
ha2.domain.it: ok=1  changed=1  unreachable=0  failed=0
ha3.domain.it: ok=1  changed=1  unreachable=0  failed=0


PLAY [gluster_servers] 
*


TASK [Run a command in the shell] 
**

changed: [ha1.domain.it] => (item=vdsm-tool configure --force)
changed: [ha3.domain.it] => (item=vdsm-tool configure --force)
changed: [ha2.domain.it] => (item=vdsm-tool configure --force)

PLAY RECAP 
*

ha1.lynx2000.it: ok=1  changed=1  unreachable=0  failed=0
ha2.lynx2000.it: ok=1  changed=1  unreachable=0  failed=0
ha3.lynx2000.it: ok=1  changed=1  unreachable=0  failed=0


PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no attribute 
'rc'"}
fatal: [ha3.domain.it

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Simone Marchioni

On 10/07/2017 12:48, Gianluca Cecchi wrote:
On Mon, Jul 10, 2017 at 12:26 PM, Simone Marchioni 
<s.marchi...@lynx2000.it> wrote:



Hi Gianluca,

thanks for your reply.
I didn't do any previous steps: the 3 servers are freshly installed.
The disk was wrong: I had to use /dev/md128. I replaced sdb with the
correct device and redeployed, but the error was exactly the same.
The disks are already initialized because I created the XFS
filesystem on /dev/md128 before the deploy.

In /var/log/messages there is no error.

Hi,
Simone



You could try to run the script from the command line of node 1, e.g. 
something like this:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h 
ha1.domain.it,ha2.domain.it,ha3.domain.it

and see what kind of output it gives...

Just a guess



Hi Gianluca,

I recently discovered that the file:

/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

is missing from the system, and is probably the root cause of my problem.
I searched with

yum provides

but I can't find any package with the script inside... any clue?
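
One likely reason the search came up empty is that, for file lookups, "yum provides" usually needs a full path or a wildcard; a hedged example of the query worth trying:

yum provides "*/grafton-sanity-check.sh"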

Thank you
Simone


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Simone Marchioni

On 10/07/2017 09:08, knarra wrote:

On 07/07/2017 10:01 PM, Simone Marchioni wrote:

Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a separate 
server. I wanted to test the latest oVirt 4.1 with Gluster Storage and 
Hosted Engine.


I followed this tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ 



I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the 
oVirt 4.1 repo and all required packages, and configured passwordless 
ssh as stated.
Then I logged in to the Cockpit web interface, selected "Hosted Engine 
with Gluster" and hit the Start button. I configured the parameters as 
shown in the tutorial.


In the last step (5), here is the generated gdeploy configuration (note: 
I replaced the real domain with "domain.it"):


#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ha1.domain.it
ha2.domain.it
ha3.domain.it

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it


[disktype]
raid6

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh

[pv1]
action=create
devices=sdb
ignore_pv_errors=no

[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:{ha1.domain.it,ha2.domain.it}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=110GB
poolmetadatasize=1GB

[lv2:ha3.domain.it]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=80GB
poolmetadatasize=1GB

[lv3:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=50GB

[lv4:ha3.domain.it]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv5:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv6:ha3.domain.it]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv7:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv8:ha3.domain.it]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv9:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv10:ha3.domain.it]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no
arbiter_count=1

[volume3]
action=create
volname=export
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread Simone Marchioni

On 07/07/2017 23:21, Gianluca Cecchi wrote:
On 07/Jul/2017 18:38, "Simone Marchioni" <s.marchi...@lynx2000.it> wrote:


Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a
separate server. I wanted to test the latest oVirt 4.1 with Gluster
Storage and Hosted Engine.

I followed this tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/




... snip ...


[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d
sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it

... snip ...




When I hit "Deploy" button the Deployment fails with the following
error:

PLAY [gluster_servers]
*

TASK [Run a shell script]
**
fatal: [ha1.domain.it]: FAILED! =>
{"failed": true, "msg": "The conditional check 'result.rc != 0'
failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! =>
{"failed": true, "msg": "The conditional check 'result.rc != 0'
failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! =>
{"failed": true, "msg": "The conditional check 'result.rc != 0'
failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry

PLAY RECAP
*
ha1.domain.it : ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it : ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it : ok=0  changed=0  unreachable=0  failed=1


What am I doing wrong? Maybe I need to initialize glusterfs in some
way...
Which logs record the status of this deployment, so I
can check the errors?

Thanks in advance!
Simone



Gdeploy uses Ansible, which seems to fail at its first step when 
executing its shell module:

http://docs.ansible.com/ansible/shell_module.html

In practice, in my opinion the shell script defined by [script1] 
(grafton-sanity-check.sh) above doesn't exit with a return code (rc) 
for some reason...
Perhaps you have already done a partial step previously, or your 
disks already contain a label?

Is sdb the correct target disk for your gluster configuration?
I would try to reinitialize the disks, with something like

dd if=/dev/zero of=/dev/sdb bs=1024k count=1

ONLY if sdb really is the disk to format for the brick 
filesystem...

HTH,
Gianluca


Hi Gianluca,

thanks for your reply.
I didn't do any previous steps: the 3 servers are freshly installed.
The disk was wrong: I had to use /dev/md128. I replaced sdb with the 
correct device and redeployed, but the error was exactly the same. The 
disks are already initialized because I created the XFS filesystem on 
/dev/md128 before the deploy.


In /var/log/messages there is no error.

Hi,
Simone


[ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-07 Thread Simone Marchioni

Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a separate 
server. I wanted to test the latest oVirt 4.1 with Gluster Storage and 
Hosted Engine.


I followed this tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/

I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the oVirt 
4.1 repo and all required packages, and configured passwordless ssh as stated.
Then I logged in to the Cockpit web interface, selected "Hosted Engine with 
Gluster" and hit the Start button. I configured the parameters as shown in 
the tutorial.


In the last step (5), here is the generated gdeploy configuration (note: I 
replaced the real domain with "domain.it"):


#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ha1.domain.it
ha2.domain.it
ha3.domain.it

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb 
-h ha1.domain.it,ha2.domain.it,ha3.domain.it


[disktype]
raid6

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh

[pv1]
action=create
devices=sdb
ignore_pv_errors=no

[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:{ha1.domain.it,ha2.domain.it}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=110GB
poolmetadatasize=1GB

[lv2:ha3.domain.it]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=80GB
poolmetadatasize=1GB

[lv3:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=50GB

[lv4:ha3.domain.it]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv5:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv6:ha3.domain.it]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv7:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv8:ha3.domain.it]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv9:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv10:ha3.domain.it]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no
arbiter_count=1

[volume3]
action=create
volname=export
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export
ignore_volume_errors=no
arbiter_count=1

[volume4]
action=create
volname=iso

Re: [ovirt-users] oVirt + gluster with host and hosted VM on different subnets

2014-07-08 Thread Simone Marchioni

On 08/07/2014 16:47, Andrew Lau wrote:

On Wed, Jul 9, 2014 at 12:23 AM, Sandro Bonazzola sbona...@redhat.com wrote:

On 07/07/2014 15:38, Simone Marchioni wrote:

Hi,

I'm trying to install oVirt 3.4 + gluster looking at the following guides:

http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
http://community.redhat.com/blog/2014/03/up-and-running-with-ovirt-3-4/

It went smoothly until the hosted engine VM configuration: I can reach it by VNC 
and the host IP, but I can't configure the VM network in a way that works.
The problem is probably the assumption that the three hosts (2 hosts + the 
hosted engine) are on the same subnet sharing the same default gateway.

But my server is on OVH with the subnet 94.23.2.0/24 and my failover IPs are on 
the subnet 46.105.224.236/30, and my hosted engine needs to use one IP
of the latter.

Has anyone installed oVirt in such a configuration who can give me any tips?

I never tested such a configuration.
Andrew, did you do something similar in your installation, with an additional NIC?

If I understand correctly, you have your hosts on the 94.23.2.0/24
subnet, but you need your hosted engine to be accessible at an address
within 46.105.224.236/30?


Exactly


If that's true, then the easiest way to do it
is simply to run your hosted-engine install with the hosted engine first
on 94.23.2.0/24; you then add another NIC to that hosted-engine VM
which will have the IP address 46.105.224.237 (?)...


I'll try this


Alternatively, you could just use a NIC alias?


We made it work with the following changes (on the host machine in the 
subnet 94.23.2.0/24):
- commented out and removed from the running configuration the ip rules in 
/etc/sysconfig/network-scripts/rule-ovirtmgmt
- commented out and removed from the running configuration the ip routes in 
/etc/sysconfig/network-scripts/route-ovirtmgmt
- added /etc/sysconfig/network-scripts/ovirtmgmt:0 with the following 
configuration:

DEVICE=ovirtmgmt:238
ONBOOT=yes
DELAY=0
IPADDR=46.105.224.238
NETMASK=255.255.255.252
BOOTPROTO=static
NM_CONTROLLED=no
- enabled ip forwarding in /proc/sys/net/ipv4/ip_forward (see the sketch below)
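
A minimal sketch of that IP forwarding change, assuming the usual runtime toggle plus sysctl persistence across reboots (the original note only mentions /proc):

# enable immediately
echo 1 > /proc/sys/net/ipv4/ip_forward
# keep it enabled across reboots
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p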

After that installing the hosted-engine VM with the following IP stack:

NETMASK=255.255.255.252
IPADDR=46.105.224.237
GATEWAY=46.105.224.238

seems to work ok.


If you want to add an extra NIC to your hosted-engine for the above
scenario, here's a snippet from my notes:
(storage_network is a bridge; replace it with ovirtmgmt or another
bridge you may have created)

hosted-engine --set-maintenance --mode=global

# On all installed hosts
nano /etc/ovirt-hosted-engine/vm.conf
# insert under earlier nicModel
# replace macaddress and uuid from above
# increment slot
devices={nicModel:pv,macAddr:00:16:3e:e1:7b:14,linkActive:true,network:storage_network,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:fdb11208-a888-e587-6053-32c9c0361f96,address:{bus:0x00,slot:0x04,
domain:0x, type:pci,function:0x0},device:bridge,type:interface}

hosted-engine --vm-shutdown
hosted-engine --vm-start

hosted-engine --set-maintenance --mode=none


Ok: thanks for the advice!


Although, re-reading your question, what do you mean by "but I can't
configure the VM network in a way it works"? Does the setup fail, or is it
just that when you create a VM you don't have any network connectivity?


The setup works OK: it creates the VM and I can log in to it with VNC on 
the host IP (94.23.2.X).
I can install CentOS 6.5 as advised. After the reboot I log in again via 
VNC on the host IP (94.23.2.X) and configure the IP stack with the 
other subnet (46.105.224.236/30); after that the VM is isolated, 
unless I do the steps written above.


Thanks for your support!
Simone




Thanks
Simone


--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com



[ovirt-users] oVirt + gluster with host and hosted VM on different subnets

2014-07-07 Thread Simone Marchioni

Hi,

I'm trying to install oVirt 3.4 + gluster looking at the following guides:

http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
http://community.redhat.com/blog/2014/03/up-and-running-with-ovirt-3-4/

It went smoothly until the hosted engine VM configuration: I can reach it 
by VNC and the host IP, but I can't configure the VM network in a way that 
works.
The problem is probably the assumption that the three hosts (2 hosts + 
the hosted engine) are on the same subnet sharing the same default gateway.


But my server is on OVH with the subnet 94.23.2.0/24 and my failover IPs 
are on the subnet 46.105.224.236/30, and my hosted engine needs to use 
one IP of the latter.


Has anyone installed oVirt in such a configuration who can give me any tips?

Thanks
Simone