On 10/07/2017 13:06, Gianluca Cecchi wrote:
On Mon, Jul 10, 2017 at 12:57 PM, Simone Marchioni <s.marchi...@lynx2000.it <mailto:s.marchi...@lynx2000.it>> wrote:


    Hi Gianluca,

    I recently discovered that the file:

    /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

    is missing from the system, and probably is the root cause of my
    problem.
    Searched with

    yum provides

    but I can't find any package with the script inside... any clue?

    Thank you
    Simone

    _______________________________________________
    Users mailing list
    Users@ovirt.org <mailto:Users@ovirt.org>
    http://lists.ovirt.org/mailman/listinfo/users


Hi,
but are your nodes ovirt-ng nodes, or plain CentOS 7.3 hosts where you installed the packages manually? Because the original web link covered the case of ovirt-ng nodes, not a CentOS 7.3 OS. Could you be missing a package that is installed by default inside an ovirt-ng node?



Hi Gianluca,

I used plain CentOS 7.3 and installed the necessary packages manually.
I know the original tutorial used oVirt Node, but I thought the two were almost equivalent, with the latter being an "out of the box" solution offering the same features.

That said, I found the problem: no package is missing. The path of the script given in the tutorial is wrong. The tutorial says:

/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

while the installed script is in:

/usr/share/gdeploy/scripts/grafton-sanity-check.sh

and is (correctly) part of the gdeploy package.
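For what it's worth, "yum provides" does locate it when given a wildcarded path (my earlier search failed because I queried the old, wrong prefix). I believe something like this works on CentOS 7.3:

    yum provides '*/grafton-sanity-check.sh'
    rpm -qf /usr/share/gdeploy/scripts/grafton-sanity-check.sh

(the second command confirms which package owns the installed script).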

I updated the gdeploy config and ran the deploy again. The situation is much better now, but it still ends with "Deployment Failed". Here's the output:


PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************
changed: [ha3.domain.it] => (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h ha1.domain.it,ha2.domain.it,ha3.domain.it)
changed: [ha2.domain.it] => (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h ha1.domain.it,ha2.domain.it,ha3.domain.it)
changed: [ha1.domain.it] => (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d md128 -h ha1.domain.it,ha2.domain.it,ha3.domain.it)

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=1    changed=1    unreachable=0 failed=0
ha2.domain.it            : ok=1    changed=1    unreachable=0 failed=0
ha3.domain.it            : ok=1    changed=1    unreachable=0 failed=0


PLAY [gluster_servers] *********************************************************

TASK [Enable or disable services] **********************************************
ok: [ha1.domain.it] => (item=chronyd)
ok: [ha3.domain.it] => (item=chronyd)
ok: [ha2.domain.it] => (item=chronyd)

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=1    changed=0    unreachable=0 failed=0
ha2.domain.it            : ok=1    changed=0    unreachable=0 failed=0
ha3.domain.it            : ok=1    changed=0    unreachable=0 failed=0


PLAY [gluster_servers] *********************************************************

TASK [start/stop/restart/reload services] **************************************
changed: [ha1.domain.it] => (item=chronyd)
changed: [ha2.domain.it] => (item=chronyd)
changed: [ha3.domain.it] => (item=chronyd)

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=1    changed=1    unreachable=0 failed=0
ha2.domain.it            : ok=1    changed=1    unreachable=0 failed=0
ha3.domain.it            : ok=1    changed=1    unreachable=0 failed=0


PLAY [gluster_servers] *********************************************************

TASK [Run a command in the shell] **********************************************
changed: [ha1.domain.it] => (item=vdsm-tool configure --force)
changed: [ha3.domain.it] => (item=vdsm-tool configure --force)
changed: [ha2.domain.it] => (item=vdsm-tool configure --force)

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=1    changed=1    unreachable=0 failed=0
ha2.domain.it            : ok=1    changed=1    unreachable=0 failed=0
ha3.domain.it            : ok=1    changed=1    unreachable=0 failed=0


PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
    to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha2.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha3.domain.it            : ok=0    changed=0    unreachable=0 failed=1
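
(As an aside: the "'dict object' has no attribute 'rc'" message suggests the playbook evaluates result.rc on a registered result that has no rc at the top level, e.g. a looped task whose return codes live under result.results. I'm only guessing at the gdeploy playbook's intent here, but a more defensive conditional would look something like:

    failed_when: result.rc is defined and result.rc != 0

Anyway, back to the output.)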


PLAY [gluster_servers] *********************************************************

TASK [Clean up filesystem signature] *******************************************
skipping: [ha2.domain.it] => (item=/dev/md128)
skipping: [ha1.domain.it] => (item=/dev/md128)
skipping: [ha3.domain.it] => (item=/dev/md128)

TASK [Create Physical Volume] **************************************************
failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true, "failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n Aborted wiping of xfs.\n 1 existing signature left on the device.\n", "rc": 5}
    to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha2.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha3.domain.it            : ok=0    changed=0    unreachable=0 failed=1

Ignoring errors...
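
I suppose the pvcreate failure means I have to clear the old xfs signature from the device before the next run. Assuming /dev/md128 holds no data I need, something like this on each host should do it (an untested guess on my part):

    wipefs -a /dev/md128

or, alternatively, letting pvcreate answer the wipe prompt itself:

    pvcreate -ff -y /dev/md128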


I hope I'm close to the solution... ;-)

Thanks,
Simone
