Dear Riccardo,

Yes, I am using the Ubuntu 16.04 image xenial-server-cloudimg-amd64-disk1.img 
<https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img>
from http://docs.openstack.org/image-guide/obtain-images.html.
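
For reference, a cloud image like this can be registered in OpenStack with the standard python-openstackclient, roughly as follows (the image name is only illustrative, and the exact flags may differ per installation):

    # upload the qcow2 cloud image to Glance
    openstack image create \
        --file xenial-server-cloudimg-amd64-disk1.img \
        --disk-format qcow2 --container-format bare \
        "ubuntu-16.04-xenial"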
 
I did as you suggested, but I still see the problem:

(elasticluster) ajokanovic@bscgrid28:~$ elasticluster start slurm -n mycluster

Starting cluster `mycluster` with:

* 1 frontend nodes.

* 1 compute nodes.

(This may take a while...)

2017-01-23 18:23:50 bscgrid28 gc3.elasticluster[9409] *WARNING* 
DeprecationWarning: 
The novaclient.v2.security_groups module is deprecated and will be removed.

2017-01-23 18:23:50 bscgrid28 gc3.elasticluster[9409] *WARNING* 
DeprecationWarning: 
The novaclient.v2.images module is deprecated and will be removed after 
Nova 15.0.0 is released. Use python-glanceclient or python-openstacksdk 
instead.

Configuring the cluster.

(this too may take a while...)


PLAY [Common setup for all hosts] 
**********************************************


TASK [setup] 
*******************************************************************

fatal: [compute001]: FAILED! => {"changed": false, "failed": true, 
"module_stderr": 
"@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\r\n@    
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     
@\r\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\r\nIT IS 
POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\r\nSomeone could be 
eavesdropping on you right now (man-in-the-middle attack)!\r\nIt is also 
possible that a host key has just been changed.\r\nThe fingerprint for the 
ECDSA key sent by the remote host 
is\n75:a0:4f:28:c8:7e:a8:c9:b3:01:2f:b2:47:95:3f:bf.\r\nPlease contact your 
system administrator.\r\nAdd correct host key in 
/home/ajokanovic/.ssh/known_hosts to get rid of this message.\r\nOffending 
ECDSA key in /home/ajokanovic/.ssh/known_hosts:1\r\n  remove with: 
ssh-keygen -f \"/home/ajokanovic/.ssh/known_hosts\" -R 
10.0.0.6\r\nChallenge/response authentication is disabled to avoid 
man-in-the-middle attacks.\r\n/bin/sh: 1: /usr/bin/python: not found\n", 
"module_stdout": "", "msg": "MODULE FAILURE"}

fatal: [frontend001]: FAILED! => {"changed": false, "failed": true, 
"module_stderr": "Warning: Permanently added '172.16.8.153' (ECDSA) to the 
list of known hosts.\r\n/bin/sh: 1: /usr/bin/python: not found\n", 
"module_stdout": "", "msg": "MODULE FAILURE"}

to retry, use: --limit 
@/home/ajokanovic/elasticluster/src/elasticluster/share/playbooks/site.retry


PLAY RECAP 
*********************************************************************

compute001                 : ok=0    changed=0    unreachable=0    failed=1

frontend001                : ok=0    changed=0    unreachable=0    failed=1


2017-01-23 18:26:06 bscgrid28 gc3.elasticluster[9409] *ERROR* Command 
`ansible-playbook 
/home/ajokanovic/elasticluster/src/elasticluster/share/playbooks/site.yml 
--inventory=/home/ajokanovic/.elasticluster/storage/mycluster.inventory 
--become --become-user=root` failed with exit code 2.

2017-01-23 18:26:06 bscgrid28 gc3.elasticluster[9409] *ERROR* Check the 
output lines above for additional information on this error.

2017-01-23 18:26:06 bscgrid28 gc3.elasticluster[9409] *ERROR* The cluster 
has likely *not* been configured correctly. You may need to re-run 
`elasticluster setup` or fix the playbooks.

2017-01-23 18:26:06 bscgrid28 gc3.elasticluster[9409] *WARNING* Cluster 
`mycluster` not yet configured. Please, re-run `elasticluster setup 
mycluster` and/or check your configuration


WARNING: YOUR CLUSTER IS NOT READY YET!


Cluster name:     mycluster

Cluster template: slurm

Default ssh to node: frontend001

- frontend nodes: 1

- compute nodes: 1


To login on the frontend node, run the command:


    elasticluster ssh mycluster


To upload or download files to the cluster, use the command:


    elasticluster sftp mycluster
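
As a first step before re-running the setup, the stale host key for the recycled address can be removed exactly as the error output above suggests:

    # drop the offending known_hosts entry (path and address taken
    # from the error message above)
    ssh-keygen -f "/home/ajokanovic/.ssh/known_hosts" -R 10.0.0.6

The `/bin/sh: 1: /usr/bin/python: not found` failure, however, appears on both nodes even with the `image_userdata` change in place.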

Best regards,
Ana

On Monday, 23 January 2017 17:00:59 UTC+1, Riccardo Murri wrote:
>
> Dear Ana: 
>
> > I tried using new image, but now I end up with the problem below. I can 
> ssh 
> > to frontend, though. 
> > [...] 
> > TASK [setup] 
> > ******************************************************************* 
> > 
> > fatal: [frontend001]: FAILED! => {"changed": false, "failed": true, 
> > "module_stderr": "Warning: Permanently added '172.16.8.169' (ECDSA) to 
> the 
> > list of known hosts.\r\n/bin/sh: 1: /usr/bin/python: not found\n", 
> > "module_stdout": "", "msg": "MODULE FAILURE"} 
>
> It looks like you're using Ubuntu 16.04 images, are you? 
>
> The official Ubuntu 16.04 images do not ship `/usr/bin/python` and also 
> run some `unattended_upgrade` scripts by default, which do not play well 
> with Ansible.  A little extra configuration is needed; see: 
>
> https://github.com/gc3-uzh-ch/elasticluster/issues/304#issuecomment-252698444 
>
> In short, you need to add this to your `[cluster/slurm]` section: 
>
> [cluster/slurm] 
> # ...everything else stays the same... 
> image_userdata=#!/bin/bash 
>
>     systemctl stop apt-daily.service 
>     systemctl kill --kill-who=all apt-daily.service 
>
>     # wait until done before starting other APT tasks 
>     while ! (systemctl list-units --all apt-daily.service | fgrep -q dead) 
>     do 
>       sleep 1; 
>     done 
>     # print state, mainly for debugging 
>     systemctl list-units --all 'apt-daily.*' 
>
>     # now ensure Ansible can find /usr/bin/python 
>     apt-get install -y python 
>
> Warning: you need to keep the indentation! 
>
> (For the full story, read: 
> https://github.com/gc3-uzh-ch/elasticluster/issues/304 ) 
>
> Ciao, 
> R 
>
> -- 
> Riccardo Murri, Schwerzenbacherstrasse 2, CH-8606 Nänikon, Switzerland 
>
