Hello all,

I have a playbook that is 99% complete but fails on the last step. My 
inventory file and playbook are below; I have only included the relevant 
last parts.

[dev]
dev_jenkins ansible_ssh_host=44.244.444.44 ansible_python_interpreter=/usr/bin/python27
# will have multiple servers

[dev:vars]
ansible_ssh_user=atest
ansible_ssh_private_key_file=/Users/confluence/Desktop/atest
ansible_become_user=root
ansible_become_method=sudo
ansible_become_password=changeme
ansible_ssh_pass=changeme

[test]


[test:vars]
ansible_ssh_user=atest
ansible_ssh_private_key_file=/Users/confluence/Desktop/atest
ansible_become_user=root
ansible_become_method=sudo
ansible_become_password=changeme
ansible_ssh_pass=changeme



The playbook does the following:

1) Takes an image of the live server (not pasted below; see the sketch after this list)
2) Launches a new server from that image
3) Verifies login
4) Kills the new server
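
Step 1 is the part I did not paste. Roughly it looks like the sketch below, with placeholder names (instance_id and the AMI name here are stand-ins, not my real values), so do not read the exact parameters as my real task:

  - name: Take image of the live server (rough sketch, placeholder values)
    ec2_ami:
      instance_id: "{{ instance_id }}"   # placeholder variable name
      name: "dev_jenkins_{{ ansible_date_time.date }}"   # placeholder; just needs to be unique
      wait: yes
      region: "{{ region }}"
    register: ami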

The issue I am having is with killing the server, i.e. the task named 
"Terminate instances that were previously launched".

- hosts: all
  tasks:

  - name: Launch instance
    ec2:
      key_name: ansible
      group: "{{ group }}"
      instance_type: m1.small
      image: "{{ image }}"
      wait: true
      region: "{{ region }}"
      vpc_subnet_id: "{{ subnet }}"
      assign_public_ip: yes
    register: ec2

  - name: Add new instance to host group
    add_host:
      hostname: "{{ item.public_ip }}"
      groupname: test
    loop: "{{ ec2.instances }}"

  - name: Waiting for SSH to come up
    delegate_to: "{{ item.public_ip }}"
    wait_for_connection:
      delay: 60
      timeout: 320
    loop: "{{ ec2.instances }}"



- hosts: test  # blank in the inventory file, but the new instance was added to it, so I can connect with the password and key
  become: True
  gather_facts: True

  tasks:

  - name: Testing out the servers
    shell: ". /etc/profile; echo $DEV_ENV"
    args:
      executable: /bin/bash
    register: ENTRY

  - name: Output for test server
    debug: msg="{{ ENTRY.stdout }}"


  - name: Getting instance ID again
    shell: "wget -q -O - http://169.254.169.254/latest/meta-data/instance-id"
    register: instanceterminate

  - name: Terminate instances that were previously launched
    ec2:
      state: 'absent'
      instance_ids: "{{ instanceterminate.stdout }}"
      region: us-east-1
    # when: ENTRY.rc == 0 or ENTRY.rc >= 2

Problem:

In my inventory file, I have unique parameters per host, like 
*ansible_python_interpreter=/usr/bin/python27*, which may differ for each 
server. When I add the newly spun-up instance (launched from the freshly 
taken AMI), I add it to the group called test, which is blank in my 
inventory file but carries the key and password needed to log in to the 
machine. I do not know of a way to make this last step reuse the 
*ansible_python_interpreter* that was set on the original host, and I 
cannot hardcode it because it changes from server to server. As a result, 
the task that terminates the instance fails with:

changed=false 
  msg: boto required for this module
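
The only workaround I can think of (untested, and the default(omit) part is my guess) is to forward the interpreter from the source host when adding the new instance, since add_host accepts arbitrary extra variables that become host variables:

  - name: Add new instance to host group, forwarding the interpreter
    add_host:
      hostname: "{{ item.public_ip }}"
      groupname: test
      # carry over the interpreter of the host the image was taken from;
      # leave it unset if the source host never defined one
      ansible_python_interpreter: "{{ ansible_python_interpreter | default(omit) }}"
    loop: "{{ ec2.instances }}"

Alternatively, since "boto required for this module" suggests the ec2 module is executing on the new box with a Python that lacks boto, maybe delegating the terminate task to localhost (where boto is installed) would sidestep the interpreter question entirely, but I have not verified either approach.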



Is there a way I can make this work?

Thank you 

