Here is how I got something similar working to terminate instances. You 
will need to change the state for your case; I'm using the ec2.py inventory 
script as my hosts file. I'm sure it can be cleaned up. In this example I'm 
targeting the EC2 instances that carry the Nginx_Server Name tag.
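
Both plays below are run against the dynamic inventory, so the invocation 
looks something like this (the playbook filename is just a placeholder I 
made up):

ansible-playbook -i ec2.py terminate_nginx.yml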

# Connect to the node and gather facts,
# including the instance-id. These facts
# are added to inventory hostvars for the
# duration of the playbook's execution.
# Typical "provisioning" tasks would go in
# this playbook.

- hosts: tag_Name_Nginx_Server
  gather_facts: True
  user: root
  sudo: True
  tasks:

    # fetch instance data from the metadata servers in ec2
    - ec2_facts:

    # show all known facts for this host
    #- debug: var=hostvars[inventory_hostname]

    # just show the instance-id
    - debug: msg="{{ 
hostvars[inventory_hostname]['ansible_ec2_instance_id'] }}"


# Using the instance id, call the ec2 module
# locally to remove the instance by setting
# its state to "absent"

- hosts: tag_Name_Nginx_Server
  gather_facts: True
  connection: local
  tasks:
    - name: destroy all instances
      ec2: state='absent'
           instance_ids={{ item }}
           wait=true
      with_items: hostvars[inventory_hostname]['ansible_ec2_instance_id']
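
To get at Bryan's original question more directly: ec2.py already exposes 
ec2_id as a host variable, so you can also skip ec2_facts and build the 
whole list in a single local play. The following is an untested sketch; the 
play layout and the demo_instance_ids variable name are my own placeholders, 
the region is copied from Bryan's task, and the group name comes from his 
debug output (on older Ansible versions the set_fact list trick may need 
adjusting):

- hosts: 127.0.0.1
  connection: local
  gather_facts: False
  tasks:

    # collect every ec2_id from the tagged group into one list
    - name: collect instance ids for the tagged group
      set_fact:
        demo_instance_ids: "{{ demo_instance_ids | default([]) + [hostvars[item]['ec2_id']] }}"
      with_items: "{{ groups['tag_instance_type_ansible_demo'] }}"

    # hand the whole list to the ec2 module in one call;
    # state=stopped should stop the nodes instead of terminating them
    - name: remove (or stop) the tagged instances
      ec2:
        region: eu-west-1
        state: absent
        instance_ids: "{{ demo_instance_ids }}"
        wait: yes

Run it the same way, against the ec2.py inventory.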





On Tuesday, October 28, 2014 4:23:14 PM UTC-7, Bryan Hunt wrote:
>
>
> I'm writing an ansible demo, showing how to provision Riak clusters, and I 
> want to make it super self-contained.
>
> With that in mind, I’ve been playing with the ec2.py inventory script, and 
> the ansible ec2 module.
>
> Starting the nodes works great (if anyone is interested, I'm happy to 
> share the code, or perhaps just make the repo public)
>
> But, I’m having a problem with stopping the nodes. 
>
> I have those vars available:
>
> TASK: [List tagged instances] 
> *************************************************
> ok: [127.0.0.1] => {
>     "groups[\"tag_instance_type_ansible_demo\"]": [
>         "54.171.160.194"
>     ]
> }
>
> TASK: [everything for that one host] 
> ******************************************
> ok: [127.0.0.1] => {
>     "hostvars[\"54.171.160.194\"][\"ec2_id\"]": "i-f28793b0"
> }
>
> The problem is that in order to stop an ec2 instance, I need to get at 
> the ec2_id to feed to the ec2 module (which stops nodes based 
> upon the instance id, not the ip address).
>
>  tasks:
>     - name: shut down all the spot instances
>       local_action:
>        module: ec2
>        region: eu-west-1
>        wait: no
>        instance_ids: '{{ instance_ids }}'
>        state: absent
>
> I can't figure out how to munge this into a usable list. Has anyone else 
> experienced this before? 
>
> Bryan
>
