Dick - 

Here's how I do it. I define the "instances" dictionary in the playbook
under vars, but you can supply it via extra_vars, or whatever works for you.
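For reference, a minimal sketch of what that "instances" dictionary could look like — the key names match the lookups in the tasks below, but every value here is a placeholder, not something from my actual setup:

```yaml
vars:
  ec2_server_keypair: my-keypair            # placeholder
  instances:
    - ec2_server_name: web1                 # becomes the Name tag
      ec2_server_instance_type: t3.micro
      ec2_server_security_group: web-sg
      ec2_server_image: ami-0123456789abcdef0
      ec2_server_region: us-east-1
      exact_count: 1                        # optional, defaults to 1 below
      class: web                            # optional tags
      type: frontend
```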

- name: Launch Instance
  ec2:
    instance_type: "{{ item.ec2_server_instance_type }}"
    group: "{{ item.ec2_server_security_group }}"
    image: "{{ item.ec2_server_image }}"
    wait: yes
    region: "{{ item.ec2_server_region }}"
    key_name: "{{ ec2_server_keypair }}"
    aws_access_key: "{{ ec2_server_access_key }}"
    aws_secret_key: "{{ ec2_server_secret_key }}"
    exact_count: "{{ item.exact_count | default(1) }}"
    vpc_subnet_id: "{{ vpc_subnet_ids['awsdev-app-1'] }}"
    count_tag:
      Name: "{{ item.ec2_server_name }}"
    instance_tags:
      Name: "{{ item.ec2_server_name }}"
      class: "{{ item.class | default(omit) }}"
      Type: "{{ item.type | default(omit) }}"
  with_items: "{{ instances }}"
  register: ec2
  become: no
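If you want to talk to the new instances in the same play, you can push them into an in-memory group from the registered result. This is a hypothetical follow-up task, not part of my playbook — it assumes the ec2 module's documented "tagged_instances" return list and its "public_ip" field:

```yaml
# Hypothetical: add each launched instance to a "launched" group.
# ec2.results holds one result per item in the instances loop above;
# tagged_instances is the ec2 module's list of matching instances.
- name: Add launched instances to an in-memory group
  add_host:
    name: "{{ item.1.public_ip }}"
    groups: launched
  with_subelements:
    - "{{ ec2.results }}"
    - tagged_instances
  become: no
```

Later plays can then target `hosts: launched`.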

- name: gather facts about ec2 instances in case they were created a while ago
  ec2_instance_facts:
    aws_access_key: "{{ ec2_server_access_key }}"
    aws_secret_key: "{{ ec2_server_secret_key }}"
    region: "{{ item.ec2_server_region }}"
    filters:
#      "tag:Name": "{{ item.ec2_server_name }}"
      "tag:class": "{{ item.class }}"
      "instance-state-name": "running"
  with_items: "{{ instances }}"
  register: ec2    # note: this re-uses (overwrites) the variable registered above
  no_log: true
  become: no
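And to get at Dick's question about groups: you can build groups on the fly from those gathered facts instead of maintaining a static inventory. Again a hypothetical sketch, assuming the ec2_instance_facts return shape ("instances" list with "private_ip_address" and a "tags" dict):

```yaml
# Hypothetical: put each running instance into a group named after
# its "class" tag (e.g. web, proxy), falling back to "ungrouped".
- name: Group instances by their class tag
  add_host:
    name: "{{ item.1.private_ip_address }}"
    groups: "{{ item.1.tags.class | default('ungrouped') }}"
  with_subelements:
    - "{{ ec2.results }}"
    - instances
  become: no
```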


Hope that makes sense.

-Adam

On Saturday, July 20, 2019 at 12:08:03 PM UTC-4, Dick Visser wrote:
>
> Hi 
>
> I can see the benefits of using the ec2.py dynamic inventory script, 
> but I run into some issues which I can't figure out how to fix. 
>
> 1. The ec2.py requires the credentials to be available as environment 
> variables. But my deployment only has them available inside a vaulted 
> vars file (in host_vars/localhost, so they can be used by a previous 
> ansible role that creates infrastructure). How do people handle the 
> storage of credentials? 
>
> 2. Up to now my inventory looks pretty simple: 
>
> [proxy] 
> proxy1 ansible_host=10.20.1.16 
> proxy2 ansible_host=10.20.1.17 
>
> [web] 
> web1 ansible_host=10.20.1.24 
> web2 ansible_host=10.20.1.25 
>
> [vars:all] 
> ansible_user=admin ansible_ssh_common_args='-o ProxyJump="[email protected]"' 
>
>
>
> I've set things up so that the AWS instance names are the same as my 
> old inventory (i.e. proxy1, proxy2, web1, web2). 
> So, I can successfully ping instances by their name, for instance 
> using this syntax: 
>
> (ansible-2.7.12) dick.visser@nuc8 scripts$ ansible -i ec2.py 
> tag_Name_proxy* -m ping 
> 10.20.1.16 | SUCCESS => { 
>     "changed": false, 
>     "ping": "pong" 
> } 
> 10.20.1.17 | SUCCESS => { 
>     "changed": false, 
>     "ping": "pong" 
> } 
>
> But how do I set up the groups now? 
> Do I have to assign a "group" tag to the instance in AWS first with 
> value 'web', 'proxy', etc? 
>
> Ideally I'd like to keep the 'simple' group name like web, proxy, etc. 
>
> thx!! 
>
>
> -- 
> Dick Visser 
> Trust & Identity Service Operations Manager 
> GÉANT 
>
