If you already have a bastion host, security groups, and the ProxyCommand
configured, you can do the following for new private-subnet instances:
- name: Launch new instance
  hosts: localhost
  gather_facts: true
  vars:
    # some vars ...
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ key_name }}"
        group_id: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        # etc ...
      register: ec2

    - name: Add new instance to host group
      # note: this only works if you've launched a single instance!
      # see https://github.com/ansible/ansible/issues/3848
      add_host: hostname="{{ item.private_ip }}" groupname=launched
      with_items: ec2.instances

    - name: Pause for EC2 instance to start SSH
      pause:
        prompt: "Pausing to allow new instance to start accepting ssh connections"
        minutes: 2

- name: Provision Jenkins in EC2
  hosts: launched
  gather_facts: true
  tasks:
    # whatever ...
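As an aside, the fixed two-minute pause can be replaced with an actual readiness check. This is only a sketch, and it assumes the bastion is already reachable in inventory under a hypothetical name `bastion` (a plain `wait_for` from localhost would not reach the private IPs):

```yaml
# Sketch: poll port 22 on each new instance from the bastion instead of
# pausing blindly. "bastion" is a hypothetical inventory hostname.
- name: Wait for SSH on the new instances (checked from the bastion)
  wait_for:
    host: "{{ item.private_ip }}"
    port: 22
    delay: 10
    timeout: 300
  delegate_to: bastion
  with_items: ec2.instances
```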
The issue you're encountering is vexing me too.
Problem summary: you can't use dynamic inventories with a dynamic ProxyCommand.
You can't add multiple machines via add_host
(https://github.com/ansible/ansible/issues/3848), and you can't patch ssh's
config at runtime.
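One partial workaround I've been considering, though I haven't verified it across Ansible versions: Ansible reads ansible.cfg once at startup, but ssh itself re-reads whatever file is passed to -F on every connection it opens. So if ssh_args points at a fixed path, a play could rewrite that file's contents mid-run and later connections should pick up the changes. A sketch (the path is an assumption):

```
[ssh_connection]
ssh_args = -F ./generated_ssh_config -o ControlMaster=auto -o ControlPersist=60s
```

One caveat: ControlPersist keeps multiplexed connections alive, so hosts already contacted may continue using a stale proxy path until their control sockets expire.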
I have multiple VPCs and multiple bastions. I am using cut in the
ProxyCommand per the following:
http://copperlight.github.io/ansible/2014/06/30/ansible-vault-and-ssh-key-distribution/
Using a dynamic ProxyCommand with a separator, per the link above, only
works in my example code when a single EC2 host is passed to add_host. I am
stuck wanting to use a tag_some_tag group with multiple hosts returned
from the EC2 inventory plugin in this workflow.
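For reference, the separator approach from that link boils down to encoding the bastion into the hostname that add_host registers, then splitting it apart in ssh_config. A sketch, assuming '+' as the delimiter, so add_host would register names like 52.0.0.1+10.0.0.5:

```
# Sketch: any host named <bastion>+<target> is reached through <bastion>.
# The delimiter and user are assumptions; adjust to your naming scheme.
Host *+*
    User ubuntu
    ProxyCommand ssh -q -A ubuntu@$(echo %h | cut -d+ -f1) nc $(echo %h | cut -d+ -f2) %p
```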
Any help is appreciated!
On Friday, January 23, 2015 at 4:10:32 AM UTC-5, Даниил Ярославцев wrote:
>
> Hello, Ansible admins and users!
>
> I am using Ansible with Amazon EC2.
> I've configured provisioning of EC2 private hosts over public SSH bastions
> (exactly as specified here:
> http://alexbilbie.com/2014/07/using-ansible-with-a-bastion-host).
> So I have an ssh_config like the one below, containing settings for
> forwarding SSH requests to private hosts over public ones:
>
> # DEV bastion
> Host ###.###.###.###
>     User ubuntu
>     HostName ###.###.###.###
>     ProxyCommand none
>     BatchMode yes
>     PasswordAuthentication no
>     ForwardAgent yes
>
> Host *
>     User ubuntu
>     ServerAliveInterval 60
>     TCPKeepAlive yes
>     ProxyCommand ssh -q -A [email protected] nc %h %p
>     ControlMaster auto
>     ControlPath ~/.ssh/mux-%r@%h:%p
>     ControlPersist 8h
>
>
> But now I want to generate this ssh_config on-the-fly from the
> playbook. Here is what I need to implement:
>
> 1. A single playbook spins up public and private EC2 hosts, attaches EIPs to
> the public hosts (SSH bastions, etc.) and adds them to public hosted zones, so
> the SSH bastions will have public DNS names
> 2. Using info about the created topology, the playbook generates a new
> ssh_config file and starts using it instead of the default one (without a
> relaunch or retries)
> 3. The playbook continues provisioning nodes, but now the private EC2 hosts
> are provisioned over the public SSH bastions as configured in the generated
> ssh_config
>
> I am done with point 1, but here is the problem: there is no way to tell
> Ansible to reload the SSH config on-the-fly (without restarting the
> playbook), so there is no way to continue provisioning the private hosts over
> the public ones in the same playbook.
>
> Could you please suggest an option to overcome this? Or maybe point me to
> the code I would need to modify in Ansible to make this possible; any help
> would be very appreciated ;)
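For step 2 of the plan quoted above, the generation itself is straightforward with the template module; the hard part, as discussed earlier, is getting Ansible to actually use the new file mid-run. A sketch, where templates/ssh_config.j2 is a hypothetical Jinja2 template iterating over the bastion facts registered in step 1:

```yaml
# Sketch: render a fresh ssh_config from the topology created in step 1.
# "templates/ssh_config.j2" and the variables it consumes are assumptions.
- name: Generate ssh_config from the new topology
  hosts: localhost
  tasks:
    - name: Render ssh_config
      template:
        src: templates/ssh_config.j2
        dest: ./generated_ssh_config
        mode: "0600"
```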
>