At the moment my code is bound up in a playbook that I can't share. Sorry.
On Sunday, July 20, 2014 8:03:05 PM UTC-4, Bob Tiernay wrote:
>
> Hey Kurt,
>
> Do you happen to have a full example in a repo somewhere? I would be very
> interested to check it out.
>
> Thanks in advance,
>
> Bob
>
> On Thursday, 17 July 2014 10:19:40 UTC-4, Kurt Yoder wrote:
>>
>> I forgot to include a critical piece of this solution. Here's how you
>> invoke the task to tell Openstack to start the VMs:
>> - name: Create cluster
>>   hosts: os_api
>>   gather_facts: no
>>   roles: [instantiate]  # "instantiate" is a role that includes the Openstack startup task
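>>
>> For context, here is a minimal sketch of how the two pieces fit together in a single playbook (the file name and layout are illustrative, not my actual setup):
>>
>> # create_cluster.yml -- illustrative layout only
>> ---
>> - name: Register one local "API host" per cluster member
>>   hosts: localhost
>>   gather_facts: no
>>   tasks:
>>     - include_vars: cluster.yml
>>
>>     - add_host:
>>         name: "os_api_{{ item }}"
>>         ansible_ssh_host: 127.0.0.1
>>         groups: os_api
>>         ansible_connection: local
>>         oshost: "{{ item }}"
>>       with_items: cluster
>>
>> - name: Create cluster
>>   hosts: os_api
>>   gather_facts: no
>>   roles: [instantiate]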
>>
>>
>> Perhaps this kind of example should be included in docs for parallel
>> execution in Ansible. Let me know if you want me to help with that.
>>
>> -Kurt
>>
>> On Thursday, July 17, 2014 10:04:14 AM UTC-4, Kurt Yoder wrote:
>>>
>>> I found a different approach to this problem.
>>>
>>>
>>> First create a cluster configuration file.
>>> $ cat cluster.yml
>>> ---
>>> cluster:
>>> - cluster1
>>> - cluster2
>>>
>>>
>>> Define the first tasks: load the cluster definition, then set up one
>>> connection to localhost for each API call.
>>> - include_vars: cluster.yml
>>>
>>> - add_host:
>>>     name: "os_api_{{ item }}"
>>>     ansible_ssh_host: 127.0.0.1
>>>     groups: os_api
>>>     ansible_connection: local
>>>     oshost: "{{ item }}"
>>>   with_items: cluster
>>>
>>> Define the follow-on tasks: tell Openstack to start up the VMs, assign floating IPs, and wait for SSH.
>>> - name: Show host name
>>>   debug:
>>>     msg: "API connection: os_api_{{ oshost }}; Openstack host: {{ oshost }}"
>>>
>>> - name: Launch cluster VM on Openstack
>>>   nova_compute:
>>>     name: "{{ os_username }}_{{ oshost }}"
>>>     state: present
>>>     login_username: "{{ os_username }}"
>>>     login_tenant_name: "{{ os_tenant }}"
>>>     login_password: "{{ os_password }}"
>>>     image_id: "{{ os_image_id }}"
>>>     key_name: "{{ os_username }}_controller_key"
>>>     wait_for: 200
>>>     flavor_id: "{{ os_flavor_id }}"
>>>     auth_url: "{{ os_url }}"
>>>     user_data: "#cloud-config\nmanage_etc_hosts: true"
>>>
>>> - name: Assign IP address to cluster VM
>>>   quantum_floating_ip:
>>>     state: present
>>>     login_username: "{{ os_username }}"
>>>     login_password: "{{ os_password }}"
>>>     login_tenant_name: "{{ os_tenant }}"
>>>     network_name: "{{ os_network_name }}"
>>>     instance_name: "{{ os_username }}_{{ oshost }}"
>>>     internal_network_name: "{{ os_internal_network_name }}"
>>>     auth_url: "{{ os_url }}"
>>>   register: quantum_info
>>>
>>> - name: Wait for cluster SSH to become available
>>>   wait_for:
>>>     port: 22
>>>     host: "{{ quantum_info.public_ip }}"
>>>     timeout: 180
>>>     state: started
>>>
>>>
>>>
>>> This method gives the following benefits:
>>>
>>> - I can define lots of flavors and images in my cluster.yml definition (see the sketch after this list).
>>> - I can launch them all in parallel using Ansible's built-in, robust parallel execution.
>>> - I have access to all of Ansible's primitives while doing so, so I can build in *any* custom logic.
>>> - No need to add options to nova_compute.
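>>>
>>> As a sketch of that first point, cluster.yml could carry per-host details and add_host could pass them through as host vars (the keys and values below are illustrative, not my real definition):
>>>
>>> ---
>>> cluster:
>>>   - name: cluster1
>>>     flavor_id: "2"        # assumed placeholder flavor
>>>     image_id: "abc-123"   # assumed placeholder image
>>>   - name: cluster2
>>>     flavor_id: "4"
>>>     image_id: "def-456"
>>>
>>> - add_host:
>>>     name: "os_api_{{ item.name }}"
>>>     ansible_ssh_host: 127.0.0.1
>>>     groups: os_api
>>>     ansible_connection: local
>>>     oshost: "{{ item.name }}"
>>>     os_flavor_id: "{{ item.flavor_id }}"
>>>     os_image_id: "{{ item.image_id }}"
>>>   with_items: cluster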
>>>
>>>
>>> Overall, I'm extremely happy with this solution. To reiterate: *no code
>>> changes are required*!
>>>
>>>
>>>
>>> On Wednesday, July 16, 2014 5:02:31 PM UTC-4, Michael DeHaan wrote:
>>>>
>>>> Adding new params for assigning IPs at spawn time would be reasonable.
>>>>
>>>> (assign_public_ip, True/False, etc)
>>>>
>>>> What might you prefer on names?
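>>>>
>>>> For the sake of discussion, a sketch of what that might look like in a task (assign_public_ip does not exist in nova_compute today; the name is only the one floated above):
>>>>
>>>> - nova_compute:
>>>>     name: "{{ os_username }}_cluster1"
>>>>     state: present
>>>>     # ...auth/image/flavor params as before...
>>>>     assign_public_ip: yes   # hypothetical new parameter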
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Jul 15, 2014 at 1:27 PM, Kurt Yoder <[email protected]>
>>>> wrote:
>>>>
>>>>> I dug a bit further. The API does allow min_count and max_count, much
>>>>> the way boto does for AWS.
>>>>>
>>>>> When you submit a request with min_count, your instances are named
>>>>> <Ansible-provided name>-<instance UUID>. That's acceptable, though not
>>>>> ideal.
>>>>>
>>>>> I'm taking a look at the Ansible ec2 module, and its boto
>>>>> instance-launching code looks very different from the nova_compute
>>>>> instance-launching code. I haven't run it yet; I need to dig around to
>>>>> find my ec2 creds so I can run a test.
>>>>>
>>>>> The Ansible ec2 module also allows one to assign public IPs while
>>>>> launching multiple instances. The Ansible nova_compute module does not
>>>>> permit this ATM.
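>>>>>
>>>>> For comparison, roughly what that looks like with the ec2 module (all values below are placeholders; assign_public_ip needs a VPC subnet):
>>>>>
>>>>> - name: Launch several EC2 instances with public IPs
>>>>>   ec2:
>>>>>     image: ami-123456            # placeholder AMI ID
>>>>>     instance_type: m1.small
>>>>>     key_name: my_key             # placeholder key pair name
>>>>>     count: 5
>>>>>     vpc_subnet_id: subnet-abc123 # placeholder subnet; required for assign_public_ip
>>>>>     assign_public_ip: yes
>>>>>     wait: no
>>>>>   register: ec2_info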
>>>>>
>>>>>
>>>>> On Friday, July 11, 2014 9:50:10 PM UTC-4, Michael DeHaan wrote:
>>>>>
>>>>>> "So should I make a custom module which loops over nova_compute
>>>>>> asynchronously, and also assigns floating IPs?"
>>>>>>
>>>>>> I'd first rather know whether the OpenStack API allows simultaneous
>>>>>> creation of N virtual machines of the same image type.
>>>>>>
>>>>>> I expect the floating IP stuff is fast, and the usual with_items loop
>>>>>> isn't a problem there once those guests exist.
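>>>>>>
>>>>>> Something along these lines, I'd expect (a sketch only, borrowing the variable names used elsewhere in this thread):
>>>>>>
>>>>>> - name: Assign a floating IP to each existing guest
>>>>>>   quantum_floating_ip:
>>>>>>     state: present
>>>>>>     login_username: "{{ os_username }}"
>>>>>>     login_password: "{{ os_password }}"
>>>>>>     login_tenant_name: "{{ os_tenant }}"
>>>>>>     network_name: "{{ os_network_name }}"
>>>>>>     instance_name: "{{ os_username }}_{{ item }}"
>>>>>>     auth_url: "{{ os_url }}"
>>>>>>   with_items: cluster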
>>>>>>
>>>>>> (Using neutron, I assume?)
>>>>>>
>>>>>>
>>>>>> On Fri, Jul 11, 2014 at 6:54 PM, Kurt Yoder <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> So should I make a custom module which loops over nova_compute
>>>>>>> asynchronously, and also assigns floating IPs?
>>>>>>>
>>>>>>> Would such a module be useful to the wider community, or is it too
>>>>>>> specialized to contribute back?
>>>>>>>
>>>>>>> On Friday, July 11, 2014 6:34:47 PM UTC-4, Michael DeHaan wrote:
>>>>>>>
>>>>>>>> So some of the provisioning modules, the AWS one in particular,
>>>>>>>> support spinning up "N" instances at a time by passing "count" or
>>>>>>>> "exact_count".
>>>>>>>>
>>>>>>>> Rackspace I believe does this with manual looping (for now), but I
>>>>>>>> could be wrong and that might have just been historical truth.
>>>>>>>>
>>>>>>>> If the OpenStack API says we can launch 10 at once, it could be
>>>>>>>> made to do similar things.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Jul 11, 2014 at 5:28 PM, Kurt Yoder <
>>>>>>>> [email protected]> wrote:
>>>>>>>>
>>>>>>>>> I guess that error is because I put a "with_items" in there.
>>>>>>>>>
>>>>>>>>> How does everyone else do this? I don't understand how to loop
>>>>>>>>> asynchronously. See pseudo-code:
>>>>>>>>>
>>>>>>>>> < start all 5 at once: >
>>>>>>>>> < start up openstack host >
>>>>>>>>> < assign it a floating ip >
>>>>>>>>> < capture the floating ip >
>>>>>>>>> < end when all 5 have a floating ip >
>>>>>>>>> < wait for all 5 floating IPs to have an open SSH port >
>>>>>>>>>
>>>>>>>>> On Friday, July 11, 2014 5:20:34 PM UTC-4, Kurt Yoder wrote:
>>>>>>>>>>
>>>>>>>>>> Well, "async" is totally a bust. I got a message:
>>>>>>>>>>
>>>>>>>>>> fatal: [localhost] => lookup plugins (with_*) cannot be used with
>>>>>>>>>> async tasks
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Thursday, July 10, 2014 6:17:48 PM UTC-4, Kurt Yoder wrote:
>>>>>>>>>>>
>>>>>>>>>>> Hello list,
>>>>>>>>>>>
>>>>>>>>>>> I anticipate provisioning 10-20 VMs using Ansible, then assigning
>>>>>>>>>>> floating IPs to each, then waiting for SSH to become available for
>>>>>>>>>>> each VM. I would like to do this in parallel instead of serially.
>>>>>>>>>>> Specifically:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> - Start the VMs, but don't block
>>>>>>>>>>> - Assign the IPs, but don't block
>>>>>>>>>>> - Wait on SSH until all VMs respond
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I saw the nova_compute "wait: 'no'" option, but when I use it I
>>>>>>>>>>> get a stack trace:
>>>>>>>>>>>
>>>>>>>>>>>> failed: [localhost] => (item=1) => {"failed": true, "item": 1, "parsed": false}
>>>>>>>>>>>> invalid output was: Traceback (most recent call last):
>>>>>>>>>>>>   File "/home/ubuntu/.ansible/tmp/ansible-tmp-1405028178.0-234314980043958/nova_compute", line 1490, in <module>
>>>>>>>>>>>>     main()
>>>>>>>>>>>>   File "/home/ubuntu/.ansible/tmp/ansible-tmp-1405028178.0-234314980043958/nova_compute", line 266, in main
>>>>>>>>>>>>     _create_server(module, nova)
>>>>>>>>>>>>   File "/home/ubuntu/.ansible/tmp/ansible-tmp-1405028178.0-234314980043958/nova_compute", line 194, in _create_server
>>>>>>>>>>>>     private = [ x['addr'] for x in getattr(server, 'addresses').itervalues().next() if x['OS-EXT-IPS:type'] == 'fixed']
>>>>>>>>>>>> StopIteration
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Perhaps I'm using it incorrectly:
>>>>>>>>>>>
>>>>>>>>>>>> - name: Launch cluster VM on Openstack
>>>>>>>>>>>>   nova_compute:
>>>>>>>>>>>>     name: "{{ os_username }}_cluster1"
>>>>>>>>>>>>     state: present
>>>>>>>>>>>>     login_username: "{{ os_username }}"
>>>>>>>>>>>>     login_tenant_name: "{{ os_tenant }}"
>>>>>>>>>>>>     login_password: "{{ os_password }}"
>>>>>>>>>>>>     image_id: "{{ os_image_id }}"
>>>>>>>>>>>>     key_name: "{{ os_username }}_controller_key"
>>>>>>>>>>>>     wait: "no"
>>>>>>>>>>>>     flavor_id: "{{ os_flavor_id }}"
>>>>>>>>>>>>     auth_url: "{{ os_url }}"
>>>>>>>>>>>>     user_data: "#cloud-config\nmanage_etc_hosts: true"
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> So, two questions:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> 1. Am I using "wait" correctly?
>>>>>>>>>>> 2. Should I use "wait" to get to my desired parallel VM
>>>>>>>>>>> launch, as described above, or should I use something else, e.g.
>>>>>>>>>>> "async"?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>>
>>>>>>>>>>> -Kurt
>>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>>>