Hello,

I've been diving into Ansible recently because I want to automate the 
management of several virtual environments deployed with OVM (Oracle's 
virtualization solution for x86, based on Xen). I found an Ansible module 
created for Oracle Cloud 
<https://github.com/dbgeek/ansible_oracle_cloud_modules/> that implements 
most of the functionality needed to manage a VM (take a peek at the 
*oracle_ovm* file), but I'm writing my own to also provide check mode, 
give better feedback on the status of running jobs, and add timeouts and 
auto-abort options for them. OVM Manager (the console used to manage the 
VMs; it's like a vCenter) exposes a REST API. The module I just referenced 
takes a naive approach to the API jobs: it simply waits for them to finish 
(forever), and if they do finish, it assumes they were successful.

The problem is that I don't know how to get feedback about the jobs while 
they are running. I've already read the doc about async jobs 
<http://docs.ansible.com/ansible/playbooks_async.html>, and it seemed the 
"fire and forget + check later" approach would help, but there's one 
catch. The API works this way: a POST/PUT request like "Stop VM" (i.e. 
"POST api/rest/vm/stop") returns a jobId. This ID can then be used to 
check the status of the async job with *another* API call (i.e. 
"GET api/rest/job/<id>"). So what I tried to do was pass the jobId to the 
"check later" task and call the same module with another state that 
returns the job status at a given moment, looping until the job finishes. 
That didn't work =(; the playbook complains because, apparently, I'm not 
allowed to call a module when async_status is present (I believe this is 
the problem, but I'm not completely sure; I'm just noobing here).
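To make the flow concrete, here's a minimal Python sketch of the polling 
loop I mean. The names get_job_info, wait_for_job and the FAKE_STATES stub 
are my own illustrative inventions, not the real module code; only the 
summaryDone/jobRunState fields come from the actual job resource:

```python
import time

# Fake job states so this sketch runs standalone; each poll consumes one
# entry. A real get_job_info would instead do "GET api/rest/job/<id>"
# against OVM Manager, as described above.
FAKE_STATES = {"42": ["RUNNING", "RUNNING", "SUCCESS"], "43": ["FAILURE"]}

def get_job_info(job_id):
    state = FAKE_STATES[job_id].pop(0)
    return {"id": job_id,
            "summaryDone": state != "RUNNING",
            "jobRunState": state}

def wait_for_job(job_id, retries=10, delay=0.01):
    """Poll the job until summaryDone is true, or give up after `retries`."""
    for _ in range(retries):
        info = get_job_info(job_id)
        if info["summaryDone"]:
            if info["jobRunState"] != "SUCCESS":
                raise RuntimeError("job %s failed: %s"
                                   % (job_id, info["jobRunState"]))
            return info
        time.sleep(delay)
    raise TimeoutError("job %s still running after %d polls"
                       % (job_id, retries))

print(wait_for_job("42")["jobRunState"])  # the fake job succeeds on poll 3
```

This is basically what the `until`/`retries`/`delay` keywords in the second 
task are supposed to reproduce on the playbook side.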

I'll paste some lines now; I hope my explanation is good enough. Let me 
know if you need any more insight.

#### ovm module ####
    .
    .
    .
    # Send job to stop the vm and return the id of it in the job variable
    elif module.params["state"] == "stop":
        job = stopVm(restSession,module)
        module.exit_json(msg='vm stopped', changed=True, job=job['id']['value'])
    .
    .
    .
    # Get the job info and return it in the job variable
    elif module.params["state"] == "jobinfo":
        jobinfo = getJobInfo(restSession,module.params["jobId"],module)
        module.exit_json(msg=jobinfo, changed=False, job=jobinfo)
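For the timeout and auto-abort options I mentioned wanting, the 
module-side logic I have in mind is roughly this. poll_with_abort and the 
injected helpers are hypothetical stand-ins (I still need to check which 
API call actually aborts a job), so the REST calls are passed in as plain 
functions to keep the sketch self-contained:

```python
import time

def poll_with_abort(get_job_info, abort_job, job_id, timeout, delay=0.01):
    """Poll a job until it reports summaryDone; if `timeout` seconds pass
    first, abort the job instead of leaving it running forever.

    get_job_info / abort_job are injected callables; in the module they
    would wrap the corresponding OVM Manager REST requests.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        info = get_job_info(job_id)
        if info["summaryDone"]:
            return info
        time.sleep(delay)
    abort_job(job_id)  # auto-abort once the deadline is blown
    raise TimeoutError("job %s aborted after %ss" % (job_id, timeout))

# Quick demo with stubs: this fake job never finishes, so it gets aborted.
aborted = []
try:
    poll_with_abort(lambda j: {"summaryDone": False}, aborted.append,
                    "7", timeout=0.05)
except TimeoutError:
    pass
print(aborted)  # ['7']
```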

#### playbook ####
- hosts: 127.0.0.1
  connection: local

  tasks:

    - name: 'Stop Vm test'
      oracle_ovm2:
        state: stop
        user: xxxxxx
        password: xxxxxx
        vmName: test
        ovmHostname: ovmm-test
        ovmPort: 7002
      async: 3000
      poll: 0
      register: stop_job

    - name: 'OVM Check stop job'
      oracle_ovm2:
        state: jobinfo
        user: xxxxxxxxxx
        password: xxxxx
        vmName: test
        ovmHostname: ovmm-test
        ovmPort: 7002
        jobId: "{{ stop_job.job }}"
      async_status: jid={{ stop_job.ansible_job }}
      register: job_result
      until: job_result.job['summaryDone']
      retries: 10
      delay: 30
      failed_when: job_result.job['jobRunState'] != 'SUCCESS'

If I run the playbook like that, it complains about parse errors. The 
errors go away if I remove the module call from the second task, but then 
how can I send the GET request to the API for the job info?

Thank you very much in advance,

Nestor
