I'm exploring Ansible for managing HPC workflows. The use case: a user
prepares a set of files on a central computer (e.g., a workstation or laptop),
transfers them to several HPC machines, and submits a job to each of their
queuing systems. As soon as the job starts running on any one of them, the
jobs on the others are canceled. When the job finishes, the output files are
transferred back.
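For concreteness, here's a rough sketch of the fan-out-and-submit half as a playbook. It assumes a Slurm-style queue (`sbatch`) and an inventory group named `hpc` — both are placeholders for whatever the clusters actually run:

```yaml
---
# Stage the prepared files on every cluster and submit the same job to each.
- hosts: hpc
  tasks:
    - name: Push the input files to each cluster
      copy:
        src: inputs/          # trailing slash: copy the directory contents
        dest: "{{ ansible_env.HOME }}/job/inputs/"

    - name: Submit the job and capture its queue ID
      command: sbatch --parsable {{ ansible_env.HOME }}/job/run.sh
      register: submit

    - name: Keep the job ID around for later polling/cancellation
      set_fact:
        job_id: "{{ submit.stdout | trim }}"
```

The "first cluster to start wins, cancel the rest" step is the part I don't think any stock module covers — it needs cross-host coordination (e.g. polling `squeue` on each host and running `scancel` on the losers), so that's presumably where custom logic or a new module would come in.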

Ansible seems to have the necessary parts to transfer files, run commands,
poll status, pull data back from the remote machines, etc. I'm wondering
whether there are already modules that handle these kinds of tasks. If not,
what would you recommend for putting together such a solution? I imagine a
few new plugins and modules would have to be developed. This would be really
useful for avoiding the file-syncing disaster where each server has some, but
not all, of the latest data. A lot of people roll their own impromptu
solutions, but most just "live" with the pain.
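To make the "necessary parts" concrete, the transfer/run/poll/pull steps seem to map onto built-in modules roughly like this. The module names (`copy`, `command`, `fetch`) are real; the task bodies are only illustrative, again assuming a Slurm queue:

```yaml
- hosts: hpc
  tasks:
    # Transfer files out: copy (or synchronize, which wraps rsync)
    - copy:
        src: data/
        dest: /scratch/job/data/

    # Run commands: submit and capture the job ID
    - command: sbatch --parsable /scratch/job/run.sh
      register: submit

    # Poll status: retry a queue check until the job leaves the queue
    - command: squeue -h -j {{ submit.stdout | trim }}
      register: q
      until: q.stdout == ""
      retries: 360
      delay: 60

    # Pull results back to the control machine
    - fetch:
        src: /scratch/job/out.dat
        dest: results/
```

What's missing from this picture, as far as I can tell, is the orchestration layer on top: racing the clusters against each other and keeping the file state consistent across all of them.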

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/ansible-project/551c0b71-4c4d-4714-b90d-72577e5d1e98%40googlegroups.com.
