Tony,

Have you considered using Slurm job dependencies for this workflow? That way you can submit the initial job and the post-processing job at the same time, but set a dependency on the post-processing job so that it can't start until the first job has finished successfully. We've had users who manage fairly complicated analysis pipelines entirely with job dependencies.
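As a minimal sketch (sim.sh and post.sh here are just placeholders for your own batch scripts):

    # Submit the simulation and capture its job id
    jobid=$(sbatch --parsable sim.sh)

    # Submit post-processing right away; it sits pending until
    # the simulation finishes successfully (exit code 0)
    sbatch --dependency=afterok:${jobid} post.sh

If you want the post-processing to run regardless of whether the simulation succeeded, afterany instead of afterok will do that.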
Regards,
Pete

On 7/19/17, 10:07 AM, "Glover, Anthony E CTR USARMY RDECOM (US)" <anthony.e.glover....@mail.mil> wrote:

CLASSIFICATION: UNCLASSIFIED

Got a general question, but one that might be specifically addressed by Slurm - don't know.

We have a multi-process, distributed simulation that runs as a single job and generates a significant amount of data. At the end of that run, we would like to post-process the data. The post-processing currently consists of Python scripts wrapped up in Luigi workflows/tasks. We would like to distribute those tasks across the cluster as well to speed up the post-processing.

So, my question is: what is the best way to trigger submitting a job to Slurm based upon the completion of a previous job? I see that the strigger command can probably do what I need, but maybe it is more of a workflow question that I have. If we have, say, 100 of these simulation jobs in the queue, it would seem that I would want the post-processing to run at the end of each job, but if the trigger submits another job with multiple CPU needs, then that job would go to the back of the queue. I guess I could set the priority such that it jumps the remaining simulation jobs, or maybe a separate post-processing queue is more appropriate.

Anyway, just looking for some ideas as to how others might be addressing this type of problem. Any guidance would be much appreciated.

Thanks,
Tony

CLASSIFICATION: UNCLASSIFIED
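P.S. If you'd rather go the strigger route you mentioned, the shape would be roughly this (untested sketch; post_submit.sh is a placeholder for a script of yours that submits the Luigi post-processing jobs):

    # Submit the simulation and capture its job id
    jobid=$(sbatch --parsable sim.sh)

    # Ask slurmctld to run a program when that job finishes;
    # the program path must be fully qualified
    strigger --set --jobid=${jobid} --fini --program=/full/path/to/post_submit.sh

One caveat: as far as I know, --fini fires on job completion regardless of exit status, so the dependency approach gives you finer control over the success-versus-failure cases.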