Just to add some more ideas in response to your other comments, which I
noticed & read a little later.
It's not practical for a user to write rules for priority order or job
resource limits etc. other than through configuration of an existing
scheduling solution. Also, such config about job limits
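For illustration, limits and priority order of that kind would normally be
expressed through Slurm's own configuration, e.g. something along these
lines (the account name and values are hypothetical):

    # cap jobs for an account via the accounting database (hypothetical account)
    sacctmgr modify account where name=myproject set MaxJobs=10 MaxSubmitJobs=50

    # slurm.conf: let the multifactor plugin determine priority order
    PriorityType=priority/multifactor
    PriorityWeightAge=500
    PriorityWeightQOS=1000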
Hi Daniel,
Our jobs would have mutually exclusive sets of resources, though they may
share hosts. Resources are mainly CPU cores on hosts. Private trackable
resources can be defined according to needs & they would be exclusive to
jobs too.
Jobs, as I mentioned, can share hosts. Their resource gran
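As a rough sketch of what I mean by a private trackable resource, in Slurm
terms it could be modelled as a count-only GRES (the node and resource names
below are made up):

    # slurm.conf (hypothetical names)
    GresTypes=appres
    NodeName=node01 CPUs=32 Gres=appres:4

    # gres.conf on node01: plain countable resource, no plugin needed
    NodeName=node01 Name=appres Count=4 Flags=CountOnly

    # a job then claims some of that resource exclusively
    sbatch --gres=appres:1 job.sh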
Hi Bhaskar,
I think I'm missing something here.
All the points below are valid, but they are mainly a concern for shared
resources. You have your private, non-shared resources, dedicated to
the app, or so I understood.
Does your app compete against itself? Do users of the app interfere with
job
Hi Daniel,
Appreciate your response.
I think you may be feeling that since we take the placement part of
scheduling upon ourselves, Slurm has no other role to play!
That's not quite true. Below, in brief, are other important roles which Slurm
must perform that presently come to my mind (this
In the scenario you provide, you don't need anything special.
You just have to configure a partition that is available only to
you, and to no other account on the cluster. This partition will
only include your hosts. All other partitions will not include any
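A minimal sketch of such a setup, assuming hypothetical node and account
names:

    # slurm.conf (illustrative only)
    NodeName=app[01-04] CPUs=32 State=UNKNOWN
    # partition reserved for your account; no one else can submit to it
    PartitionName=app_private Nodes=app[01-04] AllowAccounts=your_acct State=UP
    # everyone else uses the general partition, which excludes app[01-04]
    PartitionName=general Nodes=gen[01-16] Default=YES State=UP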
Hi Daniel,
Thanks for picking up this query. Let me try to briefly describe my problem.
As you rightly guessed, we have some hardware on the backend which would be
used for our jobs to run. The app which manages the h/w has its own set of
resource placement/remapping rules to place a job.
So, fo
I'm not sure I understand why your app must decide the placement, rather
than tell Slurm about the requirements (this sounds suspiciously like
Not Invented Here syndrome), but Slurm does have the '-w' flag to
salloc, sbatch and srun.
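For example, if the app has already chosen the hosts, the submission can
simply pin the job to them (host names are hypothetical):

    # '-w' / '--nodelist' restricts the allocation to specific nodes
    sbatch -w node03,node07 --ntasks=2 job.sh
    srun --nodelist=node03,node07 --ntasks=2 ./my_task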
I just don't understand if you don't have an entire cluster