Have a look at MyCluster on PyPI. It was written by one of my colleagues, and it should be able to generate the kind of submission files you are after.
Patches are welcome too.

James

On 27 Jun 2016 12:55, "Baker D.J." <d.j.ba...@soton.ac.uk> wrote:
> Hello,
>
> At Southampton we are in the process of evaluating SLURM with a view to
> replacing torque/moab. It's highly likely that our next HPC cluster will
> use SLURM. One of our major concerns is migrating users from torque/moab
> to SLURM. In this respect a web-based job submission system or job script
> template would be invaluable.
>
> I'm aware of slurm-web; however, I get the impression that it is just for
> monitoring systems. That is, it does not provide a job submission
> mechanism. Is that correct? If so, is there anything out there that can
> be used as a web-based submission portal?
>
> I have come across a job script template developed by BYU in the US. It
> is simple to install; however, I see that it assumes that MPI jobs
> utilize all the cores on the requested compute nodes. That is, it
> generates a script template with the following line:
>
> #SBATCH --ntasks=32
>
> It would be great to hear from a site that has generalized this template
> to specify/limit the number of cores per node (due to memory
> requirements, for example) and the total number of cores in use. That
> is, a template that can generate the following scripting:
>
> #SBATCH --ntasks=16            # Number of processor cores (i.e. tasks)
> #SBATCH --nodes=4              # Number of nodes requested
> #SBATCH --ntasks-per-node=4    # Tasks per node
>
> If anyone has modified the BYU template with the above flexibility, it
> would be good to hear from them, please. We would be keen to have a copy
> of the modified scripts/executables, assuming they are happy to share
> their files/executables with us.
>
> Best regards,
>
> David
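For reference, a minimal sketch of the kind of generalized batch script David describes, i.e. one that limits tasks per node rather than filling every core. The geometry directives are the ones quoted above; the job name, walltime, memory value, and the ./my_mpi_app binary are placeholders, not anything from the BYU template:

#!/bin/bash
#SBATCH --job-name=mpi_test       # placeholder job name
#SBATCH --nodes=4                 # number of nodes requested
#SBATCH --ntasks-per-node=4       # tasks per node (limited, e.g. for memory)
#SBATCH --ntasks=16               # total tasks = nodes * ntasks-per-node
#SBATCH --time=01:00:00           # walltime limit (hh:mm:ss), placeholder
#SBATCH --mem-per-cpu=8G          # memory per task, placeholder

# srun inherits the geometry above, so 16 ranks start, 4 per node
srun ./my_mpi_app

A template generator would only need to expose nodes and ntasks-per-node as parameters and derive ntasks from their product.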