Ciao Riccardo, all,

thanks, this is indeed a step forward; it addresses a very common need. If you recall, I brought this up years ago. Is there any way this could be made somewhat more automated, e.g. "all goolfc builds are to go on resource/gpunodes"?
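Something along these lines could do it. This is only a rough sketch: the toolchain-to-resource mapping and the resource names (gpunodes, generic) are made-up examples, not anything EB provides out of the box:

    #!/bin/bash
    # Hypothetical wrapper around `eb --job`: pick the GC3Pie target
    # resource from the toolchain embedded in the easyconfig file name.
    ec="$1"; shift
    case "$ec" in
        *goolfc*) resource=gpunodes ;;  # goolfc builds go to the GPU nodes
        *)        resource=generic  ;;  # everything else to a default resource
    esac
    exec eb "$ec" --job --job-backend gc3pie \
        --job-target-resource "$resource" "$@"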
I am now eyeing the following list and wondering which concepts from other tools may be relevant here:
https://github.com/common-workflow-language/common-workflow-language/wiki/Existing-Workflow-systems

In short, we should remove human choice from the build process and, to the extent possible, cluster related builds together (e.g. I often prefer that each toolchain be built on a single dedicated node).

tia!
F.

On Dec 1, 2017, at 12:26 PM, Riccardo Murri <riccardo.mu...@uzh.ch> wrote:

> (Jure Pečar, Fri, Dec 01, 2017 at 01:17:05PM +0100:)
>> On Fri, 1 Dec 2017 12:34:42 +0100
>> Riccardo Murri <riccardo.mu...@uzh.ch> wrote:
>>
>>> Do I understand correctly that you want to generate programmatically
>>> a number of different values for `-C`?
>>
>> Yes, like nehalem, sandybridge, haswell, skylake, epyc ...
>>
>> So multiple backend definitions in gc3pie.conf would be an option. How do
>> I then tell eb --job which backend to use? --job-backend is a choice
>> between gc3pie and pbspython. I assume I can play with
>> --job-backend-config and have one backend per gc3pie.conf.<arch> file,
>> right? I'll try that ...
>
> That's one option. Another one is to define different GC3Pie resources
> in the same configuration file and use EB's `--job-target-resource`::
>
>     ### gc3pie.conf
>     [resource/nehalem]
>     # ... generic SLURM config here
>     sbatch = sbatch -C nehalem
>
>     [resource/sandybridge]
>     # ... (copy config from `nehalem`)
>     sbatch = sbatch -C sandybridge
>
> and then:
>
>     eb --job-backend gc3pie --job-target-resource nehalem ...
>
> Hope this helps,
> Riccardo
>
> --
> Riccardo Murri
>
> S3IT: Services and Support for Science IT
> University of Zurich

cheers,
Fotis

--
echo "sysadmin know better bash than english" | sed s/min/mins/ \
  | sed 's/better bash/bash better/'  # signal detected in a CERN forum
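P.S. For the per-architecture resources above, the stanzas could also be generated instead of hand-copied. A minimal, untested sketch, reusing the placeholder stanza and the architecture names from the thread (real stanzas would of course need the full generic SLURM settings filled in):

    # Append one [resource/<arch>] stanza per architecture to gc3pie.conf;
    # the stanzas differ only in the feature name passed to `sbatch -C`.
    for arch in nehalem sandybridge haswell skylake epyc; do
        {
            echo "[resource/$arch]"
            echo "# ... generic SLURM config here"
            echo "sbatch = sbatch -C $arch"
            echo
        } >> gc3pie.conf
    done

Each build can then be steered with `--job-target-resource <arch>` exactly as Riccardo shows.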