Hi Douglas,
Some time ago, I had to consolidate around a dozen HPC clusters, spread
globally, into a "common experience".
Inevitably, it included the complete spectrum of x86 CPUs released over a span
of several years.
The long and short of it is:
* Expert users will demand to build their own Python/R/other custom (e.g.
C++) codes, with some level of choice over their builds (think: some are
provided externally as RPMs).
* Any node predating the AVX microarchitectures either got phased out or was
set up in a special queue (i.e. avoid them).
* Any node from AVX onwards was free to be used at will for runs, but builds
were done in a well-planned manner.
* The assumptions above worked well because the majority of workloads were
I/O-bound; the rest were handcrafted.
* By default, builds targeted the AVX microarchitecture (e.g. Ivy Bridge
happened to be the common denominator).
* Builds were done on dedicated queues, and the supported scheme used SGE's
features, as needed, to steer each build to where it should go, per
microarchitecture.
* On that last point: each node has been tagged with a corresponding
`cpuflags` value, e.g. `:avx:avx2:avx512cd:avx512f:f16c:fma:lm:sse:sse4_2:`.
Whether you have SGE or not, copy this approach; it will allow you to do
reliable substring searches:
https://github.com/easybuilders/easybuild/wiki/Conference-call-notes-20171011
```
# CPU feature flags relevant for steering builds per microarchitecture
CPU_FLAGS_LIST="lm sse sse4_2 f16c avx avx2 avx512 avx512f avx512dq avx512cd avx512bw avx512vl eagerfpu epb ida fma"
# Intersect this node's /proc/cpuinfo flags with the list above and emit
# them as a colon-delimited, colon-wrapped tag (requires bash for <(...))
cpuflags="$(grep ^flags /proc/cpuinfo | uniq | cut -d: -f2- | xargs -n1 | sort \
    | comm -12 - <(echo $CPU_FLAGS_LIST | xargs -n1 | sort) \
    | xargs | sed 's/ /:/g;s/$/:/g;s/^/:/g')"
```
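For what it's worth, the leading and trailing colons are what make the
substring search reliable: every flag is wrapped in colons, so a match on
":avx:" can never fire inside ":avx2:" or ":avx512f:". A quick sketch (the
tag value is the example from above):

```shell
#!/bin/sh
# Minimal sketch of the exact-match substring test the tag format enables.
# The tag value below is the example tag quoted earlier in this mail.
cpuflags=":avx:avx2:avx512cd:avx512f:f16c:fma:lm:sse:sse4_2:"

has_flag() {
    # Succeed iff flag $1 appears (colon-wrapped) in $cpuflags
    case "$cpuflags" in
        *":$1:"*) return 0 ;;
        *)        return 1 ;;
    esac
}

has_flag avx2     && echo "avx2 present"
has_flag avx512bw || echo "avx512bw absent"
```

The same colon-wrapped pattern works verbatim as a wildcard match in an SGE
resource request against the node tag.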
That's all, folks! It basically worked well, with only very minor maintenance
and without annoying surprises.
The name of the game is to watch your allocation of CPU horsepower and check
whether some user needs further help to optimise their code.
enjoy - I hope this helps,
F.
--
echo "sysadmin know better bash than english" | sed s/min/mins/ \
| sed 's/better bash/bash better/' # Yelling in a CERN forum
From: easybuild-requ...@lists.ugent.be [easybuild-requ...@lists.ugent.be] on
behalf of Douglas Scofield [douglas.scofi...@ebc.uu.se]
Sent: Thursday, November 07, 2019 12:57
To: easybuild@lists.ugent.be
Subject: [easybuild] EasyBuild in a heterogeneous HPC centre
Hej,
We are curious how to maintain architecture-specific EasyBuild trees. We are
new to EasyBuild and already have many, many (mostly bioinformatics) tools
installed in hand-maintained module and software trees. In our centre, we have
clusters running Sandy Bridge EP, Haswell EP, and Broadwell EP. Most of our
users are on Broadwell, so if we compile with -march=native etc., we compile
for Broadwell and for Sandy Bridge, which covers both it and Haswell.
In our standard module tree we manage architecture automatically, keying off a
$Cluster variable set by the module system. We handle architectures as if we
had modules versioned Tool/Version/Architecture, with the last bit hidden from
the user.
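A sketch of what we mean (the cluster names, the arch mapping, and the
/sw/modules prefix are hypothetical; only the $Cluster variable is from our
setup):

```shell
#!/bin/sh
# Hypothetical sketch: key a per-architecture module tree off a
# $Cluster-style variable set by the module system.
arch_for_cluster() {
    # Map a cluster name to its CPU microarchitecture (illustrative names)
    case "$1" in
        *sandybridge*) echo sandybridge ;;
        *haswell*)     echo haswell ;;
        *)             echo broadwell ;;   # most of our users are on Broadwell
    esac
}

# A login script could then prepend the matching tree:
arch=$(arch_for_cluster "${Cluster:-broadwell01}")
MODULEPATH="/sw/modules/$arch${MODULEPATH:+:$MODULEPATH}"
export MODULEPATH
```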
We have also decided to (mostly) hide our EasyBuild tree from the user, and
instead provide access to EasyBuild-built tools using what we are calling alias
modules, which we place in our standard module tree. An alias module performs
a 'module use' of the easybuild tree and then loads the appropriate
EasyBuild-built modules. The large majority of our users do not care about
toolchains, etc. For those that do, we will have docs they can consult for working
with EasyBuild modules directly. The very large majority of our installed
tools do not currently have easyconfigs.
We are guessing that our architecture solution with EasyBuild will end up being
completely separate EasyBuild trees, accessed using distinct 'module use'
paths. The EasyBuild docs point to a 2015 presentation by Pablo Escobar
describing an automounter solution which we are definitely not going to
implement, but this suggests completely separate trees as well.
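A sketch of how such separate trees might be driven (EASYBUILD_INSTALLPATH
and EASYBUILD_OPTARCH are real EasyBuild configuration settings; the /sw/eb
prefix, the helper function, and the optarch value are illustrative only):

```shell
#!/bin/sh
# Hypothetical helper: map a microarchitecture tag to its own EasyBuild tree.
# EASYBUILD_INSTALLPATH / EASYBUILD_OPTARCH are genuine EasyBuild settings;
# the /sw/eb prefix and the optarch value format here are made up.
eb_tree_for() {
    echo "/sw/eb/$1"
}

arch=sandybridge                      # lowest common denominator build host
EASYBUILD_INSTALLPATH="$(eb_tree_for $arch)"
EASYBUILD_OPTARCH="march=$arch"       # forwarded to the compiler toolchain
export EASYBUILD_INSTALLPATH EASYBUILD_OPTARCH

# Users would then opt in to the matching tree, e.g.:
#   module use /sw/eb/sandybridge/modules/all
```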
How is this typically handled by other centres?
Thanks in advance,
Douglas
—
Douglas G. Scofield
Evolutionary Biology Centre, Uppsala University
douglas.scofi...@ebc.uu.se
douglasgscofi...@gmail.com
E-mailing Uppsala University means that we will process your personal data. For
more information on how this is performed, please read here:
http://www.uu.se/en/about-uu/data-protection-policy