The present thread contains many excellent suggestions. To keep life
simple, I've chosen to maintain separate EB trees for each CPU
architecture.
One question remains: how to determine the current host's CPU
architecture. The best solution I've been able to find is to use
GCC 4.9 or newer:
module load GCC
gcc -march=native -Q --help=target | grep march
The architecture can be made available to users with a script
/etc/profile.d/cpu_arch.sh (for example):
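The script itself isn't quoted in the thread; a minimal sketch of what
such a /etc/profile.d/cpu_arch.sh could look like, reusing the GCC
command above (the CPU_ARCH variable name and the uname fallback are my
assumptions):

```shell
# Hypothetical /etc/profile.d/cpu_arch.sh: export the native architecture
# reported by GCC, falling back to `uname -m` when GCC is unavailable.
if command -v gcc >/dev/null 2>&1; then
    # GCC prints a line like "  -march=    haswell"; keep only the value.
    CPU_ARCH=$(gcc -march=native -Q --help=target 2>/dev/null \
               | awk '$1 == "-march=" { print $2 }')
fi
# Fall back if GCC is missing or too old to resolve -march=native
[ -n "$CPU_ARCH" ] || CPU_ARCH=$(uname -m)
export CPU_ARCH
```

Users (and module files) can then branch on $CPU_ARCH to pick the
matching EB tree.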
This has been summarized in our Wiki:
On 04-10-2016 10:06, Kenneth Hoste wrote:
On 30/09/16 18:55, Martin wrote:
I think this is a recurring question.
My impression is that "HPC" somehow implies a divide among people.
With quite some exaggeration:
One group wants to squeeze out every possible CPU cycle and in turn is
willing to invest the time to recompile multiple times, even within
the same CPU family (like here: Xeon v1, v2, ..., v5).
The second group (like me) would simply like to have repeatable
builds. I'd rather compile against a Pentium I and have reliable
builds that run on all the hardware that is floating around.
Is this something that should be discussed more broadly?
Maybe whether testing should include different sets of CPU flags? I'm
pretty sure Kenneth (or UGent) can't pull this off alone. Something
where volunteers could provide build slaves or similar. I constantly
run into the problem that some easyconfigs seem to work very well for
most people, but when I do a rollout at my current client, they work
on only half the nodes due to illegal instruction errors.
We tend to test things with a default configuration (well, except for
using Lmod as modules tool, which will become the default soon), so we
optimize for the architecture we're building on.
Do you have examples of software that fails with illegal instruction
errors? Where are you building the software that is meant to run
everywhere, and with which configuration? Are you using
--optarch=GENERIC?
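For build-once-run-anywhere setups, EasyBuild's --optarch=GENERIC
option tells the compiler to target a generic architecture instead of
the build host's. A sketch of the two equivalent ways to set it (the
easyconfig filename below is just a placeholder):

```shell
# Configure EasyBuild to produce generic, run-anywhere binaries; the
# EASYBUILD_OPTARCH environment variable is equivalent to passing
# --optarch=GENERIC on the command line.
export EASYBUILD_OPTARCH=GENERIC
# One-off alternative (placeholder easyconfig name, for illustration):
#   eb --optarch=GENERIC SomeSoftware-1.0-foss-2016b.eb
```

The trade-off is the one described above: generic binaries avoid
illegal instruction errors on older nodes, at the cost of some
performance on newer ones.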