On Wed, 24 Dec 2014 10:47:37 AM Prentice Bisbal wrote:

> I see the logic in having separate /usr/local for every cluster so you 
> can install optimized binaries for each processor, but do you find your 
> users take the time to recompile their own codes for each processor 
> type, or did you come up with this arrangement to force them to do so?

As we support the life sciences, most of our users are not programmers, so we 
build a lot of software for them, and each system has its own /usr/local.  We 
do have some people who build their own code, but not that many.

Another reason is that we keep all our healthcheck scripts in /usr/local 
(synced from a central git repo over Ethernet), so they keep working even if 
IB has issues (which would make GPFS unavailable on a node) and can flag the 
problem and take the node offline automatically in Slurm.
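
For the curious, the idea boils down to something like the sketch below - this 
is not our actual script, and the mount point, timeout and drain reason are 
placeholders - but it shows the shape of it: check that GPFS answers within a 
timeout, and if it doesn't, drain the node with scontrol so Slurm stops 
scheduling jobs onto it.

    #!/usr/bin/env python3
    # Sketch of a local healthcheck: probe the GPFS mount with a timeout
    # and drain the node in Slurm if it does not respond.
    import socket
    import subprocess
    import sys

    GPFS_MOUNT = "/gpfs"      # placeholder mount point, adjust per site
    CHECK_TIMEOUT = 10        # seconds before we assume the mount has hung

    def gpfs_ok():
        """Return True if the GPFS filesystem responds within the timeout."""
        try:
            subprocess.run(["stat", "-f", GPFS_MOUNT],
                           timeout=CHECK_TIMEOUT,
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL,
                           check=True)
            return True
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            return False

    def drain_node(reason):
        """Mark this node as draining so Slurm stops scheduling onto it."""
        node = socket.gethostname().split(".")[0]
        subprocess.run(["scontrol", "update",
                        "NodeName=" + node,
                        "State=DRAIN",
                        "Reason=" + reason],
                       check=False)

    if __name__ == "__main__":
        if not gpfs_ok():
            drain_node("healthcheck: GPFS unavailable")
            sys.exit(1)
        sys.exit(0)

Because the script lives under the node's own /usr/local it doesn't need the 
parallel filesystem to be healthy in order to report that the parallel 
filesystem is broken.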

All the best,
Chris
-- 
 Christopher Samuel        Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: [email protected] Phone: +61 (0)3 903 55545
 http://www.vlsci.org.au/      http://twitter.com/vlsci
