Re: [easybuild] gaussian build with eb

2017-03-02 Thread Åke Sandgren
PR 4247: binary install only.

On 03/03/2017 07:57 AM, Åke Sandgren wrote:
> Yes, I have both.
> Did I forget to push them too?
> 
> On 03/03/2017 05:08 AM, Siddiqui, Shahzeb wrote:
>> Hello,
>>
>>  
>>
>> Anyone have a Gaussian g16 or g09 build with EasyBuild? I can't seem to
>> find any easyconfig in the repo or in the PRs.
>>
>>  
>>
>> Regards,
>>
>>  
>>
>> Shahzeb Siddiqui
>>
>> HPC Linux Engineer
>>
>> B2220-447.2
>>
>> Groton, CT
>>
>>  
>>
> 

-- 
Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
Internet: a...@hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90-580 14
Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se
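
For reference, here is a rough sketch of what a binary-only Gaussian easyconfig could look like. This is an illustration only, not the contents of PR 4247; the version string, tarball name, environment variable and sanity-check path are placeholders that would have to match the actual distribution, which sites must supply themselves given Gaussian's licensing.

# Illustrative sketch only -- not the easyconfig from PR 4247.
easyblock = 'Tarball'   # generic easyblock: simply unpack the sources into the install dir

name = 'Gaussian'
version = 'g09.D.01'    # placeholder version label

homepage = 'http://www.gaussian.com'
description = "Gaussian electronic-structure package, binary-only installation."

toolchain = {'name': 'dummy', 'version': 'dummy'}

# Gaussian cannot be downloaded; the tarball is expected in the EasyBuild sourcepath.
sources = ['%(name)s-%(version)s.tar.gz']

# Minimal runtime environment (illustrative; real installs typically set more,
# e.g. GAUSS_EXEDIR and a site-specific scratch directory).
modextravars = {'GAUSS_SCRDIR': '/tmp'}

sanity_check_paths = {
    'files': ['g09/g09'],   # placeholder: adjust to the layout of the unpacked tarball
    'dirs': [],
}

moduleclass = 'chem'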


Re: [easybuild] gaussian build with eb

2017-03-02 Thread Åke Sandgren
Yes, I have both.
Did I forget to push them too?

On 03/03/2017 05:08 AM, Siddiqui, Shahzeb wrote:
> Hello,
> 
>  
> 
> Anyone have a Gaussian g16 or g09 build with EasyBuild? I can't seem to
> find any easyconfig in the repo or in the PRs.
> 
>  
> 
> Regards,
> 
>  
> 
> Shahzeb Siddiqui
> 
> HPC Linux Engineer
> 
> B2220-447.2
> 
> Groton, CT
> 
>  
> 

-- 
Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
Internet: a...@hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90-580 14
Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se


[easybuild] gaussian build with eb

2017-03-02 Thread Siddiqui, Shahzeb
Hello,

Anyone have a Gaussian g16 or g09 build with EasyBuild? I can't seem to find
any easyconfig in the repo or in the PRs.

Regards,

Shahzeb Siddiqui
HPC Linux Engineer
B2220-447.2
Groton, CT



Re: [easybuild] FOSS vs CUDA

2017-03-02 Thread Robert Schmidt
I don't think anyone feels very strongly about foss ideologically; it is
just a name that is better than goolf. The bioinfo people tend to use it
for ease of support, since much of the software is already built with it, and
absolute best performance isn't always more important than getting the
compilation done in less time.


On Thu, Mar 2, 2017 at 6:55 PM Maxime Boissonneault <
maxime.boissonnea...@calculquebec.ca> wrote:

> Hi David,
> Understood. We also go for minimal toolchains. We are, however, mostly doing:
>
> dummy -> GCCcore -> iccifort -> iompi -> iomkl -> iomklc
> and
> dummy -> GCCcore -> gcc -> gompi -> gomkl -> gomklc
>
>
> Maxime
>
>
>
> On 17-03-02 18:38, Vanzo, Davide wrote:
>
> Maxime,
> your point is totally legitimate. My approach is less about philosophy and
> more about practicality.
> We picked the foss toolchain instead of the goolf toolchain because of its
> more collaborative nature and scheduled release. The problem is that if we
> now start using a goolfc toolchain, we would not get the benefit of reusing
> most of the software built with foss, since we build with minimal
> toolchains. Hence I proposed starting a fosscuda toolchain that is
> aligned with the foss release. That's it.
>
> --
> Davide Vanzo, PhD
> Application Developer
> Adjunct Assistant Professor of Chemical and Biomolecular Engineering
> Advanced Computing Center for Research and Education (ACCRE)
> Vanderbilt University - Hill Center 201
> (615)-875-9137
> www.accre.vanderbilt.edu
>
> On Mar 2 2017, at 5:30 pm, Maxime Boissonneault wrote:
>
> Hi,
>
> I've seen a couple of emails about CUDA recently, and I was a bit surprised
> to see work being done on FOSS and CUDA.
>
> Isn't the whole point of FOSS to be free and open source? CUDA is not
> open source. Won't die-hard fans of FOSS object to having CUDA in a FOSS
> toolchain?
>
> I personally don't really care; I just want the best performance for my
> users (which is why we don't go with FOSS in the first place, since MKL
> gives better performance than OpenBLAS).
>
> I just thought I'd raise the question.
>
>
> --
> -
> Maxime Boissonneault
> Analyste de calcul - Calcul Québec, Université Laval
> Président - Comité de coordination du soutien à la recherche de Calcul
> Québec
> Team lead - Research Support National Team, Compute Canada
> Instructeur Software Carpentry
> Ph. D. en physique
>
>
>
> --
> -
> Maxime Boissonneault
> Analyste de calcul - Calcul Québec, Université Laval
> Président - Comité de coordination du soutien à la recherche de Calcul Québec
> Team lead - Research Support National Team, Compute Canada
> Instructeur Software Carpentry
> Ph. D. en physique
>
>


Re: [easybuild] FOSS vs CUDA

2017-03-02 Thread Maxime Boissonneault

Hi David,
Understood. We also go for minimal toolchains. We are, however, mostly doing:

dummy -> GCCcore -> iccifort -> iompi -> iomkl -> iomklc
and
dummy -> GCCcore -> gcc -> gompi -> gomkl -> gomklc

Maxime


On 17-03-02 18:38, Vanzo, Davide wrote:

Maxime,
your point is totally legitimate. My approach is less about philosophy
and more about practicality.
We picked the foss toolchain instead of the goolf toolchain because of
its more collaborative nature and scheduled release. The problem is
that if we now start using a goolfc toolchain, we would not get the
benefit of reusing most of the software built with foss, since we build
with minimal toolchains. Hence I proposed starting a
fosscuda toolchain that is aligned with the foss release. That's it.


--
Davide Vanzo, PhD
Application Developer
Adjunct Assistant Professor of Chemical and Biomolecular Engineering
Advanced Computing Center for Research and Education (ACCRE)
Vanderbilt University - Hill Center 201
(615)-875-9137
www.accre.vanderbilt.edu

On Mar 2 2017, at 5:30 pm, Maxime Boissonneault wrote:


Hi,

I've seen a couple of emails about CUDA recently, and I was a bit
surprised to see work being done on FOSS and CUDA.

Isn't the whole point of FOSS to be free and open source? CUDA is not
open source. Won't die-hard fans of FOSS object to having CUDA in a
FOSS toolchain?

I personally don't really care; I just want the best performance for my
users (which is why we don't go with FOSS in the first place, since MKL
gives better performance than OpenBLAS).

I just thought I'd raise the question.


-- 
-

Maxime Boissonneault
Analyste de calcul - Calcul Québec, Université Laval
Président - Comité de coordination du soutien à la recherche de
Calcul Québec
Team lead - Research Support National Team, Compute Canada
Instructeur Software Carpentry
Ph. D. en physique




--
-
Maxime Boissonneault
Analyste de calcul - Calcul Québec, Université Laval
Président - Comité de coordination du soutien à la recherche de Calcul Québec
Team lead - Research Support National Team, Compute Canada
Instructeur Software Carpentry
Ph. D. en physique



Re: [easybuild] FOSS vs CUDA

2017-03-02 Thread Vanzo, Davide
Maxime,
your point is totally legitimate. My approach is less about philosophy and more
about practicality.
We picked the foss toolchain instead of the goolf toolchain because of its more
collaborative nature and scheduled release. The problem is that if we now start
using a goolfc toolchain, we would not get the benefit of reusing most of the
software built with foss, since we build with minimal toolchains. Hence I
proposed starting a fosscuda toolchain that is aligned with the foss
release. That's it.

--
Davide Vanzo, PhD
Application Developer
Adjunct Assistant Professor of Chemical and Biomolecular Engineering
Advanced Computing Center for Research and Education (ACCRE)
Vanderbilt University - Hill Center 201
(615)-875-9137
www.accre.vanderbilt.edu

On Mar 2 2017, at 5:30 pm, Maxime Boissonneault wrote:

Hi,

I've seen a couple of emails about CUDA recently, and I was a bit surprised
to see work being done on FOSS and CUDA.

Isn't the whole point of FOSS to be free and open source? CUDA is not
open source. Won't die-hard fans of FOSS object to having CUDA in a FOSS
toolchain?

I personally don't really care; I just want the best performance for my
users (which is why we don't go with FOSS in the first place, since MKL
gives better performance than OpenBLAS).

I just thought I'd raise the question.

--
-
Maxime Boissonneault
Analyste de calcul - Calcul Québec, Université Laval
Président - Comité de coordination du soutien à la recherche de Calcul Québec
Team lead - Research Support National Team, Compute Canada
Instructeur Software Carpentry
Ph. D. en physique
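
To make the fosscuda idea above concrete, here is a rough sketch of what such a toolchain bundle could look like, modelled on how existing toolchain easyconfigs are written. All component versions, the 2017.01 label and the choice of sub-toolchains are placeholders for discussion, not an agreed definition.

# Rough sketch of a 'fosscuda' toolchain bundle aligned with a foss release.
# Names and versions below are placeholders for discussion only.
easyblock = 'Toolchain'   # generic easyblock used for toolchain bundles

name = 'fosscuda'
version = '2017.01'       # would track the matching foss release

homepage = '(none)'
description = "GCC-based toolchain with CUDA, OpenMPI, OpenBLAS/LAPACK, ScaLAPACK and FFTW."

toolchain = {'name': 'dummy', 'version': 'dummy'}

dependencies = [
    ('GCC', '6.3.0-2.27'),                                      # placeholder versions throughout
    ('CUDA', '8.0.44', '', ('GCC', '6.3.0-2.27')),
    ('OpenMPI', '2.0.2', '', ('GCC', '6.3.0-2.27')),
    ('OpenBLAS', '0.2.19', '-LAPACK-3.7.0', ('GCC', '6.3.0-2.27')),
    # in a real definition the MPI-level components would presumably be built
    # with a CUDA-aware sub-toolchain (cf. gompic/goolfc) rather than plain gompi
    ('FFTW', '3.3.6', '', ('gompi', '2017.01')),
    ('ScaLAPACK', '2.0.2', '-OpenBLAS-0.2.19-LAPACK-3.7.0', ('gompi', '2017.01')),
]

moduleclass = 'toolchain'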


[easybuild] FOSS vs CUDA

2017-03-02 Thread Maxime Boissonneault

Hi,

I've seen a couple of emails about CUDA recently, and I was a bit surprised
to see work being done on FOSS and CUDA.

Isn't the whole point of FOSS to be free and open source? CUDA is not
open source. Won't die-hard fans of FOSS object to having CUDA in a FOSS
toolchain?

I personally don't really care, I just want the best performance for my
users (which is why we don't go with FOSS in the first place, since MKL
gives better performance than OpenBLAS).

I just thought I'd raise the question.


--
-
Maxime Boissonneault
Analyste de calcul - Calcul Québec, Université Laval
Président - Comité de coordination du soutien à la recherche de Calcul Québec
Team lead - Research Support National Team, Compute Canada
Instructeur Software Carpentry
Ph. D. en physique



Re: [easybuild] sanity check issue CUDA with GCC

2017-03-02 Thread Benjamin Evans
Shahzeb,

I had a similar error a few days ago. The real error is probably somewhere in
the build log. Without any patching, CUDA refuses to install if your gcc is too
new (for CUDA 7.5 it can't be newer than gcc 4.8). For a list of compatible CUDA
and gcc versions in one place, see here.

Cheers,
Ben

On Thu, Mar 2, 2017 at 3:23 PM, Siddiqui, Shahzeb <
shahzeb.siddi...@pfizer.com> wrote:

> Hello,
>
>
>
> I am puzzled why I am running into an issue when rebuilding CUDA with GCC
> support. It works fine when building with the dummy toolchain.
>
>
>
> hpcswadm@hpcv18$eb CUDA-7.5.18-GCC-6.2.0.eb -r ..
>
> == temporary log file in case of crash /tmp/eb-ds88em/easybuild-YiC3Kf.log
>
> == resolving dependencies ...
>
> == processing EasyBuild easyconfig /hpc/hpcswadm/easybuild/CUDA/
> CUDA-7.5.18-GCC-6.2.0.eb
>
> == building and installing Compiler/GCC/6.2.0/CUDA/7.5.18...
>
> == fetching files...
>
> == creating build dir, resetting environment...
>
> == unpacking...
>
> == patching...
>
> == preparing...
>
> == configuring...
>
> == building...
>
> == testing...
>
> == installing...
>
> == taking care of extensions...
>
> == postprocessing...
>
> == sanity checking...
>
> == FAILED: Installation ended unsuccessfully (build directory:
> /nfs/grid/software/RHEL7-BUILD/easybuild/build/CUDA/7.5.18/GCC-6.2.0):
> build failed (first 300 chars): Sanity check failed: no file of
> ('bin/fatbinary',) in /nfs/grid/software/testing/RHEL7/easybuild/software/
> Compiler/GCC/6.2.0/CUDA/7.5.18, no file of ('bin/nvcc',) in
> /nfs/grid/software/testing/RHEL7/easybuild/software/
> Compiler/GCC/6.2.0/CUDA/7.5.18, no file of ('bin/nvlink',) in
> /nfs/grid/software/t
>
> == Results of the build can be found in the log file(s)
> /tmp/eb-ds88em/easybuild-CUDA-7.5.18-20170302.152200.XsipE.log
>
> ERROR: Build of /hpc/hpcswadm/easybuild/CUDA/CUDA-7.5.18-GCC-6.2.0.eb
> failed (err: "build failed (first 300 chars): Sanity check failed: no file
> of ('bin/fatbinary',) in /nfs/grid/software/testing/
> RHEL7/easybuild/software/Compiler/GCC/6.2.0/CUDA/7.5.18, no file of
> ('bin/nvcc',) in /nfs/grid/software/testing/RHEL7/easybuild/software/
> Compiler/GCC/6.2.0/CUDA/7.5.18, no file of ('bin/nvlink',) in
> /nfs/grid/software/t")
>
>
>
> Shahzeb Siddiqui
>
> HPC Linux Engineer
>
> B2220-447.2
>
> Groton, CT
>
>
>
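
For context, a minimal sketch of how the version gate plays out in an easyconfig: if CUDA 7.5.18 is paired with a GCC newer than the installer accepts, the installer exits before creating bin/nvcc, bin/nvlink and bin/fatbinary, which is exactly what the sanity check above then complains about. Pinning an older GCC avoids that. The easyconfig keys below are standard, but the GCC version and source filename are placeholders.

# Sketch: CUDA 7.5.18 paired with a GCC old enough to pass the installer's
# host-compiler check (placeholder version; anything newer than the GCC 4.8
# era makes the 7.5 installer bail out before installing anything).
name = 'CUDA'
version = '7.5.18'

homepage = 'https://developer.nvidia.com/cuda-toolkit'
description = "NVIDIA CUDA toolkit: compilers, libraries and tools for GPUs."

toolchain = {'name': 'GCC', 'version': '4.8.4'}   # placeholder, deliberately pre-4.9

# The run-file is expected in the EasyBuild sourcepath; no source_urls given here.
sources = ['cuda_%(version)s_linux.run']

# The same files the failed build was checked for.
sanity_check_paths = {
    'files': ['bin/fatbinary', 'bin/nvcc', 'bin/nvlink'],
    'dirs': ['include', 'lib64'],
}

moduleclass = 'system'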


[easybuild] sanity check issue CUDA with GCC

2017-03-02 Thread Siddiqui, Shahzeb
Hello,

I am puzzled why I am running into an issue when rebuilding CUDA with GCC
support. It works fine when building with the dummy toolchain.

hpcswadm@hpcv18$eb CUDA-7.5.18-GCC-6.2.0.eb -r ..
== temporary log file in case of crash /tmp/eb-ds88em/easybuild-YiC3Kf.log
== resolving dependencies ...
== processing EasyBuild easyconfig 
/hpc/hpcswadm/easybuild/CUDA/CUDA-7.5.18-GCC-6.2.0.eb
== building and installing Compiler/GCC/6.2.0/CUDA/7.5.18...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== FAILED: Installation ended unsuccessfully (build directory: 
/nfs/grid/software/RHEL7-BUILD/easybuild/build/CUDA/7.5.18/GCC-6.2.0): build 
failed (first 300 chars): Sanity check failed: no file of ('bin/fatbinary',) in 
/nfs/grid/software/testing/RHEL7/easybuild/software/Compiler/GCC/6.2.0/CUDA/7.5.18,
 no file of ('bin/nvcc',) in 
/nfs/grid/software/testing/RHEL7/easybuild/software/Compiler/GCC/6.2.0/CUDA/7.5.18,
 no file of ('bin/nvlink',) in /nfs/grid/software/t
== Results of the build can be found in the log file(s) 
/tmp/eb-ds88em/easybuild-CUDA-7.5.18-20170302.152200.XsipE.log
ERROR: Build of /hpc/hpcswadm/easybuild/CUDA/CUDA-7.5.18-GCC-6.2.0.eb failed 
(err: "build failed (first 300 chars): Sanity check failed: no file of 
('bin/fatbinary',) in 
/nfs/grid/software/testing/RHEL7/easybuild/software/Compiler/GCC/6.2.0/CUDA/7.5.18,
 no file of ('bin/nvcc',) in 
/nfs/grid/software/testing/RHEL7/easybuild/software/Compiler/GCC/6.2.0/CUDA/7.5.18,
 no file of ('bin/nvlink',) in /nfs/grid/software/t")

Shahzeb Siddiqui
HPC Linux Engineer
B2220-447.2
Groton, CT



Re: [easybuild] MODULEPATH issue when supporting intel and intelcuda toolchain concurrently

2017-03-02 Thread Alan O'Cais
At JSC we handle this issue by treating CUDA as a simple dependency of packages
built at the compiler level; we only incorporate it into a toolchain when we
use a CUDA-aware MPI (which means that the MODULEPATH expansion only happens
once rather than twice, once for CUDA and once for MPI). Since our MPI
implementations are in a "family", this is very safe. It also has very few
side effects, because how CUDA is included is very heterogeneous across packages
and typically needs to be implemented by hand anyway.

On 2 March 2017 at 16:31, Alan O'Cais wrote:
Dear Shahzeb,

I think this is probably the same as (or at least related to) the issue that is
being discussed in https://github.com/hpcugent/easybuild-framework/pull/2135

It also exposes one of the problems of an HMNS: the potential non-uniqueness of
module names. The problem with not building software with minimal toolchains is
that you can have multiple copies at various levels of your toolchain
hierarchy. Which module you end up loading is then dependent on the order in
which you load things (perhaps not in Lmod, because it is hierarchy-aware, but
definitely for other module tools). This can clearly lead to issues.

Alan

On 2 March 2017 at 15:32, Siddiqui, Shahzeb wrote:
Hello,

I seem to have run into an issue when building modules using HierarchicalNamingScheme
while building out the intel and intelcuda toolchains.

I notice that MODULEPATH is set for icc and ifort to the intel directory. This is
correct when setting up the intel toolchain.

hpcswadm@hpcv18$grep -iR MODULEPATH
icc/2017.1.132-GCC-5.2.0.lua:prepend_path("MODULEPATH", 
"/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0")
ifort/2017.1.132-GCC-5.2.0.lua:prepend_path("MODULEPATH", 
"/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0")

impi gets installed in the path for intel as expected.

hpcswadm@hpcv18$ls -R 
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0/
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0/:
impi

/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0/impi:
2017.1.132.lua

As for impi built with iccifortcuda, it gets installed in:

hpcswadm@hpcv18$ls -R 
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/:
2017.1.132-GCC-5.2.0-7.5.18

/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/2017.1.132-GCC-5.2.0-7.5.18:
impi

/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/2017.1.132-GCC-5.2.0-7.5.18/impi:
2017.1.132.lua

The problem is that when loading the iimpic toolchain, it loads up the icc and
ifort modules along with an impi that belongs to the module tree from intel and
not intel-CUDA. I am not sure if this is a problem, but it seems like impi is
not being picked up correctly.

hpcswadm@hpcv18$ml av iimpi
-------- /nfs/grid/software/testing/RHEL7/easybuild/modules/all/Core --------
   iimpi/2017.01-GCC-5.2.0 (TC)    iimpic/2017.01-GCC-5.2.0 (TC)

hpcswadm@hpcv18$ml

Currently Loaded Modules:
  1) EasyBuild/3.1.0

hpcswadm@hpcv18$ml iimpic
Currently Loaded Modules:
  1) EasyBuild/3.1.0   3) icc/2017.1.132-GCC-5.2.0   (I)   5) impi/2017.1.132 (I)   7) iimpic/2017.01-GCC-5.2.0 (TC)
  2) GCC/5.2.0         4) ifort/2017.1.132-GCC-5.2.0 (I)   6) CUDA/7.5.18

hpcswadm@hpcv18$which mpicc
/nfs/grid/software/testing/RHEL7/easybuild/software/Compiler/intel/2017.1.132-GCC-5.2.0/impi/2017.1.132/bin64/mpicc

The one that should be loaded is from
/nfs/grid/software/testing/RHEL7/easybuild/software/Compiler/intel-CUDA/2017.1.132-GCC-5.2.0-7.5.18/impi/2017.1.132/bin64/mpicc

I think the impi module should not sit inside the intel directory, or somehow
the icc and ifort MODULEPATH entries need to be changed to intel-CUDA when
loading iimpic:
-------- /nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0 --------
   impi/2017.1.132 (I,L)

Has anyone else come across this issue?


Shahzeb Siddiqui
HPC Linux Engineer
B2220-447.2
Groton, CT




--
Dr. Alan O'Cais
E-CAM Software Manager
Juelich Supercomputing Centre
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49 2461 61 5213
Fax: +49 2461 61 6656
E-mail: a.oc...@fz-juelich.de
WWW: http://www.fz-juelich.de/ias/jsc/EN



--
Dr. Alan O'Cais
E-CAM Software Manager
Juelich Supercomputing Centre
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49 2461 61 5213
Fax: +49 2461 61 6656
E-mail: a.oc...@fz-juelich.de
WWW: http://www.fz-juelich.de/ias/jsc/EN
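
To illustrate the approach described above: instead of baking CUDA into the toolchain, a package built at the compiler level simply lists CUDA as an ordinary dependency, so loading it does not introduce an extra MODULEPATH expansion. The fragment below is hypothetical; the package name and all versions are made up for illustration.

# Hypothetical fragment: CUDA as a plain dependency of a compiler-level package,
# rather than as part of an iccifortcuda/goolfc-style toolchain.
easyblock = 'ConfigureMake'

name = 'MyGPUApp'        # made-up package name
version = '1.0'

homepage = 'https://example.org/mygpuapp'
description = "Example GPU-accelerated application (illustrative only)."

toolchain = {'name': 'GCC', 'version': '5.2.0'}   # built at the compiler level

sources = [SOURCE_TAR_GZ]

# CUDA pulled in as a regular dependency; no *cuda toolchain and therefore no
# additional MODULEPATH expansion when this module is loaded.
dependencies = [
    ('CUDA', '7.5.18'),
]

sanity_check_paths = {
    'files': ['bin/mygpuapp'],
    'dirs': [],
}

moduleclass = 'tools'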



Re: [easybuild] MODULEPATH issue when supporting intel and intelcuda toolchain concurrently

2017-03-02 Thread Alan O'Cais
Dear Shahzeb,

I think this is probably the same as (or at least related to) the issue that is
being discussed in https://github.com/hpcugent/easybuild-framework/pull/2135

It also exposes one of the problems of an HMNS: the potential non-uniqueness of
module names. The problem with not building software with minimal toolchains is
that you can have multiple copies at various levels of your toolchain
hierarchy. Which module you end up loading is then dependent on the order in
which you load things (perhaps not in Lmod, because it is hierarchy-aware, but
definitely for other module tools). This can clearly lead to issues.

Alan

On 2 March 2017 at 15:32, Siddiqui, Shahzeb wrote:
Hello,

I seem to have run into an issue when building modules using HierarchicalNamingScheme
while building out the intel and intelcuda toolchains.

I notice that MODULEPATH is set for icc and ifort to the intel directory. This is
correct when setting up the intel toolchain.

hpcswadm@hpcv18$grep -iR MODULEPATH
icc/2017.1.132-GCC-5.2.0.lua:prepend_path("MODULEPATH", 
"/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0")
ifort/2017.1.132-GCC-5.2.0.lua:prepend_path("MODULEPATH", 
"/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0")

impi gets installed in the path for intel as expected.

hpcswadm@hpcv18$ls -R 
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0/
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0/:
impi

/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0/impi:
2017.1.132.lua

As for impi built with iccifortcuda, it gets installed in:

hpcswadm@hpcv18$ls -R 
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/:
2017.1.132-GCC-5.2.0-7.5.18

/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/2017.1.132-GCC-5.2.0-7.5.18:
impi

/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/2017.1.132-GCC-5.2.0-7.5.18/impi:
2017.1.132.lua

The problem is that when loading the iimpic toolchain, it loads up the icc and
ifort modules along with an impi that belongs to the module tree from intel and
not intel-CUDA. I am not sure if this is a problem, but it seems like impi is
not being picked up correctly.

hpcswadm@hpcv18$ml av iimpi
-------- /nfs/grid/software/testing/RHEL7/easybuild/modules/all/Core --------
   iimpi/2017.01-GCC-5.2.0 (TC)    iimpic/2017.01-GCC-5.2.0 (TC)

hpcswadm@hpcv18$ml

Currently Loaded Modules:
  1) EasyBuild/3.1.0

hpcswadm@hpcv18$ml iimpic
Currently Loaded Modules:
  1) EasyBuild/3.1.0   3) icc/2017.1.132-GCC-5.2.0   (I)   5) impi/2017.1.132 (I)   7) iimpic/2017.01-GCC-5.2.0 (TC)
  2) GCC/5.2.0         4) ifort/2017.1.132-GCC-5.2.0 (I)   6) CUDA/7.5.18

hpcswadm@hpcv18$which mpicc
/nfs/grid/software/testing/RHEL7/easybuild/software/Compiler/intel/2017.1.132-GCC-5.2.0/impi/2017.1.132/bin64/mpicc

The one that should be loaded is from
/nfs/grid/software/testing/RHEL7/easybuild/software/Compiler/intel-CUDA/2017.1.132-GCC-5.2.0-7.5.18/impi/2017.1.132/bin64/mpicc

I think the impi module should not sit inside the intel directory, or somehow
the icc and ifort MODULEPATH entries need to be changed to intel-CUDA when
loading iimpic:
-------- /nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0 --------
   impi/2017.1.132 (I,L)

Has anyone else come across this issue?


Shahzeb Siddiqui
HPC Linux Engineer
B2220-447.2
Groton, CT




--
Dr. Alan O'Cais
E-CAM Software Manager
Juelich Supercomputing Centre
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49 2461 61 5213
Fax: +49 2461 61 6656
E-mail: a.oc...@fz-juelich.de
WWW: http://www.fz-juelich.de/ias/jsc/EN









[easybuild] MODULEPATH issue when supporting intel and intelcuda toolchain concurrently

2017-03-02 Thread Siddiqui, Shahzeb
Hello,

I seem to have run into an issue when building modules using HierarchicalNamingScheme
while building out the intel and intelcuda toolchains.

I notice that MODULEPATH is set for icc and ifort to the intel directory. This is
correct when setting up the intel toolchain.

hpcswadm@hpcv18$grep -iR MODULEPATH
icc/2017.1.132-GCC-5.2.0.lua:prepend_path("MODULEPATH", 
"/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0")
ifort/2017.1.132-GCC-5.2.0.lua:prepend_path("MODULEPATH", 
"/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0")

impi gets installed in the path for intel as expected.

hpcswadm@hpcv18$ls -R 
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0/
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0/:
impi

/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0/impi:
2017.1.132.lua

As for impi built with iccifortcuda, it gets installed in:

hpcswadm@hpcv18$ls -R 
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/
/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/:
2017.1.132-GCC-5.2.0-7.5.18

/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/2017.1.132-GCC-5.2.0-7.5.18:
impi

/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/2017.1.132-GCC-5.2.0-7.5.18/impi:
2017.1.132.lua

The problem is that when loading the iimpic toolchain, it loads up the icc and
ifort modules along with an impi that belongs to the module tree from intel and
not intel-CUDA. I am not sure if this is a problem, but it seems like impi is
not being picked up correctly.

hpcswadm@hpcv18$ml av iimpi
-------- /nfs/grid/software/testing/RHEL7/easybuild/modules/all/Core --------
   iimpi/2017.01-GCC-5.2.0 (TC)    iimpic/2017.01-GCC-5.2.0 (TC)

hpcswadm@hpcv18$ml

Currently Loaded Modules:
  1) EasyBuild/3.1.0

hpcswadm@hpcv18$ml iimpic
Currently Loaded Modules:
  1) EasyBuild/3.1.0   3) icc/2017.1.132-GCC-5.2.0   (I)   5) impi/2017.1.132 (I)   7) iimpic/2017.01-GCC-5.2.0 (TC)
  2) GCC/5.2.0         4) ifort/2017.1.132-GCC-5.2.0 (I)   6) CUDA/7.5.18

hpcswadm@hpcv18$which mpicc
/nfs/grid/software/testing/RHEL7/easybuild/software/Compiler/intel/2017.1.132-GCC-5.2.0/impi/2017.1.132/bin64/mpicc

The one that should be loaded is from
/nfs/grid/software/testing/RHEL7/easybuild/software/Compiler/intel-CUDA/2017.1.132-GCC-5.2.0-7.5.18/impi/2017.1.132/bin64/mpicc

I think the impi module should not sit inside the intel directory, or somehow
the icc and ifort MODULEPATH entries need to be changed to intel-CUDA when
loading iimpic:
-------- /nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel/2017.1.132-GCC-5.2.0 --------
   impi/2017.1.132 (I,L)

Has anyone else come across this issue?


Shahzeb Siddiqui
HPC Linux Engineer
B2220-447.2
Groton, CT
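
One possible (untested) direction for the workaround suggested above, i.e. making the intel-CUDA branch of the module tree visible when iimpic is loaded, would be to extend MODULEPATH from the iimpic toolchain module itself, for example via a modluafooter in its easyconfig. This is only a sketch of the idea, not a recommended fix; the framework-level change discussed in the linked pull request is the cleaner solution.

# Sketch only: extend MODULEPATH from the iimpic toolchain easyconfig so that
# loading iimpic also exposes the intel-CUDA module tree. Untested workaround
# idea; the path mirrors the layout shown earlier in this thread.
modluafooter = """
prepend_path("MODULEPATH",
             "/nfs/grid/software/testing/RHEL7/easybuild/modules/all/Compiler/intel-CUDA/2017.1.132-GCC-5.2.0-7.5.18")
"""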