Hi,

I am currently installing TensorFlow via EasyBuild (I assume many of us do 
these days) and am trying to understand EasyBuild’s approach to toolchains 
that support CUDA.

I looked at TensorFlow-1.5.0-goolfc-2017b-Python-3.6.3.eb, which builds on top 
of a toolchain containing GCC, CUDA (installed as a compiler module), 
OpenMPI, BLAS, FFTW, etc.
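
If I read the goolfc easyconfig correctly, it does not need to list CUDA at 
all, since CUDA comes in through the toolchain itself. Roughly along these 
lines (a sketch from memory, not the exact file):

    name = 'TensorFlow'
    version = '1.5.0'
    versionsuffix = '-Python-%(pyver)s'
    # CUDA is part of goolfc itself, so no explicit CUDA dependency is needed
    toolchain = {'name': 'goolfc', 'version': '2017b'}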

I now noticed that there is a new 
TensorFlow-1.6.0-foss-2018a-Python-3.6.4-CUDA-9.1.85.eb, which has been accepted 
into the development branch (PR 6016).  This builds on top of a “vanilla” 
foss-2018a toolchain, using CUDA and cuDNN modules installed as core modules 
(system compiler).
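
With the foss variant, as far as I understand it, CUDA and cuDNN instead show 
up as dependencies built with the system compiler, along the lines of (again 
only a sketch; the trailing True is how I understand system-toolchain 
dependencies are marked, and the cuDNN version below is purely illustrative):

    name = 'TensorFlow'
    version = '1.6.0'
    versionsuffix = '-Python-%(pyver)s-CUDA-9.1.85'
    toolchain = {'name': 'foss', 'version': '2018a'}
    dependencies = [
        # installed against the system compiler, hence the trailing True
        ('CUDA', '9.1.85', '', True),
        ('cuDNN', '7.0', '', True),   # version only illustrative
    ]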

I am wondering how we want to organise this in future.  Do we want to continue 
with the goolfc idea, or do we go for “core” CUDA and cuDNN modules?  I feel this 
needs standardising soonish.  It is also something I need to document for my 
users, who want to build their own CUDA-based software: which modules should 
they load to build it?

Any comments on how we take this further?

Best wishes
   Joachim
