Hi,
I would personally prefer the non-CUDA toolchain, to avoid having a lot of
duplicate modules that do not really use CUDA themselves but are only needed
as dependencies of a module that does use CUDA.
An example would be the TensorFlow Python modules. If they are in a separate
toolchain, t
On 07/10/2018 09:59 AM, Alan O'Cais wrote:
> Ok, I got to build with a patch from the developers:
> https://github.com/easybuilders/easybuild-easyconfigs/pull/6568
> I'm checking with them if it'll also work with Flang.
> Maybe it's time for a full LLVM toolchain?
+1
This is probably also the opp
Ok, I got to build with a patch from the developers:
https://github.com/easybuilders/easybuild-easyconfigs/pull/6568
I'm checking with them if it'll also work with Flang.
Maybe it's time for a full LLVM toolchain? Flang works with an LLVM fork,
though, so in both cases we probably want to do rpath i
Have you tried building Polly and GPU support using the EB easyblock? I get
some test failures with Clang 6.0.0. For the build you need:
```
usepolly = True
configopts = '-DPOLLY_ENABLE_GPGPU_CODEGEN=ON'
# Build capability to target GPUs
build_targets = ['X86', 'NVPTX']
```
and a CUDA dep (not su
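Putting the pieces of that message together, a minimal sketch of what such an easyconfig fragment might look like, with the CUDA dependency added, could be the following. The CUDA version below is an assumption for illustration, not something stated in the thread:

```
# Hypothetical sketch combining the options above with a CUDA dependency.
# NOTE: the CUDA version here is an invented example, not from the thread.
usepolly = True
configopts = '-DPOLLY_ENABLE_GPGPU_CODEGEN=ON'
# NVPTX backend is needed so Clang/Polly can emit code targeting NVIDIA GPUs
build_targets = ['X86', 'NVPTX']
dependencies = [('CUDA', '9.1.85')]
```

The key point is that `NVPTX` must be among the build targets, otherwise the GPGPU code generation enabled by the Polly option has no backend to emit to.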
On Mon, 19 Mar 2018 14:43:05 +
Joachim Hein wrote:
> I am wondering how we want to organise ourselves in the future. Do we want
> to continue with the goolfc idea, or do we go for a “core” CUDA and cuDNN?
> I feel this needs standardising soonish.
On this topic, does anyone have any opinions abou
I agree with Bart here.
Minimizing toolchain dependencies is key. Don't build something with
Intel + OpenMPI + CUDA if it should be built with GCC (for example CMake
or Autoconf)... it just wastes time and space.
That's why I dislike the "intel" and "foss" toolchains and w
You can mitigate this issue by installing as much as possible at the
compiler level -- we do that at Compute Canada. I have some pending work on
the framework that could make that possible for Python too.
The major incompatibility between goolfc and goolf (= foss) is in the MPI
libraries, one with and
I very strongly agree with Jack on this. If only a single program / Python
module uses CUDA, it is wasteful to have to build and install a new toolchain
and to rebuild everything on the system, including Python and perhaps even X11
(if using matplotlib).
But there may be something I have overlooked.
Hi All,
I am also in favor of CUDA suffixes. A CUDA-containing toolchain is only
needed when you need CUDA-aware MPI (it does not work with Intel as far
as I know), in other words, when you want to use GPUs in different boxes.
Sincerely,
Balazs
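To illustrate the versionsuffix approach Balazs favours, a hypothetical easyconfig fragment could keep CUDA as a plain dependency on a CUDA-free toolchain and signal it via the module name suffix. All names and versions below are invented for the example, not taken from this thread:

```
# Hypothetical illustration of the '-CUDA' versionsuffix approach;
# package name, versions, and toolchain are made-up examples.
name = 'ExampleApp'
version = '1.0'
versionsuffix = '-CUDA-9.1.85'
toolchain = {'name': 'foss', 'version': '2017b'}
# CUDA is just a dependency here, so the rest of the software stack
# built with this toolchain stays CUDA-free
dependencies = [('CUDA', '9.1.85')]
```

With this scheme only the GPU-using package carries the suffix, while common dependencies (Python, X11, ...) are shared with the non-CUDA installations.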
On 19/03/2018 15:43, Joachim Hein wrote:
Hi,
I
On 03/19/2018 09:43 AM, Joachim Hein wrote:
Hi,
I am currently installing TensorFlow via EasyBuild (I assume many of
us do these days) and am trying to understand EasyBuild's ideas on
toolchains supporting CUDA.
I looked at TensorFlow-1.5.0-goolfc-2017b-Python-3.6.3.eb, which
builds on top