Hi Kenneth,

thanks for the response; my line of thinking is very similar to yours.

Comments inline:

On 22 Jan, 2013, at 16:33, Kenneth Hoste wrote:
> Didn't get to answer this yet, here's my view on this...
> 
> The way I get it, the particular flavor you pick is tied to the system you're 
> installing CUDA on.
> So, if you're on a Ubuntu system, you need to pick the ubuntu packages, if 
> you're on RHEL, pick rhel, etc.

Yes; although we could run with the idea that Jens suggested
(i.e. decipher the dependencies and provide them in the easyconfig),
this is not an exact science (right now I have to run ldd to discover them),
nor is it guaranteed that future CUDA versions won't make us reconsider the
approach.
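To illustrate the kind of manual discovery I mean: a rough sketch of pulling library names out of ldd output (the sample output and library paths below are illustrative, not taken from a real CUDA install):

```python
# Rough sketch: extracting shared-library names from `ldd` output, the way
# one would when trying to enumerate a CUDA package's OS-level dependencies.
# The sample output below is made up for illustration.
sample_ldd_output = """\
\tlinux-vdso.so.1 =>  (0x00007fff6c5fe000)
\tlibcudart.so.4 => /usr/local/cuda/lib64/libcudart.so.4 (0x00007f1a2c000000)
\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1a2bc00000)
"""

def shared_libs(ldd_output):
    """Return the library names mentioned on each line of ldd output."""
    libs = []
    for line in ldd_output.splitlines():
        # each line looks like "<libname> => <path> (<addr>)"
        lib = line.strip().split(' => ')[0].strip()
        if lib:
            libs.append(lib)
    return libs

print(shared_libs(sample_ldd_output))
```

Doable, but exactly the kind of per-system detective work that makes this feel fragile.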

> Usually, you handle something like this with a version suffix in the CUDA 
> easyconfig file, and hence you'll end up with a CUDA module like 
> "CUDA/4.2.9-ubuntu10.04" .

Yes; I think adding flavors is kind of unavoidable...
(I wish we had the freedom to influence how vendors do packaging, but
we hang out on this list because we know how scientific software is delivered :)
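Just to make the suffix approach concrete, a hypothetical easyconfig fragment (names and versions are illustrative) would look something like:

```python
# Hypothetical easyconfig fragment: the OS flavor rides along as a
# version suffix, as Kenneth describes.
name = 'CUDA'
version = '4.2.9'
versionsuffix = '-ubuntu10.04'

# the resulting module name would then be CUDA/4.2.9-ubuntu10.04
module_name = '%s/%s%s' % (name, version, versionsuffix)
print(module_name)
```

And that suffix then has to be dragged along by everything built on top of it, which is the part I'd like to avoid.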

> If you want to avoid dragging along the suffix for everything you build using 
> that CUDA version+flavor, maybe defining a new toolchain would be wise.
> You could have a versionsuffix for the toolchain (a brief one, e.g. only the 
> first two letters of the CUDA flavor) to indicate the CUDA flavor included in 
> it.

Could we perhaps "hide" the CUDA flavor as part of the toolchain name?
See below for what I mean.

> Including CUDA in your compiler toolchain is something you'll need to do 
> anything anyway, to handle all the environment variables, compiler commands 
> and library paths that are specific to CUDA...
> 
> That would yield a toolchain like "goalfc/1.2.0-ub", or something like that.

I'd like that to be goalfc/1.2.0 instead (i.e. the "c" stands for CUDA/5.0.35-ubuntu, and so on);

My understanding is that this would limit the redundant work of providing
multiple easyconfigs for what is essentially the same software
(fingers crossed; there is an assumption of CUDA version compatibility here).
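To sketch what I mean (purely hypothetical; the dependency entry and versions are my guesses, not an existing easyconfig): the toolchain easyconfig would pin the flavored CUDA internally, so the toolchain name users see stays clean:

```python
# Hypothetical 'goalfc' toolchain easyconfig: the CUDA flavor is pinned in
# the dependency list, so users only ever load the clean name goalfc/1.2.0.
name = 'goalfc'
version = '1.2.0'

dependencies = [
    # (name, version, versionsuffix) -- entries are illustrative
    ('CUDA', '5.0.35', '-ubuntu'),
]

# the module users load carries no flavor suffix
module_name = '%s/%s' % (name, version)
print(module_name)
```

Everything built with that toolchain would then inherit the right CUDA flavor without carrying the suffix in its own name.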

> TL;DR: We don't have a standard way of handling something like this, except 
> for setting a version suffix and dragging it along everywhere.

We also need to take into account very complicated toolchains where
some functions may be provided by GPU libraries instead (MAGMA, ViennaCL, etc.):
http://hpcbios.readthedocs.org/en/latest/HPCBIOS_2012-99.html

It may even be the case that portions of the code need to bind against
one LAPACK implementation for CPUs while others need a GPU-enabled one.
Yeah, tricky business; no need to reply quickly, just think it over.

to be continued,

Fotis
