>  Configure by default should find out the available GPU and build for that 
> sm_*  it should not require the user to set this (how the heck is the user 
> going to know what to set?)  If I remember correctly there is a utility 
> available that gives this information. 
For CUDA I believe the tool is nvidia-smi. Should make sure this automatic 
detection works when configuring with --with-batch though, since login nodes 
might have a different arch than the compute nodes.
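For what it's worth, a sketch of what configure could run to do this detection (this assumes a driver new enough to support the compute_cap query field; older drivers would need the deviceQuery sample or a call to cudaGetDeviceProperties() instead):

```shell
# Ask the driver for the device's compute capability, e.g. "7.0" on a V100.
# (The compute_cap query field needs a reasonably recent NVIDIA driver.)
cap=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -n1)

# Turn "7.0" into the corresponding nvcc flag, "-arch=sm_70".
echo "-arch=sm_${cap/./}"
```

On a multi-GPU node this prints one line per device, so configure would also need to decide which device (or the union of them) to build for.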

Best regards,

Jacob Faibussowitsch
(Jacob Fai - booss - oh - vitch)
Cell: (312) 694-3391

> On Sep 25, 2020, at 21:09, Barry Smith <[email protected]> wrote:
> 
> 
>   Configure by default should find out the available GPU and build for that 
> sm_*  it should not require the user to set this (how the heck is the user 
> going to know what to set?)  If I remember correctly there is a utility 
> available that gives this information. 
> 
>   For generic builds like in package distributions I don't know how it should 
> work, ideally all the possibilities would be available in the library and at 
> run time the correct one will be utilized. 
> 
>   Barry
> 
> 
>> On Sep 25, 2020, at 5:49 PM, Mark Adams <[email protected]> wrote:
>> 
>>    '--CUDAFLAGS=-arch=sm_70',
>> 
>> seems to fix this.
>> 
>> On Fri, Sep 25, 2020 at 6:31 PM Mark Adams <[email protected]> wrote:
>> I see kokkos and hypre have an sm_70 flag, but I don't see one for PETSc.
>> 
>> It looks like you have to specify this to get modern atomics to work in 
>> CUDA. I get:
>> 
>> /ccs/home/adams/petsc/include/petscaijdevice.h(99): error: no instance of 
>> overloaded function "atomicAdd" matches the argument list
>>             argument types are: (double *, double)
>> 
>> I tried using a Kokkos configuration, thinking I could get these sm_70 
>> flags, but that did not work.
>> 
>> Any ideas?
>> 
>> Mark
> 
