Bruno 

On Monday, 4 March 2019 08:28:18 UTC-5, Bruno Turcksin wrote:

> Bruno,
>
> On Monday, March 4, 2019 at 7:41:27 AM UTC-5, Bruno Blais wrote:
>>
>> 2. Furthermore, when you compile Trilinos with OpenMP and then try to 
>> compile the latest version of deal.II, you get a compilation error when 
>> the ".hpp" headers from Kokkos are included. The error reads something 
>> like: "Kokkos was compiled with OpenMP, but the compiler did not pass 
>> an OpenMP flag."
>>
>> This can easily be fixed by manually adding -fopenmp to the CXX flags 
>> used by deal.II. However, would it not be better to add a 
>> DEALII_ENABLE_OpenMP flag directly to the CMake configuration, so that 
>> turning it on also enables the -fopenmp flag?
>> Maybe I missed such an option. It just made me unsure whether I was 
>> doing something supported or not.
>>
> deal.II does not use OpenMP for multithreading, so if -fopenmp is missing 
> it's because Trilinos did not export it correctly. Unless you mean that 
> you include Kokkos headers in your own code? In that case you are 
> responsible for the flags yourself if you use OpenMP. 
>

The issue, then, would be that my Trilinos installation did not export the 
flag correctly. I will check that out; it makes sense. I only use the 
solvers through the deal.II TrilinosWrappers; I have not tried to use the 
Trilinos solvers directly.
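
For reference, the manual workaround mentioned in the quoted text could look 
like the sketch below. This is only an illustration, not an official deal.II 
option: it assumes a GCC-style compiler, an out-of-source build, and the 
paths shown are placeholders.

```shell
# Hypothetical manual workaround: inject -fopenmp into the C++ flags so
# the Kokkos headers pulled in through Trilinos see an OpenMP-enabled
# compile line. Paths are placeholders.
cmake -DCMAKE_CXX_FLAGS="-fopenmp" \
      -DDEAL_II_WITH_TRILINOS=ON \
      -DTRILINOS_DIR=/path/to/openmp-enabled-trilinos \
      /path/to/dealii-source
```

If Trilinos exports its compile flags correctly, none of this should be 
necessary; the flag should be picked up from the Trilinos configuration.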
 

>
>> 3. When compiled with OpenMP, I got disappointingly poor performance, but 
>> maybe this is because of the relatively small size of my application. I 
>> would have (naively, maybe) expected that the time to solve a linear 
>> system with GMRES using 1 MPI rank with 4 OpenMP threads would be lower 
>> than the time it takes with 4 MPI ranks, and on my application this was 
>> not the case. I was surprised because I was expecting my ILU 
>> preconditioning to work better on a smaller number of cores, but maybe 
>> this is related to fill-in or other issues?
>>
> Two things here:
>   1) Which package are you using? The Epetra stack does not support 
> OpenMP, so you can compile with OpenMP but it won't be used.
>

I'm using the wrappers, so I guess that means it uses the AztecOO stack of 
solvers by default?
 

>   2) Why do you think that OpenMP would be faster than MPI? MPI is usually 
> faster than OpenMP unless you are very careful about your data management.
>

My original idea was that, since shared-memory parallelism lets you 
precondition a larger chunk of the matrix as a whole, the ILU 
preconditioning would be more effective in a shared-memory context than in 
a distributed one. You would then need fewer GMRES iterations to solve your 
system. It seems I am wrong :) ?
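
The iteration-count part of this intuition can be checked with a small SciPy 
sketch (SciPy here only as a stand-in, not deal.II/Trilinos): GMRES 
preconditioned by one global ILU versus a block-diagonal ILU that ignores 
the couplings between 4 subdomains, which loosely mimics a 4-rank MPI 
decomposition. The matrix, block count, and helper names below are all 
illustrative choices, not anything from the thread.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
# 1D Laplacian as a simple, illustrative test matrix.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

def gmres_iterations(M):
    """Count GMRES inner iterations when preconditioned by M."""
    count = [0]
    def cb(res_norm):
        count[0] += 1
    x, info = spla.gmres(A, b, M=M, callback=cb, callback_type="pr_norm")
    assert info == 0  # converged
    return count[0]

# "Shared memory" picture: one ILU of the whole matrix.
ilu = spla.spilu(A)
M_global = spla.LinearOperator(A.shape, matvec=ilu.solve)

# "Distributed" picture: ILU of each diagonal block, couplings ignored.
nb, bs = 4, n // 4
blocks = [spla.spilu(A[i*bs:(i+1)*bs, i*bs:(i+1)*bs].tocsc())
          for i in range(nb)]
def block_solve(r):
    y = np.empty_like(r)
    for i, f in enumerate(blocks):
        y[i*bs:(i+1)*bs] = f.solve(r[i*bs:(i+1)*bs])
    return y
M_block = spla.LinearOperator(A.shape, matvec=block_solve)

print("global ILU iterations:", gmres_iterations(M_global))
print("block ILU iterations: ", gmres_iterations(M_block))
```

In this toy setting the global ILU does need fewer iterations than the 
block-diagonal one, so the intuition about iteration counts is reasonable; 
the total wall-clock comparison between OpenMP and MPI is a separate 
question, as the reply below points out.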

 

>
> Best,
>
Thanks, this is very interesting and enlightening.
 

>
> Bruno
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
