So long as 

1) you configure PETSc with --with-openmp, and

2) the OpenMP pragma loops inside the functions you pass, such as your 
RHSFunction, do not touch PETSc objects directly or make PETSc calls (raw 
arrays obtained from VecGetArray() or DMDAVecGetArray() are fine to access),

then this is fine; see the sketch below. But how much speedup you get depends 
on how much extra memory bandwidth and how many cores you have available to 
do the work after you have already parallelized with MPI.
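
   A minimal sketch of what 2) allows, assuming a 1d DMDA; the function name, 
the grid layout, and the particular right-hand side are made up for 
illustration. The OpenMP region touches only the raw arrays and makes no 
PETSc calls:

      #include <petscts.h>
      #include <petscdmda.h>

      PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec X, Vec F, void *ctx)
      {
        PetscErrorCode     ierr;
        DM                 da;
        PetscInt           i, xs, xm;
        const PetscScalar *x;
        PetscScalar       *f;

        PetscFunctionBeginUser;
        ierr = TSGetDM(ts, &da);CHKERRQ(ierr);
        ierr = DMDAGetCorners(da, &xs, NULL, NULL, &xm, NULL, NULL);CHKERRQ(ierr);
        ierr = DMDAVecGetArrayRead(da, X, &x);CHKERRQ(ierr);
        ierr = DMDAVecGetArray(da, F, &f);CHKERRQ(ierr);
        /* plain C arrays only inside the parallel region; no PETSc calls here */
        #pragma omp parallel for
        for (i = xs; i < xs + xm; i++) f[i] = -x[i]*x[i]; /* made-up pointwise RHS */
        ierr = DMDAVecRestoreArrayRead(da, X, &x);CHKERRQ(ierr);
        ierr = DMDAVecRestoreArray(da, F, &f);CHKERRQ(ierr);
        PetscFunctionReturn(0);
      }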

   Note that you have to set the number of threads you want used inside these 
functions. This can be done with the horrible environment variable 
OMP_NUM_THREADS, with the PETSc option -omp_num_threads <n>, or by calling 
omp_set_num_threads() in your code.
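
   For example, for a hypothetical executable ./app (the name is made up):

      OMP_NUM_THREADS=4 ./app          (environment variable)
      ./app -omp_num_threads 4         (PETSc option)

   or, in the code itself, before the threaded user functions run:

      #include <omp.h>
      ...
      omp_set_num_threads(4);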



   Barry


> On Mar 3, 2021, at 1:02 AM, Thibault Bridel-Bertomeu 
> <[email protected]> wrote:
> 
> Dear all,
> 
> I am aware that the strategy chosen by PETSc is to rely exclusively on an 
> MPI paradigm, and that its functions, methods, and routines are therefore 
> not necessarily thread-safe, so as not to impede performance too much.
> I do however have one question: what happens if, say, the user passes 
> functions containing OpenMP pragmas to wrappers like TSSetRHSFunction, or 
> even writes a new TSAdapt containing OpenMP pragmas?
> If the threads are started before TSSolve, would we benefit from a 
> performance increase from the pragmas in the user functions, or would this 
> lead to instability and failure because the PETSc functions calling the 
> user functions are not built for OpenMP?
> 
> Thank you for your insight,
> 
> Thibault
