Thanks for this information. Could you tell me an efficient way to do this
in PETSc? I am planning to use at least 32 threads and need to minimize the
synchronization overhead. Any suggestions?

Thanks!
Wen


On Mon, Jun 23, 2014 at 10:59 PM, Jed Brown <[email protected]> wrote:

> Wen Jiang <[email protected]> writes:
>
> > Dear all,
> >
> > I am trying to change my MPI finite element code to an OpenMP one. I am
> > not familiar with the usage of OpenMP in PETSc; could anyone give me some
> > suggestions?
> >
> > To assemble the matrix in parallel using OpenMP pragmas, can I directly
> > call MATSETVALUES(ADD_VALUES), or do I need to add some locks around it?
>
> You need to ensure that only one thread is setting values on a given
> matrix at any one time.
>
