>> A lot of people (including myself) are still skeptical that it's even
>> a good idea.  I personally think that the complexity involved in
>> creating MPI+SMP software outweighs any potential gains.  MPI software
>> is hard to write... and so is SMP... put the two together and you are
>> just asking for trouble.
> 
> I totally disagree; it's all a matter of making it simple. Of course, if
> you write threaded code using raw calls to pthread_create & friends, then
> it's a pain, but if you can write
> 
>   Thread thread_id;
>   thread_id = spawn matrix.vec_mult (v);
>   // do something else
>   Vector w = thread_id.return_value();
> 
> or something similar to run the matrix-vector multiplication on a separate
> thread, then it's a different matter. This is by and large the syntax we
> have in deal.II.
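
As a rough, self-contained C++ sketch of the pattern Wolfgang describes,
using plain std::async/std::future rather than the actual deal.II Threads
interface, and with toy stand-ins for the matrix and vector types:

  #include <cstddef>
  #include <functional>
  #include <future>
  #include <vector>

  // Toy stand-ins for the real matrix/vector types:
  using Vector = std::vector<double>;
  using Matrix = std::vector<Vector>;

  Vector vec_mult (const Matrix &m, const Vector &v)
  {
    Vector w (m.size(), 0.0);
    for (std::size_t i = 0; i < m.size(); ++i)
      for (std::size_t j = 0; j < v.size(); ++j)
        w[i] += m[i][j] * v[j];
    return w;
  }

  int main ()
  {
    Matrix matrix = {{2, 0}, {0, 3}};
    Vector v = {1, 1};

    // The analogue of "thread_id = spawn matrix.vec_mult (v)":
    std::future<Vector> thread_id =
      std::async (std::launch::async, vec_mult,
                  std::cref (matrix), std::cref (v));

    // ... do something else ...

    // The analogue of "Vector w = thread_id.return_value()",
    // blocking until the product is available:
    Vector w = thread_id.get ();
    return (w[0] == 2.0 && w[1] == 3.0) ? 0 : 1;
  }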


I'm with Wolfgang here - ultimately I'd like to partition across nodes with
MPI and use thread-based parallelism within each node.  I am not sure this
will work with PETSc, though, and I need to look into that some more.
Certainly shared memory within a node makes the load-balancing problem
*much* easier for the on-node cores.
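
For concreteness, a minimal sketch of that layout, assuming one MPI rank
per node with C++ threads driving the on-node cores; requesting
MPI_THREAD_FUNNELED (only the main thread makes MPI calls) is the
assumption that keeps this simple:

  #include <mpi.h>
  #include <thread>
  #include <vector>

  int main (int argc, char **argv)
  {
    // Ask for thread support; FUNNELED means only the thread that
    // called MPI_Init_thread will make MPI calls.
    int provided;
    MPI_Init_thread (&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);

    // One rank per node: spawn a worker per core for the on-node,
    // shared-memory part of the work.
    unsigned int n_cores = std::thread::hardware_concurrency ();
    if (n_cores == 0)
      n_cores = 1;
    std::vector<std::thread> workers;
    for (unsigned int c = 0; c < n_cores; ++c)
      workers.emplace_back ([]() { /* this core's share of the work */ });
    for (std::thread &w : workers)
      w.join ();

    // Cross-node communication stays on the main thread.
    MPI_Barrier (MPI_COMM_WORLD);
    MPI_Finalize ();
    return 0;
  }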

The issue is that PETSc does everything on a per-MPI-process basis and, to
the best of my knowledge, does not itself use threads to implement, e.g.,
its matvec.  And a threaded BLAS will only take you so far...  So you could
assemble the system matrix with threads on the 16 cores of a node, but when
PETSc solves the linear system I think it will only be using one core.
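
To make that concrete, here is a hedged sketch (current PETSc calling
conventions, with a trivial diagonal "assembly" standing in for real
element work): the threads can compute entries in parallel, but the
insertions have to be serialized because PETSc's MatSetValues-family
routines are not thread-safe, and KSPSolve then runs on one core per rank:

  #include <petscksp.h>
  #include <mutex>
  #include <thread>

  int main (int argc, char **argv)
  {
    PetscInitialize (&argc, &argv, NULL, NULL);

    const PetscInt n = 100;
    Mat A;
    MatCreate (PETSC_COMM_WORLD, &A);
    MatSetSizes (A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions (A);
    MatSetUp (A);

    // Threads compute entries in parallel; a mutex serializes the
    // MatSetValue calls since PETSc insertion is not thread-safe.
    std::mutex petsc_mutex;
    auto assemble = [&] (PetscInt begin, PetscInt end)
    {
      for (PetscInt i = begin; i < end; ++i)
        {
          const PetscScalar value = 2.0;  // stand-in for element work
          std::lock_guard<std::mutex> lock (petsc_mutex);
          MatSetValue (A, i, i, value, INSERT_VALUES);
        }
    };
    std::thread t0 (assemble, 0, n / 2);
    std::thread t1 (assemble, n / 2, n);
    t0.join ();
    t1.join ();

    MatAssemblyBegin (A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd (A, MAT_FINAL_ASSEMBLY);

    Vec x, b;
    MatCreateVecs (A, &x, &b);
    VecSet (b, 1.0);

    // The solve: PETSc spawns no threads here, so however many cores
    // the node has, this uses one core per MPI rank.
    KSP ksp;
    KSPCreate (PETSC_COMM_WORLD, &ksp);
    KSPSetOperators (ksp, A, A);
    KSPSetFromOptions (ksp);
    KSPSolve (ksp, b, x);

    KSPDestroy (&ksp);
    VecDestroy (&x);
    VecDestroy (&b);
    MatDestroy (&A);
    PetscFinalize ();
    return 0;
  }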

Wolfgang's PETSc flag was new to me and I'm going to look into it, although
this page is not promising:
http://www-unix.mcs.anl.gov/petsc/petsc-2/miscellaneous/petscthreads.html

-Ben

