Florian,

There are two ways. You can make your whole app MPI-based, but have only the master process do the sequential work while the other ranks wait until the parallel part. That's the easiest, but then you have MPI everywhere in your app. The other way is to have the MPI processes live entirely outside your main sequential process. This keeps your code isolated from MPI, but it's a lot more work.
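To make the first option concrete, here's a minimal sketch (it reuses the placeholder names from your snippet below, so sequential_code() and MPIsolve() are just stubs here): every rank runs the same main(), but only rank 0 does the serial work, and the control value is broadcast so all ranks agree on when to enter the solve and when to stop.

#include <mpi.h>
#include <stdlib.h>

/* placeholders standing in for your existing routines */
void sequential_code(void) { /* ... existing serial work ... */ }
void MPIsolve(void)        { /* ... the MPI-parallel solver ... */ }

int main(int argc, char **argv)
{
    int rank, x = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    do {
        if (rank == 0) {
            x = rand();
            sequential_code();   /* only the master does the serial part */
        }
        /* everyone must follow the same control flow, so share x */
        MPI_Bcast(&x, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (x == 2345)
            MPIsolve();          /* all ranks take part in the parallel solve */
    } while (x == 1234);

    MPI_Finalize();
    return 0;
}

The non-master ranks just sit in the MPI_Bcast between solves, so they're effectively idle (or busy-polling inside the MPI library) while rank 0 runs the sequential code.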

I've done the MPI-on-the-outside approach with the MUMPS linear solver. You need to spin up the MPI process group separately, and your sequential process isn't doing any work while those ranks are running. You also need to send data to the MPI processes; I used Boost's shared-memory library for that (if you can't use C++ in your project, that won't work for you at all). You also have to keep the MPI processes and the main process synchronised, and your main process needs to surrender its core while the MPI solve is going on, so you end up with a bunch of Sleep or sched_yield calls so that everything plays nicely. The whole thing takes a *lot* of tweaking to get right. Honestly, it's a total pig and I'd recommend against this path (we don't use it anymore in our software).
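For a rough idea of what the sequential side of that looks like, here's a stripped-down sketch. I used Boost's shared-memory library, but to keep this plain C it uses POSIX shm_open/mmap instead; the segment name, flag layout and run_external_solve() are all made up for illustration, and real code needs proper synchronisation rather than bare volatile flags.

#include <fcntl.h>
#include <sched.h>
#include <sys/mman.h>
#include <unistd.h>

struct shm_job {
    volatile int go;    /* set by the sequential process              */
    volatile int done;  /* set by the MPI group when the solve is done */
    /* ... matrix and right-hand-side data would live here ...        */
};

int run_external_solve(void)
{
    /* the already-running MPI process group opens the same segment */
    int fd = shm_open("/solver_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return -1;
    if (ftruncate(fd, sizeof(struct shm_job)) != 0) return -1;

    struct shm_job *job = mmap(NULL, sizeof(*job),
                               PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (job == MAP_FAILED) return -1;

    /* ... copy the problem data into the segment here ... */

    job->done = 0;
    job->go = 1;              /* tell the MPI group to start */

    while (!job->done)
        sched_yield();        /* surrender the core while the solve runs */

    /* ... copy the solution back out ... */
    munmap(job, sizeof(*job));
    close(fd);
    return 0;
}

Getting the handshake, the data layout and the yielding right on both sides is exactly the part that takes all the tweaking.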

If you just need a good parallel direct linear solver (I'm making an assumption here) that runs in one memory space, go grab SuperLU-MT from here:

http://crd-legacy.lbl.gov/~xiaoye/SuperLU/#superlu_mt

or use the LiS solver package from here if you want an iterative solver:

http://www.ssisc.org/lis/

Both of these can handle very large problems in shared memory and scale up well.

Damien


On 18/11/2013 7:09 AM, Florian Bruckner wrote:
Hi,

How can I call an MPI-parallelized solver routine from a sequential code? The sequential code already exists and its structure looks like the following:

int main(void)
{
   int x;
   do {
         x = rand();
         sequential_code(); // this sequential code should only be executed on the master node
         if (x == 2345) MPIsolve(); // this should be run in parallel
   } while (x == 1234);
   return 0;
}

I'm wondering how the MPI-parallelized solver routine can be called without parallelizing the whole existing sequential code. At a certain point in the code path of the sequential code, the parallel execution should be started, but how can this be achieved?

When starting the application with mpirun, there must be some code running on each node, and the same code path needs to be followed by each process. But this would mean that exactly the same sequential code needs to be executed on each node!?

What am I missing?
Thanks in advance,
Florian
