I started parallelizing my code. Part of the code has to stay serial because it needs access to the whole mesh. In that part, I build a PetscMatrix. Once I'm done building it (in serial), I would like to scatter it across the other processors.
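To be concrete, this is roughly what the serial build looks like right now. I'm showing raw PETSc calls rather than the libMesh PetscMatrix wrapper just to make the sizes explicit; the dimension is a placeholder and error checking is omitted. Every rank creates its own complete copy of the matrix on PETSC_COMM_SELF:

    #include <petscmat.h>

    Mat A_serial;
    PetscInt n = 1000;                      // hypothetical global dimension
    MatCreate(PETSC_COMM_SELF, &A_serial);  // one full copy per rank
    MatSetSizes(A_serial, n, n, n, n);      // serial: local size == global size
    MatSetType(A_serial, MATSEQAIJ);
    MatSetUp(A_serial);

    // ... loop over the whole mesh, computing entries ...
    // MatSetValue(A_serial, i, j, value, INSERT_VALUES);

    MatAssemblyBegin(A_serial, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A_serial, MAT_FINAL_ASSEMBLY);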
Now, given that I built the PetscMatrix in serial (the local dimensions are the same as the global dimensions), I assume I should build another matrix in parallel, with proper local dimensions. I need to set those sizes equal to those of a solution vector in one of my systems (that's what I gathered from http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetSizes.html). If this system is called "densities", would I just set the local size equal to densities.solution->local_size()?

Once the local dimensions are set, I need to copy the contents. Because the original matrix is built in serial on every processor, I wouldn't need any communication, right? Would I copy the matrix value by value, or is there a better way to do this? A sketch of the copy I have in mind is at the bottom of this message, after the quoted thread.

Thanks in advance,
Miguel

On Thu, Aug 28, 2014 at 9:45 PM, Miguel Angel Salazar de Troya <[email protected]> wrote:

> Yeah, it says 4. I think I will try to parallelize my code with MPI to
> get better performance.
>
> On Thu, Aug 28, 2014 at 9:10 PM, Roy Stogner <[email protected]> wrote:
>
>> On Thu, 28 Aug 2014, Miguel Angel Salazar de Troya wrote:
>>
>>> How can I give one MPI rank per shared-memory system in my own
>>> computer? I thought that running the program in serial with the
>>> option "--n_threads=4" would work, but it doesn't seem so.
>>
>> On a single computer, no clustering, that should have been sufficient.
>>
>> Do you have a mesh.print_info() in your app, and if so what does it
>> say n_threads is?
>>
>> If it says n_threads is 1, is it possible that you configured without
>> TBB installed?
>>
>>> It might be that the rest of my code that is not "threaded" is too
>>> slow.
>>
>> Yes, or it might be possible that the unthreaded parts of our code are
>> too slow. Getting the algebraic solver to run multithreaded is
>> tricky, and in a lot of codes the solve is the expensive part.
>> ---
>> Roy

--
Miguel Angel Salazar de Troya
Graduate Research Assistant
Department of Mechanical Science and Engineering
University of Illinois at Urbana-Champaign
(217) 550-2360
[email protected]
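P.S. Here's the sketch I mentioned above, again in raw PETSc calls. The system name ("densities_system"), the communicator, and the serial matrix A_serial from my earlier snippet are placeholders for whatever I end up using, and I've left out preallocation and error checking, so treat it as pseudocode rather than working code:

    // Size the parallel matrix so its row distribution matches the
    // "densities" solution vector.
    PetscInt n_local  = densities_system.solution->local_size();
    PetscInt n_global = densities_system.solution->size();

    Mat A_dist;
    MatCreate(PETSC_COMM_WORLD, &A_dist);
    MatSetSizes(A_dist, n_local, n_local, n_global, n_global);
    MatSetType(A_dist, MATMPIAIJ);
    MatSetUp(A_dist);

    PetscInt rstart, rend;
    MatGetOwnershipRange(A_dist, &rstart, &rend);

    // Every rank already holds the complete serial matrix, so each rank
    // can insert its own row block without talking to anyone else.
    for (PetscInt i = rstart; i < rend; ++i)
    {
      PetscInt ncols;
      const PetscInt *cols;
      const PetscScalar *vals;
      MatGetRow(A_serial, i, &ncols, &cols, &vals);  // read one full row of the serial copy
      MatSetValues(A_dist, 1, &i, ncols, cols, vals, INSERT_VALUES);
      MatRestoreRow(A_serial, i, &ncols, &cols, &vals);
    }
    MatAssemblyBegin(A_dist, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A_dist, MAT_FINAL_ASSEMBLY);

Does that look reasonable, or is there a PETSc routine that does this redistribution in one call?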
