On 08/02/2016 12:45 PM, Marek Čapek wrote:

I coded it in a step-40-like manner.
When I run it on one core (either ./main or mpirun -np 1 main), the
results are physically reasonable.
However, when I switch to more cores, e.g. two (mpirun -np 2 main), I
get meaningless solutions.
As you know, the Chorin method has three steps, and the velocity is
computed in the first and the third step.
I use the same degree-of-freedom handler for the solution vectors of the
first and the third step, and I initialize the vectors with the same sets
of dofs, i.e. locally relevant dofs and locally owned dofs (see the sketch
below). Could the problem originate from there?
I compiled the problem on my computer as well as on a cluster, and the
results are the same.
I suppose that the scheme is coded correctly, as it works on a single core...
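
For reference, the vector setup described above typically looks like the
following in a step-40 style code. This is only a sketch under that
assumption; the variable names (distributed_velocity, relevant_velocity,
constraints, mpi_communicator) are placeholders and not taken from the
actual code, and the headers are the usual step-40 ones.

// Index sets of the velocity DoFHandler, as in step-40.
IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
IndexSet locally_relevant_dofs;
DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

// Non-ghosted vector: holds only locally owned entries; this is the only
// kind of vector the linear solver may write into.
PETScWrappers::MPI::Vector distributed_velocity;
distributed_velocity.reinit(locally_owned_dofs, mpi_communicator);

// Ghosted vector: additionally stores the locally relevant (ghost) entries;
// it is used read-only, e.g. in the assembly of the next step and in output.
PETScWrappers::MPI::Vector relevant_velocity;
relevant_velocity.reinit(locally_owned_dofs, locally_relevant_dofs,
                         mpi_communicator);

// After each solve: apply the constraints, then copy into the ghosted
// vector so that the ghost entries are updated across processors.
constraints.distribute(distributed_velocity);
relevant_velocity = distributed_velocity;

(If the ghosted vector were not refreshed after each solve, the first and
third step would read stale ghost values, which would only show up when
running on more than one process.)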

Marek -- "meaningless" leaves many options open. Since it's a three-step scheme, have you tried to output the solution after the first step and compare what you get with one and two processors? If you look at the solution, does it give an indication of what may be going wrong in parallel? (E.g., you could potentially see if something is wrong with boundary values, hanging node constraints, etc.)
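
For concreteness, writing out the velocity right after the first step could
be done along the lines of step-40's output_results() function. The snippet
below is only a sketch; relevant_velocity stands for a ghosted (locally
relevant) copy of that intermediate solution and is not a name from the
actual code.

// Write one .vtu file per processor; also record which cell is owned by
// which processor so the partition is visible in the visualization.
// (Headers as in step-40: <deal.II/numerics/data_out.h>, <fstream>, ...)
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(relevant_velocity, "tentative_velocity");

Vector<float> subdomain(triangulation.n_active_cells());
for (unsigned int i = 0; i < subdomain.size(); ++i)
  subdomain(i) = triangulation.locally_owned_subdomain();
data_out.add_data_vector(subdomain, "subdomain");

data_out.build_patches();

const std::string filename =
  "step1-" +
  Utilities::int_to_string(triangulation.locally_owned_subdomain(), 4) +
  ".vtu";
std::ofstream output(filename.c_str());
data_out.write_vtu(output);

Loading the files from the one- and two-processor runs side by side in a
visualization program then usually shows quickly whether the error sits at
the processor interface (pointing at ghost values or constraints) or at the
domain boundary (pointing at boundary values).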

Best
 W.
