Re: [deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-30 Thread Jimmy Ho
Hi Dr. Bangerth,

Thanks a lot for the clarification! It's really helpful!

Best,
Jimmy

On Thursday, July 30, 2020 at 11:47:21 AM UTC-5, Wolfgang Bangerth wrote:
>
> On 7/30/20 10:11 AM, Jimmy Ho wrote:
> >
> > As a follow-up question, upon calling compress(), will the local copy
> > of the system matrix on a specific processor get updated to contain
> > information from all other processors? In other words, if I print out
> > the system matrix from a particular processor after calling compress(),
> > is that the same global system matrix that the linear solver is solving?
>
> Each processor only stores a subset of the rows of the matrix. During
> assembly, each processor computes a number of matrix entries, but it
> cannot compute all entries for the rows of the matrix it owns -- some
> need contributions from other processes. The call to compress() makes
> sure the contributions from these other processes are sent to the one
> that owns these rows of the matrix.
>
> In any case, after compress(), the rows each processor owns are correct
> -- but each processor doesn't know anything about the rows of the matrix
> it doesn't own.
>
> Best
>   W.
>
> --
> Wolfgang Bangerth  email: bang...@colostate.edu
>                    www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-30 Thread Wolfgang Bangerth

On 7/30/20 10:11 AM, Jimmy Ho wrote:


As a follow-up question, upon calling compress(), will the local copy of the 
system matrix on a specific processor get updated to contain information from 
all other processors? In other words, if I print out the system matrix from a 
particular processor after calling compress(), is that the same global system 
matrix that the linear solver is solving?


Each processor only stores a subset of the rows of the matrix. During 
assembly, each processor computes a number of matrix entries, but it cannot 
compute all entries for the rows of the matrix it owns -- some need 
contributions from other processes. The call to compress() makes sure the 
contributions from these other processes are sent to the one that owns these 
rows of the matrix.


In any case, after compress(), the rows each processor owns are correct -- but 
each processor doesn't know anything about the rows of the matrix it doesn't own.
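
In code, the usual pattern looks roughly like this (a sketch along the lines 
of step-40; the variable names are placeholders, not code from the attached 
example):

    // Each process assembles contributions only from the cells it owns.
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        {
          // ... compute cell_matrix, cell_rhs, local_dof_indices ...
          constraints.distribute_local_to_global(cell_matrix,
                                                 cell_rhs,
                                                 local_dof_indices,
                                                 system_matrix,
                                                 system_rhs);
        }

    // compress() then ships entries that belong to rows owned by other
    // processes to their owners and adds them up there.
    system_matrix.compress(VectorOperation::add);
    system_rhs.compress(VectorOperation::add);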


Best
 W.


--
Wolfgang Bangerth  email: bange...@colostate.edu
                   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-30 Thread Jimmy Ho
Hi Dr. Bangerth,

As a follow-up question, upon calling compress(), will the local copy of 
the system matrix on a specific processor get updated to contain 
information from all other processors? In other words, if I print out the 
system matrix from a particular processor after calling compress(), is that 
the same global system matrix that the linear solver is solving?

Thanks a lot for clarifying!

Best,
Jimmy

On Wednesday, July 29, 2020 at 9:55:38 AM UTC-5, Wolfgang Bangerth wrote:
>
> Jimmy,
>
> > A minimal example to reproduce this is attached. When the mesh is
> > built using GridGenerator::hyper_cube or
> > GridGenerator::subdivided_hyper_rectangle with subsequent refinement,
> > the program works as expected. When the same mesh is generated using
> > GridGenerator::subdivided_hyper_rectangle without any subsequent
> > refinement, some entries in the global stiffness matrix (the nodes on
> > the right hand side of the mesh, in this case, using 2 processors) do
> > not get updated. The example outputs of the stiffness matrix for one
> > and two processors are also attached for reference.
> >
> > So my question is, is this the expected behavior of the function? If
> > so, why is that the case?
>
> There is still a lot of stuff in the program that could be removed,
> including all of the comments, to make it substantially smaller and
> easier to understand.
>
> I don't know whether there is a bug, but here is a suggestion: The
> finite element solution u_h(x) that results from the linear system
> should be the same independent of the partitioning. But the order of
> degrees of freedom may be different, and consequently the matrix may
> not be the same -- it should only be the same up to some column and row
> permutation. Have you verified that the *solution function* (not the
> solution vector) that results is the same independent of the number of
> processors?
>
> Best
>   W.
>
> --
> Wolfgang Bangerth  email: bang...@colostate.edu
>                    www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-30 Thread Jimmy Ho
Hi Dr. Bangerth,

Thanks a lot for your guidance! I compared the solutions in the vtu files 
using the minimal example above, and they are nearly identical. Looking back 
at the code, I was printing the system matrix from processor 0, which 
probably only printed the part that it locally owns; hence the difference in 
the matrices between the serial and parallel runs. I guess I rushed to a 
conclusion too quickly.
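
For reference, each process can report which rows it owns (a sketch; the 
names system_matrix and mpi_communicator follow step-40 and are only 
illustrative):

    const auto range = system_matrix.local_range();
    std::cout << "rank "
              << Utilities::MPI::this_mpi_process(mpi_communicator)
              << " owns rows [" << range.first << ", " << range.second
              << ")" << std::endl;

Printing the matrix from rank 0 therefore shows only that half-open row 
range, not the whole global matrix.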

Best,
Jimmy

On Wednesday, July 29, 2020 at 9:55:38 AM UTC-5, Wolfgang Bangerth wrote:
>
> Jimmy,
>
> > A minimal example to reproduce this is attached. When the mesh is
> > built using GridGenerator::hyper_cube or
> > GridGenerator::subdivided_hyper_rectangle with subsequent refinement,
> > the program works as expected. When the same mesh is generated using
> > GridGenerator::subdivided_hyper_rectangle without any subsequent
> > refinement, some entries in the global stiffness matrix (the nodes on
> > the right hand side of the mesh, in this case, using 2 processors) do
> > not get updated. The example outputs of the stiffness matrix for one
> > and two processors are also attached for reference.
> >
> > So my question is, is this the expected behavior of the function? If
> > so, why is that the case?
>
> There is still a lot of stuff in the program that could be removed,
> including all of the comments, to make it substantially smaller and
> easier to understand.
>
> I don't know whether there is a bug, but here is a suggestion: The
> finite element solution u_h(x) that results from the linear system
> should be the same independent of the partitioning. But the order of
> degrees of freedom may be different, and consequently the matrix may
> not be the same -- it should only be the same up to some column and row
> permutation. Have you verified that the *solution function* (not the
> solution vector) that results is the same independent of the number of
> processors?
>
> Best
>   W.
>
> --
> Wolfgang Bangerth  email: bang...@colostate.edu
>                    www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-29 Thread Wolfgang Bangerth



Jimmy,

A minimal example to reproduce this is attached. When the mesh is built using 
GridGenerator::hyper_cube or GridGenerator::subdivided_hyper_rectangle with 
subsequent refinement, the program works as expected. When the same mesh is 
generated using GridGenerator::subdivided_hyper_rectangle without any 
subsequent refinement, some entries in the global stiffness matrix (the nodes 
on the right hand side of the mesh, in this case, using 2 processors) do not 
get updated. The example outputs of the stiffness matrix for one and two 
processors are also attached for reference.


So my question is, is this the expected behavior of the function? If so, why 
is that the case?


There is still a lot of stuff in the program that could be removed, including 
all of the comments, to make it substantially smaller and easier to understand.


I don't know whether there is a bug, but here is a suggestion: The finite 
element solution u_h(x) that results from the linear system should be the same 
independent of the partitioning. But the order of degrees of freedom may be 
different, and consequently the matrix may not be the same -- it should only 
be the same up to some column and row permutation. Have you verified that the 
*solution function* (not the solution vector) that results is the same 
independent of the number of processors?
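
One way to check this is to compare a partitioning-independent functional of 
the solution, for example its L2 norm (a sketch using step-40-style names 
such as locally_relevant_solution; the quadrature degree is an assumption):

    Vector<float> norm_per_cell(triangulation.n_active_cells());
    VectorTools::integrate_difference(dof_handler,
                                      locally_relevant_solution,
                                      Functions::ZeroFunction<dim>(),
                                      norm_per_cell,
                                      QGauss<dim>(fe.degree + 1),
                                      VectorTools::L2_norm);
    const double l2_norm =
      VectorTools::compute_global_error(triangulation,
                                        norm_per_cell,
                                        VectorTools::L2_norm);
    // l2_norm should agree, up to roundoff, for any number of processes.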


Best
 W.

--
Wolfgang Bangerth  email: bange...@colostate.edu
                   www: http://www.math.colostate.edu/~bangerth/



[deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-28 Thread Jimmy Ho
Hi All,

I am using the step-40 tutorial to build a parallel program using MPI. The 
code runs but generates different results when using one processor and when 
using multiple processors. After stripping it down to the bare minimum, it 
appears that when the mesh is built using 
GridGenerator::subdivided_hyper_rectangle without any subsequent refinement, 
the compress() function does not work properly, leading to a different global 
stiffness matrix when using multiple processors.

A minimal example to reproduce this is attached. When the mesh is built 
using GridGenerator::hyper_cube or 
GridGenerator::subdivided_hyper_rectangle with subsequent refinement, the 
program works as expected. When the same mesh is generated using 
GridGenerator::subdivided_hyper_rectangle without any subsequent 
refinement, some entries in the global stiffness matrix (the nodes on the 
right hand side of the mesh, in this case, using 2 processors) do not get 
updated. The example outputs of the stiffness matrix for one and two 
processors are also attached for reference.

So my question is, is this the expected behavior of the function? If so, 
why is that the case?
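
For reference, the two constructions being compared look roughly like this 
(a sketch; the extents and subdivision counts are illustrative, not the ones 
from the attached example, and each variant is applied to a fresh 
triangulation):

    // Variant that works as expected: coarse cube plus refinement.
    GridGenerator::hyper_cube(triangulation);
    triangulation.refine_global(1);

    // Variant that shows the problem: the same mesh built directly,
    // without any subsequent refinement.
    GridGenerator::subdivided_hyper_rectangle(triangulation,
                                              {2, 2},
                                              Point<2>(0., 0.),
                                              Point<2>(1., 1.));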

Thanks a lot for any answers! Your inputs are highly appreciated!

Best,
Jimmy

/* ---------------------------------------------------------------------
 *
 * Copyright (C) 2009 - 2019 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE.md at
 * the top level directory of deal.II.
 *
 * ---------------------------------------------------------------------

 *
 * Author: Wolfgang Bangerth, Texas A&M University, 2009, 2010
 * Timo Heister, University of Goettingen, 2009, 2010
 */


// @sect3{Include files}
//
// Most of the include files we need for this program have already been
// discussed in previous programs. In particular, all of the following should
// already be familiar friends:
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/timer.h>

#include <deal.II/lac/generic_linear_algebra.h>

// uncomment the following #define if you have PETSc and Trilinos installed
// and you prefer using Trilinos in this example:
#define FORCE_USE_OF_TRILINOS

// This will either import PETSc or TrilinosWrappers into the namespace
// LA. Note that we are defining the macro USE_PETSC_LA so that we can detect
// if we are using PETSc (see solve() for an example where this is necessary)
namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !defined(DEAL_II_PETSC_WITH_COMPLEX) && \
  !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
  using namespace dealii::LinearAlgebraPETSc;
#  define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
  using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
} // namespace LA

#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>

// The following, however, will be new or be used in new roles. Let's walk
// through them. The first of these will provide the tools of the
// Utilities::System namespace that we will use to query things like the
// number of processors associated with the current MPI universe, or the
// number within this universe the processor this job runs on is:
#include <deal.II/base/utilities.h>
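// Hypothetical usage sketch (an assumption for illustration, not code from
// the attached file): the two queries the comment above refers to would be
//   const unsigned int n_ranks =
//     Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
//   const unsigned int this_rank =
//     Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);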
// The next one provides a class, ConditionalOStream, that allows us to write
// code that would output things to a stream (such as std::cout)
// on every processor but throws the text away on all but one of them. We
// could achieve the same by simply putting an if statement in
// front of each place where we may generate output, but this doesn't make the
// code any prettier. In addition, the condition whether this processor should
// or should not produce output to the screen is the same every time -- and
// consequently it should be simple enough to put it into the statements that
// generate output itself.
#include <deal.II/base/conditional_ostream.h>
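// Hypothetical usage sketch (an assumption for illustration, not code from
// the attached file): a stream that prints only on rank 0 would be set up as
//   ConditionalOStream pcout(
//     std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0);
//   pcout << "printed exactly once across all ranks" << std::endl;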
// After these preliminaries, here is where it becomes more interesting. As
// mentioned in the @ref distributed module, one of the fundamental truths of
// solving problems on large numbers of processors is that there is no way for
// any processor to store everything (e.g. information