Hi Dr. Bangerth,
Thanks a lot for the clarifications! They are really helpful!
Best,
Jimmy
On Thursday, July 30, 2020 at 11:47:21 AM UTC-5, Wolfgang Bangerth wrote:
>
> On 7/30/20 10:11 AM, Jimmy Ho wrote:
> >
> > As a follow-up question, upon calling compress(), will the local copy of
> > the system matrix on a specific processor get updated to contain
> > information from all other processors? In other words, if I print out
> > the system matrix from a particular processor after calling compress(),
> > will it contain the contributions from all other processors?
Hi Dr. Bangerth,
As a follow-up question, upon calling compress(), will the local copy of
the system matrix on a specific processor get updated to contain
information from all other processors? In other words, if I print out the
system matrix from a particular processor after calling compress(), will it
contain the contributions from all other processors?
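For reference, here is a minimal sketch (assuming the PETSc wrappers, as in one configuration of step-40; the function name finish_assembly is just a placeholder) of the call I am asking about:

  #include <deal.II/lac/petsc_sparse_matrix.h>
  #include <deal.II/lac/vector_operation.h>

  // After the cell loop has added local contributions (some of which belong
  // to rows owned by other processors), compress() ships those entries to
  // their owning processors. Afterwards each processor holds the final
  // values for the rows it owns.
  void finish_assembly(dealii::PETScWrappers::MPI::SparseMatrix &system_matrix)
  {
    system_matrix.compress(dealii::VectorOperation::add);
  }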
Hi Dr. Bangerth,
Thanks a lot for your guidance! I compared the solutions in the vtu files
using the minimal example above, and they are nearly identical. Looking back
into the code, I see that I am outputting the system matrix from processor 0,
which probably only printed the part that it locally owns, hence the
difference I saw in the printed matrix.
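To double-check this, here is a minimal sketch (again assuming the PETSc wrappers; report_local_rows is a made-up helper name) of how one can see which rows a given processor actually stores:

  #include <deal.II/lac/petsc_sparse_matrix.h>
  #include <iostream>

  // Report the half-open interval [begin, end) of matrix rows owned by the
  // calling processor. A print from rank 0 shows at most these rows, not
  // the whole globally assembled matrix.
  void report_local_rows(const dealii::PETScWrappers::MPI::SparseMatrix &system_matrix,
                         const unsigned int my_rank)
  {
    const auto range = system_matrix.local_range();
    std::cout << "Rank " << my_rank << " owns rows [" << range.first << ", "
              << range.second << ")" << std::endl;
  }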
Jimmy,
A minimal example to reproduce this is attached. When the mesh is built using
GridGenerator::hyper_cube or GridGenerator::subdivided_hyper_rectangle with
subsequent refinement, the program works as expected. When the same mesh is
generated using GridGenerator::subdivided_hyper_rectangle without subsequent
refinement, the single-processor and multi-processor results differ.
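For concreteness, here is a minimal sketch (2D, a 4x4 unit-square mesh; the sizes and the function name make_mesh are only illustrative) of the two construction paths being compared:

  #include <deal.II/base/point.h>
  #include <deal.II/distributed/tria.h>
  #include <deal.II/grid/grid_generator.h>

  void make_mesh(dealii::parallel::distributed::Triangulation<2> &triangulation,
                 const bool use_subdivided_rectangle)
  {
    if (use_subdivided_rectangle)
      {
        // Create the 4x4 coarse cells directly, without further refinement.
        dealii::GridGenerator::subdivided_hyper_rectangle(
          triangulation,
          {4u, 4u},                   // repetitions in x and y
          dealii::Point<2>(0., 0.),   // lower left corner
          dealii::Point<2>(1., 1.));  // upper right corner
      }
    else
      {
        // Start from a single coarse cell and refine globally twice.
        dealii::GridGenerator::hyper_cube(triangulation, 0., 1.);
        triangulation.refine_global(2);
      }
  }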
Hi All,
I am using the step-40 tutorial to build a parallel program using MPI. The
code runs but generates different results when run on one processor versus
multiple processors. After stripping it down to the bare minimum, it appears
that when the mesh is built using GridGenerator::subdivided_hyper_rectangle
without subsequent refinement, the results no longer agree.
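To compare runs with different processor counts without looking at individual entries, here is a minimal sketch (assuming the PETSc wrappers and the step-40 names completely_distributed_solution and mpi_communicator) that prints a norm of the solution, which does not depend on how the degrees of freedom are partitioned:

  #include <deal.II/base/conditional_ostream.h>
  #include <deal.II/base/mpi.h>
  #include <deal.II/lac/petsc_vector.h>
  #include <iostream>

  void report_solution_norm(
    const dealii::PETScWrappers::MPI::Vector &completely_distributed_solution,
    const MPI_Comm                            mpi_communicator)
  {
    // Only rank 0 prints; the norm itself should agree (up to solver
    // tolerance) between runs with one and with many processors.
    dealii::ConditionalOStream pcout(
      std::cout,
      dealii::Utilities::MPI::this_mpi_process(mpi_communicator) == 0);

    pcout << "||u||_2 = " << completely_distributed_solution.l2_norm()
          << std::endl;
  }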