Re: [deal.II] Solving Step-74 by MPI

2022-08-17 Thread Timo Heister
For error computations using cellwise errors you can use
VectorTools::compute_global_error(), which does the MPI communication
for you:
https://www.dealii.org/developer/doxygen/deal.II/namespaceVectorTools.html#a21eb62d70953182dcc2b15c4e14dd533

See step-55 for an example.
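
Concretely, something like this (untested sketch; it assumes a
Vector<float> "cellwise_errors" that was filled per locally owned cell,
e.g. by VectorTools::integrate_difference()):

   const double global_L2_error =
     VectorTools::compute_global_error(triangulation,
                                       cellwise_errors,
                                       VectorTools::L2_norm);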

On Wed, Aug 17, 2022 at 1:42 PM Wolfgang Bangerth wrote:
>
> On 8/17/22 03:10, chong liu wrote:
> >
> > I modified Step-74 based on the error_estimation part of Step-50. I found it
> > can work for the attached step-74-mpi, while it cannot work for the attached
> > step-74-mpi-error. The only difference is the location of the output command
> > as the attached figure 1 shows. This is weird. The error information states
> > that one vector is out of bounds as shown in figure 2.
>
> The location of the print message isn't the problem -- it's just that because
> you don't end the
>    std::cout << "...";
> call with std::endl, the text is put into a buffer but never printed to
> screen. You should run the program in a debugger and check where the erroneous
> access comes from. I bet that if the two programs are otherwise the same, they
> should both fail.
>
>
> > In addition, there are three points I would like to ask
> >
> >  1. The direct solver cannot work for the modified MPI program. I changed it
> > to the iterative solver (solver_cg, same as Step-40) since I am not
> > familiar with the MPI direct solver. Could you give me some suggestions on
> > the direct solver for MPI?
>
> There isn't a good option. There are some parallel direct solvers in both
> PETSc and Trilinos (for which you can probably find information by searching
> the mailing list archives), but at the end of the day, if the problem becomes
> sufficiently large, even parallel direct solvers cannot compete.
>
>
> >  2. Doesn't ConvergenceTable support parallel data output? I found that
> > the first parameter for convergencetable.write_text() is std::cout. How
> > can I modify it to pcout for MPI?
>
> I'm not sure the class supports this, but you can always put a
>    if (Utilities::MPI::this_mpi_process(...) == 0)
> in front of the place where you generate output.
>
>
> >  3. I guess the l1_norm() calculation for a vector should be modified. For
> > example, the code std::sqrt(energy_norm_square_per_cell.l1_norm()) should
> > be modified to
> > std::sqrt(Utilities::MPI::sum(estimated_error_square_per_cell.l1_norm(),
> > mpi_communicator)).
>
> Yes, something like this.
>
> Best
>   W.
>
>
> --
> Wolfgang Bangerth  email: bange...@colostate.edu
>    www: http://www.math.colostate.edu/~bangerth/


-- 
Timo Heister
http://www.math.clemson.edu/~heister/



Re: [deal.II] Re: Iterating over mesh cells in a custom order

2022-08-17 Thread Wolfgang Bangerth

On 8/17/22 13:04, Bruno Turcksin wrote:


It's possible to do it using WorkStream::run (see the documentation).
However, you need to create the ordering manually by "coloring" the 
cells. All the cells in the same color can be worked on in parallel but 
the colors are treated sequentially; we first go over all the cells in 
color 0, then in color 1, etc.


Alternatively, WorkStream works on a set of iterators. These can be the 
iterators into a

  std::set
if you choose the right comparator object that ensures that 
active_cell_iterators that need to be worked on first compare as less 
than those active_cell_iterators that need to be worked on later.
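
To make this concrete, a rough (untested) sketch for the case from the
original question, where cells with a larger cell-center z-coordinate need
to be worked on first; it assumes a DoFHandler<3> named "dof_handler", and
"worker", "copier", "scratch_data", and "copy_data" are the usual
WorkStream ingredients:

   using CellIterator = typename DoFHandler<3>::active_cell_iterator;

   // Cells higher up (larger z of the center) compare as "less", i.e.,
   // they come first in the set's iteration order:
   const auto cmp = [](const CellIterator &a, const CellIterator &b) {
     if (a->center()[2] != b->center()[2])
       return a->center()[2] > b->center()[2];
     return a < b; // unique tie-breaker so the set keeps all cells
   };

   std::set<CellIterator, decltype(cmp)> ordered_cells(cmp);
   for (const auto &cell : dof_handler.active_cell_iterators())
     ordered_cells.insert(cell);

   WorkStream::run(ordered_cells.begin(), ordered_cells.end(),
                   worker, copier, scratch_data, copy_data);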


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



[deal.II] Re: Iterating over mesh cells in a custom order

2022-08-17 Thread Bruno Turcksin
Corbin,

It's possible to do it using WorkStream::run (see the documentation).

However, you need to create the ordering manually by "coloring" the cells. 
All the cells in the same color can be worked on in parallel but the colors 
are treated sequentially; we first go over all the cells in color 0, then 
in color 1, etc.
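
For example, a rough (untested) sketch; "layer_height" is a made-up
parameter that buckets cells into depth layers, "dof_handler" is an assumed
DoFHandler<3>, and "worker", "copier", "scratch_data", and "copy_data" are
the usual WorkStream ingredients:

   using CellIterator = typename DoFHandler<3>::active_cell_iterator;

   // One "color" per depth layer: all cells within a layer may run in
   // parallel, while the layers themselves are processed in order.
   std::map<int, std::vector<CellIterator>> layers;
   for (const auto &cell : dof_handler.active_cell_iterators())
     layers[static_cast<int>(std::round(cell->center()[2] /
                                        layer_height))].push_back(cell);

   // Order the colors top to bottom (largest z first):
   std::vector<std::vector<CellIterator>> colored_iterators;
   for (auto it = layers.rbegin(); it != layers.rend(); ++it)
     colored_iterators.push_back(it->second);

   WorkStream::run(colored_iterators,
                   worker, copier, scratch_data, copy_data);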

Best,

Bruno

On Wednesday, August 17, 2022 at 2:05:00 PM UTC-4 corbin@gmail.com 
wrote:

> Hello everyone,
>
> I have a problem in which I'm propagating information downwards in depth 
> by solving the same local finite element problem on each element in an 
> adaptive grid. The only condition is that the cells above the current cell 
> must have already been worked on. 
>
> I'm looking for a way to loop over cells using something akin to 
> MeshWorker::mesh_loop or WorkStream::run, but with a custom order, namely, 
> according to the z-coordinate of each cell center. Is there a simple way to 
> do this?
>
> If not, I can always loop over the cells manually by recording a 
> depth-sorted list, but then I'd lose the multi-thread capabilities of 
> mesh_loop() and the like. Any advice would be appreciated.
>
> Thank you,
> Corbin
>



[deal.II] Iterating over mesh cells in a custom order

2022-08-17 Thread Corbin Foucart
Hello everyone,

I have a problem in which I'm propagating information downwards in depth by 
solving the same local finite element problem on each element in an 
adaptive grid. The only condition is that the cells above the current cell 
must have already been worked on. 

I'm looking for a way to loop over cells using something akin to 
MeshWorker::mesh_loop or WorkStream::run, but with a custom order, namely, 
according to the z-coordinate of each cell center. Is there a simple way to 
do this?

If not, I can always loop over the cells manually by recording a 
depth-sorted list, but then I'd lose the multi-thread capabilities of 
mesh_loop() and the like. Any advice would be appreciated.

Thank you,
Corbin



Re: [deal.II] Solving Step-74 by MPI

2022-08-17 Thread Wolfgang Bangerth

On 8/17/22 03:10, chong liu wrote:


I modified Step-74 based on the error_estimation part of Step-50. I found it 
can work for the attached step-74-mpi, while it cannot work for the attached 
step-74-mpi-error. The only difference is the location of the output command 
as the attached figure 1 shows. This is weird. The error information states 
that one vector is out of bounds as shown in figure 2.


The location of the print message isn't the problem -- it's just that because 
you don't end the

  std::cout << "...";
call with std::endl, the text is put into a buffer but never printed to 
screen. You should run the program in a debugger and check where the erroneous 
access comes from. I bet that if the two programs are otherwise the same, they 
should both fail.




In addition, there are three points I would like to ask

 1. The direct solver cannot work for the modified MPI program. I changed it
to the iterative solver (solver_cg same as Step-40) since I am not
familiar with the MPI direct solver. Could you give me some suggestions on
the direct solver for MPI?


There isn't a good option. There are some parallel direct solvers in both
PETSc and Trilinos (for which you can probably find information by searching
the mailing list archives), but at the end of the day, if the problem becomes
sufficiently large, even parallel direct solvers cannot compete.
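
If you nevertheless want to try a parallel direct solver for moderate
problem sizes, deal.II wraps MUMPS through PETSc. A minimal, untested
sketch, assuming PETSc matrices and vectors named as in step-40 and a
PETSc installation configured with MUMPS:

   SolverControl solver_control;
   PETScWrappers::SparseDirectMUMPS solver(solver_control,
                                           mpi_communicator);
   solver.solve(system_matrix, completely_distributed_solution,
                system_rhs);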




 2. Doesn't ConvergenceTable support parallel data output? I found that
the first parameter for convergencetable.write_text() is std::cout. How can
I modify it to pcout for MPI?


I'm not sure the class supports this, but you can always put a
  if (Utilities::MPI::this_mpi_process(...) == 0)
in front of the place where you generate output.
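
For example (sketch; it assumes the communicator is called
"mpi_communicator" and the table "convergence_table"):

   if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
     convergence_table.write_text(std::cout);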



 3. I guess the l1_norm() calculation for a vector should be modified. For
example, the code std::sqrt(energy_norm_square_per_cell.l1_norm()) should
be modified to
std::sqrt(Utilities::MPI::sum(estimated_error_square_per_cell.l1_norm(),
mpi_communicator)).


Yes, something like this.
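
That is, something like the following (untested sketch; it assumes each
rank fills only the entries of its locally owned cells, so that l1_norm()
yields the local sum of the error squares and Utilities::MPI::sum() adds
them up across ranks):

   const double estimated_error =
     std::sqrt(Utilities::MPI::sum(
       estimated_error_square_per_cell.l1_norm(), mpi_communicator));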

Best
 W.


--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] get_generalized_support_points() returns only a vector of size 12 ?

2022-08-17 Thread Wolfgang Bangerth



I am debugging a program using the function 'get_generalized_support_points()'
(where has_support_points() returns false, while
has_generalized_support_points() returns true). My FE system is defined as
'FESystem<3> fe(FE_Nedelec<3>(0), 2);', therefore each active cell
has 12*2 dofs. So I would also expect 'get_generalized_support_points()' to
return the support points as a vector of size 24 (of course the values would
repeat once). However, it only has 12 valid Point<3> values; the other 12 are
zero or some crazy number.

My question is: is this reasonable, or is there something wrong with my
understanding of this?


Longying & Jochen:
the latter. You assume that the array you are reading from has 12*2 entries, 
but the array really only has 12 entries and as a consequence when you output 
24 elements, the latter half is accessing invalid memory.


Generalized support points are the ones at which you need to know the values 
of a function to compute some kind of interpolant or projection. It is enough 
to know a function (which would have 2 components) at 12 points to determine 
the 24 coefficients of the interpolant because in your specific case, the two 
elements that describe the two components have support points at the same 
location.
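
To illustrate (untested sketch): the returned array has one entry per
*point*, not one per degree of freedom:

   FESystem<3> fe(FE_Nedelec<3>(0), 2);
   const std::vector<Point<3>> &points =
     fe.get_generalized_support_points();
   std::cout << "dofs per cell: " << fe.n_dofs_per_cell()          // 24
             << ", generalized support points: " << points.size()  // 12
             << std::endl;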


Best
 W.


--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Refine per direction

2022-08-17 Thread Daniel Arndt
Uclus,

Use GridOut::write_vtu or GridOut::write_vtu_with_pvtu_record as
demonstrated in step-40 instead.
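
Alternatively, if all you want to look at is the mesh itself, a one-line
(untested) sketch using a related function that, if I remember correctly,
also writes a .pvtu record tying the per-processor pieces together:

   GridOut grid_out;
   grid_out.write_mesh_per_processor_as_vtu(triangulation, "domain_grid");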

Best,
Daniel

On Wed, Aug 17, 2022 at 3:06 AM Uclus Heis wrote:

> Dear Wolfgang,
>
> Thank you very much, I could solve that.
>
> I would like to ask another question if that is ok. When I try to export the
> mesh, using a parallel::distributed::Triangulation<3> with MPI, I am not
> able to export the whole mesh. What I get is the pieces of the mesh
> corresponding to a certain process. Is it possible to export the msh file
> of the whole domain grid?
>
>
> GridGenerator::subdivided_hyper_rectangle(
>   triangulation, repetitions, Point<3>(0,0,0), Point<3>(2,1,1), false);
> triangulation.refine_global(1);
> std::ofstream out("domain_grid.msh");
> GridOut grid_out;
> grid_out.write_msh(triangulation, out);
>
>
> Thank you
> Regards
>
>
>
> El martes, 16 de agosto de 2022 a las 15:38:15 UTC+2, Wolfgang Bangerth
> escribió:
>
>> On 8/16/22 07:23, Uclus Heis wrote:
>> >
>> > GridGenerator::hyper_rectangle(triangulation, Point<3>(0,0,0),
>> > Point<3>(2,1,1), false);
>> > triangulation.refine_global(2);
>> >
>> > This code generates a rectangle with 4 cells per direction. How can I
>> > perform a refinement so that I get more elements at the long dimension
>> > of the rectangle, for example 8 cells? I cannot find a function to
>> > determine the number of refinements per direction. Could you please
>> > help me?
>> >
>>
>> Take a look at GridGenerator::subdivided_hyper_rectangle().
>>
>> Best
>> W.
>>
>> --
>> 
>> Wolfgang Bangerth email: bang...@colostate.edu
>> www: http://www.math.colostate.edu/~bangerth/
>>


[deal.II] Re: dealii and hdf5 compile problem

2022-08-17 Thread Praveen C
The detailed.log shows this

#DEAL_II_WITH_HDF5 set up with external dependencies
#HDF5_VERSION = 1.12.2
#HDF5_DIR = /Users/praveen/Applications/spack/opt/spack/darwin-monterey-m1/apple-clang-13.1.6/hdf5-1.12.2-gxrwbuzg3xom562obmqaqtu5forevio5/cmake
#HDF5_INCLUDE_DIRS = /Users/praveen/Applications/spack/opt/spack/darwin-monterey-m1/apple-clang-13.1.6/hdf5-1.12.2-gxrwbuzg3xom562obmqaqtu5forevio5/include
#HDF5_USER_INCLUDE_DIRS = /Users/praveen/Applications/spack/opt/spack/darwin-monterey-m1/apple-clang-13.1.6/hdf5-1.12.2-gxrwbuzg3xom562obmqaqtu5forevio5/include
#HDF5_LIBRARIES = hdf5-shared

and the last line above seems wrong: "hdf5-shared" looks like a CMake target
name rather than a path to a library, which would explain the
"-lhdf5-shared" linker error quoted below.

Thanks
praveen

> On 17-Aug-2022, at 11:49 AM, Praveen C wrote:
> 
> Hello
> 
> I am compiling dealii@9.4.0 myself using dependent packages installed via 
> spack.
> 
> I am facing the issue reported here
> 
> https://github.com/spack/spack/issues/32023
>
> and it is fixed here
>
> https://github.com/spack/spack/pull/32079
>
> This fix seems to remove the file
> 
> cmake/macros/macro_find_package.cmake
>
> I removed it and was able to compile dealii. But when I compile an example, I
> get this error
> 
> Consolidate compiler generated dependencies of target step-1
> [ 50%] Linking CXX executable step-1
> ld: library not found for -lhdf5-shared
> clang: error: linker command failed with exit code 1 (use -v to see 
> invocation)
> make[2]: *** [step-1] Error 1
> make[1]: *** [CMakeFiles/step-1.dir/all] Error 2
> make: *** [all] Error 2
> 
> Thanks for any help.
> Best
> praveen



Re: [deal.II] Refine per direction

2022-08-17 Thread Uclus Heis
Dear Wolfgang, 

Thank you very much, I could solve that.

I would like to ask another question if that is ok. When I try to export the
mesh, using a parallel::distributed::Triangulation<3> with MPI, I am not
able to export the whole mesh. What I get is the pieces of the mesh
corresponding to a certain process. Is it possible to export the msh file
of the whole domain grid?


GridGenerator::subdivided_hyper_rectangle(
  triangulation, repetitions, Point<3>(0,0,0), Point<3>(2,1,1), false);
triangulation.refine_global(1);
std::ofstream out("domain_grid.msh");
GridOut grid_out;
grid_out.write_msh(triangulation, out);


Thank you
Regards



El martes, 16 de agosto de 2022 a las 15:38:15 UTC+2, Wolfgang Bangerth 
escribió:

> On 8/16/22 07:23, Uclus Heis wrote:
> >
> > GridGenerator::hyper_rectangle(triangulation, Point<3>(0,0,0),
> > Point<3>(2,1,1), false);
> > triangulation.refine_global(2);
> >
> > This code generates a rectangle with 4 cells per direction. How can I
> > perform a refinement so that I get more elements at the long dimension
> > of the rectangle, for example 8 cells? I cannot find a function to
> > determine the number of refinements per direction. Could you please
> > help me?
> >
>
> Take a look at GridGenerator::subdivided_hyper_rectangle().
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



[deal.II] dealii and hdf5 compile problem

2022-08-17 Thread Praveen C
Hello

I am compiling dealii@9.4.0 myself using dependent packages installed via spack.

I am facing the issue reported here

https://github.com/spack/spack/issues/32023

and it is fixed here

https://github.com/spack/spack/pull/32079

This fix seems to remove the file

cmake/macros/macro_find_package.cmake

I removed it and was able to compile dealii. But when I compile an example, I
get this error

Consolidate compiler generated dependencies of target step-1
[ 50%] Linking CXX executable step-1
ld: library not found for -lhdf5-shared
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [step-1] Error 1
make[1]: *** [CMakeFiles/step-1.dir/all] Error 2
make: *** [all] Error 2

Thanks for any help.
Best
praveen
