[deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-28 Thread Jimmy Ho
Hi All,

I am using the Step 40 tutorial to build a parallel program using MPI. The 
code runs but generates different results when using one and multiple 
processors. After stripping it down to the bare minimum, it appears that 
when the mesh is built using GridGenerator::subdivided_hyper_rectangle 
without any subsequent refinement, the compress() function does not work 
properly, leading to a different global stiffness matrix when using multiple 
processors.

A minimal example to reproduce this is attached. When the mesh is built 
using GridGenerator::hyper_cube or 
GridGenerator::subdivided_hyper_rectangle with subsequent refinement, the 
program works as expected. When the same mesh is generated using 
GridGenerator::subdivided_hyper_rectangle without any subsequent 
refinement, some entries in the global stiffness matrix (in this case, using 
2 processors, the entries belonging to the nodes on the right-hand side of 
the mesh) do not get updated. Example outputs of the stiffness matrix when 
using one and two processors are also attached for reference.

So my question is, is this the expected behavior of the function? If so, 
why is that the case?
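
For concreteness, here is a minimal sketch (not the attached example; the 
dimensions, sizes, and names are illustrative) of the two ways of building the 
same mesh that are being compared:

#include <deal.II/base/mpi.h>
#include <deal.II/base/point.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

#include <vector>

using namespace dealii;

// Assumes MPI has already been initialized (Utilities::MPI::MPI_InitFinalize in main()).
void make_comparison_meshes()
{
  // Variant 1: hyper_cube plus global refinement -- the case that gives the
  // same assembled matrix on one and on several processors.
  parallel::distributed::Triangulation<2> tria_refined(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(tria_refined, 0., 1.);
  tria_refined.refine_global(3); // 8 x 8 cells

  // Variant 2: the same 8 x 8 mesh built directly by
  // subdivided_hyper_rectangle, without any refinement afterwards -- the case
  // in which some matrix entries reportedly do not get updated on 2 processors.
  const std::vector<unsigned int> repetitions = {8, 8};
  parallel::distributed::Triangulation<2> tria_subdivided(MPI_COMM_WORLD);
  GridGenerator::subdivided_hyper_rectangle(tria_subdivided,
                                            repetitions,
                                            Point<2>(0., 0.),
                                            Point<2>(1., 1.));
}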

Thanks a lot for any answers! Your input is highly appreciated!

Best,
Jimmy

/* ---------------------------------------------------------------------
 *
 * Copyright (C) 2009 - 2019 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE.md at
 * the top level directory of deal.II.
 *
 * ---------------------------------------------------------------------
 *
 * Author: Wolfgang Bangerth, Texas A&M University, 2009, 2010
 *         Timo Heister, University of Goettingen, 2009, 2010
 */


// @sect3{Include files}
//
// Most of the include files we need for this program have already been
// discussed in previous programs. In particular, all of the following should
// already be familiar friends:
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/timer.h>

#include <deal.II/lac/generic_linear_algebra.h>

// uncomment the following #define if you have PETSc and Trilinos installed
// and you prefer using Trilinos in this example:
#define FORCE_USE_OF_TRILINOS

// This will either import PETSc or TrilinosWrappers into the namespace
// LA. Note that we are defining the macro USE_PETSC_LA so that we can detect
// if we are using PETSc (see solve() for an example where this is necessary)
namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !defined(DEAL_II_PETSC_WITH_COMPLEX) && \
  !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
  using namespace dealii::LinearAlgebraPETSc;
#  define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
  using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
} // namespace LA

#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>

// The following, however, will be new or be used in new roles. Let's walk
// through them. The first of these will provide the tools of the
// Utilities::System namespace that we will use to query things like the
// number of processors associated with the current MPI universe, or the
// number within this universe the processor this job runs on is:
#include <deal.II/base/utilities.h>
// The next one provides a class, ConditionalOStream, that allows us to write
// code that would output things to a stream (such as std::cout)
// on every processor but throws the text away on all but one of them. We
// could achieve the same by simply putting an if statement in
// front of each place where we may generate output, but this doesn't make the
// code any prettier. In addition, the condition whether this processor should
// or should not produce output to the screen is the same every time -- and
// consequently it should be simple enough to put it into the statements that
// generate output itself.
#include <deal.II/base/conditional_ostream.h>
// After these preliminaries, here is where it becomes more interesting. As
// mentioned in the @ref distributed module, one of the fundamental truths of
// solving problems on large numbers of processors is that there is no way for
// any processor to store everything (e.g. information 

[deal.II] HOW TO OUTPUT SYSTEM MATRIX IN A FILE

2020-07-28 Thread 孙翔
Hi, I followed step-17 and built a system matrix whose type is 
PETScWrappers::MPI::SparseMatrix. I want to output it to a file after 
assembly. How should I do that? Thank you very much.

Best,

Xiang
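
A minimal sketch of one way to do this, writing the fully assembled parallel 
matrix to a single ASCII file through PETSc's viewer interface (the function 
and file names are illustrative, and MPI/PETSc are assumed to have been 
initialized as in step-17):

#include <deal.II/lac/petsc_sparse_matrix.h>

#include <petscviewer.h>

void dump_system_matrix(const dealii::PETScWrappers::MPI::SparseMatrix &system_matrix)
{
  // Open an ASCII viewer on the world communicator (assuming the matrix is
  // distributed across it) and let PETSc collect and print all entries into
  // a single file. The deal.II wrapper converts implicitly to the underlying
  // PETSc Mat object.
  PetscViewer viewer;
  PetscViewerASCIIOpen(PETSC_COMM_WORLD, "system_matrix.txt", &viewer);
  MatView(system_matrix, viewer);
  PetscViewerDestroy(&viewer);
}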



Re: [deal.II] Output cell variables as cell arrays in vtu files

2020-07-28 Thread Jimmy Ho
Hi Dr. Bangerth,

Thanks a lot for pointing me in the right direction! That's exactly what I 
was looking for!

Best,
Jimmy

On Monday, July 27, 2020 at 3:55:41 PM UTC-5, Wolfgang Bangerth wrote:
>
> On 7/27/20 2:47 PM, Jimmy Ho wrote: 
> >
> > data_out.add_data_vector( cellData , "Name" , data_out.type_cell_data );
> >
> > to force deal.ii to recognize that "cellData" should be interpreted as a
> > cell-based data. When I read the vtu files generated by this program via
> > ParaView, I found that although cellData looks like it is cell-based in the
> > visualization (i.e., it is discontinuous over the cells), the data array
> > itself is stored as a point array, so I couldn't use the
> > Cell_Data_to_Point_Data filter to get a smoothed and continuous
> > visualization. So, is there a way to write a cell-based data into cell
> > arrays in vtu files?
>
> No, all data produced by DataOut is point data. For cell-wise constants, it
> is just written in a way that makes point data look piecewise constant.
>
> You will have to find a different way to achieve what that filter does. What
> does it do? Maybe we can suggest a way to achieve the same from within
> deal.II.
>
> Best
>   W.
>
> --
> Wolfgang Bangerth  email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
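
For reference, a minimal sketch of the pattern discussed above (the names are 
illustrative): a vector with one value per active cell is attached as cell 
data, and DataOut then writes it into the .vtu file as the piecewise-constant 
point data described in the reply:

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_out.h>

#include <fstream>

template <int dim>
void write_cell_data(const dealii::DoFHandler<dim> &dof_handler,
                     const dealii::Vector<double>  &cell_data) // one entry per active cell
{
  dealii::DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);

  // Declare the vector explicitly as cell data; DataOut still stores it in
  // the output file as point data that happens to be constant on each cell.
  data_out.add_data_vector(cell_data,
                           "cellData",
                           dealii::DataOut<dim>::type_cell_data);
  data_out.build_patches();

  std::ofstream output("cell_data.vtu");
  data_out.write_vtu(output);
}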



Re: [deal.II] Output of Gauss point stress tensor

2020-07-28 Thread Muhammad Mashhood
Dear Prof. Wolfgang,
Thank you very much for the suggestions. To get a quick solution, I am opting 
for the first suggestion, i.e. creating particles at the quadrature points and 
assigning them the tensor quantities. If I obtain reasonable results for my 
project, I might also spend more time implementing the second, more elegant 
solution and add it to my code as a permanent feature. So I have updated my 
version from 9.1.1 to 9.2.0 and started writing the code for particles.

My original code is similar to step-42, but first I am using step-42 itself to 
at least accomplish the task of creating particles at the quadrature point 
locations.

I have declared the relevant member variables:

Particles::ParticleHandler<dim> particle_handler;
types::particle_index           next_unused_particle_id;
MappingQ<dim>                   mapping;

and then added the following to the member initializer list of the class 
constructor (PlasticityContactProblem::PlasticityContactProblem, after line 
833):

, mapping(1)
, particle_handler(triangulation, mapping, /*n_properties=*/dim)
, next_unused_particle_id(0)

When I run step-42, it returns the following error:

~/working_dir/step-42$ ./step-42 p1_chinese.prm
Segmentation fault (core dumped)

I tried commenting out the line ", particle_handler(triangulation, mapping, 
/*n_properties=*/dim)", and with that change the code runs normally.

I have also tried running it in debug mode, but there the program exits with 
the following output:

(gdb) r
Starting program: ~/working_dir/step-42/step-42
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
*** Call this program as <./step-42 input.prm>
[Inferior 1 (process 5082) exited with code 01]

Could there be a problem in using "particle_handler(triangulation, mapping, 
/*n_properties=*/dim)"? Thank you!
On Friday, July 17, 2020 at 1:35:53 AM UTC+2 Wolfgang Bangerth wrote:

> On 7/16/20 9:15 AM, Muhammad Mashhood wrote:
> > So far it would be enough if I have the Gauss point values only at the
> > points rather than having the complete field. I would like to access these
> > point values of stress and strain tensors in a .vtk or .pvtu file through
> > ParaView (same as I am accessing the results currently).
> > I would be grateful if you or any other user has experience in this regard
> > or knows about a relevant deal.ii tutorial dealing with the same feature.
>
> deal.II has functionality to output data on individual points if that data
> is associated with particles. I don't know how much work you want to invest
> in getting this to work for you, but here are two options:
>
> * Easy: Create a ParticleHandler object, loop over all of your quadrature
> points, and for each quadrature point you create a particle. You can then
> associate "properties" with each particle, and use Particles::DataOut to
> output these properties in the same way as you would use DataOut for field
> data. You can probably figure out how this works by looking at the draft
> step-19 tutorial program:
> https://github.com/dealii/dealii/pull/10301
>
> * More work, but more elegant: You could write a class, let's say
> QuadraturePointData::DataOut or something similar, to which you could
> describe the information of location + values without the detour of
> particles. I'm sure we'd be happy to walk you through the process of doing
> so if you wanted to go for that, and it would be a very nice addition to
> deal.II if you wanted to get it merged.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
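
For reference, a minimal sketch (the names are illustrative, and it assumes a 
triangulation, finite element, quadrature rule, mapping, and ParticleHandler 
like the members added above) of the "easy" option from the quoted reply, 
creating one particle per quadrature point:

#include <deal.II/base/quadrature.h>
#include <deal.II/fe/fe.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/mapping.h>
#include <deal.II/grid/tria.h>
#include <deal.II/particles/particle_handler.h>

template <int dim>
void create_quadrature_point_particles(
  const dealii::Triangulation<dim>        &triangulation,
  const dealii::FiniteElement<dim>        &fe,
  const dealii::Quadrature<dim>           &quadrature_formula,
  const dealii::Mapping<dim>              &mapping,
  dealii::Particles::ParticleHandler<dim> &particle_handler)
{
  dealii::FEValues<dim> fe_values(mapping,
                                  fe,
                                  quadrature_formula,
                                  dealii::update_quadrature_points);

  // Note: in an MPI-parallel run these ids would have to be made globally
  // unique, e.g. by adding a per-process offset.
  dealii::types::particle_index next_id = 0;

  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        fe_values.reinit(cell);
        for (unsigned int q = 0; q < quadrature_formula.size(); ++q)
          {
            dealii::Particles::Particle<dim> particle(
              fe_values.quadrature_point(q), // location in real space
              quadrature_formula.point(q),   // location in the reference cell
              next_id++);
            particle_handler.insert_particle(particle, cell);
          }
      }
}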



Re: [deal.II] Memory loss in system solver

2020-07-28 Thread Alberto Salvadori
Dear Wolfgang,

thank you for your guidance; I did what you suggested. Basically, everything 
in the system matrix assembly has been removed except for integrating a unit 
value on each element, with a zero right-hand side. To make the system 
solvable, Dirichlet boundary conditions (non-zero, but that should not be 
relevant) have been imposed on the whole boundary. I checked the memory 
consumption through top->RES without solving the linear system, and no issues 
arose. Again, as soon as I invoke 

BiCG.solve (this->system_matrix, distributed_incremental_displacement, 
this->system_rhs, preconditioner);

top->RES shows memory leaks. Guided by Richard's remark, I implemented the 
Trilinos solver as well. In that case there are no leaks at all, even when 
building the system matrix in its complete form.
My guess is that there must be some conflict or installation-related issue 
with PETSc. Note that the system solution was always correct; the concern is 
just about the memory loss.

I did check that no issues arise when running step-18 from the library 
examples. 

In conclusion, the problem must be there; sorry I was not able to identify it 
more precisely. Maybe Richard can be more precise, and I am at your disposal, 
of course.

Hope this helps. 

Alberto


Il giorno sabato 25 luglio 2020 alle 05:46:08 UTC+2 Wolfgang Bangerth ha 
scritto:

> On 7/24/20 3:32 AM, Alberto Salvadori wrote:
> >
> > It turns out that this code produces a memory loss, quite significant
> > since I am solving my system thousands of times, eventually inducing the
> > run to fail.
> > I am not sure what is causing this issue and how to solve it, maybe more
> > experienced users than myself can catch the problem with a snap of
> > fingers.
> >
> > I have verified the issue on my mac (Catalina) as well as on linux ubuntu
> > (4.15.0), using deal.ii 9.1.1.
> > Apparently the issue reveals only when mpi is invoked with more than one
> > processor, whereas it does not emerge when running in serial or by
> > mpirun -np 1.
>
> Alberto -- I've taken a look at the SolverBicgstab class and don't see
> anything glaringly obvious that would suggest where the memory is lost. It's
> also funny that that would only happen with more than one processor because
> the memory handling of PETSc vectors shouldn't be any different for one or
> more processors.
>
> Do you think you could come up with a simple test case that illustrates the
> problem? In your case, I'd start with the code you have and remove basically
> everything you do: replace the assembly by a function that just fills the
> matrix with the identity matrix (or something similarly simple), remove
> everything that does anything useful with the solution, remove graphical
> output, etc. The only thing that should remain is the loop that repeatedly
> solves a linear system and illustrates the memory leak, but the program no
> longer has to do anything useful (in fact, it probably shouldn't -- it
> should only exercise the one part you suspect of causing the memory leak).
>
> I think that would make finding the root cause substantially simpler!
>
> Best
> W.
>
> --
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
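
For what it's worth, a minimal sketch (names are illustrative; the matrix, 
vectors, and preconditioner are assumed to be set up as in the code above) of 
the kind of stripped-down loop suggested in the quoted reply, so that memory 
growth can be watched in top->RES:

#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/solver_control.h>

void solve_repeatedly(const dealii::PETScWrappers::MPI::SparseMatrix       &system_matrix,
                      dealii::PETScWrappers::MPI::Vector                   &solution,
                      const dealii::PETScWrappers::MPI::Vector             &system_rhs,
                      const dealii::PETScWrappers::PreconditionBlockJacobi &preconditioner,
                      const unsigned int                                    n_repetitions)
{
  for (unsigned int i = 0; i < n_repetitions; ++i)
    {
      dealii::SolverControl solver_control(1000, 1e-12);
      dealii::PETScWrappers::SolverBicgstab bicg(solver_control, MPI_COMM_WORLD);

      // The loop does nothing but solve the same system over and over;
      // resident memory should stay essentially constant across iterations.
      bicg.solve(system_matrix, solution, system_rhs, preconditioner);
    }
}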