[deal.II] Re: Is a call to compress() required after scale()?

2019-11-24 Thread vachan potluri
I was able to reproduce this behaviour with the following code (attached, 
together with the CMakeLists file). The code hangs after printing 'Scaled 
variable 0'.

Let me mention that I have used a different algorithm to obtain locally 
relevant dofs, rather than directly using the function from DoFTools. My 
algorithm is as follows:

Loop over owned interior cells
  Loop over faces
    If the neighbor cell is a ghost:
      Add all of the neighbor's dofs on this face to the relevant dofs

With this algorithm, the relevant dofs are not all of the ghost cells' dofs, 
but only those lying on a subdomain interface. This is implemented in lines 
49-69 in the file main.cc. I verified that this algorithm works correctly 
for a small mesh, so I don't believe this part is wrong.
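
For a small mesh, the check can be done by printing the index set on every 
rank and inspecting it by hand; a minimal sketch (it assumes the variables 
locally_relevant_dofs and MPI_COMM_WORLD from the code below):

// print each rank's locally relevant dofs for manual inspection
const uint rank = Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);
std::cout << "Rank " << rank << " relevant dofs: ";
locally_relevant_dofs.print(std::cout);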

// (include list reconstructed from usage; the archive stripped the
// header file names)
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/geometry_info.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/utilities.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>

#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_face.h>
#include <deal.II/fe/mapping_q1.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/generic_linear_algebra.h>

#include <array>
#include <iostream>
#include <vector>

/**
 * See README file for details
 */
using namespace dealii;
namespace LA
{
using namespace dealii::LinearAlgebraPETSc;
}

// 'uint' is used throughout; alias it explicitly in case <sys/types.h>
// is not pulled in transitively
using uint = unsigned int;

int main(int argc, char **argv)
{
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

parallel::distributed::Triangulation<2> triang(MPI_COMM_WORLD);
GridGenerator::hyper_cube(triang);
triang.refine_global(5);
const MappingQ1<2> mapping;
const uint degree = 1;
FE_DGQ<2> fe(degree);
FE_FaceQ<2> fe_face(degree);
// first dof on each face, and increment between successive dofs on a
// face, for FE_DGQ<2> with the given degree
const std::array<uint, GeometryInfo<2>::faces_per_cell>
face_first_dof{0, degree, 0, (degree+1)*degree};
const std::array<uint, GeometryInfo<2>::faces_per_cell>
face_dof_increment{degree+1, degree+1, 1, 1};

DoFHandler<2> dof_handler(triang);
dof_handler.distribute_dofs(fe);
IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
IndexSet locally_relevant_dofs;

locally_relevant_dofs = locally_owned_dofs; // initialise with owned dofs
uint face_id, face_id_neighbor, i; // face ids wrt owner and neighbor
std::vector<types::global_dof_index> dof_ids_neighbor(fe.dofs_per_cell);
for(auto &cell : dof_handler.active_cell_iterators()){
if(!(cell->is_locally_owned())) continue;
for(face_id=0; face_id<GeometryInfo<2>::faces_per_cell; face_id++){
if(cell->face(face_id)->at_boundary()) continue;
if(cell->neighbor(face_id)->is_ghost()){
// current face lies at subdomain interface
// add dofs on this face (wrt neighbor) to locally relevant dofs
cell->neighbor(face_id)->get_dof_indices(dof_ids_neighbor);
face_id_neighbor = cell->neighbor_of_neighbor(face_id);
// (loop body reconstructed: add each face dof of the neighbor,
// using the face_first_dof/face_dof_increment tables above)
for(i=0; i<fe_face.dofs_per_face; i++){
locally_relevant_dofs.add_index(
dof_ids_neighbor[face_first_dof[face_id_neighbor]
+ i*face_dof_increment[face_id_neighbor]]);
}
}
} // loop over faces
} // loop over owned cells

std::array<LA::MPI::Vector, 4> vecs;
std::array<LA::MPI::Vector, 4> gh_vecs;
LA::MPI::Vector scaler;
for(uint var=0; var<4; var++){
vecs[var].reinit(locally_owned_dofs, MPI_COMM_WORLD);
gh_vecs[var].reinit(locally_owned_dofs, locally_relevant_dofs, MPI_COMM_WORLD);
}
scaler.reinit(locally_owned_dofs, MPI_COMM_WORLD);

for(uint i: locally_owned_dofs){
scaler[i] = 1.0*i;
}
std::vector<types::global_dof_index> dof_ids(fe.dofs_per_cell);

// (pcout was missing from the archived snippet; declared here so that
// the printing below compiles)
ConditionalOStream pcout(
std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0);

// setting ops
for(auto &cell : dof_handler.active_cell_iterators()){
if(!(cell->is_locally_owned())) continue;

cell->get_dof_indices(dof_ids);

pcout << "\tCell " << cell->index() << "\n";
pcout << "\t\tSetting\n";
for(uint var=0; var<4; var++){
for(uint i: dof_ids){
vecs[var][i] = 1.0*i;
}
vecs[var].compress(VectorOperation::insert);
}

// addition ops
pcout << "\t\tAdding\n";
for(uint var=0; var<4; var++){
for(uint i: dof_ids){
vecs[var][i] += 1.0*i;
}
vecs[var].compress(VectorOperation::add);
}

// more ops
pcout << "\t\tMore additions\n";
for(uint var=0; var<4; var++){
for(uint i: dof_ids){
vecs[var][i] += -5.0*i;
}
vecs[var].compress(VectorOperation::add);
}
} // loop over owned cells
// scaling and communicating
pcout << "Scaling and communicating\n";
for(uint var=0; var<4; var++){
vecs[var].scale(scaler);
pcout << "Scaled variable " << var << "\n";
gh_vecs[var] = vecs[var];
pcout << "Communicated variable " << var << "\n";
}
pcout << "Completed all\n";

return 0;
}


[deal.II] projecting function onto TrilinosWrappers::MPI::BlockVector

2019-11-24 Thread Konrad Simon
Hi all,

I am having a little problem with projecting a function onto (parts of) FE 
spaces. I am getting the error

The violated condition was:
    (dynamic_cast<const parallel::Triangulation<dim, spacedim> *>(
        &(dof.get_triangulation())) == nullptr)
Additional information:
    You are trying to use functionality in deal.II that is currently not 
implemented. In many cases, this indicates that there simply didn't appear 
much of a need for it, or that the author of the original code did not have 
the time to implement a particular case. If you hit this exception, it is 
therefore worth the time to look into the code to find out whether you may 
be able to implement the missing functionality. If you do, please consider 
providing a patch to the deal.II development sources 
(see the deal.II website on how to contribute).

I of course get what this error message suggests, and I am wondering if I 
could fix this somehow. The funny thing is that when I step through the 
code in debug mode, I see that exactly the cast above fails. Funnily, the 
cast dynamic_cast<const parallel::distributed::Triangulation<3> *>( 
&(dof.get_triangulation())) works.

Now I am asking myself: why? Am I missing something here?
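
As far as I can tell, the check boils down to something like the following 
sketch (reconstructed from the error message; the helper function and the 
exact template arguments are illustrative, not the actual deal.II source):

#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>

using namespace dealii;

// Sketch: VectorTools::project() asserts that the DoFHandler is NOT
// built on a parallel triangulation, i.e. that the cast below yields
// nullptr.
bool project_would_refuse(const DoFHandler<3> &dof)
{
  const auto *ptria =
    dynamic_cast<const parallel::distributed::Triangulation<3> *>(
      &(dof.get_triangulation()));
  // On a parallel::distributed::Triangulation the cast succeeds
  // (returns non-null), the condition "== nullptr" is violated, and
  // ExcNotImplemented is raised.
  return ptria != nullptr;
}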

Best regards,
Konrad


This is my function:

{
  TrilinosWrappers::MPI::BlockVector locally_relevant_exact_solution;
  locally_relevant_exact_solution.reinit(owned_partitioning,
                                         mpi_communicator);

  { // write sigma to Nedelec_0 space
    // Quadrature used for projection
    QGauss<3> quad_rule(3);

    // Setup function (exact solution for first FE ---> Nedelec FE)
    ExactSolutionLin_A_curl exact_sigma(parameter_filename);

    DoFHandler<3> dof_handler_fake(triangulation);
    dof_handler_fake.distribute_dofs(fe.base_element(0));

    if (parameters.renumber_dofs)
      {
        DoFRenumbering::Cuthill_McKee(dof_handler_fake);
      }

    AffineConstraints<double> constraints_fake;
    constraints_fake.clear();
    DoFTools::make_hanging_node_constraints(dof_handler_fake,
                                            constraints_fake);
    constraints_fake.close();

    VectorTools::project(dof_handler_fake,
                         constraints_fake,
                         quad_rule,
                         exact_sigma,
                         locally_relevant_exact_solution.block(0));

    dof_handler_fake.clear();
  }

  { // write u to Raviart-Thomas_0 space
    // Quadrature used for projection
    QGauss<3> quad_rule(3);

    // Setup function (exact solution for second FE ---> Raviart-Thomas FE)
    ExactSolutionLin exact_u(parameter_filename);

    DoFHandler<3> dof_handler_fake(triangulation);
    dof_handler_fake.distribute_dofs(fe.base_element(1));

    if (parameters.renumber_dofs)
      {
        DoFRenumbering::Cuthill_McKee(dof_handler_fake);
      }

    AffineConstraints<double> constraints_fake;
    constraints_fake.clear();
    DoFTools::make_hanging_node_constraints(dof_handler_fake,
                                            constraints_fake);
    constraints_fake.close();

    VectorTools::project(dof_handler_fake,
                         constraints_fake,
                         quad_rule,
                         exact_u,
                         locally_relevant_exact_solution.block(1));

    dof_handler_fake.clear();
  }
}



[deal.II] deal.II Newsletter #102

2019-11-24 Thread Rene Gassmoeller
Hello everyone!

This is deal.II newsletter #102.
It automatically reports recently merged features and discussions about the 
deal.II finite element library.


## Below you find a list of recently proposed or merged features:

#9088: Suppress warning in python-bindings (proposed by masterleinad) 
https://github.com/dealii/dealii/pull/9088

#9087: Restrict some MPI tests requiring p4est (proposed by masterleinad) 
https://github.com/dealii/dealii/pull/9087

#9086: Minor consistency improvements in tensor.h (proposed by masterleinad) 
https://github.com/dealii/dealii/pull/9086

#9085: Avoid C-style workaround in DataOutBase (proposed by masterleinad; 
merged) https://github.com/dealii/dealii/pull/9085

#9083: Make sure all .h files include config.h. (proposed by bangerth) 
https://github.com/dealii/dealii/pull/9083

#9082: typedef LinearAlgebraDealII::BlockSparseMatrix is defined (proposed by 
rezarastak; merged) https://github.com/dealii/dealii/pull/9082

#9081: Comment MSVC 2019 DEAL_II_HAVE_CXX14_CONSTEXPR_CAN_CALL_NONCONSTEXPR 
failure (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/9081

#9080: Fix compiling with MSVC (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/9080

#9079: Avoid warnings in python-bindings (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/9079

#9078: Step 70 (proposed by luca-heltai) 
https://github.com/dealii/dealii/pull/9078

#9077: Coupling between non matching parallel distributed objects (proposed by 
luca-heltai) https://github.com/dealii/dealii/pull/9077

#9076: Particle interpolation sparsity matrix (proposed by luca-heltai) 
https://github.com/dealii/dealii/pull/9076

#9075: Get set particle positions (proposed by luca-heltai) 
https://github.com/dealii/dealii/pull/9075

#9074: Extract index sets from particle handlers (proposed by luca-heltai) 
https://github.com/dealii/dealii/pull/9074

#9073: Generate particles on support or quadrature points on other grid 
(proposed by luca-heltai) https://github.com/dealii/dealii/pull/9073

#9072: Add simple output for particles. (proposed by luca-heltai) 
https://github.com/dealii/dealii/pull/9072

#9071: Map dofs to support points (proposed by luca-heltai) 
https://github.com/dealii/dealii/pull/9071

#9070: Extract dofs per component (proposed by luca-heltai) 
https://github.com/dealii/dealii/pull/9070

#9069: Insert particles globally. (proposed by luca-heltai) 
https://github.com/dealii/dealii/pull/9069

#9067: hp::Refinement::predict_error: Detangle assertions. (proposed by 
marcfehling) https://github.com/dealii/dealii/pull/9067

#9066: Fix ADOL-C tests (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/9066

#9065: Use std::array in Tensor (proposed by masterleinad) 
https://github.com/dealii/dealii/pull/9065

#9063: Avoid workaround for zero-dimensional C-style arrays (proposed by 
masterleinad; merged) https://github.com/dealii/dealii/pull/9063

#9062: Avoid ambiguous unrolled_to_component_indices call (proposed by 
masterleinad; merged) https://github.com/dealii/dealii/pull/9062

#9061: Doc: Minor changes to hp::Refinement. (proposed by marcfehling; merged) 
https://github.com/dealii/dealii/pull/9061

#9059: Fix hp_cell_weights_03/04 (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/9059

#9058: Fix compiling python-bindings (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/9058

#9057: Fix ADOL-C warnings (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/9057

#9056: Clarify documentation. (proposed by bangerth; merged) 
https://github.com/dealii/dealii/pull/9056

#9055: Add python wrappers for MappingQGeneric (proposed by agrayver) 
https://github.com/dealii/dealii/pull/9055

#9054: Defaulted copy constructor for Function (proposed by masterleinad; 
merged) https://github.com/dealii/dealii/pull/9054

#9051: Fix compiling with clang-3.7.1 (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/9051

#9028: Make indent script work for python bindings (proposed by agrayver; 
merged) https://github.com/dealii/dealii/pull/9028


## And this is a list of recently opened or closed discussions:

#9084: MatrixFree: usage of FE_Q and mapping_update_flags_inner/boundary_faces 
(opened) https://github.com/dealii/dealii/issues/9084

#9068: Reserve step-70 (opened) https://github.com/dealii/dealii/issues/9068

#9064: installed dealii on our school's cluster,get error? (opened) 
https://github.com/dealii/dealii/issues/9064

#9060: Compilation on MSVS 2019 dealii.master (opened and closed) 
https://github.com/dealii/dealii/issues/9060


A list of all major changes since the last release can be found at 
https://www.dealii.org/developer/doxygen/deal.II/changes_after_8_5_0.html.


Thanks for being part of the community!


Let us know about questions, problems, bugs or just share your experience by 
writing to dealii@googlegroups.com, or by opening 

Re: [deal.II] small strain (additive strain decomposition) elastoplastic code

2019-11-24 Thread Muhammad Mashhood
Great idea Prof. Bangerth. Thanks! 

On Thursday, November 21, 2019 at 7:02:48 PM UTC+1, Wolfgang Bangerth wrote:
>
> On 11/21/19 10:41 AM, Muhammad Mashhood wrote: 
> > Hi! I am trying to set up a quasi-static thermoelastoplastic code 
> > using step-26 (thermal analysis) and step-42 (elastoplastic dynamics). 
> > But there is one limitation after coupling both physics with these two 
> > codes: when the thermal or mechanical loading is removed (after 
> > certain cells of the domain have already plasticized), the body comes 
> > back to the original state of zero displacement or zero strain 
> > everywhere. In summary, it does not store the plastic strain at the 
> > end. 
> > Does anyone have an idea whether there is already another deal.II code 
> > for small-strain elastoplasticity (additive decomposition approach) 
> > that could be coupled with thermal analysis? 
> > It would be a nice addition to my code and would speed up my project. 
> > Thank you in advance! 
>
> I don't know whether any plasticity codes are publicly available, but 
> you might want to use the list of publications based on deal.II to see 
> what you can find and whether the authors are willing to share their 
> codes with you: 
>    https://dealii.org/publications.html#list 
> There is a search box for publications that lists at least 20 
> publications if you enter "plast". 
>
> I suppose you've already found the code gallery? 
>    https://dealii.org/code-gallery.html 
> There are also some codes that might be interesting to you. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



[deal.II] Is a call to compress() required after scale()?

2019-11-24 Thread vachan potluri
Hello,

I am facing a weird problem. At one point in my code, I call 
PETScWrappers::VectorBase::scale() on a few distributed vectors. 
Subsequently, I assign these vectors to their ghosted versions for parallel 
communication. When I launch the code with 2 or 4 processes, it works fine, 
but with 3 processes the code halts after the scaling operations and before 
the first assignment. I am limited to 4 processes.
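
In code, the pattern is roughly the following (a minimal sketch; the 
function and vector names are illustrative, not the actual code):

#include <deal.II/lac/petsc_vector.h>

using namespace dealii;

// 'owned' is a writable, locally owned vector; 'ghosted' is its
// read-only ghosted counterpart; 'scaler' holds per-entry factors.
void scale_and_communicate(PETScWrappers::MPI::Vector &owned,
                           PETScWrappers::MPI::Vector &ghosted,
                           const PETScWrappers::MPI::Vector &scaler)
{
  owned.scale(scaler); // entry-wise multiplication of the owned entries
  ghosted = owned;     // ghost-value exchange; the 3-process run halts
                       // between these two statements
}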

   1. Is a compress() required after scale()? With what operation as 
   argument?
   2. Why does this behaviour occur only when 3 processes are launched? Has 
   anyone experienced this before?

Thanks
