Re: [deal.II] some question about the move mesh function

2019-10-07 Thread huyanzhuo
Thanks for your reply!
I got the problem solved. My mistake was a misuse
of the communicate_locally_moved_vertices() function.
I got the "shallow holes" because I moved the mesh while it was already deformed,
so the MPI processes did not know the correct positions of the ghost-layer cells.
I solved the problem by negating the accumulated displacement vector to return to the
original mesh, on which I then refine and coarsen in a later step. After the
refinement is finished, I simply add the accumulated displacement vector again and there are
no "shallow holes" any more.
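
A minimal sketch of this workflow (the helper move_mesh(), in the spirit of step-18,
and the vectors total_displacement / vertex_locally_moved are assumptions, not code
from this thread):

  // Undo the accumulated deformation so refinement acts on the original mesh:
  Vector<double> minus_displacement(total_displacement);
  minus_displacement *= -1.;
  move_mesh(minus_displacement);

  // Refine/coarsen the undeformed mesh and transfer the displacement field to it:
  triangulation.execute_coarsening_and_refinement();
  // ... interpolate total_displacement onto the refined mesh ...

  // Re-apply the accumulated displacement on the refined mesh:
  move_mesh(total_displacement);

  // Tell the other MPI ranks which locally owned vertices were moved, so that
  // the vertices of ghost cells end up at the same positions on all ranks:
  triangulation.communicate_locally_moved_vertices(vertex_locally_moved);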


On Saturday, October 5, 2019 at 11:34:18 PM UTC+8, Daniel Arndt wrote:
>
> huyanzhou,
>
> I try to use communicate_locally_moved_vertices; it fixes the 
>> drifting apart of the mesh parts that belong to different processes, but there still remain some 
>> problems. 
>>
> [...]
>> You can see that the mesh is consistent on the boundary, but not smooth in 
>> the interior of the mesh; there are some shallow holes on the surface.
>>
> Can you explain more precisely what doesn't work as expected? Is this a 
> problem you only see when running with multiple MPI processes? 
> What is the "inner mesh" for you? Do you refer to faces with children? Can 
> you point out which vertices you think are set wrongly? 
>  
>
>> my code is as follows 
>>
> [...]
>>
>
> It seems that your idea is to first move vertices according to some 
> displacement function and then to "correct" hanging vertices to be placed 
> inside the neighboring face.
> That looks OK for standard orientation but might be problematic otherwise. 
> You should be able to achieve the same effect when you only apply hanging 
> node constraints to your
> displacement vector.
>
> Best,
> Daniel
>
>
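
A minimal sketch of the alternative Daniel suggests above (apply hanging node
constraints to the displacement field before moving any vertices). The names
dof_handler, triangulation and displacement, and a vector-valued FE_Q(1)
displacement field, are assumptions, not code from this thread:

  AffineConstraints<double> hanging_node_constraints;
  DoFTools::make_hanging_node_constraints(dof_handler, hanging_node_constraints);
  hanging_node_constraints.close();

  // Force the displacement of hanging nodes onto the coarse neighboring face:
  hanging_node_constraints.distribute(displacement);

  // Move every vertex once by its nodal displacement (as in the move_mesh()
  // function of step-18):
  std::vector<bool> vertex_touched(triangulation.n_vertices(), false);
  for (const auto &cell : dof_handler.active_cell_iterators())
    for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_cell; ++v)
      if (!vertex_touched[cell->vertex_index(v)])
        {
          vertex_touched[cell->vertex_index(v)] = true;
          for (unsigned int d = 0; d < dim; ++d)
            cell->vertex(v)[d] += displacement(cell->vertex_dof_index(v, d));
        }

On a parallel::distributed::Triangulation one would restrict this loop to cells
that touch locally owned vertices and follow it with
communicate_locally_moved_vertices(), as discussed above.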

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/a300b5ff-fadf-4096-ae87-53d88a29536c%40googlegroups.com.


Re: [deal.II] Implementing variable thermal diffusivity in Step-26

2019-10-07 Thread Daniel Arndt
Muhammad,

Dear Colleagues,
> I am working on a thermal analysis for which deal.II
> Step-26 is being considered. In my case, the thermal diffusivity
> (the coefficient multiplying the Laplace matrix) is temperature dependent
> and hence varies in every cell (or node). That is why I want to make the
> current Step-26 code flexible enough to accommodate this feature.
> I have two questions regarding this:
>
> *1) *Instead of directly creating the global mass and Laplace matrices, I
> am assembling the cell_matrix as well as the cell_rhs with the following simple
> code (where, to keep it simple, the source term and diffusivity values
> are not yet present):
> [...]
> The issue is that it takes a very long time to run the 
> *VectorTools::point_value(dof_handler_temperature,
> old_solution_thermal, cell_temperature->vertex(j),
> temperature_old_solution_spatial_point);* call for evaluating 
> *temperature_old_solution_spatial_point*.
> But the results match those of the original Step-26 code.
> However, as a more efficient alternative, I tried to use
> *temperature_old_solution_qpoint* in cell_rhs instead of
> *temperature_old_solution_spatial_point*, which is very fast to compute
> but gives me the correct solution only at the first time step; after that the
> difference between this *new *and the *old solution* (from the original
> Step-26 code) keeps increasing, i.e. the solution of the new modified code kind of
> lags behind the solution of the original (old) code, as shown in the result
> snapshots in the attachment.
>
> Any suggestion to improve the efficiency of the current approach, or any
> correction (in case I am mistaken or missing something), would be more than
> welcome.
>

Yes, VectorTools::point_value is very slow because it searches in the whole
mesh each time it is called. Assuming that you use the same DoFHandler for
assembling the system_matrix and the old_solution_thermal, i.e.
dof_handler_temperature,
you should use FEValues::get_function_values()
https://www.dealii.org/current/doxygen/deal.II/classFEValuesBase.html#a357b422e374f2f2207af3512093f3907
instead. This function gives you the function values evaluated at every
quadrature point.
Currently, you are asking for the value at the vertices of the cell
instead. This can only possibly work if the number of vertices and the
number of degrees of freedom per cell coincide, i.e. for FE_Q(1) elements.
Using FEValues::get_function_values() you also don't need the loop with
running index j for assembling the right-hand side term.
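
A minimal sketch of this (variable names follow the original post and step-26;
everything else is an assumption):

  FEValues<dim> fe_values(fe, quadrature_formula,
                          update_values | update_gradients | update_JxW_values);
  std::vector<double> old_temperature_values(quadrature_formula.size());

  for (const auto &cell : dof_handler_temperature.active_cell_iterators())
    {
      fe_values.reinit(cell);

      // Old temperature at all quadrature points of this cell, in one call:
      fe_values.get_function_values(old_solution_thermal, old_temperature_values);

      // old_temperature_values[q] can now be used to evaluate the
      // temperature-dependent diffusivity while assembling cell_matrix/cell_rhs.
    }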


> 2) Rather than building and assembling the local cell matrix and cell rhs,
> is it a more efficient and flexible way in deal.II to directly modify the
> global Laplace matrix and system rhs in Step-26 for variable diffusivity
> coefficients at the corresponding dof entries? (Hopefully the
> dynamic sparsity pattern and preconditioning etc. would not disturb the
> indexing of dofs in this case.)
>

No, just modify the assembly of the local matrix to take the coefficient into
account.
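
For instance, a sketch of the left-hand-side contribution of step-26 with a
temperature-dependent coefficient (diffusivity() is a hypothetical user-supplied
function; time_step and theta as in step-26):

  for (unsigned int q = 0; q < n_q_points; ++q)
    {
      const double kappa = diffusivity(old_temperature_values[q]);

      for (unsigned int i = 0; i < dofs_per_cell; ++i)
        for (unsigned int j = 0; j < dofs_per_cell; ++j)
          cell_matrix(i, j) +=
            (fe_values.shape_value(i, q) * fe_values.shape_value(j, q)   // mass term
             + time_step * theta * kappa *                               // kappa(T) * Laplace term
                 fe_values.shape_grad(i, q) * fe_values.shape_grad(j, q)) *
            fe_values.JxW(q);
    }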

Best,
Daniel

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/CAOYDWb%2BjD9KGb116ggqYFYnNAm7OXhHK-h%3D%3DZ0fYCKxMCUN3QQ%40mail.gmail.com.


[deal.II] Re: error during installation with spack on CentOS7

2019-10-07 Thread Alberto Salvadori

For further information, in case it can be of use:
it turned out that the very same error in the slepc installation also occurs with 
gcc@7.4.0 .
This surprises me a bit, because I did install deal.II with that compiler, 
via Spack, on an Ubuntu machine rather than on CentOS7.
The issue has been reported on the Spack GitHub.
Thank you,

Alberto

On Friday, October 4, 2019 at 12:50:24 PM UTC+2, Alberto Salvadori wrote:
>
> Dear community
>
> I apologize for bothering you at such length about installing deal.II on a Linux 
> machine running CentOS7. I am running into quite a large number of issues, 
> perhaps related to the gcc compiler(?). 
> The latest one, which I have been unable to solve so far, relates to slepc. I 
> wonder whether any of you has had a similar problem and, if so, could point me to its 
> solution.
>
> Here is the outcome of installation via spack:
>
> *==>* *Installing* *slepc*
>
> *==>* Searching for binary cache of slepc
>
> *==>* Warning: No Spack mirrors are currently configured
>
> *==>* No binary for slepc found: installing from source
>
> *==>* Fetching http://slepc.upv.es/download/distrib/slepc-3.12.0.tar.gz
>
>  
> 100.0%
>
> *==>* Staging archive: 
> /tmp/deal.ii/spack-stage/slepc-3.12.0-5md6u45rynyaqtcta4e5dmecqhkp2jkr/slepc-3.12.0.tar.gz
>
> *==>* Created stage in 
> /tmp/deal.ii/spack-stage/slepc-3.12.0-5md6u45rynyaqtcta4e5dmecqhkp2jkr
>
> *==>* No patches needed for slepc
>
> *==>* Building slepc [Package]
>
> *==>* Executing phase: 'install'
>
> *==>* Error: ProcessError: Command exited with status 1:
>
> './configure' 
> '--prefix=/home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/slepc-3.12.0-5md6u45rynyaqtcta4e5dmecqhkp2jkr'
>  
> '--with-arpack-dir=/home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/arpack-ng-3.7.0-i5fx7mowpxx7acbasidsfc4r3owcd2vx/lib'
>  
> '--with-arpack-flags=-lparpack,-larpack'
>
> See build log for details:
>
>   
> /tmp/deal.ii/spack-stage/slepc-3.12.0-5md6u45rynyaqtcta4e5dmecqhkp2jkr/spack-build-out.txt
>
>
> and the log (s):
>
> ==> Executing phase: 'install'
>
> ==> [2019-10-03-20:48:03.194513] './configure' 
> '--prefix=/home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/slepc-3.12.0-5md6u45rynyaqtcta4e5dmecqhkp2jkr'
>  
> '--with-arpack-dir=/home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/arpack-ng-3.7.0-i5fx7mowpxx7acbasidsfc4r3owcd2vx/lib'
>  
> '--with-arpack-flags=-lparpack,-larpack'
>
> Checking environment... done
>
> Checking PETSc installation... 
>
> ERROR: Unable to link with PETSc
>
> ERROR: See "installed-arch-linux2-c-opt/lib/slepc/conf/configure.log" file 
> for details
>
>
>
> 
>
> Starting Configure Run at Thu Oct  3 20:48:03 2019
>
> Configure Options: 
> --prefix=/home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/slepc-3.12.0-5md6u45rynyaqtcta4e5dmecqhkp2jkr
>  
> --with-arpack-dir=/home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/arpack-ng-3.7.0-i5fx7mowpxx7acbasidsfc4r3owcd2vx/lib
>  
> --with-arpack-flags=-lparpack,-larpack
>
> Working directory: 
> /tmp/deal.ii/spack-stage/slepc-3.12.0-5md6u45rynyaqtcta4e5dmecqhkp2jkr/spack-src
>
> Python version:
>
> 2.7.16 (default, Oct  3 2019, 20:40:41) 
>
> [GCC 9.2.0]
>
> make: /usr/bin/gmake
>
> PETSc source directory: 
> /home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/petsc-3.12.0-7b3mdm63ap32riorneym2mtcmwjlb63s
>
> PETSc install directory: 
> /home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/petsc-3.12.0-7b3mdm63ap32riorneym2mtcmwjlb63s
>
> PETSc version: 3.12.0
>
> SLEPc source directory: 
> /tmp/deal.ii/spack-stage/slepc-3.12.0-5md6u45rynyaqtcta4e5dmecqhkp2jkr/spack-src
>
> SLEPc install directory: 
> /home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/slepc-3.12.0-5md6u45rynyaqtcta4e5dmecqhkp2jkr
>
> SLEPc version: 3.12.0
>
>
> 
>
> Checking PETSc installation...
>
> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
>
> Running command:
>
> cd /tmp/slepc-7TxU8j;/usr/bin/gmake checklink TESTFLAGS=""
>
> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
>
> #include "petscsnes.h"
>
> int main() {
>
> Vec v; Mat m; KSP k;
>
> PetscInitializeNoArguments();
>
> VecCreate(PETSC_COMM_WORLD,&v);
>
> MatCreate(PETSC_COMM_WORLD,&m);
>
> KSPCreate(PETSC_COMM_WORLD,&k);
>
> return 0;
>
> }
>
> /home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/openmpi-3.1.4-4lzhe2gtz3nzhffn6efu2fzgochphcix/bin/mpicc
>  
> -o checklink.o -c -fPIC   
> -I/home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/petsc-3.12.0-7b3mdm63ap32riorneym2mtcmwjlb63s/include
>  
> -I/home/deal.ii/spack/opt/spack/linux-centos7-ivybridge/gcc-9.2.0/hypre-2.18.0-dbexk2cnwvnsjd5fm6ltw7o7q66ik3hy/include
>  
> -I/home/deal.ii/spack/opt/spack/li

[deal.II] deal.II Newsletter #96

2019-10-07 Thread Rene Gassmoeller
Hello everyone!

This is deal.II newsletter #96.
It automatically reports recently merged features and discussions about the 
deal.II finite element library.


## Below you find a list of recently proposed or merged features:

#8892: Replace std::bind in documentation (proposed by masterleinad) 
https://github.com/dealii/dealii/pull/8892

#8891: Remove std::bind from examples (proposed by masterleinad) 
https://github.com/dealii/dealii/pull/8891

#8890: PERL_PATH removed from options.dox (proposed by rezarastak; merged) 
https://github.com/dealii/dealii/pull/8890

#8889: Small rewording of doc for compute_point_locations() (proposed by 
rezarastak) https://github.com/dealii/dealii/pull/8889

#: Remove some uses of std::bind from thread_management.h (proposed by 
masterleinad; merged) https://github.com/dealii/dealii/pull/

#8887: Avoid including thread_management.h (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/8887

#8886: Minor documentation fix in P::D::SolutionTransfer (proposed by 
rezarastak; merged) https://github.com/dealii/dealii/pull/8886

#8885: FEInterfaceValues: gradients and hessians (proposed by tjhei) 
https://github.com/dealii/dealii/pull/8885

#8882: Overlap communication and computation in CUDA cell_loop [WIP] (proposed 
by peterrum) https://github.com/dealii/dealii/pull/8882

#8881: Fix behavior for CUDA-aware MPI (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/8881

#8880: Synchronize CUDA device in MPI ghost exchange (proposed by kronbichler; 
merged) https://github.com/dealii/dealii/pull/8880

#8879: Fix using MemorySpace::Host with la_parallel_vector.templates.h and 
CUDA-aware MPI (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/8879

#8878: Complex integrate_difference() (proposed by dangars) 
https://github.com/dealii/dealii/pull/8878

#8876: Fix setting host compiler (proposed by masterleinad; merged) 
https://github.com/dealii/dealii/pull/8876

#8875: Bypass vector copy in MatrixFree::cell_loop with MPI (proposed by 
kronbichler; merged) https://github.com/dealii/dealii/pull/8875

#8871: Update step-64 (proposed by peterrum; merged) 
https://github.com/dealii/dealii/pull/8871

#8847: kinematics.h: fix always_inline (proposed by tjhei; merged) 
https://github.com/dealii/dealii/pull/8847

#8813:  Process indices in ComputeIndexOwner by intervals (proposed by 
kronbichler; merged) https://github.com/dealii/dealii/pull/8813

#8736: doxygen: rewrite set_canonical_doxygen.py in Perl (proposed by 
fvanmaele; merged) https://github.com/dealii/dealii/pull/8736


## And this is a list of recently opened or closed discussions:

#8884: FEInterfaceValues (meta) (opened) 
https://github.com/dealii/dealii/issues/8884

#8883: make_hanging_node_constraints() with hp::DoFHandler and FE_Nothing 
(opened) https://github.com/dealii/dealii/issues/8883

#8877: Setting all cells to artificial prior to refinement (opened and closed) 
https://github.com/dealii/dealii/issues/8877

#8873: CMake configuration with CUDA/MPI code (closed) 
https://github.com/dealii/dealii/issues/8873

#8785: Speed up ComputeIndexOwner::Dictionary (closed) 
https://github.com/dealii/dealii/issues/8785

#8757: intel 19.0.5 always inline warning (closed) 
https://github.com/dealii/dealii/issues/8757

#8293: Code optimizations for >100k MPI ranks (closed) 
https://github.com/dealii/dealii/issues/8293


A list of all major changes since the last release can be found at 
https://www.dealii.org/developer/doxygen/deal.II/changes_after_8_5_0.html.


Thanks for being part of the community!


Let us know about questions, problems, bugs or just share your experience by 
writing to dealii@googlegroups.com, or by opening issues or pull requests at 
https://www.github.com/dealii/dealii.
Additional information can be found at https://www.dealii.org/.

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/5d9b6124.1c69fb81.ececd.e884SMTPIN_ADDED_MISSING%40gmr-mx.google.com.


Re: [deal.II] Arpack solver reports 4294967295 iterations before converging

2019-10-07 Thread Wolfgang Bangerth
On 10/7/19 5:32 AM, 'Maxi Miller' via deal.II User Group wrote:
> I implemented the changes in step-36 suggested in the tutorial for using 
> ARPACK instead of PETSc (as shown in the attachment). Now, the solver reports 
> convergence in 4294967295 iterations (with the correct result), so I assume an 
> overflow/underflow bug. Is that something I should report in the GitHub 
> issues?

4294967295 = -1 when expressed as an unsigned int. This usually indicates some kind 
of error condition. It would be worth figuring out where this number is 
generated, because it is pretty clear that it is not the actual number of 
iterations performed.
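
For reference, a tiny standalone example of where that number comes from
(deal.II uses exactly this value, numbers::invalid_unsigned_int, as an
"invalid/unset" marker):

  #include <iostream>
  #include <limits>

  int main()
  {
    // -1 converted to a 32-bit unsigned int wraps around to the maximum value:
    const unsigned int invalid_marker = static_cast<unsigned int>(-1);
    std::cout << invalid_marker << '\n';                           // 4294967295
    std::cout << std::numeric_limits<unsigned int>::max() << '\n'; // 4294967295
  }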

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/d481a81f-df0d-88bc-46a2-0cd4d177566d%40colostate.edu.


[deal.II] Arpack solver reports 4294967295 iterations before converging

2019-10-07 Thread 'Maxi Miller' via deal.II User Group
I implemented the changes in step-36 suggested in the tutorial for using 
ARPACK instead of PETSc (as shown in the attachment). Now, the solver 
reports convergence in 4294967295 iterations (with the correct result), so 
I assume an overflow/underflow bug. Is that something I should report 
in the GitHub issues?
Thanks!

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/c71acebf-2574-4bff-bb24-e34be30bcaa4%40googlegroups.com.
/* -
 *
 * Copyright (C) 2009 - 2019 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE.md at
 * the top level directory of deal.II.
 *
 * -

 *
 * Authors: Toby D. Young, Polish Academy of Sciences,
 *  Wolfgang Bangerth, Texas A&M University
 */

// @sect3{Include files}

// As mentioned in the introduction, this program is essentially only a
// slightly revised version of step-4. As a consequence, most of the following
// include files are as used there, or at least as used already in previous
// tutorial programs:
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

// IndexSet is used to set the size of each PETScWrappers::MPI::Vector:
#include 

// PETSc appears here because SLEPc depends on this library:
#include 
#include 

// And then we need to actually import the interfaces for solvers that SLEPc
// provides:
#include 

#include 
#include 
#include 
#include 
#include 
#include 

// We also need some standard C++:
#include 
#include 

// Finally, as in previous programs, we import all the deal.II class and
// function names into the namespace into which everything in this program
// will go:
namespace Step36
{
	using namespace dealii;

	// @sect3{The EigenvalueProblem class template}

	// Following is the class declaration for the main class template. It looks
	// pretty much exactly like what has already been shown in step-4:
	template <int dim>
	class EigenvalueProblem
	{
	public:
		EigenvalueProblem(const std::string &prm_file);
		void run();

	private:
		void make_grid_and_dofs();
		void assemble_system();
		unsigned int solve();
		void output_results() const;

		Triangulation<dim> triangulation;
		FE_Q<dim>          fe;
		DoFHandler<dim>    dof_handler;

		// With these exceptions: For our eigenvalue problem, we need both a
		// stiffness matrix for the left hand side as well as a mass matrix for
		// the right hand side. We also need not just one solution function, but a
		// whole set of these for the eigenfunctions we want to compute, along
		// with the corresponding eigenvalues:
		//	PETScWrappers::SparseMatrix stiffness_matrix, mass_matrix;
		//	std::vector eigenfunctions;
		//	std::vector eigenvalues;
		SparsityPattern                   sparsity_pattern;
		SparseMatrix<double>              stiffness_matrix, mass_matrix;
		std::vector<Vector<double>>       eigenfunctions;
		std::vector<std::complex<double>> eigenvalues;

		// And then we need an object that will store several run-time parameters
		// that we will specify in an input file:
		ParameterHandler parameters;

		// Finally, we will have an object that contains "constraints" on our
		// degrees of freedom. This could include hanging node constraints if we
		// had adaptively refined meshes (which we don't have in the current
		// program). Here, we will store the constraints for boundary nodes
		// $U_i=0$.
		AffineConstraints<double> constraints;
	};

	// @sect3{Implementation of the EigenvalueProblem class}

	// @sect4{EigenvalueProblem::EigenvalueProblem}

	// First up, the constructor. The main new part is handling the run-time
	// input parameters. We need to declare their existence first, and then read
	// their values from the input file whose name is specified as an argument
	// to this function:
	template <int dim>
	EigenvalueProblem<dim>::EigenvalueProblem(const std::string &prm_file)
		: fe(1)
		, dof_handler(triangulation)
	{
		// TODO investigate why the minimum number of refinement steps required to
		// obtain the correct eigenvalue degeneracies is 6
		parameters.dec