[deal.II] Re: error encountered while using matrix free in GPU with periodic BCs

2020-12-17 Thread Sambit Das
Hi Bruno,

Thank you for your reply.

Vishal and I are collaborating on implementing hybrid functionals in the 
DFT-FE code, with Vishal taking the lead on this project. Our exploration 
of the GPU-ported matrix-free Poisson solve is in that context. I did take 
a look at "include/deal.II/matrix_free/cuda_hanging_nodes_internal.h", but 
it seems to me that a detailed understanding of the inner workings of the 
matrix-free implementation would be required to extend it to periodic BCs. 
Since we don't have an urgent need for this capability, we will wait for it 
to be implemented whenever you get a chance.

Best,
Sambit



On Monday, December 14, 2020 at 11:37:06 AM UTC-5 bruno.t...@gmail.com 
wrote:

> Vishal,
>
> I don't think anyone has ever tried to use periodic boundary conditions 
> with GPU. The way we apply constraints on the GPU is very different from 
> what is done on the CPU, so I am not surprised that it doesn't work. I'll 
> add that to my todo list, but I have no idea when I will be able to look at 
> this. If you need this capability any time soon, you will probably need to 
> implement it yourself. If you choose to work on this, I can help you.
>
> Best,
>
> Bruno
>
> On Saturday, December 12, 2020 at 11:35:56 PM UTC-5 vishal...@gmail.com 
> wrote:
>
>> Hello, 
>>
>> I am facing an error while using matrix-free on the GPU with periodic 
>> boundary conditions. I am attaching a minimal example that illustrates the 
>> issue. I am using deal.II 9.3.0-pre.
>>
>> The minimal example is derived from step-64 of the tutorials. In this 
>> code, I: 
>> 1) Create a single element using the hyper_cube function. 
>> 2) Create a HEX27 finite element based dof_handler and also create the 
>> constraint matrices. 
>> 3) Create matrix-free objects on the host and the GPU. 
>> 4) Create a host input vector compatible with the constraints (I set the 
>> value at each unconstrained node to its global id). 
>> 5) Send the input vector from the host to the GPU. 
>> 6) Perform a single vmult operation with the Laplace operator on the host 
>> and the GPU. 
>> 7) Send the output from the GPU back to the host. 
>> 8) Compare the two outputs.
>>
>> When I ran the code in debug mode on a single MPI task and compared the 
>> two outputs, the values at the unconstrained nodes do not match. 
>> To ensure there are no bugs in my minimal example, I have a periodicBC 
>> flag. When periodicBC is set to true, periodic and homogeneous Dirichlet 
>> boundary conditions are imposed. If it is set to false, a homogeneous 
>> Dirichlet BC is imposed at the interior node. In this case, the output 
>> values do match. This flag affects how the constraint matrix is created and 
>> nothing else.
>>
>> I would be very grateful if someone can tell me what mistake I am making. 
>> Any help is greatly appreciated. 
>>
>>
>> thanks and regards,
>> Vishal Subramanian
>>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.


Re: [deal.II] Compilation error in dealii development version (9.1.0-pre, shortrev c6b7876) using gcc/7.2.0 compiler and cuda/9.2

2018-09-19 Thread Sambit Das
Hi Daniel,

The third issue is fixed in https://github.com/dealii/dealii/pull/7213.

Thank you for creating the fix.


the first issue you observed is related to 
> https://gitlab.kitware.com/cmake/cmake/issues/17538.
> In particular, the MPI include directories might not be set correctly for 
> the CUDA compiler.
> This often happens when the CMAKE_CXX_COMPILER does not need additional 
> include directories to
> compile MPI code, e.g. if the CMAKE_CXX_COMPILER is an MPI wrapper.

Ah, I see. 
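For reference, one way to work around the CMake issue Daniel describes is to hand the MPI include directory to the CUDA compiler explicitly through deal.II's DEAL_II_CUDA_FLAGS configure variable. This is a sketch under assumptions: the -showme:incdirs flag is Open MPI-specific, and the resulting paths are placeholders for whatever your MPI installation reports:

```shell
# Sketch of a possible workaround (assumption: Open MPI wrappers, so
# "mpicxx -showme:incdirs" prints the MPI include directories; other MPI
# implementations use different query flags).
MPI_INC=$(mpicxx -showme:incdirs | awk '{print $1}')

# Pass the directory to nvcc so CUDA sources can find mpi.h.
cmake -DDEAL_II_WITH_MPI=ON \
      -DDEAL_II_WITH_CUDA=ON \
      -DDEAL_II_CUDA_FLAGS="-I${MPI_INC}" \
      ../dealii
```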

Best,
Sambit
On Wednesday, September 19, 2018 at 4:36:46 AM UTC-5, Daniel Arndt wrote:
>
> Sambit,
>
> the first issue you observed is related to 
> https://gitlab.kitware.com/cmake/cmake/issues/17538.
> In particular, the MPI include directories might not be set correctly for 
> the CUDA compiler.
> This often happens when the CMAKE_CXX_COMPILER does not need additional 
> include directories to
> compile MPI code, e.g. if the CMAKE_CXX_COMPILER is an MPI wrapper.
>
> The third issue is fixed in https://github.com/dealii/dealii/pull/7213.
>
> Best,
> Daniel
>



Re: [deal.II] Compilation error in dealii development version (9.1.0-pre, shortrev c6b7876) using gcc/7.2.0 compiler and cuda/9.2

2018-09-18 Thread Sambit Das
It seems the issue is related to -DDEAL_II_WITH_64BIT_INDICES=ON.
If I set -DDEAL_II_WITH_64BIT_INDICES=OFF, the compilation worked fine.

Best,
Sambit

On Tuesday, September 18, 2018 at 11:17:43 PM UTC-5, Sambit Das wrote:
>
> Hi Bruno and Jean,
>
> Based on the discussion in https://github.com/dealii/dealii/issues/7204, 
> I used the temporary fix of setting 
> SET(ZLIB_INCLUDE_DIR "/usr/local/include") in FindZLIB.cmake, which 
> addressed the above compilation issue.
>
> However, I get a new compilation error now:
>
> [ 59%] Building CXX object 
> source/particles/CMakeFiles/obj_particle_debug.dir/particle.cc.o
> /gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/include/deal.II/matrix_free/cuda_hanging_nodes_internal.h(286): 
> error: no suitable user-defined conversion from "std::vector<..., 
> std::allocator<...>>" to "std::vector<..., std::allocator<...>>" exists
>   detected during:
>     instantiation of "void 
> dealii::CUDAWrappers::internal::HangingNodes<dim>::setup_constraints(std::vector<..., 
> std::allocator<...>> &, const CellIterator &, unsigned int &) 
> const [with dim=2, 
> CellIterator=dealii::FilteredIterator<...>]" 
> /gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/include/deal.II/matrix_free/cuda_matrix_free.templates.h(288): 
> here
>     instantiation of "void 
> dealii::CUDAWrappers::internal::ReinitHelper<dim, 
> Number>::get_cell_data(const CellFilter &, unsigned int) [with dim=2, 
> Number=float, 
> CellFilter=dealii::FilteredIterator<...>]" 
> /gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/include/deal.II/matrix_free/cuda_matrix_free.templates.h(578): 
> here
>     instantiation of "void dealii::CUDAWrappers::MatrixFree<dim, 
> Number>::reinit(const dealii::Mapping<dim> &, const 
> dealii::DoFHandler<dim> &, const dealii::AffineConstraints<Number> &, 
> const dealii::Quadrature<1> &, dealii::CUDAWrappers::MatrixFree<dim, 
> Number>::AdditionalData) [with dim=2, Number=float]" 
> /gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/source/matrix_free/cuda_matrix_free.cu(25): here
>
> The same error occurs with both the bundled BOOST and a manually 
> installed BOOST/1.66.0. 
>
> Do you think the above compilation error is related to the original issue 
> in https://github.com/dealii/dealii/issues/7204?
>
> Thank you,
> Sambit
>
> On Tuesday, September 18, 2018 at 1:02:07 PM UTC-5, Sambit Das wrote:
>>
>> Hi Jean-Paul,
>>
>> Thanks for referring me to the github issue (
>> https://github.com/dealii/dealii/issues/7204).
>> It does seem they are related issues.
>> I will keep following the github issue.
>>
>> Thank you,
>> Sambit
>>
>> On Tuesday, September 18, 2018 at 11:53:17 AM UTC-5, Jean-Paul Pelteret 
>> wrote:
>>>
>>> Hi Sambit,
>>>
>>> Your new error looks like the same one reported (
>>> https://github.com/dealii/dealii/issues/7204) yesterday, 
>>> so I don’t think that using another version of CMake will fix it. I’m not 
>>> quite sure what the answer to the problem is, since the discussion is 
>>> ongoing.
>>>
>>> Best,
>>> Jean-Paul
>>>
>>> On 18 Sep 2018, at 18:47, Sambit Das  wrote:
>>>
>>> Hi Bruno,
>>>
>>> Thank you for your reply. Now I tried CMake 3.9.6 but got the following 
>>> compilation error
>>>
>>> [ 54%] Building CUDA object 
>>> source/base/CMakeFiles/obj_base_debug.dir/cuda.cu.o
>>> In file included from 
>>> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/crt/math_functions.h:8853:0,
>>>  from 
>>> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/crt/common_functions.h:257,
>>>  from 
>>> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/common_functions.h:50,
>>>  from 

Re: [deal.II] Compilation error in dealii development version (9.1.0-pre, shortrev c6b7876) using gcc/7.2.0 compiler and cuda/9.2

2018-09-18 Thread Sambit Das
Hi Bruno and Jean,

Based on the discussion in https://github.com/dealii/dealii/issues/7204, 
I used the temporary fix of setting 
SET(ZLIB_INCLUDE_DIR "/usr/local/include") in FindZLIB.cmake, which 
addressed the above compilation issue.

However, I get a new compilation error now:

[ 59%] Building CXX object 
source/particles/CMakeFiles/obj_particle_debug.dir/particle.cc.o
/gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/include/deal.II/matrix_free/cuda_hanging_nodes_internal.h(286): 
error: no suitable user-defined conversion from "std::vector<..., 
std::allocator<...>>" to "std::vector<..., std::allocator<...>>" exists
  detected during:
    instantiation of "void 
dealii::CUDAWrappers::internal::HangingNodes<dim>::setup_constraints(std::vector<..., 
std::allocator<...>> &, const CellIterator &, unsigned int &) 
const [with dim=2, 
CellIterator=dealii::FilteredIterator<...>]" 
/gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/include/deal.II/matrix_free/cuda_matrix_free.templates.h(288): 
here
    instantiation of "void 
dealii::CUDAWrappers::internal::ReinitHelper<dim, 
Number>::get_cell_data(const CellFilter &, unsigned int) [with dim=2, 
Number=float, 
CellFilter=dealii::FilteredIterator<...>]" 
/gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/include/deal.II/matrix_free/cuda_matrix_free.templates.h(578): 
here
    instantiation of "void dealii::CUDAWrappers::MatrixFree<dim, 
Number>::reinit(const dealii::Mapping<dim> &, const 
dealii::DoFHandler<dim> &, const dealii::AffineConstraints<Number> &, 
const dealii::Quadrature<1> &, dealii::CUDAWrappers::MatrixFree<dim, 
Number>::AdditionalData) [with dim=2, Number=float]" 
/gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/source/matrix_free/cuda_matrix_free.cu(25): here

The same error occurs with both the bundled BOOST and a manually installed 
BOOST/1.66.0. 

Do you think the above compilation error is related to the original issue 
in https://github.com/dealii/dealii/issues/7204?

Thank you,
Sambit

On Tuesday, September 18, 2018 at 1:02:07 PM UTC-5, Sambit Das wrote:
>
> Hi Jean-Paul,
>
> Thanks for referring me to the github issue (
> https://github.com/dealii/dealii/issues/7204).
> It does seem they are related issues.
> I will keep following the github issue.
>
> Thank you,
> Sambit
>
> On Tuesday, September 18, 2018 at 11:53:17 AM UTC-5, Jean-Paul Pelteret 
> wrote:
>>
>> Hi Sambit,
>>
>> Your new error looks like the same one reported (
>> https://github.com/dealii/dealii/issues/7204) yesterday, 
>> so I don’t think that using another version of CMake will fix it. I’m not 
>> quite sure what the answer to the problem is, since the discussion is 
>> ongoing.
>>
>> Best,
>> Jean-Paul
>>
>> On 18 Sep 2018, at 18:47, Sambit Das  wrote:
>>
>> Hi Bruno,
>>
>> Thank you for your reply. Now I tried CMake 3.9.6 but got the following 
>> compilation error
>>
>> [ 54%] Building CUDA object 
>> source/base/CMakeFiles/obj_base_debug.dir/cuda.cu.o
>> In file included from 
>> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/crt/math_functions.h:8853:0,
>>  from 
>> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/crt/common_functions.h:257,
>>  from 
>> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/common_functions.h:50,
>>  from 
>> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/cuda_runtime.h:115,
>>  from <command-line>:0:
>> /gpfs/gpfs0/software/rhel72/packages/gcc/7.2.0/include/c++/7.2.0/cmath:45:15:
>>  
>> fatal error: math.h: No such file or directory
>>  #include_next <math.h>
>>    ^~~~
>> compilation terminated.
>>
>> Should I try CMake 3.10/11?
>>
>> Thank you,
>> Sambit
>>
>> On Tuesday, September 18, 2018 at 7:27:38 AM UTC-5, Bruno Turcksin wrote:
>>>
>>> Sambit,
>>>
>>> Can you try with a different version of CMake. We do not support CMake 
>>> 3.12 with CUDA at the moment.
>>>
>>> Best,
>>>
>>> Bruno
>>>
>>> On 

Re: [deal.II] Compilation error in dealii development version (9.1.0-pre, shortrev c6b7876) using gcc/7.2.0 compiler and cuda/9.2

2018-09-18 Thread Sambit Das
Hi Jean-Paul,

Thanks for referring me to the github issue (
https://github.com/dealii/dealii/issues/7204).
It does seem they are related issues.
I will keep following the github issue.

Thank you,
Sambit

On Tuesday, September 18, 2018 at 11:53:17 AM UTC-5, Jean-Paul Pelteret 
wrote:
>
> Hi Sambit,
>
> Your new error looks like the same one reported (
> https://github.com/dealii/dealii/issues/7204) yesterday, 
> so I don’t think that using another version of CMake will fix it. I’m not 
> quite sure what the answer to the problem is, since the discussion is 
> ongoing.
>
> Best,
> Jean-Paul
>
> On 18 Sep 2018, at 18:47, Sambit Das wrote:
>
> Hi Bruno,
>
> Thank you for your reply. Now I tried CMake 3.9.6 but got the following 
> compilation error
>
> [ 54%] Building CUDA object 
> source/base/CMakeFiles/obj_base_debug.dir/cuda.cu.o
> In file included from 
> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/crt/math_functions.h:8853:0,
>  from 
> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/crt/common_functions.h:257,
>  from 
> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/common_functions.h:50,
>  from 
> /gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/cuda_runtime.h:115,
>  from <command-line>:0:
> /gpfs/gpfs0/software/rhel72/packages/gcc/7.2.0/include/c++/7.2.0/cmath:45:15: 
> fatal error: math.h: No such file or directory
>  #include_next <math.h>
>^~~~
> compilation terminated.
>
> Should I try CMake 3.10/11?
>
> Thank you,
> Sambit
>
> On Tuesday, September 18, 2018 at 7:27:38 AM UTC-5, Bruno Turcksin wrote:
>>
>> Sambit,
>>
>> Can you try with a different version of CMake. We do not support CMake 
>> 3.12 with CUDA at the moment.
>>
>> Best,
>>
>> Bruno
>>
>> On Monday, September 17, 2018 at 8:56:31 PM UTC-4, Sambit Das wrote:
>>>
>>> Dear all,
>>>
>>> I am trying to compile the latest development version of dealii 
>>> (9.1.0-pre, shortrev c6b7876) using gcc/7.2.0, 
>>> openmpi/3.0.0/gcc/7.2.0, cuda/9.2 and cmake-3.12.2.
>>>
>>> During compilation I get the following error:
>>>
>>> [ 57%] Building CUDA object 
>>> source/base/CMakeFiles/obj_base_debug.dir/cuda.cu.o
>>> In file included from 
>>> /gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/include/deal.II/base/cuda.h:19:0,
>>>  from 
>>> /gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/source/base/
>>> cuda.cu:16:
>>> /gpfs/gpfs0/groups/gavini/dsambit/software/dealii/build/include/deal.II/base/config.h:421:12:
>>>  
>>> fatal error: mpi.h: No such file or directory
>>>  #  include <mpi.h>
>>> ^~~
>>> compilation terminated.
>>>
>>>
>>> I have attached the detailed.log file. 
>>>
>>> I have used the following configuration line:
>>> cmake  -DDEAL_II_CXX_FLAGS_RELEASE="-O3" -DDEAL_II_WITH_CXX17=OFF 
>>> -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx 
>>>  -DCMAKE_Fortran_COMPILER=mpif90 -DDEAL_II_WITH_CUDA=ON 
>>> -DDEAL_II_COMPONENT_EXAMPLES=OFF
>>>  -DDEAL_II_WITH_MPI=ON -DDEAL_II_WITH_64BIT_INDICES=ON
>>>  
>>> -DP4EST_DIR="/gpfs/gpfs0/groups/gavini/dsambit/software/p4est/installGcc7.2.0"
>>>   
>>> ../dealii
>>>
>>>  Interestingly, if I remove -DDEAL_II_WITH_CUDA=ON, compilation is 
>>> successful. I am wondering if I am missing any cuda related compilation 
>>> flags.
>>>
>>> Thanks a lot in advance,
>>>
>>> Best,
>>> Sambit
>>>
>>



[deal.II] Re: Compilation error in dealii development version (9.1.0-pre, shortrev c6b7876) using gcc/7.2.0 compiler and cuda/9.2

2018-09-18 Thread Sambit Das
Hi Bruno,

Thank you for your reply. Now I tried CMake 3.9.6 but got the following 
compilation error

[ 54%] Building CUDA object 
source/base/CMakeFiles/obj_base_debug.dir/cuda.cu.o
In file included from 
/gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/crt/math_functions.h:8853:0,
 from 
/gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/crt/common_functions.h:257,
 from 
/gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/common_functions.h:50,
 from 
/gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include/cuda_runtime.h:115,
 from <command-line>:0:
/gpfs/gpfs0/software/rhel72/packages/gcc/7.2.0/include/c++/7.2.0/cmath:45:15: 
fatal error: math.h: No such file or directory
 #include_next <math.h>
   ^~~~
compilation terminated.

Should I try CMake 3.10/11?

Thank you,
Sambit

On Tuesday, September 18, 2018 at 7:27:38 AM UTC-5, Bruno Turcksin wrote:
>
> Sambit,
>
> Can you try with a different version of CMake. We do not support CMake 
> 3.12 with CUDA at the moment.
>
> Best,
>
> Bruno
>
> On Monday, September 17, 2018 at 8:56:31 PM UTC-4, Sambit Das wrote:
>>
>> Dear all,
>>
>> I am trying to compile the latest development version of dealii 
>> (9.1.0-pre, shortrev c6b7876) using gcc/7.2.0, 
>> openmpi/3.0.0/gcc/7.2.0, cuda/9.2 and cmake-3.12.2.
>>
>> During compilation I get the following error:
>>
>> [ 57%] Building CUDA object 
>> source/base/CMakeFiles/obj_base_debug.dir/cuda.cu.o
>> In file included from 
>> /gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/include/deal.II/base/cuda.h:19:0,
>>  from 
>> /gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/source/base/
>> cuda.cu:16:
>> /gpfs/gpfs0/groups/gavini/dsambit/software/dealii/build/include/deal.II/base/config.h:421:12:
>>  
>> fatal error: mpi.h: No such file or directory
>>  #  include <mpi.h>
>> ^~~
>> compilation terminated.
>>
>>
>> I have attached the detailed.log file. 
>>
>> I have used the following configuration line:
>> cmake  -DDEAL_II_CXX_FLAGS_RELEASE="-O3" -DDEAL_II_WITH_CXX17=OFF 
>> -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx 
>>  -DCMAKE_Fortran_COMPILER=mpif90 -DDEAL_II_WITH_CUDA=ON 
>> -DDEAL_II_COMPONENT_EXAMPLES=OFF
>>  -DDEAL_II_WITH_MPI=ON -DDEAL_II_WITH_64BIT_INDICES=ON
>>  
>> -DP4EST_DIR="/gpfs/gpfs0/groups/gavini/dsambit/software/p4est/installGcc7.2.0"
>>   
>> ../dealii
>>
>>  Interestingly, if I remove -DDEAL_II_WITH_CUDA=ON, compilation is 
>> successful. I am wondering if I am missing any cuda related compilation 
>> flags.
>>
>> Thanks a lot in advance,
>>
>> Best,
>> Sambit
>>
>



[deal.II] Compilation error in dealii development version (9.1.0-pre, shortrev c6b7876) using gcc/7.2.0 compiler and cuda/9.2

2018-09-17 Thread Sambit Das
Dear all,

I am trying to compile the latest development version of dealii (9.1.0-pre, 
shortrev c6b7876) using gcc/7.2.0, openmpi/3.0.0/gcc/7.2.0,  cuda/9.2 
and cmake-3.12.2. 

During compilation I get the following error:

[ 57%] Building CUDA object 
source/base/CMakeFiles/obj_base_debug.dir/cuda.cu.o
In file included from 
/gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/include/deal.II/base/cuda.h:19:0,
 from 
/gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/source/base/cuda.cu:16:
/gpfs/gpfs0/groups/gavini/dsambit/software/dealii/build/include/deal.II/base/config.h:421:12:
 
fatal error: mpi.h: No such file or directory
 #  include <mpi.h>
^~~
compilation terminated.


I have attached the detailed.log file. 

I have used the following configuration line:
cmake  -DDEAL_II_CXX_FLAGS_RELEASE="-O3" -DDEAL_II_WITH_CXX17=OFF 
-DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx 
 -DCMAKE_Fortran_COMPILER=mpif90 -DDEAL_II_WITH_CUDA=ON 
-DDEAL_II_COMPONENT_EXAMPLES=OFF
 -DDEAL_II_WITH_MPI=ON -DDEAL_II_WITH_64BIT_INDICES=ON
 -DP4EST_DIR="/gpfs/gpfs0/groups/gavini/dsambit/software/p4est/installGcc7.2.0" 
 
../dealii

 Interestingly, if I remove -DDEAL_II_WITH_CUDA=ON, compilation is 
successful. I am wondering if I am missing any cuda related compilation 
flags.

Thanks a lot in advance,

Best,
Sambit

###
#
#  deal.II configuration:
#CMAKE_BUILD_TYPE:   DebugRelease
#BUILD_SHARED_LIBS:  ON
#CMAKE_INSTALL_PREFIX:   /usr/local
#CMAKE_SOURCE_DIR:   
/gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii
#(version 9.1.0-pre, shortrev c6b7876)
#CMAKE_BINARY_DIR:   
/gpfs/gpfs0/groups/gavini/dsambit/software/dealii/build
#CMAKE_CXX_COMPILER: GNU 7.2.0 on platform Linux ppc64le
#
/gpfs/gpfs0/software/rhel72/packages/openmpi/3.0.0/gcc-7.2.0/bin/mpicxx
#CMAKE_C_COMPILER:   
/gpfs/gpfs0/software/rhel72/packages/openmpi/3.0.0/gcc-7.2.0/bin/mpicc
#CMAKE_Fortran_COMPILER: 
/gpfs/gpfs0/software/rhel72/packages/openmpi/3.0.0/gcc-7.2.0/bin/mpif90
#CMAKE_GENERATOR:Unix Makefiles
#
#  Base configuration (prior to feature configuration):
#DEAL_II_CXX_FLAGS:-fPIC -Wall -Wextra -Wpointer-arith 
-Wwrite-strings -Wsynth -Wsign-compare -Wswitch -Woverloaded-virtual 
-Wsuggest-override -Wno-placement-new -Wno-deprecated-declarations 
-Wno-literal-suffix -Wno-psabi -fopenmp-simd -std=c++14
#DEAL_II_CXX_FLAGS_RELEASE:-O2 -funroll-loops -funroll-all-loops 
-fstrict-aliasing -Wno-unused-local-typedefs -O3
#DEAL_II_CXX_FLAGS_DEBUG:  -Og -ggdb -Wa,--compress-debug-sections
#DEAL_II_LINKER_FLAGS: -Wl,--as-needed -rdynamic -fuse-ld=gold 
-fopenmp
#DEAL_II_LINKER_FLAGS_RELEASE: 
#DEAL_II_LINKER_FLAGS_DEBUG:   -ggdb
#DEAL_II_DEFINITIONS:  
#DEAL_II_DEFINITIONS_RELEASE:  
#DEAL_II_DEFINITIONS_DEBUG:DEBUG
#DEAL_II_USER_DEFINITIONS: 
#DEAL_II_USER_DEFINITIONS_REL: 
#DEAL_II_USER_DEFINITIONS_DEB: DEBUG
#DEAL_II_INCLUDE_DIRS  
#DEAL_II_USER_INCLUDE_DIRS:
#DEAL_II_BUNDLED_INCLUDE_DIRS: 
#DEAL_II_LIBRARIES:
#DEAL_II_LIBRARIES_RELEASE:
#DEAL_II_LIBRARIES_DEBUG:  
#DEAL_II_COMPILER_VECTORIZATION_LEVEL: 0
#
#  Configured Features (DEAL_II_ALLOW_BUNDLED = ON, DEAL_II_ALLOW_AUTODETECTION 
= ON):
#DEAL_II_WITH_64BIT_INDICES = ON
#  ( DEAL_II_WITH_ADOLC = OFF )
#  ( DEAL_II_WITH_ARPACK = OFF )
#  ( DEAL_II_WITH_ASSIMP = OFF )
#DEAL_II_WITH_BOOST set up with bundled packages
#BOOST_CXX_FLAGS = -Wno-unused-local-typedefs
#BOOST_DEFINITIONS = BOOST_NO_AUTO_PTR
#BOOST_USER_DEFINITIONS = BOOST_NO_AUTO_PTR
#BOOST_BUNDLED_INCLUDE_DIRS = 
/gpfs/gpfs0/groups/gavini/dsambit/software/dealii/dealii/bundled/boost-1.62.0/include
#BOOST_LIBRARIES = rt
#DEAL_II_WITH_CUDA set up with external dependencies
#CUDA_VERSION = 9.2
#CMAKE_CUDA_COMPILER = 
/gpfs/gpfs0/software/rhel72/packages/cuda/9.2/bin/nvcc
#CUDA_COMPUTE_CAPABILITY = 3.5
#DEAL_II_CUDA_FLAGS = -arch=sm_35 -std=c++14
#DEAL_II_CUDA_FLAGS_RELEASE = 
#DEAL_II_CUDA_FLAGS_DEBUG = -G
#CUDA_INCLUDE_DIRS = 
/gpfs/gpfs0/software/rhel72/packages/cuda/9.2/include
# 

[deal.II] Re: Compilation error in dealii development version (version 9.1.0-pre, shortrev 7537ea7) using intel/18.0.2 compiler

2018-08-31 Thread Sambit Das
Dr. Arndt,

Thank you for creating the patch. I am now able to compile and install the 
patched branch with intel/18.0.2 compiler.

Best,
Sambit

On Friday, August 31, 2018 at 4:41:16 AM UTC-5, Daniel Arndt wrote:
>
> Sambit,
>
> Thanks for reporting! 
> Can you try if https://github.com/dealii/dealii/pull/7134 works for you?
>
> Best,
> Daniel
>



[deal.II] Compilation error in dealii development version (version 9.1.0-pre, shortrev 7537ea7) using intel/18.0.2 compiler

2018-08-31 Thread Sambit Das
Dear all,

I am trying to compile the latest development version of dealii (version 
9.1.0-pre, shortrev 7537ea7) using intel/18.0.2, intelmpi and cmake/3.10.2.

During compilation I get the following error:

/work/05316/dsambit/publicSharedSoftware/dealiiDevLatest/dealii/source/multigrid/mg_transfer_matrix_free.cc(356):
 
error: expression must have a constant value
constexpr unsigned int three_to_dim = Utilities::pow(3, dim);

I have attached the detailed.log file. 

 I am wondering if I am missing any compilation flags, or whether this is 
an intel compiler issue.

Thanks a lot in advance,

Best,
Sambit

###
#
#  deal.II configuration:
#CMAKE_BUILD_TYPE:   DebugRelease
#BUILD_SHARED_LIBS:  ON
#CMAKE_INSTALL_PREFIX:   
/work/05316/dsambit/publicSharedSoftware/dealiiDevLatest/intel_18.0.2_scalapack_64Bit
#CMAKE_SOURCE_DIR:   
/work/05316/dsambit/publicSharedSoftware/dealiiDevLatest/dealii
#(version 9.1.0-pre, shortrev 7537ea7)
#CMAKE_BINARY_DIR:   
/work/05316/dsambit/publicSharedSoftware/dealiiDevLatest/buildScalapackIntel18.0.2
#CMAKE_CXX_COMPILER: Intel 18.0.2.20180210 on platform Linux x86_64
#/opt/apps/intel18/impi/18.0.2/bin/mpicxx
#CMAKE_C_COMPILER:   /opt/apps/intel18/impi/18.0.2/bin/mpicc
#CMAKE_Fortran_COMPILER: 
/opt/intel/compilers_and_libraries_2018.2.199/linux/mpi/intel64/bin/mpiifort
#CMAKE_GENERATOR:Unix Makefiles
#
#  Base configuration (prior to feature configuration):
#DEAL_II_CXX_FLAGS:-fpic -ansi -w2 -diag-disable=remark 
-wd21 -wd68 -wd135 -wd175 -wd177 -wd191 -wd193 -wd279 -wd327 -wd383 -wd981 
-wd1418 -wd1478 -wd1572 -wd2259 -wd2536 -wd2651 -wd3415 -wd15531 -wd111 -wd128 
-wd185 -wd186 -wd280 -qopenmp-simd -std=c++14 -xMIC-AVX512
#DEAL_II_CXX_FLAGS_RELEASE:-O2 -no-ansi-alias -ip -funroll-loops -O3
#DEAL_II_CXX_FLAGS_DEBUG:  -O0 -g -gdwarf-2 -grecord-gcc-switches
#DEAL_II_LINKER_FLAGS: -Wl,--as-needed -shared-intel -qopenmp 
-rdynamic -fuse-ld=gold
#DEAL_II_LINKER_FLAGS_RELEASE: 
#DEAL_II_LINKER_FLAGS_DEBUG:   
#DEAL_II_DEFINITIONS:  
#DEAL_II_DEFINITIONS_RELEASE:  
#DEAL_II_DEFINITIONS_DEBUG:DEBUG
#DEAL_II_USER_DEFINITIONS: 
#DEAL_II_USER_DEFINITIONS_REL: 
#DEAL_II_USER_DEFINITIONS_DEB: DEBUG
#DEAL_II_INCLUDE_DIRS  
#DEAL_II_USER_INCLUDE_DIRS:
#DEAL_II_BUNDLED_INCLUDE_DIRS: 
#DEAL_II_LIBRARIES:
#DEAL_II_LIBRARIES_RELEASE:
#DEAL_II_LIBRARIES_DEBUG:  
#DEAL_II_COMPILER_VECTORIZATION_LEVEL: 3
#
#  Configured Features (DEAL_II_ALLOW_BUNDLED = ON, DEAL_II_ALLOW_AUTODETECTION 
= ON):
#DEAL_II_WITH_64BIT_INDICES = ON
#  ( DEAL_II_WITH_ADOLC = OFF )
#  ( DEAL_II_WITH_ARPACK = OFF )
#  ( DEAL_II_WITH_ASSIMP = OFF )
#DEAL_II_WITH_BOOST set up with bundled packages
#BOOST_BUNDLED_INCLUDE_DIRS = 
/work/05316/dsambit/publicSharedSoftware/dealiiDevLatest/dealii/bundled/boost-1.62.0/include
#BOOST_LIBRARIES = rt
#  ( DEAL_II_WITH_CUDA = OFF )
#DEAL_II_WITH_CXX14 = ON
#  ( DEAL_II_WITH_CXX17 = OFF )
#  ( DEAL_II_WITH_GMSH = OFF )
#  ( DEAL_II_WITH_GSL = OFF )
#  ( DEAL_II_WITH_HDF5 = OFF )
#DEAL_II_WITH_LAPACK set up with external dependencies
#LAPACK_DIR = 
/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64
#LAPACK_LINKER_FLAGS = -liomp5 -lpthread -lm -ldl
#LAPACK_LIBRARIES = 
/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_intel_lp64.so;/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_core.so;/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_intel_thread.so
#  ( DEAL_II_WITH_METIS = OFF )
#DEAL_II_WITH_MPI set up with external dependencies
#MPI_VERSION = 3.1
#MPI_C_COMPILER = /opt/apps/intel18/impi/18.0.2/bin/mpicc
#MPI_CXX_COMPILER = /opt/apps/intel18/impi/18.0.2/bin/mpicxx
#MPI_Fortran_COMPILER = 
/opt/intel/compilers_and_libraries_2018.2.199/linux/mpi/intel64/bin/mpiifort
#MPI_CXX_FLAGS = 
#MPI_LINKER_FLAGS = 
#MPI_INCLUDE_DIRS = 
#MPI_USER_INCLUDE_DIRS = 
#MPI_LIBRARIES = 
#DEAL_II_WITH_MUPARSER set up with bundled packages
#

Re: [deal.II] Question on resolving chains of constraints containing both periodic and hanging node constraints

2018-08-13 Thread Sambit Das
Thank you, Denis.

Best,
Sambit

On Monday, August 13, 2018 at 1:36:44 AM UTC-5, Denis Davydov wrote:
>
> Thanks for the MWE, Sambit.
>
> I created a Github issue to track this further 
> https://github.com/dealii/dealii/issues/7053 
>
> Denis.
>
> On Monday, August 13, 2018 at 12:01:07 AM UTC+2, Sambit Das wrote:
>>
>> Dear Prof. Bangerth,
>>
>> I have now reproduced the above issue in the attached minimal example.
>>
>> Below is the algorithm of the minimal example 
>>
>> 1) Create a hypercube (-20,20) with origin at the center
>>
>> 2) Set periodic boundary conditions on all faces of the hypercube
>>
>> 3) Refine mesh by first doing global refinement once to get 8 cells and 
>> then refine 
>> the cell containing the corner (-20,-20,-20) two times iteratively. 
>> Finally I get 71 cells (see attached image)
>> with hanging nodes on three faces.
>>
>> 4) Create constraint matrix with both hanging node and periodic 
>> constraints, and call close().
>>
>> 5) Print the constraint equation (j,a_ij) for global dof id-52 on 
>> processors for which global dof id-52 is relevant, when run on two mpi 
>> tasks:
>>
>>$ mpirun -n 2 ./minimalExample
>>
>>number of elements: 71
>>taskId: 1, globalDofId-i: 52, coordinates-i: -20 20 -10, 
>> globalDofId-j: 16, coordinates-j: -20 -20 -10, scalarCoeff-aij: 1
>>taskId: 0, globalDofId-i: 52, coordinates-i: -20 20 -10, 
>> globalDofId-j: 32, coordinates-j: 20 -20 -10, scalarCoeff-aij: 1
>>   
>>Clearly "j" in the constraint equation is different across processors 
>> for the same constrained global dof id.
>>
>> Thank you,
>> Sambit
>>
>> On Friday, August 10, 2018 at 9:39:58 AM UTC-5, Sambit Das wrote:
>>>
>>> Dear Prof. Bangerth,
>>>
>>> Yes, they should really be the same. Or, more correctly, if two 
>>>> processors 
>>>> both store the constraints for a node, they better be the same. On the 
>>>> other 
>>>> hand, of course not every processor will know every constraint. 
>>>>
>>>
>>> Thank you for clarifying this.
>>>
>>>  Can you try to construct a minimal testcase for what you observe? 
>>>
>>>
>>> Yes, I am going to construct a minimal test case.
>>>
>>> Best,
>>> Sambit 
>>>
>>> On Thursday, August 9, 2018 at 11:33:13 PM UTC-5, Wolfgang Bangerth 
>>> wrote:
>>>>
>>>>
>>>> > I created a ConstraintMatrix with both periodic and hanging node 
>>>> constraints, 
>>>> > and called close(). 
>>>> > 
>>>> > Then I picked a constrained degree of freedom, let's say with global 
>>>> > dof id = “i”, and printed the constraint equation pairs (j,a_ij) 
>>>> > corresponding to “i” on the processor for which “i” is locally owned 
>>>> > as well as the processors for which “i” is a ghost. I expected the 
>>>> > constraint equation to be the same for the owning processor and the 
>>>> > ghost processors; however, we have encountered a case where printing 
>>>> > the constraint equation entries shows a different “j” for the owning 
>>>> > and ghost processors, although the a_ij are the same. When we printed 
>>>> > the coordinates of the “j” that were different, we found those two 
>>>> > nodes to be periodic images of each other. 
>>>> > 
>>>> > 
>>>> > Should I expect the constraint equation to be the same for the owning 
>>>> > processor and the ghost processors? 
>>>>
>>>> Yes, they should really be the same. Or, more correctly, if two 
>>>> processors 
>>>> both store the constraints for a node, they better be the same. On the 
>>>> other 
>>>> hand, of course not every processor will know every constraint. 
>>>>
>>>> Can you try to construct a minimal testcase for what you observe? 
>>>>
>>>> Best 
>>>>   Wolfgang 
>>>>
>>>>
>>>> -- 
>>>>  
>>>>
>>>> Wolfgang Bangerth  email: bang...@colostate.edu 
>>>> www: 
>>>> http://www.math.colostate.edu/~bangerth/ 
>>>>
>>>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [deal.II] Question on resolving chains of constraints containing both periodic and hanging node constraints

2018-08-12 Thread Sambit Das
Dear Prof. Bangerth,

I have now reproduced the above issue in the attached minimal example.

Below is the algorithm of the minimal example 

1) Create a hypercube (-20,20) with origin at the center

2) Set periodic boundary conditions on all faces of the hypercube

3) Refine mesh by first doing global refinement once to get 8 cells and 
then refine 
the cell containing the corner (-20,-20,-20) two times iteratively. 
Finally I get 71 cells (see attached image)
with hanging nodes on three faces.

4) Create constraint matrix with both hanging node and periodic 
constraints, and call close().

5) Print the constraint equation (j,a_ij) for global dof id-52 on 
processors for which global dof id-52 is relevant, when run on two mpi 
tasks:

   $ mpirun -n 2 ./minimalExample

   number of elements: 71
   taskId: 1, globalDofId-i: 52, coordinates-i: -20 20 -10, globalDofId-j: 
16, coordinates-j: -20 -20 -10, scalarCoeff-aij: 1
   taskId: 0, globalDofId-i: 52, coordinates-i: -20 20 -10, globalDofId-j: 
32, coordinates-j: 20 -20 -10, scalarCoeff-aij: 1
  
   Clearly "j" in the constraint equation is different across processors 
for the same constrained global dof id.

Thank you,
Sambit

On Friday, August 10, 2018 at 9:39:58 AM UTC-5, Sambit Das wrote:
>
> Dear Prof. Bangerth,
>
> Yes, they should really be the same. Or, more correctly, if two processors 
>> both store the constraints for a node, they better be the same. On the 
>> other 
>> hand, of course not every processor will know every constraint. 
>>
>
> Thank you for clarifying this.
>
>  Can you try to construct a minimal testcase for what you observe? 
>
>
> Yes, I am going to construct a minimal test case.
>
> Best,
> Sambit 
>
> On Thursday, August 9, 2018 at 11:33:13 PM UTC-5, Wolfgang Bangerth wrote:
>>
>>
>> > I created a ConstraintMatrix with both periodic and hanging node 
>> constraints, 
>> > and called close(). 
>> > 
>> > Then I picked a constrained degree of freedom, let's say with global 
>> > dof id = “i”, and printed the constraint equation pairs (j,a_ij) 
>> > corresponding to “i” on the processor for which “i” is locally owned 
>> > as well as the processors for which “i” is a ghost. I expected the 
>> > constraint equation to be the same for the owning processor and the 
>> > ghost processors; however, we have encountered a case where printing 
>> > the constraint equation entries shows a different “j” for the owning 
>> > and ghost processors, although the a_ij are the same. When we printed 
>> > the coordinates of the “j” that were different, we found those two 
>> > nodes to be periodic images of each other. 
>> > 
>> > 
>> > Should I expect the constraint equation to be the same for the owning 
>> > processor and the ghost processors? 
>>
>> Yes, they should really be the same. Or, more correctly, if two 
>> processors 
>> both store the constraints for a node, they better be the same. On the 
>> other 
>> hand, of course not every processor will know every constraint. 
>>
>> Can you try to construct a minimal testcase for what you observe? 
>>
>> Best 
>>   Wolfgang 
>>
>>
>> -- 
>>  
>> Wolfgang Bangerth  email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>>

//Include all deal.II header file
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
//Include generic C++ headers
#include 
#include 


using namespace dealii;
int main (int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

  const double L=20;
  dealii::parallel::distributed::Triangulation<3> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube (triangulation, -L, L);

Re: [deal.II] Question on resolving chains of constraints containing both periodic and hanging node constraints

2018-08-10 Thread Sambit Das
Dear Prof. Bangerth,

Yes, they should really be the same. Or, more correctly, if two processors 
> both store the constraints for a node, they better be the same. On the 
> other 
> hand, of course not every processor will know every constraint. 
>

Thank you for clarifying this.

 Can you try to construct a minimal testcase for what you observe? 


Yes, I am going to construct a minimal test case.

Best,
Sambit 

On Thursday, August 9, 2018 at 11:33:13 PM UTC-5, Wolfgang Bangerth wrote:
>
>
> > I created a ConstraintMatrix with both periodic and hanging node 
> constraints, 
> > and called close(). 
> > 
> > Then I picked a constrained degree of freedom, let's say with global 
> > dof id = “i”, and printed the constraint equation pairs (j,a_ij) 
> > corresponding to “i” on the processor for which “i” is locally owned 
> > as well as the processors for which “i” is a ghost. I expected the 
> > constraint equation to be the same for the owning processor and the 
> > ghost processors; however, we have encountered a case where printing 
> > the constraint equation entries shows a different “j” for the owning 
> > and ghost processors, although the a_ij are the same. When we printed 
> > the coordinates of the “j” that were different, we found those two 
> > nodes to be periodic images of each other. 
> > 
> > 
> > Should I expect the constraint equation to be the same for the owning 
> > processor and the ghost processors? 
>
> Yes, they should really be the same. Or, more correctly, if two processors 
> both store the constraints for a node, they better be the same. On the 
> other 
> hand, of course not every processor will know every constraint. 
>
> Can you try to construct a minimal testcase for what you observe? 
>
> Best 
>   Wolfgang 
>
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



[deal.II] Question on resolving chains of constraints containing both periodic and hanging node constraints

2018-08-08 Thread Sambit Das


Hi All,


I have a question about resolving chains of constraints in a 
ConstraintMatrix class object after calling close() in a parallel 
distributed case:


I created a ConstraintMatrix with both periodic and hanging node 
constraints, and called close().

Then I picked a constrained degree of freedom, let's say with global dof id 
= “i”, and printed the constraint equation pairs (j,a_ij) corresponding to 
“i” on the processor for which “i” is locally owned as well as the 
processors for which “i” is a ghost. I expected the constraint equation to 
be the same for the owning processor and the ghost processors; however, we 
have encountered a case where printing the constraint equation entries 
shows a different “j” for the owning and ghost processors, although the 
a_ij are the same. When we printed the coordinates of the “j” that were 
different, we found those two nodes to be periodic images of each other.


Should I expect the constraint equation to be the same for the owning 
processor and the ghost processors?


Thank you,

Sambit



[deal.II] Re: multiply constrained dofs (hanging nodes+periodic) fails a simple test case

2018-04-11 Thread Sambit Das
I understand my mistake now: there are extra hanging node constraints in 
the constraint matrix with (hanging node constraints + PBC) compared to the 
constraint matrix with only hanging node constraints.
That is why the minimal example fails.

Sambit

On Wednesday, April 11, 2018 at 1:00:25 PM UTC-5, Sambit Das wrote:
>
>
>>> true, but I don't see why you would have the same norms if you 
>> distribute with constraints from hanging nodes only or constraints from 
>> hanging nodes+ PBC.
>> I think we can agree that the two ConstraintMatrix objects should be 
>> different as in the case of PBC you additionally need to make sure that FE 
>> space on the refined boundary matches that on the opposite, non-refined 
>> side. 
>>
>
> Yes I agree the ConstraintMatrix objects are different but the 
> coefficients (a_{ij}) of the hanging node constraint equations 
>
> x_{i} = a_{ij} x_{j}
>
>  would be the same in both cases, only x_{j}'s would be different in both 
> cases. Now x_{j}'s are nodes without any constraints which are set to the 
> correct values explicitly in both cases:
>
> if(!constraints.is_constrained(globalDofIndex))
>vec1[globalDofIndex]=nodalCoor.norm();
>
> if(!onlyHangingNodeConstraints.is_constrained(globalDofIndex))
>vec2[globalDofIndex]=nodalCoor.norm();
>
> So the hanging nodes in both cases should have the same value after 
> calling distribute. 
>  
>  
>
>> If you suspect that there is a bug in constraints, you could check this 
>> by simply choosing some more-or-less random vector, distribute and 
>> plot-over-line in Paraview / Visit. 
>> More cumbersome comparison would be to evaluate random field at the 
>> opposite points.
>> You can use the FEFieldFunction class and then choose L/2-\delta and 
>> -L/2+\delta with \delta = 1e-8 or so for the X coordinate, and then 
>> whatever you want for Y/Z. This should give you the same value at any 
>> pair of periodically matching points for a random input vector after 
>> constraints are distributed.
>>
>>  I will try doing this. 
>
> Best,
> Sambit
>



[deal.II] Re: multiply constrained dofs (hanging nodes+periodic) fails a simple test case

2018-04-11 Thread Sambit Das
Hi Denis,


> I don't think that's the case. The domain is indeed periodic, but this is 
> completely detached from location of support/nodal points. 
> Same applies to geometry, you will have different coordinates of vertices 
> across the PBC so
>
>
I agree, the location of nodal points is detached from the periodicity of 
the domain, but in this case the origin is at the center of the hypercube. 
This artificially enforces that the nodal_coordinate.norm() is periodic. 

Best,
Sambit



[deal.II] Re: Error in writing and reading cell based data for restart

2018-04-09 Thread Sambit Das

>
>
> *An error occurred in line <3236> of file 
> 
>  
> in function*
> *void dealii::parallel::distributed::Triangulation<dim, 
> spacedim>::notify_ready_to_unpack(unsigned int, const std::function<void 
> (const dealii::Triangulation<dim, spacedim>::cell_iterator &, 
> dealii::Triangulation<dim, spacedim>::CellStatus, const void *)> &) [with 
> int dim = 3, int spacedim = 3]*
> *The violated condition was: *
> *offset < sizeof(CellStatus)+attached_data_size*
> *Additional information: *
> *invalid offset in notify_ready_to_unpack()*
>

I tried to debug this using the DDT debugger; the reason the above 
condition is violated is that attached_data_size=0, which is strange, as 
the *triangulationChk*.*info* file mentions one attached object:

* triangulationChk.info *
*version nproc attached_bytes n_attached_objs n_coarse_cells*
*2 1 12 1 1* 



[deal.II] Error in writing and reading cell based data for restart

2018-04-09 Thread Sambit Das
Hi all,

I have written the following failing minimal example (also attached) where 
I create a single element parallel distributed triangulation and try to 
write and read a double. 

*using namespace dealii;*
*int main (int argc, char *argv[])*
*{*
*  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);*

*  const double L=20;*
*  parallel::distributed::Triangulation<3> triangulation(MPI_COMM_WORLD);*
*  GridGenerator::hyper_cube (triangulation, -L, L);*

*  unsigned int offset = triangulation.register_data_attach*
*   (sizeof(double),*
*[&](const typename 
dealii::parallel::distributed::Triangulation<3>::cell_iterator &,*
*const typename 
dealii::parallel::distributed::Triangulation<3>::CellStatus status,*
*void * data) -> void*
*{*

*  double* dataStore = reinterpret_cast<double *>(data);*
*  *dataStore=0.0;*
*}*
*);*

*  std::cout<< "offset=" << offset << std::endl;*
*  std::string filename="triangulationChk";*
*  triangulation.save(filename.c_str());*

*  triangulation.load(filename.c_str());*
*  triangulation.notify_ready_to_unpack*
*   (offset,[&](const typename 
dealii::parallel::distributed::Triangulation<3>::cell_iterator &,*
* const typename 
dealii::parallel::distributed::Triangulation<3>::CellStatus status,*
* const void * data) -> void*
* {*
* }*
*);*
*}*


I get the following error:

*An error occurred in line <3236> of file 

 
in function*
*void dealii::parallel::distributed::Triangulation<dim, 
spacedim>::notify_ready_to_unpack(unsigned int, const std::function<void 
(const dealii::Triangulation<dim, spacedim>::cell_iterator &, 
dealii::Triangulation<dim, spacedim>::CellStatus, const void *)> &) [with 
int dim = 3, int spacedim = 3]*
*The violated condition was: *
*offset < sizeof(CellStatus)+attached_data_size*
*Additional information: *
*invalid offset in notify_ready_to_unpack()*

I am wondering if I am missing something in my implementation.

Thanks,
Sambit

//Include all deal.II header file
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
//Include generic C++ headers
#include 
#include 


using namespace dealii;
int main (int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

  const double L=20;
  parallel::distributed::Triangulation<3> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube (triangulation, -L, L);

  unsigned int offset = triangulation.register_data_attach
	  (sizeof(double),
		   [&](const typename dealii::parallel::distributed::Triangulation<3>::cell_iterator &,
		   const typename dealii::parallel::distributed::Triangulation<3>::CellStatus status,
		   void * data) -> void
		   {

			 double* dataStore = reinterpret_cast<double *>(data);
			 *dataStore=0.0;
		   }
		   );

  std::cout<< "offset=" << offset << std::endl;
  std::string filename="triangulationChk";
  triangulation.save(filename.c_str());

  triangulation.load(filename.c_str());
  triangulation.notify_ready_to_unpack
	  (offset,[&](const typename dealii::parallel::distributed::Triangulation<3>::cell_iterator &,
	const typename dealii::parallel::distributed::Triangulation<3>::CellStatus status,
	const void * data) -> void
	{
	}
	   );
}


Re: [deal.II] TBB error inside FESystem constructor when using development version dealii (dealii-8.5.1 works fine) in debug mode

2018-03-22 Thread Sambit Das
Hi Matthias,

I reinstalled deal.II to use the external TBB library, and that resolved my 
issue :) 
I used the following additional flags:

-DDEAL_II_WITH_THREADS=ON 
-DTBB_INCLUDE_DIRS="/sw/arcts/centos7/intel/18.1/compilers_and_libraries_2018.1.163/linux/tbb/include/tbb"
 
-DTBB_USER_INCLUDE_DIRS="/sw/arcts/centos7/intel/18.1/compilers_and_libraries_2018.1.163/linux/tbb/include/tbb"
 
-DTBB_LIBRARIES="/sw/arcts/centos7/intel/18.1/compilers_and_libraries_2018.1.163/linux/tbb/lib/intel64_lin/gcc4.4/libtbb.so"

Thanks a lot for your suggestion.

Best,
Sambit

On Wednesday, March 21, 2018 at 2:44:54 PM UTC-5, Sambit Das wrote:
>
> Hi Matthias,
>
> Thanks for your reply.
>
>>
>>
>> Out of curiosity, can you please attach the detailed.log file so that we 
>> can have a look at the full link interface? :-) I am curious where tbb 
>> comes in. 
>>
> I have attached *dealiiDetailed.log* below. There it is picking up the 
> bundled TBB, but when I took a look at the CMakeOutput.log (attached below 
> as *dealiiInstallationCMakeOutput.log*), it has picked up Intel's TBB.
>  
>
>> You have to recompile deal.II to use the external TBB library 
>> instead. [1]
>>
> Is there a dealii cmake flag to force dealii to use external tbb library?
>
> Best,
> Sambit 
>



[deal.II] Re: Hanging node constraints and periodic constraints together causing an issue

2018-01-23 Thread Sambit Das
Hello Dr. Arndt,

The above fix resolved the issue in the minimal example. Thanks a lot for 
providing the fix.

Best,
Sambit

On Tuesday, January 23, 2018 at 6:16:02 AM UTC-6, Daniel Arndt wrote:
>
> Sambit,
>
> Please try if https://github.com/dealii/dealii/pull/5779 fixes the issue 
> for you.
>
> Best,
> Daniel
>  
>
> Am Dienstag, 16. Januar 2018 22:06:55 UTC+1 schrieb Sambit Das:
>>
>> Thank you, Dr. Arndt.
>>
>> Best,
>> Sambit
>>
>> On Tuesday, January 16, 2018 at 11:16:08 AM UTC-6, Daniel Arndt wrote:
>>>
>>> Sambit,
>>>
>>> I created an issue at https://github.com/dealii/dealii/issues/5741 with 
>>> a modified version of your example.
>>>
>>> Best,
>>> Daniel
>>>
>>



[deal.II] Re: Hanging node constraints and periodic constraints together causing an issue

2018-01-16 Thread Sambit Das
Thank you, Dr. Arndt.

Best,
Sambit

On Tuesday, January 16, 2018 at 11:16:08 AM UTC-6, Daniel Arndt wrote:
>
> Sambit,
>
> I created an issue at https://github.com/dealii/dealii/issues/5741 with a 
> modified version of your example.
>
> Best,
> Daniel
>



[deal.II] Hanging node constraints and periodic constraints together causing an issue

2018-01-13 Thread Sambit Das
Hello,

I am facing an issue when I use hanging nodes with periodic boundary 
conditions. I have reproduced the error in the attached minimal example 
where I do the following steps:

1) Create a hypercube
2) Set appropriate boundary_ids on the faces of the hypercube for periodic 
boundary conditions, call collect_periodic_faces and 
triangulation.add_periodicity.
3) Perform two levels of mesh refinement: a) first, one step of uniform 
refinement of the hypercube to get 8 cells, and b) then pick one of the 8 
cells and refine only that cell, which creates hanging nodes on its faces. 
Finally I get 15 cells (see attached image).
4) Create constraintMatrix which includes both hanging node and periodic 
constraints.

*The issue:*
 If I flip the boundary ids of the face pairs while marking the faces of 
the hypercube in step-2 (change Ltemp=L in line 62 to Ltemp=-L), the size 
of the constraint matrix created in step-4 changes. Further investigation 
of the constraint matrix entries in the two cases revealed
that the number of identity constraints is different: in one case it gives 
19 identity constraints, and 17 in the other case. Manually counting the 
identity constraints using the attached image gives 19. 

Is it correct to expect the size of the constraint matrix to remain 
unchanged in the above case?

Additional note: The minimal example doesn't throw any error in the debug 
mode. I am using deal ii version 8.5.1 and serial triangulation.

Thanks,
Sambit

//Include all deal.II header file
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
//Include generic C++ headers
#include 
#include 


using namespace dealii;
int main (int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

  const double L=20;
  Triangulation<3,3> triangulation;  
  GridGenerator::hyper_cube (triangulation, -L, L);

  const double Ltemp=L;//FIXME: Size of constraint matrix changes between Ltemp=L, and Ltemp=-L;
  //mark faces
  typename Triangulation<3,3>::active_cell_iterator cell = triangulation.begin_active(), endc = triangulation.end();
  for(; cell!=endc; ++cell) 
  {
  for(unsigned int f=0; f < GeometryInfo<3>::faces_per_cell; ++f)
	{
	  const Point<3> face_center = cell->face(f)->center();
	  if(cell->face(f)->at_boundary())
	{
	  if (std::abs(face_center[0]-Ltemp)<1.0e-5)
		cell->face(f)->set_boundary_id(1);
	  else if (std::abs(face_center[0]+Ltemp)<1.0e-5)
		cell->face(f)->set_boundary_id(2);
	  else if (std::abs(face_center[1]-Ltemp)<1.0e-5)
		cell->face(f)->set_boundary_id(3);
	  else if (std::abs(face_center[1]+Ltemp)<1.0e-5)
		cell->face(f)->set_boundary_id(4);
	  else if (std::abs(face_center[2]-Ltemp)<1.0e-5)
		cell->face(f)->set_boundary_id(5);
	  else if (std::abs(face_center[2]+Ltemp)<1.0e-5)
		cell->face(f)->set_boundary_id(6);
	}
	}
  }
 
  std::vector<GridTools::PeriodicFacePair<typename Triangulation<3,3>::cell_iterator> > periodicity_vector;
  for (int i = 0; i < 3; ++i)
  {
  GridTools::collect_periodic_faces(triangulation, /*b_id1*/ 2*i+1, /*b_id2*/ 2*i+2,/*direction*/ i, periodicity_vector);
  }
  triangulation.add_periodicity(periodicity_vector);

  //two level mesh refinement 
  
  triangulation.refine_global(1);

  typename Triangulation<3,3>::active_cell_iterator cellBegin = triangulation.begin_active();
  cellBegin->set_refine_flag();
  triangulation.execute_coarsening_and_refinement(); 
  
  std::cout << "number of elements: "
	<< triangulation.n_global_active_cells()
	<< std::endl;   
  /
  
  FESystem<3> FE(FE_Q<3>(QGaussLobatto<1>(2)), 1); //linear shape function
  DoFHandler<3> dofHandler (triangulation);
  dofHandler.distribute_dofs(FE);

  ///creating constraint matrix
  ConstraintMatrix constraints;
  constraints.clear();
  std::cout<< "Adding hanging node constraints... "<< std::endl;
  DoFTools::make_hanging_node_constraints(dofHandler, constraints);
  std::cout<< "Adding periodicity constraints... "<< std::endl;
  std::vector<GridTools::PeriodicFacePair<typename DoFHandler<3>::cell_iterator> > periodicity_vectorDof;
  for (int i = 0; i < 3; ++i)
{
  GridTools::collect_periodic_faces(dofHandler, /*b_id1*/ 2*i+1, /*b_id2*/ 

Re: [deal.II] Re: Strange error in 9.0.0-pre version: the size of two component support points is not twice the single component support points

2017-12-18 Thread Sambit Das
Hello Prof. Bangerth,

Thank you for your reply. I have trimmed the minimal example to just 
reproduce the error in the debug mode and created a github issue.

Thanks,
Sambit

On Sunday, December 17, 2017 at 4:02:21 PM UTC-6, Wolfgang Bangerth wrote:
>
> On 12/16/2017 06:40 PM, Sambit Das wrote: 
> > Just to add to my above post when I ran in debug mode I get the 
> following error 
>
> Everything that happens after this error can definitely not be relied upon 
> any 
> more. So I'm not surprised that the information you get is not correct. 
>
> Can you create a github issue with the minimal testcase that reproduces 
> this 
> error (and remove any code that would run after the call to 
> distribute_dofs 
> that triggers this)? 
>
> Once that is fixed, we can go back to the original issue and see whether 
> that 
> is fixed as well. 
>
> Thanks 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



[deal.II] Re: Moving vertices of parallel triangulation breaks periodic face pair match

2017-12-11 Thread Sambit Das
Thanks a lot for providing the patch, Dr. Arndt.


> Still, the check for orthogonal_equality doesn't need to succeed even if 
> the vertices were moved correctly. We are only updating vertices of active 
> cells while the PeriodicFacePairs
> store CellIterators for the coarsest level. Not all of the vertices of 
> these coarse cells are necessarily part of any active cell. Hence, their 
> location might not be updated.
> Of course the topological information stored in PeriodicFacePairs is not 
> changed by moving the mesh, so checking that all the vertices (on ghost 
> cells) have been moved correctly should be all you need.
> This is also what the test in PR #5612 does.
>
I now understand why I was still failing orthogonal equality even after I 
moved all ghost nodes consistently (I tried this separately).

Thank you,
Sambit



Re: [deal.II] Re: DoFTools::make_hanging_node_constraints affecting DoFTools::make_periodicity_constraints for a single element triangulation with no hanging nodes

2017-12-11 Thread Sambit Das
Dear Dr. Arndt,

It seems I was accidentally running the minimal example in non-debug 
mode. After running in debug mode, I am indeed getting the error message: 


> An error occurred in line <1510> of file 
> <../include/deal.II/lac/constraint_matrix.h> in function
> void dealii::ConstraintMatrix::add_line(const 
> dealii::ConstraintMatrix::size_type)
> The violated condition was: 
> sorted==false
> Additional information: 
> (none)
>
Thanks for also pointing out my incorrect use of FE_Q. The debug mode also 
returned an error there. I have now used QGaussLobatto as suggested by the 
documentation.

Best,
Sambit



[deal.II] DoFTools::make_hanging_node_constraints affecting DoFTools::make_periodicity_constraints for a single element triangulation with no hanging nodes

2017-12-10 Thread Sambit Das
Hi All,

I am facing the following issue: I am setting periodic boundary conditions 
in all directions on a dofHandler object attached to a single element 
triangulation. When I print the ConstraintMatrix I observe that a trivial 
call to DoFTools::make_hanging_node_constraints(..) which I call prior to 
calling DoFTools::make_periodicity_constraints gives an erroneous 
ConstraintMatrix as shown below for the 8 degrees of freedom corresponding 
to the 8 nodes.

0 1:  1
2 3:  1
4 5:  1
6 7:  1
1 3:  1
5 7:  1
3 7:  1

However, if I don't call ConstraintMatrix.close() (line 102 in the 
minimal example) after the trivial call to 
DoFTools::make_hanging_node_constraints(..), I get the correct 
ConstraintMatrix. Likewise, if I don't use 
make_hanging_node_constraints(..), the ConstraintMatrix is correct too:
0 7:  1
1 7:  1
2 7:  1
3 7:  1
4 7:  1
5 7:  1
6 7:  1

I also checked that if I print the ConstraintMatrix after just setting 
the hanging node constraints, it doesn't print anything.
I wonder if I am making a mistake here. I have provided a minimal working 
example file which reproduces this error. 

Thanks,
Sambit

//Include deal.II header files
//(the archive stripped the original angle-bracketed include names; the
// headers below are the ones this example actually needs)
#include <deal.II/base/geometry_info.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/point.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/lac/constraint_matrix.h>
//Include generic C++ headers
#include <iostream>
#include <vector>


using namespace dealii;
int main (int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

  const double L=20;
  parallel::distributed::Triangulation<3> triangulation(MPI_COMM_WORLD);  
  GridGenerator::hyper_cube (triangulation, -L, L);


  //mark faces
  typename parallel::distributed::Triangulation<3>::active_cell_iterator cell = triangulation.begin_active(), endc = triangulation.end();
  for(; cell!=endc; ++cell) 
  {
  for(unsigned int f=0; f < GeometryInfo<3>::faces_per_cell; ++f)
	{
	  const Point<3> face_center = cell->face(f)->center();
	  if(cell->face(f)->at_boundary())
	{
	  if (std::abs(face_center[0]+(L))<1.0e-8)
		cell->face(f)->set_boundary_id(1);
	  else if (std::abs(face_center[0]-(L))<1.0e-8)
		cell->face(f)->set_boundary_id(2);
	  else if (std::abs(face_center[1]+(L))<1.0e-8)
		cell->face(f)->set_boundary_id(3);
	  else if (std::abs(face_center[1]-(L))<1.0e-8)
		cell->face(f)->set_boundary_id(4);
	  else if (std::abs(face_center[2]+(L))<1.0e-8)
		cell->face(f)->set_boundary_id(5);
	  else if (std::abs(face_center[2]-(L))<1.0e-8)
		cell->face(f)->set_boundary_id(6);
	}
	}
  }
  
  std::vector<GridTools::PeriodicFacePair<typename parallel::distributed::Triangulation<3>::cell_iterator> > periodicity_vector;
  for (int i = 0; i < 3; ++i)
  {
  GridTools::collect_periodic_faces(triangulation, /*b_id1*/ 2*i+1, /*b_id2*/ 2*i+2,/*direction*/ i, periodicity_vector);
  }
  triangulation.add_periodicity(periodicity_vector);
  
  FESystem<3> FE(FE_Q<3>(QGauss<1>(2)), 1); //linear shape function
  DoFHandler<3> dofHandler (triangulation);
  dofHandler.distribute_dofs(FE);

  ConstraintMatrix perConstraints;
  perConstraints.clear();
  DoFTools::make_hanging_node_constraints(dofHandler, perConstraints);
  perConstraints.close();

  std::cout<< "Adding periodicity constraints to dofHandler "<< std::endl;
  std::vector<GridTools::PeriodicFacePair<DoFHandler<3>::cell_iterator> > periodicity_vectorDof;
  for (int i = 0; i < 3; ++i)
{
  GridTools::collect_periodic_faces(dofHandler, /*b_id1*/ 2*i+1, /*b_id2*/ 2*i+2,/*direction*/ i, periodicity_vectorDof);
}
  DoFTools::make_periodicity_constraints<DoFHandler<3> >(periodicity_vectorDof, perConstraints);
  perConstraints.close();
  std::cout<< "printing constraint matrix"<< std::endl;
  perConstraints.print(std::cout);
  return 0;
}

[deal.II] Re: Moving vertices of parallel triangulation breaks periodic face pair match

2017-12-05 Thread Sambit Das
Dear Dr. Arndt,

I am using GridTools::collect_periodic_faces(..) as a sanity check 
after moving the triangulation. I do not set periodicity constraints again. 
The documentation also mentions "it is possible to call this function 
several times with different boundary ids to generate a vector with all 
periodic pairs". Moreover, in my actual application I never do refinement, 
as I read a pre-generated mesh file, and the same error occurs there after 
moving the triangulation.

I did two more checks in the minimal example:
1) When I do not call GridTools::collect_periodic_faces(..) after refinement, 
I do not get any error messages. But is there a way to check whether the 
periodic match still holds in the moved triangulation without calling 
GridTools::collect_periodic_faces(..)?
2) When I place GridTools::collect_periodic_faces(..) before moving the 
triangulation but after refinement, it works fine in serial and parallel, 
which suggests something breaks after the movement. 

Best,
Sambit

>
>
> your minimal example fails, because you are calling 
>   GridTools::collect_periodic_faces(triangulation, /*b_id1*/ 2*i+1, 
> /*b_id2*/ 2*i+2,/*direction*/ i, periodicity_vector);
> after 
>   triangulation.refine_global(2);
> again. As explained in the documentation 
> 
>  
> this is not unexpected.
>
> Does your issue persist after making sure to call
>   GridTools::collect_periodic_faces(triangulation, /*b_id1*/ 2*i+1, 
> /*b_id2*/ 2*i+2,/*direction*/ i, periodicity_vector);
> only before mesh refinement?
>
> Best,
> Daniel
>



[deal.II] Re: Moving vertices of parallel triangulation breaks periodic face pair match

2017-12-04 Thread Sambit Das
The approach I discussed in the last post worked on 2 processors but 
didn't work for 16 processors: the periodic face pair match failed, but I am no 
longer getting segfaults. I think the ghost values are still not being set 
correctly.
The parallel distributed displacement vector constructor takes the locally owned 
nodes and the ghost nodes as arguments. I will create another MWE and post it in 
the group.



[deal.II] Re: Moving vertices of parallel triangulation breaks periodic face pair match

2017-12-04 Thread Sambit Das
Hello,

I wanted to update that I have resolved the issue by using a custom 
parallel partitioned displacement vector initialized with all the ghost 
indices I would need for moving the ghost elements of the triangulation.

parallel::distributed::Vector<Number>::Vector(const IndexSet &local_range,
                                              const IndexSet &ghost_indices,
                                              const MPI_Comm  communicator)

(see 
https://www.dealii.org/8.4.0/doxygen/deal.II/classparallel_1_1distributed_1_1Vector.html)

 I no longer use triangulation.communicate_locally_moved_vertices(..).

Best,
Sambit

On Monday, December 4, 2017 at 5:46:23 PM UTC-5, Sambit Das wrote:
>
> Hello Rajat,
>
> Thanks for the suggestion. It worked for the simple case of moving all 
> nodes of the triangulation by a constant displacement. In the actual 
> application, I want to move the mesh using the values from a 
> parallel:distributed displacement field which also has periodic boundary 
> conditions. For example
>
> for (int d=0; d<3; ++d)
>   
> vertexDisplacement[d]=displacementVector(cell->vertex_dof_index(vertex_no,d));
>
> However, this leads to illegal memory access on the displacement vector 
> when iterating over the ghost cells of the triangulation. 
>
> I have also attached below a minimal non-working example of my issue. It 
> would be very helpful if I could use a function on the lines of 
> triangulation.communicate_locally_moved_vertices(..).
>
> Thank you,
> Sambit
>
>
> On Monday, December 4, 2017 at 2:22:03 PM UTC-5, RAJAT ARORA wrote:
>>
>> Hello Sambit,
>>
>> Can you try doing this?
>> Move the vertices of the ghost cells as well, and avoid calling 
>> dftPtr->triangulation.communicate_locally_moved_vertices(locally_owned_vertices);
>>
>> When I tried to use 
>> triangulation.communicate_locally_moved_vertices(locally_owned_vertices) 
>> last time, something weird happened (I don't remember exactly what; the code 
>> was not doing what I was expecting). Since then, I have avoided 
>> using it.
>>
>>
>>
>> Thanks.
>>
>> On Monday, December 4, 2017 at 1:22:37 PM UTC-5, Sambit Das wrote:
>>>
>>> Hello Dr. Arndt,
>>>
>>> Thank you for your reply. My apologies for not being clear on the " 
>>> breaks the periodic face pairs match". Following is the error message I get 
>>> when I run on parallel.
>>>  
>>> *An error occurred in line <3699> of file 
>>> 
>>>  
>>> in function*
>>> *void 
>>> dealii::GridTools::match_periodic_face_pairs(std::set<std::pair<CellIterator,
>>>  
>>> unsigned int>, std::less<std::pair<CellIterator, unsigned int>>, 
>>> std::allocator<std::pair<CellIterator, unsigned int>>> &, 
>>> std::set<std::pair<dealii::identity::type, unsigned int>, 
>>> std::less<std::pair<dealii::identity::type, unsigned int>>, 
>>> std::allocator<std::pair<dealii::identity::type, unsigned 
>>> int>>> &, int, 
>>> std::vector<dealii::GridTools::PeriodicFacePair, 
>>> std::allocator<dealii::GridTools::PeriodicFacePair>> &, const 
>>> dealii::Tensor<1, CellIterator::AccessorType::space_dimension, double> &, 
>>> const dealii::FullMatrix &) [with CellIterator = 
>>> dealii::TriaIterator<dealii::CellAccessor<3, 3>>]*
>>> *The violated condition was: *
>>> *n_matches == pairs1.size() && pairs2.size() == 0*
>>> *Additional information: *
>>> *Unmatched faces on periodic boundaries*
>>>
>>>
>>> I suspect this has something to do with the ghost nodes across the 
>>> periodic boundary not being handled correctly. I am right now creating a 
>>> minimal working example of my bug. I will post that soon.
>>>
>>> Best,
>>> Sambit
>>>
>>>
>>> On Monday, December 4, 2017 at 8:53:51 AM UTC-5, Daniel Arndt wrote:
>>>>
>>>> Sambit,
>>>>
>>>> I am trying to move all parallel triangulation nodes by a constant 
>>>>> displacement, but that breaks the periodic face pairs match when I 
>>>>> call GridTools::collect_periodic_faces(...). I use the following code for 
>>>>> the mesh movement. The dftPtr->triangulation has periodicity constraints 
>>>>> using add_periodicity(...). 
>>>>>
>>>> What exactly do you mean by "that breaks the periodic face pairs 
>>>> match"? What is the error you are observing?
>>>> Can you provide us with a minimal example that shows the problem so we 
>>>> can check?
>>>>
>>>> Best,
>>>> Daniel
>>>>
>>>



[deal.II] Re: Moving vertices of parallel triangulation breaks periodic face pair match

2017-12-04 Thread Sambit Das
Hello Rajat,

Thanks for the suggestion. It worked for the simple case of moving all 
nodes of the triangulation by a constant displacement. In the actual 
application, I want to move the mesh using the values from a 
parallel:distributed displacement field which also has periodic boundary 
conditions. For example

for (int d=0; d<3; ++d)
  
vertexDisplacement[d]=displacementVector(cell->vertex_dof_index(vertex_no,d));

However, this leads to illegal memory access on the displacement vector 
when iterating over the ghost cells of the triangulation. 

I have also attached below a minimal non-working example of my issue. It 
would be very helpful if I could use a function on the lines of 
triangulation.communicate_locally_moved_vertices(..).

Thank you,
Sambit


On Monday, December 4, 2017 at 2:22:03 PM UTC-5, RAJAT ARORA wrote:
>
> Hello Sambit,
>
> Can you try doing this?
> Move the vertices of the ghost cells as well, and avoid calling 
> dftPtr->triangulation.communicate_locally_moved_vertices(locally_owned_vertices);
>
> When I tried to use 
> triangulation.communicate_locally_moved_vertices(locally_owned_vertices) 
> last time, something weird happened (I don't remember exactly what; the code 
> was not doing what I was expecting). Since then, I have avoided 
> using it.
>
>
>
> Thanks.
>
> On Monday, December 4, 2017 at 1:22:37 PM UTC-5, Sambit Das wrote:
>>
>> Hello Dr. Arndt,
>>
>> Thank you for your reply. My apologies for not being clear on the " 
>> breaks the periodic face pairs match". Following is the error message I get 
>> when I run on parallel.
>>  
>> *An error occurred in line <3699> of file 
>> 
>>  
>> in function*
>> *void 
>> dealii::GridTools::match_periodic_face_pairs(std::set<std::pair<CellIterator,
>>  
>> unsigned int>, std::less<std::pair<CellIterator, unsigned int>>, 
>> std::allocator<std::pair<CellIterator, unsigned int>>> &, 
>> std::set<std::pair<dealii::identity::type, unsigned int>, 
>> std::less<std::pair<dealii::identity::type, unsigned int>>, 
>> std::allocator<std::pair<dealii::identity::type, unsigned 
>> int>>> &, int, 
>> std::vector<dealii::GridTools::PeriodicFacePair, 
>> std::allocator<dealii::GridTools::PeriodicFacePair>> &, const 
>> dealii::Tensor<1, CellIterator::AccessorType::space_dimension, double> &, 
>> const dealii::FullMatrix &) [with CellIterator = 
>> dealii::TriaIterator<dealii::CellAccessor<3, 3>>]*
>> *The violated condition was: *
>> *n_matches == pairs1.size() && pairs2.size() == 0*
>> *Additional information: *
>> *Unmatched faces on periodic boundaries*
>>
>>
>> I suspect this has something to do with the ghost nodes across the 
>> periodic boundary not being handled correctly. I am right now creating a 
>> minimal working example of my bug. I will post that soon.
>>
>> Best,
>> Sambit
>>
>>
>> On Monday, December 4, 2017 at 8:53:51 AM UTC-5, Daniel Arndt wrote:
>>>
>>> Sambit,
>>>
>>> I am trying to move all parallel triangulation nodes by a constant 
>>>> displacement, but that breaks the periodic face pairs match when I 
>>>> call GridTools::collect_periodic_faces(...). I use the following code for 
>>>> the mesh movement. The dftPtr->triangulation has periodicity constraints 
>>>> using add_periodicity(...). 
>>>>
>>> What exactly do you mean by "that breaks the periodic face pairs match"? 
>>> What is the error you are observing?
>>> Can you provide us with a minimal example that shows the problem so we 
>>> can check?
>>>
>>> Best,
>>> Daniel
>>>
>>

//C++ and deal.II headers
//(the archive stripped the original include names; these cover what the
// surviving lines of this example need)
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <iostream>

using namespace dealii;
int main (int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

  const double L=20;
  parallel::distributed::Triangulation<3> triangulation(MPI_COMM_WORLD);  
  GridGenerator::hyper_cube (triangulation, -L, L);

[deal.II] Re: Moving vertices of parallel triangulation breaks periodic face pair match

2017-12-04 Thread Sambit Das
Hello Dr. Arndt,

Thank you for your reply. My apologies for not being clear on the " breaks 
the periodic face pairs match". Following is the error message I get when I 
run on parallel.
 
*An error occurred in line <3699> of file 

 
in function*
*void 
dealii::GridTools::match_periodic_face_pairs(std::set<std::pair<CellIterator, 
unsigned int>, std::less<std::pair<CellIterator, unsigned int>>, 
std::allocator<std::pair<CellIterator, unsigned int>>> &, 
std::set<std::pair<dealii::identity::type, unsigned int>, 
std::less<std::pair<dealii::identity::type, unsigned int>>, 
std::allocator<std::pair<dealii::identity::type, unsigned int>>> &, int, 
std::vector<dealii::GridTools::PeriodicFacePair, 
std::allocator<dealii::GridTools::PeriodicFacePair>> &, const 
dealii::Tensor<1, CellIterator::AccessorType::space_dimension, double> &, 
const dealii::FullMatrix &) [with CellIterator = 
dealii::TriaIterator<dealii::CellAccessor<3, 3>>]*
*The violated condition was: *
*n_matches == pairs1.size() && pairs2.size() == 0*
*Additional information: *
*Unmatched faces on periodic boundaries*


I suspect this has something to do with the ghost nodes across the periodic 
boundary not being handled correctly. I am right now creating a minimal 
working example of my bug. I will post that soon.

Best,
Sambit


On Monday, December 4, 2017 at 8:53:51 AM UTC-5, Daniel Arndt wrote:
>
> Sambit,
>
> I am trying to move all parallel triangulation nodes by a constant 
>> displacement, but that breaks the periodic face pairs match when I 
>> call GridTools::collect_periodic_faces(...). I use the following code for 
>> the mesh movement. The dftPtr->triangulation has periodicity constraints 
>> using add_periodicity(...). 
>>
> What exactly do you mean by "that breaks the periodic face pairs match"? 
> What is the error you are observing?
> Can you provide us with a minimal example that shows the problem so we can 
> check?
>
> Best,
> Daniel
>



[deal.II] Moving vertices of parallel triangulation breaks periodic face pair match

2017-12-04 Thread Sambit Das
I forgot to add that this works on serial but fails for multiple processors



[deal.II] Moving vertices of parallel triangulation breaks periodic face pair match

2017-12-04 Thread Sambit Das
Hi All,

I am trying to move all parallel triangulation nodes by a constant 
displacement, but that breaks the periodic face pairs match when I 
call GridTools::collect_periodic_faces(...). I use the following code for 
the mesh movement. The dftPtr->triangulation has periodicity constraints 
using add_periodicity(...). 
  

  std::vector<bool> vertex_moved(dftPtr->triangulation.n_vertices(), false);
  const std::vector<bool> locally_owned_vertices = 
      GridTools::get_locally_owned_vertices(dftPtr->triangulation);
  for (typename DoFHandler<3>::active_cell_iterator
       cell=d_dofHandlerForce.begin_active(); cell!=d_dofHandlerForce.end(); ++cell)
  {
     if (cell->is_locally_owned())
     {
        for (unsigned int vertex_no=0; vertex_no<GeometryInfo<3>::vertices_per_cell; ++vertex_no)
        {
           const unsigned int global_vertex_no = cell->vertex_index(vertex_no);

           if (vertex_moved[global_vertex_no]
               || !locally_owned_vertices[global_vertex_no])
             continue;
           Point<3> vertexDisplacement;
           vertexDisplacement[0]=1e-4; vertexDisplacement[1]=0; vertexDisplacement[2]=0;
           cell->vertex(vertex_no) += vertexDisplacement;
           vertex_moved[global_vertex_no] = true;
        }
     }
  }
  dftPtr->triangulation.communicate_locally_moved_vertices(locally_owned_vertices);
}

Any ideas if I am doing anything wrong? Thanks in advance.

Best,
Sambit



Re: [deal.II] Wrong result in use of FEEvaluation with different ConstraintMatrix objects

2017-12-02 Thread Sambit Das
Dear Martin,

Thank you for your quick reply. I understand my mistake now: the issue 
stems from "matrix_free_data.initialize_dof_vector(VECTOR_NAME, 1);" in the 
second case. I had used matrix_free_data.initialize_dof_vector(VECTOR_NAME, 0) 
in both cases, which causes a mismatch with the argument to FEEvaluation 
in the second case. I was not running in debug 
mode, which is why it didn't throw an error. I will use the deal.II debug 
mode for such issues in the future.

Thank you again,
Best,
Sambit

On Saturday, December 2, 2017 at 6:57:26 AM UTC-6, Martin Kronbichler wrote:
>
> Dear Sambit, 
>
> If the result of the two cases is different and 
> ConstraintMatrix::distribute() was called in both cases, I expect there 
> to be some confusion regarding the indices of ghost entries. In debug 
> mode, there should be a check that the parallel partitioner of the 
> vector inside FEEvaluation::read_dof_values* does match with the 
> expected index numbering. Did you run in debug mode? To localize the 
> issue, can you check whether you called 
> "matrix_free_data.initialize_dof_vector(VECTOR_NAME, 1);" in the second 
> case? Note the optional argument "1" that must match with the "1" passed 
> to FEEvaluation. If the issue still appears, it must be because 
> ConstraintMatrix::distribute() does not do all updates. In that case, I 
> would appreciate if you can give us a workable example. 
>
> Best, 
> Martin 
>
>



[deal.II] Re: Wrong result in use of FEEvaluation with different ConstraintMatrix objects

2017-12-01 Thread Sambit Das
Just to clarify my observations on the serial and parallel discrepancy: the 
value from case 1 remains the same in serial and parallel, while case 2 gives 
different values in serial and parallel.

On Friday, December 1, 2017 at 3:39:20 PM UTC-6, Sambit Das wrote:
>
> Hi All,
>
> I have reduced my bug to the following minimal example: I create a 
> MatrixFree object for two different ConstraintMatrices (provided as vector 
> of ConstraintMatrices to the reinit(..)). 
>
> matrix_free_data.reinit(dofHandlerVector, d_constraintsVector, 
> quadratureVector, additional_data);
>
> The dofHandlers are the same for both the ConstraintMatrices. Only 
> difference- the first ConstraintMatrix has periodic constraints while the 
> second one has no constraints. Next I run two cases
>
> *  Case1: *
>
>   FEEvaluation<3,FEOrder,FEOrder+1,1> phiTotEval(matrix_free_data,0, 0);
>   const unsigned int 
> numSubCells=matrix_free_data.n_components_filled(cell);
>   const int numQuadPoints=phiTotEval.n_q_points;
>
>   double phiTotRhoOutQuadSum=0.0;
>   for (unsigned int cell=0; cell<matrix_free_data.n_macro_cells(); ++cell){
>
> phiTotEval.reinit(cell); 
> phiTotEval.read_dof_values_plain(dftPtr->poissonPtr->phiTotRhoOut); 
> //phiTotRhoOut is a  parallel::distributed::Vector where I have previously 
> called ConstraintMatrix(the one with periodic constraints)::distribute() 
> and update_ghost_values()
>   
>   // I have also called 
> matrix_free_data.initialize_dof_vector(phiTotRhoOut) prior
> phiTotEval.evaluate(true,true);
>
> for (unsigned int q=0; q<numQuadPoints; ++q){
>VectorizedArray<double> phiTot_q = phiTotEval.get_value(q);   
>
>for (unsigned int iSubCell=0; iSubCell<numSubCells; ++iSubCell){
>   phiTotRhoOutQuadSum+=phiTot_q[iSubCell];
>}
> }
>   }
>   
>  double phiTotRhoOutQuadSumTotal=Utilities::MPI::sum(phiTotRhoOutQuadSum, 
> mpi_communicator);
>
>   if (this_mpi_process == 0){
>std::cout << "phiTotRhoOutQuadSumTotal_vectorized " << 
> phiTotRhoOutQuadSumTotal<<std::endl;  
>   }
>
>
> *  Case2: *
>   Same as Case 1; only the first line is changed to (the only difference 
> between the two cases is the fe_id argument to the FEEvaluation constructor)
>   FEEvaluation<3,FEOrder,FEOrder+1,1> phiTotEval(matrix_free_data,1, 0);
>
>
>
> I get different values of phiTotRhoOutQuadSumTotal for the two cases when 
> run on multiple processors, but the same value when run on a single processor. 
> Comparing with a non-vectorized loop, case 1, which uses the periodic 
> constraints, gives the correct answer. However, going by the manual, I 
> expected the same value, since I am using read_dof_values_plain(..), which 
> doesn't take any constraints into account. I am wondering if I am doing 
> anything wrong here.
>
> Best,
> Sambit
>



[deal.II] Wrong result in use of FEEvaluation with different ConstraintMatrix objects

2017-12-01 Thread Sambit Das
Hi All,

I have reduced my bug to the following minimal example: I create a 
MatrixFree object for two different ConstraintMatrices (provided as vector 
of ConstraintMatrices to the reinit(..)). 

matrix_free_data.reinit(dofHandlerVector, d_constraintsVector, 
quadratureVector, additional_data);

The dofHandlers are the same for both the ConstraintMatrices. Only 
difference- the first ConstraintMatrix has periodic constraints while the 
second one has no constraints. Next I run two cases

*  Case1: *

  FEEvaluation<3,FEOrder,FEOrder+1,1> phiTotEval(matrix_free_data,0, 0);
  const unsigned int numSubCells=matrix_free_data.n_components_filled(cell);
  const int numQuadPoints=phiTotEval.n_q_points;

  double phiTotRhoOutQuadSum=0.0;
  for (unsigned int cell=0; cell<matrix_free_data.n_macro_cells(); ++cell){

    phiTotEval.reinit(cell); 
    phiTotEval.read_dof_values_plain(dftPtr->poissonPtr->phiTotRhoOut); 
//phiTotRhoOut is a parallel::distributed::Vector where I have previously 
//called ConstraintMatrix(the one with periodic constraints)::distribute() 
//and update_ghost_values()

// I have also called 
//matrix_free_data.initialize_dof_vector(phiTotRhoOut) prior
    phiTotEval.evaluate(true,true);

    for (unsigned int q=0; q<numQuadPoints; ++q){
       VectorizedArray<double> phiTot_q = phiTotEval.get_value(q);   

       for (unsigned int iSubCell=0; iSubCell<numSubCells; ++iSubCell){
          phiTotRhoOutQuadSum+=phiTot_q[iSubCell];
       }
    }
  }

  double phiTotRhoOutQuadSumTotal=Utilities::MPI::sum(phiTotRhoOutQuadSum, 
mpi_communicator);

  if (this_mpi_process == 0){
   std::cout << "phiTotRhoOutQuadSumTotal_vectorized " << 
phiTotRhoOutQuadSumTotal<<std::endl;  
  }


*  Case2: *
  Same as Case 1; only the first line is changed to (the only difference 
between the two cases is the fe_id argument to the FEEvaluation constructor)
  FEEvaluation<3,FEOrder,FEOrder+1,1> phiTotEval(matrix_free_data,1, 0);



I get different values of phiTotRhoOutQuadSumTotal for the two cases when 
run on multiple processors, but the same value when run on a single processor. 
Comparing with a non-vectorized loop, case 1, which uses the periodic 
constraints, gives the correct answer. However, going by the manual, I 
expected the same value, since I am using read_dof_values_plain(..), which 
doesn't take any constraints into account. I am wondering if I am doing 
anything wrong here.

Best,
Sambit
