Re: [deal.II] Re: Periodic boundary conditions : Error using GridTools::collect_periodic_facepairs

2020-12-08 Thread Aaditya Lakshmanan
Hi Daniel, The serial implementation of the code was exiting with an error in the function call to *GridTools::collect_periodic_faces()*, so I was looking in the documentation for details on that specific function. I see now that the role of the '*direction*' argument is mentioned explicitly i

Re: [deal.II] Re: Periodic boundary conditions : Error using GridTools::collect_periodic_facepairs

2020-12-08 Thread Daniel Arndt
Aaditya, the documentation says "[...] More precisely, faces with coordinates only differing in the direction component are identified." Hence, it is crucial to choose this correctly. For a cube with standard coloring, faces 2 and 3 share the x-coordinates, so direction must be 1. Best, Daniel Am
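
A minimal sketch of what Daniel describes (assuming GridGenerator::hyper_cube with colorize = true, so that boundary ids 2 and 3 are the y = 0 and y = 1 faces; the refinement level is a placeholder):

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>

#include <vector>

int main()
{
  using namespace dealii;

  constexpr int dim = 2;

  // Colorized unit square: boundary ids 0/1 are the x = 0/1 faces,
  // ids 2/3 are the y = 0/1 faces.
  Triangulation<dim> triangulation;
  GridGenerator::hyper_cube(triangulation, 0., 1., /*colorize=*/true);

  // Faces 2 and 3 differ only in the y-coordinate, hence direction = 1.
  std::vector<GridTools::PeriodicFacePair<Triangulation<dim>::cell_iterator>>
    periodicity_vector;
  GridTools::collect_periodic_faces(triangulation,
                                    /*b_id1=*/2,
                                    /*b_id2=*/3,
                                    /*direction=*/1,
                                    periodicity_vector);

  triangulation.add_periodicity(periodicity_vector);
  triangulation.refine_global(2);
}

With direction = 0 the matching of these two faces should fail, which would be consistent with the error reported above.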

[deal.II] Re: Periodic boundary conditions : Error using GridTools::collect_periodic_facepairs

2020-12-08 Thread Aaditya Lakshmanan
Hi Marc, Quick update. I tried running a simulation with an implementation using the 'Triangulation' object instead, and I face the same issue as earlier. It returns the same error message as in the parallel case. Also, changing the *direction* argument from 0 to 1 for the constrai

[deal.II] Re: Periodic boundary conditions : Error using GridTools::collect_periodic_facepairs

2020-12-08 Thread Aaditya Lakshmanan
Hi Marc, Thank you for responding. I haven't tried using a standard 'Triangulation' object instead of its parallel counterpart. I will try another implementation with the appropriate substitutions. Based on some trials I ran since I posted the question, my code runs without issues (even on

[deal.II] Re: Periodic boundary conditions : Error using GridTools::collect_periodic_facepairs

2020-12-08 Thread Marc Fehling
Hi Aaditya, on first look, your implementation looks good to me. Does the same error occur when you are using a standard `Triangulation` object instead of a `parallel::distributed::Triangulation`? As far as I know, the direction parameter does not matter for scalar fields (see also step-45). W

Re: [deal.II] dealii installation Mac OS 10.15.5 (Catalina)

2020-12-08 Thread Deepak Kukrety
Dear Praveen & David, Thanks, but I may need a little more direction in trying to make this run. I deeply appreciate your help. Regards, Deepak

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-08 Thread Marc Fehling
Hi Kaushik, the `p::d::SolutionTransfer` class should be able to deal with `FE_Nothing` elements in your example. The tricky cases are when you're coarsening an `FE_Nothing` element together with others, as Bruno already pointed out (h-coarsening), or if you change an `FE_Nothing` element to a different elem

Re: [deal.II] Calculating local PETSc matrix

2020-12-08 Thread Wolfgang Bangerth
On 12/8/20 2:08 PM, Zachary 42! wrote: I want to create a PETSc matrix across multiple nodes using MPI. This should simply be a matter of finding the appropriate local index within my loops and building up the matrix. What are some of the functionalities I should look into for this? There are multiple “p
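
A minimal sketch of the usual deal.II route to a distributed PETSc matrix (assuming deal.II was configured with PETSc and p4est; the element, the mesh, and the trivial cell matrix are placeholders): the DoFHandler's locally owned IndexSet determines which rows this rank stores, and AffineConstraints::distribute_local_to_global maps each cell contribution to global indices, so no manual bookkeeping of local indices is needed.

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/lac/vector_operation.h>

#include <vector>

int main(int argc, char **argv)
{
  using namespace dealii;

  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  constexpr int dim = 2;

  parallel::distributed::Triangulation<dim> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(4);

  const FE_Q<dim> fe(1);
  DoFHandler<dim> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
  IndexSet       locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

  AffineConstraints<double> constraints;
  constraints.close();

  // Sparsity pattern for the locally owned rows of the distributed matrix.
  DynamicSparsityPattern dsp(locally_relevant_dofs);
  DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints, false);
  SparsityTools::distribute_sparsity_pattern(dsp,
                                             locally_owned_dofs,
                                             MPI_COMM_WORLD,
                                             locally_relevant_dofs);

  PETScWrappers::MPI::SparseMatrix system_matrix;
  system_matrix.reinit(locally_owned_dofs,
                       locally_owned_dofs,
                       dsp,
                       MPI_COMM_WORLD);

  // Each rank assembles only its own cells; the cell's global DoF indices
  // provide the local-to-global mapping.
  FullMatrix<double> cell_matrix(fe.dofs_per_cell, fe.dofs_per_cell);
  std::vector<types::global_dof_index> dof_indices(fe.dofs_per_cell);

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        cell_matrix = 0.;
        for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
          cell_matrix(i, i) = 1.; // placeholder instead of a real bilinear form
        cell->get_dof_indices(dof_indices);
        constraints.distribute_local_to_global(cell_matrix,
                                               dof_indices,
                                               system_matrix);
      }

  system_matrix.compress(VectorOperation::add);
}

step-40 walks through this same pattern in full, so it is probably the best tutorial to look at for the details.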

[deal.II] Calculating local PETSc matrix

2020-12-08 Thread Zachary 42!
Hi everyone, I want to create a PETSc matrix across multiple nodes using MPI. This should simply be a matter of finding the appropriate local index within my loops and building up the matrix. What are some of the functionalities I should look into for this? There are multiple “partition” so I thought it wo

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-08 Thread Kaushik Das
Hi Bruno: Thanks for pointing that out. I tried not to refine FE_Nothing cells by modifying the refine loop (the modified test is attached): for (auto &cell : dgq_dof_handler.active_cell_iterators()) if (cell->is_locally_owned() && cell->active_fe_index() != 0) { if (coun
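
A minimal sketch of that kind of refine loop (assuming a recent deal.II whose DoFHandler handles hp directly and a build with p4est; the FE assignment and the refinement criterion are placeholders for what the attached test actually does), in which only cells that do not carry FE_Nothing are ever flagged:

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/hp/fe_collection.h>

int main(int argc, char **argv)
{
  using namespace dealii;

  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  constexpr int dim = 2;

  parallel::distributed::Triangulation<dim> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(2);

  // Index 0 of the collection is FE_Nothing, index 1 a real element.
  hp::FECollection<dim> fe_collection;
  fe_collection.push_back(FE_Nothing<dim>());
  fe_collection.push_back(FE_Q<dim>(1));

  DoFHandler<dim> dof_handler(triangulation);

  // Placeholder assignment: FE_Nothing on the left half, FE_Q on the right.
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      cell->set_active_fe_index(cell->center()[0] < 0.5 ? 0 : 1);

  dof_handler.distribute_dofs(fe_collection);

  // Flag for refinement only those cells that do not carry FE_Nothing.
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned() && cell->active_fe_index() != 0)
      cell->set_refine_flag();

  triangulation.execute_coarsening_and_refinement();
}

Whether the subsequent p::d::SolutionTransfer then behaves as expected is the part Marc and Bruno discuss in this thread.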

Re: [deal.II] dealii installation Mac OS 10.15.5 (Catalina)

2020-12-08 Thread Praveen C
I think this should work: arch -x86_64 ./step-1 > On 08-Dec-2020, at 7:33 PM, Wells, David wrote: > > Hi Deepak, > > You may be the first person to ever run deal.II on the new M1 Mac, > congratulations :) > > I don't have an M1 machine so I will have to make some educated guesses on > how to

[deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-08 Thread Bruno Turcksin
Hi, Are you sure that your test makes sense? You randomly assign FE indices to cells, then you refine and coarsen cells. But what does it mean to coarsen 4 cells together when one of them is FE_Nothing? What would you expect to happen? Best, Bruno On Monday, December 7, 2020 at 5:54:10 PM UTC

[deal.II] DG Euler equations (step-33 and step-67)

2020-12-08 Thread giuseppe orlando
Dear deal.II community, I'm currently working on a DG formulation for the Euler equations with implicit time-stepping, trying to combine in some sense the two approaches of step-33 and step-67. I'm wondering if there is a way to employ automatic differentiation in the matrix-free approach; I think tha
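
Not an answer to the matrix-free part, but a minimal sketch of the kind of automatic differentiation step-33 relies on (assuming deal.II/Trilinos with Sacado available; the 1d flux and the state values are placeholders): write the flux against a generic number type, evaluate it with Sacado forward-mode numbers, and read the flux Jacobian off the derivative components.

#include <Sacado.hpp>

#include <array>
#include <iostream>

// Placeholder 1d Euler flux, written for a generic number type so it can be
// evaluated with plain doubles or with Sacado AD numbers.
template <typename Number>
std::array<Number, 3> euler_flux_1d(const std::array<Number, 3> &U,
                                    const double gamma = 1.4)
{
  const Number rho = U[0];
  const Number u   = U[1] / rho;
  const Number E   = U[2];
  const Number p   = (gamma - 1.0) * (E - 0.5 * rho * u * u);
  return {{rho * u, rho * u * u + p, u * (E + p)}};
}

int main()
{
  using ADNumber = Sacado::Fad::DFad<double>;

  // Mark the three conserved variables as independent variables.
  const double            U_values[3] = {1.0, 0.5, 2.5};
  std::array<ADNumber, 3> U;
  for (unsigned int i = 0; i < 3; ++i)
    U[i] = ADNumber(3, i, U_values[i]);

  const std::array<ADNumber, 3> F = euler_flux_1d(U);

  // F[i].dx(j) is dF_i/dU_j, i.e. one entry of the flux Jacobian.
  for (unsigned int i = 0; i < 3; ++i)
    for (unsigned int j = 0; j < 3; ++j)
      std::cout << "dF" << i << "/dU" << j << " = " << F[i].dx(j) << '\n';
}

step-71 and step-72 document deal.II's higher-level Differentiation::AD wrappers around the same idea.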