[deal.II] Re: Working with fully distributed triangulation: example dealii-pft/step-5

2021-10-27 Thread 'peterrum' via deal.II User Group
This is needed to guarantee that the levels of neighboring cells differ by at most 1 also at the periodic boundaries. This is guaranteed by default within the domain but not at the boundaries (only if pbcs have been applied). PM On Wednesday, 27 October 2021 at 11:23:15 UTC+2 aditya@gmail.com

[deal.II] Re: Working with fully distributed triangulation: example dealii-pft/step-5

2021-10-20 Thread 'peterrum' via deal.II User Group
The reason why you need to add periodicity twice (independent of the type of the base tria) is that during copying the mesh the periodicity is not applied. When I implemented p:f:T I did not find an easy way to do it. Hope this helps, PM On Wednesday, 20 October 2021 at 16:19:28 UTC+2 aditya..

[deal.II] Re: Working with fully distributed triangulation: example dealii-pft/step-5

2021-10-19 Thread 'peterrum' via deal.II User Group
Hi Aditya, what you found is the old development (and proof of concept) code. Since that time, we have integrated p:f:T into deal.II. All the tests there have been converted to proper deal.II tests in the folder https://github.com/dealii/dealii/tree/master/tests/fullydistributed_grids. I sugg

[deal.II] Re: MatrixFree & complex-valued Helmholtz equation

2021-10-17 Thread 'peterrum' via deal.II User Group
Hi Hermes, I would suggest that you first take a look at step-67 (https://www.dealii.org/developer/doxygen/deal.II/step_67.html): it deals with systems with multiple components. Although I am not particularly familiar with step-29, I guess you can express your complex system as a two-component sys
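The two-component idea can be written out explicitly. As a sketch, assuming a Helmholtz-type operator with a complex coefficient $c = a + ib$ (an illustrative form, not necessarily step-29's exact equation), splitting $u = v + iw$ and $f = f_r + if_i$ turns $-\Delta u - c\,u = f$ into a coupled real system:

```latex
% real and imaginary parts of -\Delta(v + iw) - (a + ib)(v + iw) = f_r + i f_i
\begin{aligned}
-\Delta v - a\,v + b\,w &= f_r,\\
-\Delta w - b\,v - a\,w &= f_i.
\end{aligned}
```

For real $c$ (i.e. $b = 0$) the two equations decouple, which is the simplest case to start from.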

[deal.II] Re: Issues related to usage of parallel::fullydistributed::triangulation

2021-09-30 Thread 'peterrum' via deal.II User Group
Just replace `dealii::parallel::fullydistributed::Triangulation` by `dealii::Triangulation` within `Check_meshPeriodicity()` . I think that should work. https://www.dealii.org/developer/doxygen/deal.II/namespaceTriangulationDescription_1_1Utilities.html#af575881c2cf233fe6f85d1a3a65a73f6 and h

[deal.II] Re: PETSc vs Trilinos for step41

2021-09-28 Thread 'peterrum' via deal.II User Group
Hi Reza, I don't think there is a particular reason. Trilinos was simply chosen. Making PETSc work like in step-40 should not be an issue. There are other tutorials that use PETSc and others that support both PETSc and Trilinos. PM On Wednesday, 29 September 2021 at 03:42:04 UTC+2 yagh...@umic

[deal.II] Re: Exception when using MGTransferMatrixFree::interpolate_to_mg with periodic BCs

2021-09-06 Thread 'peterrum' via deal.II User Group
> I tried the same merge of constraints as you but for some reason, this does not seem to be the right approach with local smoothing. It should somehow work. Let's see if we get some input from others! > For global coarsening things seem to be simpler because you assemble dof_handlers and const

[deal.II] Re: Exception when using MGTransferMatrixFree::interpolate_to_mg with periodic BCs

2021-09-06 Thread 'peterrum' via deal.II User Group
Hi Guilhem, > I just used your PR as a patch in my candi-deal-ii setup, and I am happy to report that indeed the bug was solved, thanks a lot for your reactivity! Thanks for the feedback! > But definitely, I am already extremely impressed with Matrix-Free performance: back in 2016 I wrote a co

[deal.II] Re: Exception when using MGTransferMatrixFree::interpolate_to_mg with periodic BCs

2021-09-03 Thread 'peterrum' via deal.II User Group
Hi Guilhem, Thanks for reporting the bug and providing a failing test. I have opened a PR (hopefully) fixing the issue (https://github.com/dealii/dealii/pull/12738) and simplified your code a bit (since the issue was not related to MatrixFree). Maybe you could verify that it indeed works for y

Re: [deal.II] point_value, Real&Imaginary parts, step-29

2021-08-27 Thread 'peterrum' via deal.II User Group
r: >> >> error: no matching function for call to 'point_values' >> >> VectorTools::point_values(mapping, dof_handler, >> locally_relevant_solution, Point<2>(0.2, 0.2), evaluation_cache); >> >> >> >> I checked the parameters

Re: [deal.II] point_value, Real&Imaginary parts, step-29

2021-08-24 Thread 'peterrum' via deal.II User Group
pping, dof_handler, > locally_relevant_solution, Point<2>(0.2, 0.2), evaluation_cache); > > > > I checked the parameters and they seem to be ok. What can be producing the > error? > > Thank you again > > H > > On Tue, 24 Aug 2021 at 14:59, 'peterrum

Re: [deal.II] point_value, Real&Imaginary parts, step-29

2021-08-24 Thread 'peterrum' via deal.II User Group
Hi Hermes, You don't need to do anything regarding the setup (it is done within the function). Just take a look at: https://github.com/dealii/dealii/blob/44b6aadb35aca2333e4bfb6c9ce29c613f4dc9e9/tests/remote_point_evaluation/vector_tools_evaluate_at_points_01.cc#L214-L216 I'll extend the do

Re: [deal.II] finite element with shape functions having delta_ij property at quadrature points

2021-08-16 Thread 'peterrum' via deal.II User Group
> I asked myself why the constructor of FE_Q wants a quadrature object whose first and last qp is at zero and one, respectively? Because it is a continuous element and dofs at 0 and 1 are assigned to shared entities (vertices, lines, quads). Peter On Monday, 16 August 2021 at 22:03:44 UTC+2 Si

[deal.II] Re: Grid deformation after load-balancing

2021-08-04 Thread 'peterrum' via deal.II User Group
I must admit I don't know how to transform manifolds, let alone if it is possible! Maybe someone else has an idea? Separate from this, could you try out what I have described and see if it resolves the problem. At least then we would have an idea what the cause is, which is at least a better stage; we ha

[deal.II] Re: Grid deformation after load-balancing

2021-08-04 Thread 'peterrum' via deal.II User Group
Hi Shahab, My best guess is that the attached manifold is not consistent with what you need after the grid transformations. You have to be aware that the attached manifold is used for generating new vertices (as is needed during AMR/load-balancing in regions that are newly owned by a rank). May

Re: [deal.II] Use a coarse grid solution as initial condition for a finer grid

2021-08-03 Thread 'peterrum' via deal.II User Group
Hi Vachan, I don't think you need to use RPE directly. My guess is that you can use VectorTools::point_values(), which internally uses RPE (see https://github.com/dealii/dealii/blob/e413cbf8ac77606cf4a04e1e2fa75232c08533b4/include/deal.II/numerics/vector_tools_evaluate.h#L230-L343). Here you coul

[deal.II] Re: Using periodic boundary conditions with parallell processing

2021-07-03 Thread 'peterrum' via deal.II User Group
Dear Raghunandan, I don't quite understand what you want to accomplish. You are setting PBC constraints between the left and right face and after that you apply DBC on the left face? Is that about right? My first guess would be that your constraints are somewhere not globally consistent. Maybe y

Re: [deal.II] Printing git version from my deal.ii-based code

2021-07-01 Thread 'peterrum' via deal.II User Group
me the deal.II git version. > > Is there a way to put the git version of my application code also ? > > I suppose this has to be done to cmake stage I suppose. > > Thanks > praveen > > > On 01-Jul-2021, at 2:52 PM, 'peterrum' via deal.II User Group < > d

[deal.II] Re: Printing git version from my deal.ii-based code

2021-07-01 Thread 'peterrum' via deal.II User Group
There are some macros: https://www.dealii.org/developer/doxygen/deal.II/revision_8h.html#a6a61bbd5c9074273a721450df150d9f5. You can use it like this: https://github.com/hyperdeal/hyperdeal/blob/60091feb59f98789bc082395150c7537f6b3dc07/examples/vlasov_poisson/vlasov_poisson.cc#L228-L231 Hope t

[deal.II] Re: Unique FE evaluation on arbitrary points

2021-05-13 Thread 'peterrum' via deal.II User Group
Hi David, > I want to evaluate each point out of the cloud only once in case of a distributed triangulation. a quick (and dirty) solution would be to perform the communication "non-uniquely" (i.e. evaluate multiple times) and pick out one specific value. If you use VectorTools::evaluate_at_poi

Re: [deal.II] Setting manifold for a truncated cone mesh

2021-04-07 Thread 'peterrum' via deal.II User Group
I am not sure. But it is worth a try! Peter On Wednesday, 7 April 2021 at 13:30:59 UTC+2 vachanpo...@gmail.com wrote: > Peter, > > Thanks for your response. I am using GridOut because I only require > printing the triangulation (like in group manifold >

Re: [deal.II] Setting manifold for a truncated cone mesh

2021-04-07 Thread 'peterrum' via deal.II User Group
My guess is that you did not specify the last argument in https://www.dealii.org/developer/doxygen/deal.II/classDataOut.html#a04d491be143f2672b076622e7abd32b8. The default is that only the boundary is plotted curved. Hope this helps! Peter On Wednesday, 7 April 2021 at 12:46:04 UTC+2 vachanp

[deal.II] Re: Saving particles for checkpointing

2020-12-09 Thread 'peterrum' via deal.II User Group
Do the following links help? - https://github.com/dealii/dealii/blob/master/tests/serialization/particle_handler_01.cc - https://github.com/dealii/dealii/blob/master/tests/serialization/particle_handler_02.cc Peter On Wednesday, 9 December 2020 at 23:15:09 UTC+1 blais...@gmail.com wrote: > Dea

[deal.II] Re: Creating a SOA with nice OOP like interface

2020-11-22 Thread 'peterrum' via deal.II User Group
Hi Zachary, Regarding your two TODOs. I am explaining how we do it in deal.II and then you can decide if this approach also fits your needs. *Index handling* The simple case is that the indices (level, index within the level) are passed to `TriaAccessorBase` in the constructor (like here https

[deal.II] Re: Creating a SOA with nice OOP like interface

2020-11-21 Thread 'peterrum' via deal.II User Group
Hi Zachary, I haven't watched the lecture, but I think I can, nevertheless, answer this question since I have been refactoring and generalizing the relevant data structures in the last months. The key aspect is that the data is separated from the class that gives coordinated access to the dat

[deal.II] Re: Reading in higher order meshes from GMSH

2020-09-15 Thread 'peterrum' via deal.II User Group
Dear Sepehr, If I understand you correctly you would like to have support for quad9 and hex27. We have an open pull request for this issue (see: https://github.com/dealii/dealii/pull/10163). Hopefully we get that PR (or some version of it) merged soon. Regards, Peter On Tuesday, 15 September

[deal.II] Re: Error while installing

2020-07-01 Thread 'peterrum' via deal.II User Group
The error message already tells you what to do. You need to run the command with admin privileges, i.e., `sudo make install`. Hope that helps, Peter On Wednesday, 1 July 2020 13:21:12 UTC+2, ME20S001 Bardawal Prakash wrote: > > Hello, > Someone help me solving this issue, here I'm attaching s

[deal.II] Re: Is it possible to copy_triangulation for fullydistributed with periodic face?

2020-06-11 Thread 'peterrum' via deal.II User Group
Dear Heena, may I ask you to be more specific regarding the parallel::fullydistributed::Triangulation (p:f:t) error. In the case of p:f:t you can indeed copy refined meshes; however, users need to deal with periodicity on their own by applying the periodicity once again. See the following test:

[deal.II] Re: Broadcasting packed objects

2020-06-08 Thread 'peterrum' via deal.II User Group
What you could also do is to turn compression off. Peter On Monday, 8 June 2020 14:19:25 UTC+2, peterrum wrote: > > Dear Maurice, > > The problem is that the size of `auto buffer = > dealii::Utilities::pack(r1);` is not the same on all processes, which is a > requirement if you use `MPI_Bcast`.

Re: [deal.II] Step-4 1D problem

2020-06-08 Thread 'peterrum' via deal.II User Group
Indeed! Christoph, you seem to be right! Feel free to create a pull request on GitHub for this inconsistency! We will help you if you need some assistance! Amazing that there are still errors in the first tutorials although - probably - all deal.II users have had a look at these... Thanks, Pete

[deal.II] Re: Broadcasting packed objects

2020-06-08 Thread 'peterrum' via deal.II User Group
Dear Maurice, The problem is that the size of `auto buffer = dealii::Utilities::pack(r1);` is not the same on all processes, which is a requirement if you use `MPI_Bcast`. My suggestion would be to split the procedure into two steps: 1) bcast the size on rank 1; 2) bcast the actual data. Peter

[deal.II] Re: hp fem error assigning Fourier

2020-06-08 Thread 'peterrum' via deal.II User Group
Dear Ihsan, is the issue solved now? I have compiled your code with the current version of deal.II and it works. Peter On Monday, 8 June 2020 09:56:21 UTC+2, A.Z Ihsan wrote: > > Oops, i was wrong. I followed the deal.ii 9.2.0 tutorial meanwhile in my > local deal.ii version is 9.1. > There i

[deal.II] Re: triangulation save not working for 1D domain

2020-06-06 Thread 'peterrum' via deal.II User Group
What type of triangulation are you using? Peter On Saturday, 6 June 2020 17:52:53 UTC+2, Amaresh B wrote: > > Dear all, > > I am trying to save my triangulation and using the lines below. > 'triangulation.save' commands works and saves the mesh if my domain is > either 2D or 3D (i.e. dim=2 or

[deal.II] Re: hp fem error assigning Fourier

2020-06-05 Thread 'peterrum' via deal.II User Group
Dear Ihsan, I have no problem to compile the following code (your code with minor adjustments): #include #include #include #include using namespace dealii; template class HPSolver { public: HPSolver( const unsigned int max_fe_degree); //virtual ~HPSolver(); const hp::FE

[deal.II] Re: LinearOperator MPI Parallel Vector

2020-04-23 Thread 'peterrum' via deal.II User Group
Dear Doug, Could you post a short code snippet showing how you want to use the LinearOperator, so that I know what actually is not working. Regarding Trilinos + LA::dist::Vector: there is an open PR ( https://github.com/dealii/dealii/pull/9925) which adds the instantiations (hope I did not miss any). Regardin

[deal.II] Re: Assembling sparse matrix from Matrix-free vmult with constrains

2020-03-09 Thread 'peterrum' via deal.II User Group
Hi Michal, any chance that you could post or send me a small runnable test program? By the way, there is an open PR (https://github.com/dealii/dealii/pull/9343) for the computation of the diagonal in a matrix-free manner. Once this is merged, I will work on the matrix-free assembly of sparse matrices.

[deal.II] Re: Application of inverse mass matrix in matrix-free context using cell_loop instead of scale()

2020-01-18 Thread 'peterrum' via deal.II User Group
Yes, like here https://github.com/dealii/dealii/blob/b84270a1d4099292be5b3d43c2ea65f3ee005919/tests/matrix_free/pre_and_post_loops_01.cc#L100-L121 On Saturday, 18 January 2020 12:57:24 UTC+1, Maxi Miller wrote: > > In step-48 the inverse mass matrix is applied by moving the inverse data > into

[deal.II] Re: Application of inverse mass matrix in matrix-free context using cell_loop instead of scale()

2020-01-18 Thread 'peterrum' via deal.II User Group
Hi Maxi, I guess I am not the correct person to explain to you the reason for that assert. But what you are doing while calling scale is messing with the ghost values (which prevents the compress step). You should do it only locally. What you might want to check out are the new `