[deal.II] Re: Reading in higher order meshes from GMSH

2020-09-15 Thread 'peterrum' via deal.II User Group
Dear Sepehr, If I understand you correctly, you would like to have support for quad9 and hex27. We have an open pull request for this issue (see: https://github.com/dealii/dealii/pull/10163). Hopefully we get that PR (or some version of it) merged soon. Regards, Peter On Tuesday, 15

[deal.II] Re: Error while installing

2020-07-01 Thread 'peterrum' via deal.II User Group
The error message already tells you what to do. You need to run the command with admin privileges, i.e., `sudo make install`. Hope that helps, Peter On Wednesday, 1 July 2020 13:21:12 UTC+2, ME20S001 Bardawal Prakash wrote: > > Hello, > Someone please help me solve this issue, here I'm attaching

[deal.II] Re: Is it possible to copy_triangulation for fullydistributed with periodic face?

2020-06-11 Thread 'peterrum' via deal.II User Group
Dear Heena, may I ask you to be more specific regarding the parallel::fullydistributed::Triangulation (p:f:t) error? In the case of p:f:t you can indeed copy refined meshes; however, users need to deal with periodicity on their own by applying the periodicity once again. See the following test:
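
The usual deal.II idiom for "applying the periodicity once again" is GridTools::collect_periodic_faces() followed by Triangulation::add_periodicity(). Below is a minimal sketch only; the boundary ids (0/1) and the direction are assumptions, and whether this is done on the base triangulation before building the p:f:t or on the copy itself follows the referenced test.

```cpp
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>

#include <vector>

// Re-register the periodic face pairs by hand, since copying a
// triangulation does not carry the periodicity information over.
template <int dim>
void
reapply_periodicity(dealii::Triangulation<dim> &tria)
{
  std::vector<dealii::GridTools::PeriodicFacePair<
    typename dealii::Triangulation<dim>::cell_iterator>>
    matched_pairs;

  // Match the faces on (assumed) boundary ids 0 and 1 along direction 0 ...
  dealii::GridTools::collect_periodic_faces(
    tria, /*b_id1=*/0, /*b_id2=*/1, /*direction=*/0, matched_pairs);

  // ... and register the pairs again.
  tria.add_periodicity(matched_pairs);
}
```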

[deal.II] Re: Broadcasting packed objects

2020-06-08 Thread 'peterrum' via deal.II User Group
What you could also do is to turn compression off. Peter On Monday, 8 June 2020 14:19:25 UTC+2, peterrum wrote: > > Dear Maurice, > > The problem is that the size of `auto buffer = > dealii::Utilities::pack(r1);` is not the same on all processes, which is a > requirement if you use
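
The compression switch is the `allow_compression` argument of Utilities::pack()/unpack(). A minimal sketch with a placeholder payload:

```cpp
#include <deal.II/base/utilities.h>

#include <vector>

// Pack/unpack without compression, so the buffer size is deterministic
// (assuming the object serializes to the same number of bytes on every rank).
void
pack_without_compression()
{
  const std::vector<double> r1 = {1., 2., 3.}; // placeholder payload

  const std::vector<char> buffer =
    dealii::Utilities::pack(r1, /*allow_compression=*/false);

  // The same flag has to be used again when unpacking.
  const auto r1_copy =
    dealii::Utilities::unpack<std::vector<double>>(buffer,
                                                   /*allow_compression=*/false);
  (void)r1_copy;
}
```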

Re: [deal.II] Step-4 1D problem

2020-06-08 Thread 'peterrum' via deal.II User Group
Indeed! Christoph, you seem to be right! Feel free to create a pull request on GitHub for this inconsistency! We will help you if you need some assistance! Amazing that there are still errors in the first tutorials although - probably - all deal.II users have had a look at these... Thanks,

[deal.II] Re: Broadcasting packed objects

2020-06-08 Thread 'peterrum' via deal.II User Group
Dear Maurice, The problem is that the size of `auto buffer = dealii::Utilities::pack(r1);` is not the same on all processes, which is a requirement if you use `MPI_Bcast`. My suggestion would be to split the procedure into two steps: 1) bcast the size on rank 1; 2) bcast the actual data. Peter
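
A sketch of that two-step procedure; the packed object, root rank, and communicator are placeholders:

```cpp
#include <deal.II/base/utilities.h>

#include <mpi.h>

#include <vector>

// Step 1: broadcast the buffer size; step 2: broadcast the packed data.
template <typename T>
T
broadcast_packed(const T &object, const int root, const MPI_Comm comm)
{
  int my_rank = 0;
  MPI_Comm_rank(comm, &my_rank);

  std::vector<char> buffer;
  if (my_rank == root)
    buffer = dealii::Utilities::pack(object);

  // Step 1: every rank learns the size and allocates a matching buffer.
  unsigned long size = buffer.size();
  MPI_Bcast(&size, 1, MPI_UNSIGNED_LONG, root, comm);
  buffer.resize(size);

  // Step 2: broadcast the actual packed data and unpack it everywhere.
  MPI_Bcast(buffer.data(), static_cast<int>(size), MPI_CHAR, root, comm);

  return dealii::Utilities::unpack<T>(buffer);
}
```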

[deal.II] Re: hp fem error assigning Fourier

2020-06-08 Thread 'peterrum' via deal.II User Group
Dear Ihsan, is the issue solved now? I have compiled your code with the current version of deal.II and it works. Peter On Monday, 8 June 2020 09:56:21 UTC+2, A.Z Ihsan wrote: > > Oops, I was wrong. I followed the deal.II 9.2.0 tutorial while my > local deal.II version is 9.1. > There

[deal.II] Re: triangulation save not working for 1D domain

2020-06-06 Thread 'peterrum' via deal.II User Group
What type of triangulation are you using? Peter On Saturday, 6 June 2020 17:52:53 UTC+2, Amaresh B wrote: > > Dear all, > > I am trying to save my triangulation and using the lines below. The > 'triangulation.save' command works and saves the mesh if my domain is > either 2D or 3D (i.e. dim=2

[deal.II] Re: hp fem error assigning Fourier

2020-06-05 Thread 'peterrum' via deal.II User Group
Dear Ihsan, I have no problem compiling the following code (your code with minor adjustments): #include #include #include #include using namespace dealii; template class HPSolver { public: HPSolver( const unsigned int max_fe_degree); //virtual ~HPSolver(); const

[deal.II] Re: LinearOperator MPI Parallel Vector

2020-04-23 Thread 'peterrum' via deal.II User Group
Dear Doug, Could you post a short code example showing how you want to use the LinearOperator, so that I know what is actually not working? Regarding Trilinos + LA::dist::Vector: there is an open PR ( https://github.com/dealii/dealii/pull/9925) which adds the instantiations (hope I did not miss any).
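
For context, the kind of usage in question looks roughly like the sketch below (a Trilinos matrix wrapped in a LinearOperator acting on LinearAlgebra::distributed::Vector). Whether it compiles depends on the deal.II configuration (Trilinos enabled) and on the instantiations added in the PR above; the function and variable names are placeholders.

```cpp
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/linear_operator.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>

using VectorType = dealii::LinearAlgebra::distributed::Vector<double>;

// Wrap a Trilinos matrix into a LinearOperator on distributed vectors
// and apply it once.
void
apply_operator(const dealii::TrilinosWrappers::SparseMatrix &A,
               const VectorType &                            src,
               VectorType &                                  dst)
{
  const auto op_A = dealii::linear_operator<VectorType>(A);
  dst             = op_A * src;
}
```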

[deal.II] Re: Assembling sparse matrix from Matrix-free vmult with constrains

2020-03-10 Thread 'peterrum' via deal.II User Group
Hi Michal, any chance that you could post or send me a small runnable test program? By the way, there is an open PR (https://github.com/dealii/dealii/pull/9343) for the computation of the diagonal in a matrix-free manner. Once this is merged, I will work on the matrix-free assembly of sparse matrices.

[deal.II] Re: Application of inverse mass matrix in matrix-free context using cell_loop instead of scale()

2020-01-18 Thread 'peterrum' via deal.II User Group
Yes, like here https://github.com/dealii/dealii/blob/b84270a1d4099292be5b3d43c2ea65f3ee005919/tests/matrix_free/pre_and_post_loops_01.cc#L100-L121 On Saturday, 18 January 2020 12:57:24 UTC+1, Maxi Miller wrote: > > In step-48 the inverse mass matrix is applied by moving the inverse data > into
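
The pattern in the linked test, roughly: cell_loop() with operation_before_loop/operation_after_loop lambdas that act only on the given range of locally owned vector entries. A sketch under the assumption of a sufficiently recent deal.II with this overload; `local_apply` and `inv_mass_diagonal` are placeholders.

```cpp
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/matrix_free/matrix_free.h>

#include <functional>
#include <utility>

using Number     = double;
using VectorType = dealii::LinearAlgebra::distributed::Vector<Number>;

// Apply an operator and a precomputed diagonal inverse mass matrix in one
// sweep through memory, using the pre-/post-loop hooks of cell_loop().
template <int dim>
void
apply_with_inverse_mass(
  const dealii::MatrixFree<dim, Number> &matrix_free,
  const std::function<void(const dealii::MatrixFree<dim, Number> &,
                           VectorType &,
                           const VectorType &,
                           const std::pair<unsigned int, unsigned int> &)>
    &               local_apply,       // the usual cell operation
  const VectorType &inv_mass_diagonal, // hypothetical precomputed diagonal
  const VectorType &src,
  VectorType &      dst)
{
  matrix_free.cell_loop(
    local_apply,
    dst,
    src,
    // before a chunk of locally owned dst entries is touched by any cell:
    [&](const unsigned int range_begin, const unsigned int range_end) {
      for (unsigned int i = range_begin; i < range_end; ++i)
        dst.local_element(i) = 0.;
    },
    // right after the cell operation has written into that chunk of dst:
    [&](const unsigned int range_begin, const unsigned int range_end) {
      for (unsigned int i = range_begin; i < range_end; ++i)
        dst.local_element(i) *= inv_mass_diagonal.local_element(i);
    });
}
```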

[deal.II] Re: Application of inverse mass matrix in matrix-free context using cell_loop instead of scale()

2020-01-18 Thread 'peterrum' via deal.II User Group
Hi Maxi, I guess I am not the correct person to explain to you the reason for that assert. But by calling scale() you are messing with the ghost values (which prevents the compress step). You should do it only locally. What you might want to check out are the new
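
"Doing it only locally" means touching only the locally owned entries, e.g. via local_element(), instead of calling scale() on a vector whose ghost values are currently filled. A sketch with placeholder vector names:

```cpp
#include <deal.II/lac/la_parallel_vector.h>

using VectorType = dealii::LinearAlgebra::distributed::Vector<double>;

// Scale only the locally owned entries, leaving ghost entries untouched.
void
scale_locally_owned(VectorType &dst, const VectorType &inv_mass_diagonal)
{
  // local_size() is called locally_owned_size() in newer deal.II releases.
  for (unsigned int i = 0; i < dst.local_size(); ++i)
    dst.local_element(i) *= inv_mass_diagonal.local_element(i);
}
```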