Re: [deal.II] extrude triangulation with n_slices = 1

2023-04-06 Thread Greg Wang
Hi Wolfgang, Thanks a lot for clarifying! I decided to modify the code to adopt a p::shared::T model with METIS and realized that anisotropic refinement doesn't seem to work for this case either. The error comes from AffineConstraints, which does not accept refinement cases that are not isotropic.
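A minimal sketch of the p::shared::T setup being discussed, assuming a 2d problem and a deal.II build with METIS support; the hyper_cube mesh is only a placeholder:

    #include <deal.II/base/mpi.h>
    #include <deal.II/distributed/shared_tria.h>
    #include <deal.II/grid/grid_generator.h>

    using namespace dealii;

    int main(int argc, char *argv[])
    {
      Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

      // Every process stores the whole mesh; cell ownership is
      // assigned by the METIS partitioner.
      parallel::shared::Triangulation<2> triangulation(
        MPI_COMM_WORLD,
        Triangulation<2>::none,
        /*allow_artificial_cells=*/false,
        parallel::shared::Triangulation<2>::partition_metis);

      GridGenerator::hyper_cube(triangulation);
      triangulation.refine_global(3);
    }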

Re: [deal.II] Extracting element solution in step-40

2023-04-06 Thread Wolfgang Bangerth
On 4/6/23 10:18, Wasim Niyaz Munshi ce21d400 wrote: How do I get the no. of cells owned by the processor? Triangulation::n_locally_owned_active_cells(). Best W. -- Wolfgang Bangerth email:
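In code (a fragment; triangulation is assumed to be the parallel triangulation member of the step-40 program):

    // Number of active cells owned by this MPI process:
    const unsigned int n_owned_cells =
      triangulation.n_locally_owned_active_cells();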

Re: [deal.II] Extracting element solution in step-40

2023-04-06 Thread Wasim Niyaz Munshi ce21d400
I appreciate the clarification. I thought that global indexing was no longer present as the solution vector is distributed. I have one more question. I want to create a vector (H_vector) that stores some value for each Gauss point in the domain. For a serial problem, I was doing something like
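One possible container for such per-Gauss-point values on a parallel mesh is deal.II's CellDataStorage; a sketch, where HistoryData and n_q_points are assumptions standing in for the poster's H_vector:

    #include <deal.II/base/quadrature_point_data.h>

    using namespace dealii;

    struct HistoryData
    {
      double H = 0.;
    };

    CellDataStorage<typename Triangulation<2>::cell_iterator, HistoryData>
      history_storage;

    // Allocate and fill only on cells this process owns:
    for (const auto &cell : triangulation.active_cell_iterators())
      if (cell->is_locally_owned())
        {
          history_storage.initialize(cell, n_q_points);
          const auto data = history_storage.get_data(cell);
          for (unsigned int q = 0; q < n_q_points; ++q)
            data[q]->H = 0.; // whatever value belongs to this Gauss point
        }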

Re: [deal.II] Unable to match the performance in step-40

2023-04-06 Thread Wolfgang Bangerth
On 4/6/23 10:06, Wasim Niyaz Munshi ce21d400 wrote: Yes, I also had the same feeling. But, when I look at the plot in the tutorial of step-40 for 52M DoFs, I see that they have solved the problem using just 32 processors also. Can you kindly let me know how much memory is available when

Re: [deal.II] Unable to match the performance in step-40

2023-04-06 Thread Wasim Niyaz Munshi ce21d400
Yes, I also had the same feeling. But, when I look at the plot in the tutorial of step-40 for 52M DoFs, I see that they have solved the problem using just 32 processors also. Can you kindly let me know how much memory is available when you run the problem on 32 processors? I get the memory
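One way to answer the memory question empirically, per process (a sketch; on Linux the numbers come from /proc/self/status and are in kB):

    #include <deal.II/base/utilities.h>
    #include <iostream>

    dealii::Utilities::System::MemoryStats stats;
    dealii::Utilities::System::get_memory_stats(stats);
    std::cout << "VmPeak: " << stats.VmPeak << " kB, VmRSS: "
              << stats.VmRSS << " kB" << std::endl;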

Re: [deal.II] Extracting element solution in step-40

2023-04-06 Thread Wolfgang Bangerth
On 4/6/23 06:02, Wasim Niyaz Munshi ce21d400 wrote: I don't have a solution_vector for a parallel code, but a locally_relevant_solution. I want to know, given this locally_relevant_solution and the cell, how do I get the element_sol? The global_dof will not be helpful here, as the
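One way to get the element-local values without touching global indices by hand, assuming cell is a locally owned cell of the DoFHandler and locally_relevant_solution is a ghosted vector:

    #include <deal.II/lac/vector.h>

    Vector<double> element_sol(fe.n_dofs_per_cell());
    cell->get_dof_values(locally_relevant_solution, element_sol);

get_dof_values() resolves the cell's global DoF indices against the ghosted vector, which is exactly why that vector needs to contain the locally relevant (not just locally owned) entries.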

Re: [deal.II] Unable to match the performance in step-40

2023-04-06 Thread Wolfgang Bangerth
On 4/6/23 01:31, Wasim Niyaz Munshi ce21d400 wrote: I tried to run step-40 with 52M DoFs on 32 processors. I am using GridGenerator::subdivided_hyper_rectangle to create a mesh with 5000*5000 elements. I have a single cycle in my simulation. However, I am running into some memory

Re: [deal.II] Extracting element solution in step-40

2023-04-06 Thread Daniel Arndt
Wasim, The answer depends very much on what you actually want to do with that solution vector. Do you want a representation of the solution (assuming you are using Q1 nodal elements?) on a single process/all processes, or are you just interested in the partial solution on every process separately?

Re: [deal.II] Understanding MeshWorker::mesh_loop order with adaptive refinement

2023-04-06 Thread Corbin Foucart
I also think there may be a small typo in the documentation; "If the flag AssembleFlags::assemble_own_cells is passed, then the default behavior is to first loop over faces and do the work
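For context, a minimal mesh_loop call with the flag in question; dof_handler, fe, and quadrature are assumed to exist already, and the workers are stubs:

    #include <deal.II/meshworker/mesh_loop.h>
    #include <deal.II/meshworker/scratch_data.h>

    using namespace dealii;

    struct CopyData // placeholder for whatever the copier needs
    {};

    using Iterator = typename DoFHandler<2>::active_cell_iterator;

    MeshWorker::ScratchData<2> scratch(fe, quadrature,
                                       update_values | update_JxW_values);
    CopyData copy;

    const auto cell_worker = [](const Iterator &cell,
                                MeshWorker::ScratchData<2> &scratch_data,
                                CopyData &) {
      scratch_data.reinit(cell);
      // ... per-cell work, in whatever order mesh_loop visits cells ...
    };

    const auto copier = [](const CopyData &) {
      // ... copy local contributions into global objects ...
    };

    MeshWorker::mesh_loop(dof_handler.begin_active(), dof_handler.end(),
                          cell_worker, copier, scratch, copy,
                          MeshWorker::assemble_own_cells);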

[deal.II] Extracting element solution in step-40

2023-04-06 Thread Wasim Niyaz Munshi ce21d400
Hello everyone. I want to extract the element solution vector from the global solution once the problem is solved in step-40. For a serial code, I would do something like this: int i=0; for (const auto vertex : cell->vertex_indices()) { int a = (cell->vertex_dof_index(vertex, 0));
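A hedged completion of that truncated serial snippet, assuming a scalar Q1 element so each vertex carries exactly one DoF:

    Vector<double> element_sol(cell->get_fe().n_dofs_per_cell());
    unsigned int i = 0;
    for (const auto vertex : cell->vertex_indices())
      {
        // Global DoF index of (scalar) component 0 at this vertex:
        const types::global_dof_index a = cell->vertex_dof_index(vertex, 0);
        element_sol(i) = solution(a);
        ++i;
      }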

[deal.II] Trouble installing p4est from candi M1 mac

2023-04-06 Thread Matteo Malvestiti
Good afternoon. I'm truly sorry to bother you, but I've spent a lot of time trying to fix this problem, without any success. I'm trying to install deal.II on my M1 MacBook Air with macOS Ventura. I've been following the guide at https://github.com/dealii/dealii/wiki/Apple-ARM-M1-OSX I installed

Re: [deal.II] Unable to match the performance in step-40

2023-04-06 Thread Wasim Niyaz Munshi ce21d400
I tried to run step-40 with 52M DoFs on 32 processors. I am using GridGenerator::subdivided_hyper_rectangle to create a mesh with 5000*5000 elements. I have a single cycle in my simulation. However, I am running into some memory issues. I am getting the following error: Running with PETSc
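The mesh call being described, as a sketch (the corner points are assumptions):

    #include <deal.II/grid/grid_generator.h>

    const std::vector<unsigned int> repetitions = {5000, 5000};
    GridGenerator::subdivided_hyper_rectangle(triangulation,
                                              repetitions,
                                              Point<2>(0., 0.),
                                              Point<2>(1., 1.));

One plausible contributor to the memory trouble: with parallel::distributed::Triangulation the coarse mesh is replicated on every process, so a 5000*5000-cell coarse grid is stored 32 times over rather than partitioned.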