[deal.II] Coarse Cell IDs Distributed Triangulation

2021-11-11 Thread Jonathan Russ
Hello - I really would like to have a unique ID for each coarse cell regardless of the number of MPI processes. I have a parallel::distributed (not parallel::fullydistributed) triangulation. Testing the coarse_cell->index

[deal.II] Re: Coarse Cell IDs Distributed Triangulation

2021-11-11 Thread Marc Fehling
Hi Jonathan, there is a unique way to identify cells, even in parallel::distributed::Triangulation objects! For this, have a look at the CellId class. You can create an object of this class from a cell accessor using this

[deal.II] Re: Coarse Cell IDs Distributed Triangulation

2021-11-11 Thread Jonathan Russ
Hi Marc - Thank you for your reply. I saw that before, and I see why it is useful during a single analysis with a fixed number of MPI ranks. It seems useful for communicating "ghost cell" data between MPI ranks. However, it doesn't seem to guarantee the following. For example, say you have a

Re: [deal.II] Re: Coarse Cell IDs Distributed Triangulation

2021-11-11 Thread Wolfgang Bangerth
On 11/11/21 1:08 PM, Jonathan Russ wrote:
> Does the same cell in the bottom left corner of the domain have exactly
> the same CellId as the CellId it had when 2 processors were used?

Yes :-)
-- Wolfgang Bangerth

Re: [deal.II] Re: Coarse Cell IDs Distributed Triangulation

2021-11-11 Thread Jonathan Russ
Amazing. Thank you very much!
Jonathan

On Thursday, November 11, 2021 at 4:33:47 PM UTC-5 Wolfgang Bangerth wrote:
> On 11/11/21 1:08 PM, Jonathan Russ wrote:
> > Does the same cell in the bottom left corner of the domain have exactly
> > the same CellId as the CellId it had when 2 processors

[deal.II] Problem about cell iterator

2021-11-11 Thread Toddy Liu
Dear deal.II community, I'm modifying step-35 to use parallel computing with multiple processors and distributed memory. In step-35, the tutorial program uses a "synchronous" iterator that consists of two iterators, one for velocity and the other for pressure. Now I'm struggling