Re: [deal.II] How to set material id with MPI

2019-09-03 Thread Richard Schussnig
Hi Pham!
From your description I do not really get why you are specifically doing 
this, so maybe consider the following:
I assume you are flagging cells' material ids on the locally owned part due 
to some custom condition (say some stress, or a function you cannot 
formulate in the global coordinate system), so that you cannot decide on 
the material id simply using cell->center() or similar strategies.

The workaround I came up with is as follows:
Every cell is flagged only by its owning processor, and the flag then needs 
to be communicated to the other parts of the p::d::Triangulation, where 
those cells appear as ghosts.
-> As a consequence, ghost cells initially have different material ids 
depending on the side from which they are viewed, 
which is what we want to get rid of.

So firstly, create the simplest possible discontinuous (!) function space, 
e.g. FE_DGQ with degree = 0 (in my case I already had such a space in use).

Then, you loop over the locally owned cells and simply "assemble" the 
material id, similar to assembling e.g. the right hand side. (Here the 
discontinuity of the function space comes into play, since you do not 
want to mix contributions from different elements. Alternatively, one could 
use only the internal nodes of a continuous space, which requires at least 
2nd-order elements.)
 
After that, you of course compress and convert to a ghosted vector, 
meaning that you now have access to the vector entries 
on the ghost cells as well, and these entries now contain your material ids.

So finally, you loop over locally owned AND ghost cells, and set the 
material id on both from the vector you obtained, 
which is now accessible from all the cells you need.
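
The steps above can be sketched roughly like this. This is an untested 
sketch against the deal.II API; the function name, the choice of 
LinearAlgebra::distributed::Vector, and the setup details are my own 
assumptions, not code from this thread:

```cpp
#include <deal.II/base/index_set.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/lac/la_parallel_vector.h>

using namespace dealii;

// Sketch (untested): transport material ids to ghost cells via a
// piecewise-constant DG space, one DoF per cell.
template <int dim>
void communicate_material_ids(
  parallel::distributed::Triangulation<dim> &tria)
{
  FE_DGQ<dim>     fe(0);             // degree 0: exactly one DoF per cell
  DoFHandler<dim> dof_handler(tria);
  dof_handler.distribute_dofs(fe);

  const IndexSet locally_owned = dof_handler.locally_owned_dofs();
  IndexSet       locally_relevant;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant);

  // Ghosted vector: owned entries are writable, ghost entries readable
  // after update_ghost_values().
  LinearAlgebra::distributed::Vector<double> id_vector(
    locally_owned, locally_relevant, tria.get_communicator());

  std::vector<types::global_dof_index> dof_index(1);

  // "Assemble" the material id on locally owned cells only.
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        cell->get_dof_indices(dof_index);
        id_vector(dof_index[0]) = cell->material_id();
      }
  id_vector.update_ghost_values();   // make ghost entries readable

  // Read the id back on owned AND ghost cells.
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned() || cell->is_ghost())
      {
        cell->get_dof_indices(dof_index);
        cell->set_material_id(
          static_cast<types::material_id>(id_vector(dof_index[0])));
      }
}
```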

The above approach might not be the fastest one, but if you can reuse an 
existing space it should not be too bad.
If someone reading this sees a flaw I am currently not aware of, please 
let me know! I am new to both C++ and deal.II ;)

Kind regards & good luck coding that up!
Richard


On Tuesday, September 3, 2019 at 04:49:03 UTC+2, Phạm Ngọc Kiên wrote:
>
> Dear Prof. Wolfgang Bangerth,
> As the file is too large, I send it again in a compressed file in the 
> attachment.
> I am sorry for my mistake.
> Best regards,
> Kien
>
> On Tue, Sep 3, 2019 at 10:50 Phạm Ngọc Kiên <
> ngockie...@gmail.com > wrote:
>
>> Dear Prof. Wolfgang Bangerth,
>> The attachment is my codes and the mesh for loading grid.
>> I think that when I run the code on a single computer, it might take a 
>> longer time than running it on a cluster.
>>
>> I would like to thank you very much for your great guidance.
>> Best regards,
>> Kien
>>
>> On Fri, Aug 30, 2019 at 12:35 Wolfgang Bangerth <
>> bang...@colostate.edu > wrote:
>>
>>> On 8/29/19 6:31 PM, Phạm Ngọc Kiên wrote:
>>> > 
>>> > When I run the codes in my computer, it takes  a lot of time for p4est 
>>> to load 
>>> > the grid.
>>> > The loading grid  step is more time consuming than solving the system 
>>> of 
>>> > equations with a mesh containing about 100,000 cells.
>>>
>>> It *shouldn't* take that long, but who knows what exactly is happening 
>>> with 
>>> such big meshes. Do you think you can create a small testcase that 
>>> demonstrates this? It should really only contain of the code to read the 
>>> mesh, 
>>> and the file with the mesh itself.
>>>
>>> Best
>>>   W.
>>>
>>> -- 
>>> 
>>> Wolfgang Bangerth  email: bang...@colostate.edu 
>>> 
>>> www: 
>>> http://www.math.colostate.edu/~bangerth/
>>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/9a839162-2c7f-4b32-befa-4a0f2a703a52%40googlegroups.com.


Re: [deal.II] How to set material id with MPI

2019-08-29 Thread Wolfgang Bangerth
On 8/29/19 6:31 PM, Phạm Ngọc Kiên wrote:
> 
> When I run the codes in my computer, it takes  a lot of time for p4est to 
> load 
> the grid.
> The loading grid  step is more time consuming than solving the system of 
> equations with a mesh containing about 100,000 cells.

It *shouldn't* take that long, but who knows what exactly is happening with 
such big meshes. Do you think you can create a small testcase that 
demonstrates this? It should really only consist of the code to read the mesh, 
and the file with the mesh itself.

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] How to set material id with MPI

2019-08-29 Thread Phạm Ngọc Kiên
Dear all,
I think we have two ways to do this.
The first one is the way Prof. Wolfgang Bangerth suggested.
The second one is to load the grid into a (serial) Triangulation on every
processor, set the material ids there, and then copy it into the
parallel::distributed::Triangulation.
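
The second way can be sketched like this (untested sketch; the file name
"mesh.msh" and the geometric criterion are made-up placeholders):

```cpp
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/tria.h>
#include <fstream>

using namespace dealii;

// Sketch (untested): set material ids on a serial Triangulation that
// every processor reads in full, then copy it into the distributed one.
template <int dim>
void make_distributed_grid(
  parallel::distributed::Triangulation<dim> &tria)
{
  Triangulation<dim> serial_tria;
  GridIn<dim>        grid_in;
  grid_in.attach_triangulation(serial_tria);
  std::ifstream input("mesh.msh");   // hypothetical mesh file
  grid_in.read_msh(input);

  // Every processor sees all cells here, so any criterion works;
  // this geometric one is just an example.
  for (const auto &cell : serial_tria.active_cell_iterators())
    cell->set_material_id(cell->center()[dim - 1] < 0. ? 1 : 0);

  // Material ids are carried over to the distributed triangulation.
  tria.copy_triangulation(serial_tria);
}
```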

When I run the code on my computer, it takes a lot of time for p4est to
load the grid.
With a mesh of about 100,000 cells, the grid-loading step is more
time-consuming than solving the system of equations.

I would like to thank you very much for your help.
Best,
Kien

On Sat, Aug 24, 2019 at 01:29 Wolfgang Bangerth <
bange...@colostate.edu> wrote:

> On 8/22/19 11:58 PM, Phạm Ngọc Kiên wrote:
> >
> > I have a question for parallel::distributed::Triangulation
> > When 2 cells share 1 edge, but they are living in 2 different MPI
> processes,
> > how can I choose only 1 cell containing the common edge from them.
>
> Is your goal to make sure that only one of the two processors does some
> work
> on these edges? If that's the case, then you need a "tie breaker" -- for
> example, if the subdomain id of a locally owned cell is lower than the
> subdomain of a neighboring ghost cell, then the current processor does the
> work. If the locally owned cell's subdomain id is larger, then the
> neighboring
> processor is in charge of the edge.
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] How to set material id with MPI

2019-08-23 Thread Wolfgang Bangerth
On 8/22/19 11:58 PM, Phạm Ngọc Kiên wrote:
> 
> I have a question for parallel::distributed::Triangulation
> When 2 cells share 1 edge, but they are living in 2 different MPI processes, 
> how can I choose only 1 cell containing the common edge from them.

Is your goal to make sure that only one of the two processors does some work 
on these edges? If that's the case, then you need a "tie breaker" -- for 
example, if the subdomain id of a locally owned cell is lower than the 
subdomain of a neighboring ghost cell, then the current processor does the 
work. If the locally owned cell's subdomain id is larger, then the neighboring 
processor is in charge of the edge.
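
The tie-breaking rule can be captured in a tiny helper. Here the subdomain
ids are plain integers; in deal.II they would come from cell->subdomain_id()
on the locally owned cell and on the neighboring ghost cell (the helper name
is my own, not deal.II API):

```cpp
#include <cassert>

// Returns true if the current processor (owning a cell with subdomain
// id `my_id`) is responsible for work on the edge shared with a ghost
// cell of subdomain id `neighbor_id`: the lower subdomain id wins.
bool responsible_for_shared_edge(unsigned int my_id,
                                 unsigned int neighbor_id)
{
  return my_id < neighbor_id;
}
```

Because the comparison is the same on both sides, exactly one of the two
processors claims any shared edge, with no communication needed.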

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] How to set material id with MPI

2019-08-23 Thread Daniel Arndt
Kien,

> Hi colleagues,
> I have a question for parallel::distributed::Triangulation
> When 2 cells share 1 edge, but they are living in 2 different MPI
> processes, how can I choose only 1 cell containing the common edge from
> them.
> I think I have to set material id for the cell in the first process, and
> then tell the other one do not set it.
> However, I don't know how to send the cell iterator in MPI communication.
>

Just set the material id on both processes. That is much simpler and likely
faster.
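
If the criterion can be evaluated on any cell (e.g. a purely geometric one),
that amounts to a loop like the following (untested sketch; my_condition is
a hypothetical predicate, not deal.II API):

```cpp
// Sketch (untested): every processor sets the id on its owned cells
// AND its ghost cells, so no communication is necessary.
for (const auto &cell : tria.active_cell_iterators())
  if (cell->is_locally_owned() || cell->is_ghost())
    if (my_condition(cell->center()))   // hypothetical predicate
      cell->set_material_id(1);
```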

Best,
Daniel



Re: [deal.II] How to set material id with MPI

2019-08-22 Thread Phạm Ngọc Kiên
Hi colleagues,
I have a question about parallel::distributed::Triangulation:
When 2 cells share 1 edge but live in 2 different MPI processes, how can I
choose only 1 of the 2 cells containing the common edge?
I think I have to set the material id for the cell in the first process, and
then tell the other one not to set it.
However, I don't know how to send the cell iterator in MPI communication.

Could you please help me to address this issue?
Thank you very much.

Best regards,
Kien



On Thu, Jul 18, 2019 at 17:47 Wolfgang Bangerth <
bange...@colostate.edu> wrote:

> On 7/17/19 7:46 PM, Phạm Ngọc Kiên wrote:
> > I am trying to write codes to find a subset of cells that I want to set
> their
> > material id.
> > The codes run well with 1 processor.
> > However, when testing with more than 1 processor, the codes did wrong
> things.
> > This is because each processor only owns a subset of cells with
> distributed
> > triangulation.
> > Do we have a way to address this issue in deal.II?
>
> In addition to Daniel's questions, take a look at the documentation of the
> parallel::distributed::Triangulation class documentation. It talks about
> similar issues with boundary ids. I would imagine that setting material
> ids
> poses similar challenges, and has similar solutions.
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] How to set material id with MPI

2019-07-18 Thread Wolfgang Bangerth
On 7/17/19 7:46 PM, Phạm Ngọc Kiên wrote:
> I am trying to write codes to find a subset of cells that I want to set their 
> material id.
> The codes run well with 1 processor.
> However, when testing with more than 1 processor, the codes did wrong things.
> This is because each processor only owns a subset of cells with distributed 
> triangulation.
> Do we have a way to address this issue in deal.II?

In addition to Daniel's questions, take a look at the documentation of the 
parallel::distributed::Triangulation class. It talks about similar issues 
with boundary ids. I would imagine that setting material ids poses similar 
challenges, and has similar solutions.

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] How to set material id with MPI

2019-07-17 Thread Daniel Arndt
Kien,

It is impossible for us to tell what is going wrong with this little
information. Please provide us with some more details.
What does the part of the code that is responsible for setting the material
id look like?
Are you trying to set the material id on all cells or only on the locally
owned (or locally relevant) ones?
How do you notice that the code produces wrong results?

Best,
Daniel



[deal.II] How to set material id with MPI

2019-07-17 Thread Phạm Ngọc Kiên
Hi colleagues,
I am trying to write code to find a subset of cells whose material id I
want to set.
The code runs well with 1 processor.
However, when testing with more than 1 processor, the code does the wrong
thing.
This is because with a distributed triangulation each processor only owns a
subset of the cells.
Do we have a way to address this issue in deal.II?
Thank you very much.

Best regards,
Kien
