[deal.II] Minimum of dealii::Vector in MPI parallel application

2023-07-26 Thread 'maurice rohracker' via deal.II User Group
Dear all,

In a parallel distributed code, similar to step-42, we use a 
`dealii::Vector` of size `n_active_cells()` to compute the error indicators 
of the locally owned cells. (This can easily be used in e.g. 
`dealii::parallel::distributed::GridRefinement::refine_and_coarsen_fixed_number()`.)
As we want to add an additional criterion based on the minimum and maximum 
error indicator to mark cells, we need the global minimum and maximum value.
Is there a smart way to get the global minimum from the minima over the 
locally owned cells of all processes? Using `std::min_element` does not 
distinguish locally owned cells from ghost and artificial ones, so the 
results are wrong.
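
For illustration, a minimal sketch of a rank-local reduction followed by a 
global one (the names `error_indicators`, `triangulation`, and 
`mpi_communicator` are placeholders, not from the original post):

#include <deal.II/base/mpi.h>
#include <algorithm>
#include <limits>

// Reduce over locally owned cells only, then across all MPI ranks.
double local_min = std::numeric_limits<double>::max();
double local_max = std::numeric_limits<double>::lowest();
for (const auto &cell : triangulation.active_cell_iterators())
  if (cell->is_locally_owned())
    {
      const double value = error_indicators[cell->active_cell_index()];
      local_min = std::min(local_min, value);
      local_max = std::max(local_max, value);
    }
const double global_min = dealii::Utilities::MPI::min(local_min, mpi_communicator);
const double global_max = dealii::Utilities::MPI::max(local_max, mpi_communicator);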

Thanks and best regards,
Maurice



Re: [deal.II] Setting boundary IDs in a parallel distributed triangulation

2022-08-23 Thread 'maurice rohracker' via deal.II User Group
Thanks, Timo, for your response.

I want to clarify a few things:
- Since our material IDs come from the `.inp` mesh file and will not change 
due to the physics, they should always be inherited during coarsening and 
refinement, and we do not need to take care of them. Is it also ensured 
that locally owned cells always know the material IDs of their neighbors, 
or is some ghost exchange needed via 
`dealii::GridTools::exchange_cell_data_to_ghosts()`? (See the first sketch 
after this list.)

- We currently use the 
`triangulation_.signals.post_distributed_refinement.connect()` mechanism, 
to which we attach a `set_boundary_ids` function that loops over all active 
cells (without checking whether a cell is locally owned), and we also call 
it right after the mesh is read in. Is this enough to ensure that the 
boundary IDs are set properly, or is it safer to use 
`triangulation_.signals.any_change.connect()`? (See the second sketch after 
this list.)
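
If an explicit ghost exchange of the material IDs should turn out to be 
necessary (first point), a hedged sketch using 
`dealii::GridTools::exchange_cell_data_to_ghosts()` might look like this 
(`triangulation` is a placeholder name):

#include <deal.II/grid/grid_tools.h>

// Ship each locally owned cell's material_id to the ranks that see the
// cell as a ghost cell.
dealii::GridTools::exchange_cell_data_to_ghosts<dealii::types::material_id>(
  triangulation,
  [](const auto &cell) { return cell->material_id(); },
  [](const auto &cell, const dealii::types::material_id id) {
    cell->set_material_id(id);
  });

For the second point, a minimal sketch of a `set_boundary_ids` callback 
attached to the signal named in the question (the actual ID rule and `dim` 
are assumptions):

triangulation_.signals.post_distributed_refinement.connect(
  [&]() {
    // Include ghost cells, not only locally owned ones.
    for (const auto &cell : triangulation_.active_cell_iterators())
      if (!cell->is_artificial())
        for (unsigned int f = 0; f < dealii::GeometryInfo<dim>::faces_per_cell; ++f)
          if (cell->face(f)->at_boundary())
            cell->face(f)->set_boundary_id(/* your rule here */ 1);
  });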

Best regards,
Maurice

Timo Heister wrote on Friday, 19 August 2022 at 03:48:09 UTC+2:

> Maurice,
>
> Reassigning boundary IDs (or manifold IDs, material IDs) is somewhat 
> tricky in parallel as they need to stay consistent.
>
> While you would expect the IDs to only matter for locally owned cells, 
> this is not enough because each process needs to know the correct 
> constraints for all "locally active" DoFs. This requires at least the ghost 
> layer to have the correct boundary conditions. Example: a hanging-node 
> constraint near a process boundary whose coarse-side DoFs carry boundary values.
>
> The following cases are safe:
> 1. Create coarse mesh, assign IDs the same way for all cells, then do not 
> change IDs. Any refinement/coarsening will keep IDs updated consistently.
> 2. Go over all ghost and local cells and reassign IDs after any mesh 
> change (refine/coarsen). This can be a bit tricky to get right in all cases.
>
> Best,
> Timo
>
>
> On Thu, Aug 18, 2022, 03:59 'maurice rohracker' via deal.II User Group <
> dea...@googlegroups.com> wrote:
>
>> Dear all,
>>
>> We are wondering about the following:
>> In step-42 the boundary IDs of all active cells of a parallel distributed 
>> triangulation are set manually. Since we faced some issues while only 
>> setting them on locally owned ones, we would like to know the reason, 
>> because most other operations are only done on locally owned cells.
>> Is this important for the inheritance and redistribution of cells under 
>> spatial adaptivity, as well as for the boundary-value interpolation functions?
>>
>> Thanks for your help.
>>
>> Best, Maurice 
>>
>



[deal.II] Setting boundary IDs in a parallel distributed triangulation

2022-08-18 Thread 'maurice rohracker' via deal.II User Group
Dear all,

We are wondering about the following:
In step-42 the boundary IDs of all active cells of a parallel distributed 
triangulation are set manually. Since we faced some issues while only 
setting them on locally owned ones, we would like to know the reason, 
because most other operations are only done on locally owned cells.
Is this important for the inheritance and redistribution of cells under 
spatial adaptivity, as well as for the boundary-value interpolation functions?

Thanks for your help.

Best, Maurice



Re: [deal.II] Different resulting meshes in spatial adaptivity for d::p::d::triangulation and d::triangulation

2022-06-01 Thread 'maurice rohracker' via deal.II User Group
Thanks for your suggestions, and especially for explaining the different 
underlying concepts of the serial and the p::d::triangulation, which make 
it hard to compare the results 1:1; that clarified my doubts.
Best, Maurice

Wolfgang Bangerth wrote on Wednesday, 25 May 2022 at 02:52:23 UTC+2:

> On 5/24/22 09:23, 'maurice rohracker' via deal.II User Group wrote:
> > 
> > Is such a behavior expected, or is it difficult to compare the result 
> > for the serial and the distributed triangulation one by one?
>
> The latter. The p::d::T uses p4est, and p4est makes different assumptions 
> than deal.II about which cells need to be refined to satisfy internal 
> invariants. (An example of an invariant both libraries enforce is that 
> there is only one hanging node per edge in 2d, but there are others.) I 
> believe that it is meaningful to compare the p::d::T meshes you get with 
> different numbers of MPI processes, but I don't think that we want to 
> guarantee that different triangulation classes result in the same numbers 
> of cells.
>
> Best
> W.
>
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] No printing of TimerOutput after catch of exception in MPI based code

2022-05-10 Thread 'maurice rohracker' via deal.II User Group
Thanks, Daniel, that helped. Attached is the adapted MWE for others.

d.arnd...@gmail.com wrote on Monday, 9 May 2022 at 12:37:35 UTC+2:

> Maurice,
>
> You could try to wrap the calls that might throw in a try-catch block to 
> deal with the exception yourself (including ignoring it).
>
> Best,
> Daniel
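
For illustration, one hedged way to apply this around the timed call, so 
that `leave_subsection()` still runs and TimerOutput can still print its 
summary table (names follow the attached MWE; the exact placement is an 
assumption):

timerOutput->enter_subsection("Solve Time-Step");
try
  {
    Solve_timeStep(outStream);
  }
catch (const dealii::ExceptionBase &)
  {
    // Close the timing section before handling (or ignoring) the error,
    // so the summary table can still be printed at the end.
    timerOutput->leave_subsection();
    throw;
  }
timerOutput->leave_subsection();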
>
> On Mon, May 9, 2022 at 11:14 AM 'maurice rohracker' via deal.II User 
> Group wrote:
>
>> Dear all,
>>
>> In the attached MWE we are facing the issue that the TimerOutput object 
>> does not print the resulting table to the screen when an exception is 
>> thrown and caught within an MPI-based code.
>>
>> We are aware that some collective routine might be needed.
>>
>> However, is there a way to handle this easily, such that the TimerOutput 
>> object prints the resulting table to the screen even if an exception is 
>> thrown, or are we missing something?
>>
>> We were already able to do so in the serial case.
>>
>> Thanks in advance.
>>
>> Best regards,
>> Maurice
>>
>

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/exceptions.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/timer.h>
#include <deal.II/base/utilities.h>
#include <iostream>

using UnsignedIntType = unsigned int;


class MyException : public dealii::ExceptionBase
{
public:
  MyException(const std::string &failureLocation)
    : failureLoc_(failureLocation)
  {
    if ((failureLoc_ != "pf") && (failureLoc_ != "disp") &&
        (failureLoc_ != "outer"))
      Assert(false, dealii::ExcMessage("Invalid failure location string!"));
  }

  virtual ~MyException() noexcept = default;

  virtual void
  print_info(std::ostream &outStream) const
  {
    if (failureLoc_ == "pf")
      outStream << "Non convergence in the pf loop" << std::endl;
    else if (failureLoc_ == "disp")
      outStream << "Non convergence in the disp loop" << std::endl;
    else if (failureLoc_ == "outer")
      outStream << "Non convergence in the outer loop" << std::endl;
  }

  std::string failureLoc_;
};

void Solve_timeStep(dealii::ConditionalOStream &outStream)
{
  static UnsignedIntType timeStepCtr = 0;

  // do some nonlinear Newton iteration
  outStream << "local time-step counter: " << timeStepCtr << std::endl;

  // the procedure fails for the 4-th time-step
  if (timeStepCtr == 4)
  {
    ++timeStepCtr;
    AssertThrow(false, MyException("pf"));
  }

  //if (timeStepCtr == 6)
  //  Assert(false, MyException("disp"));

  ++timeStepCtr;
}

void Simulate(dealii::TimerOutput *timerOutput, dealii::ConditionalOStream &outStream)
{
  const UnsignedIntType nTimeSteps = 10;
  for (UnsignedIntType timeStepCtr = 0; timeStepCtr < nTimeSteps; ++timeStepCtr)
  {
    timerOutput->enter_subsection("Solve Time-Step");
    Solve_timeStep(outStream);
    timerOutput->leave_subsection();
  }
}

int main(int argc, char *argv[])
{
  dealii::Utilities::MPI::MPI_InitFinalize mpiInitialization(argc, argv, 1);
  MPI_Comm const &mpiCommunicator(MPI_COMM_WORLD);
  UnsignedIntType processorRank = dealii::Utilities::MPI::this_mpi_process(mpiCommunicator);
  dealii::ConditionalOStream pCout(std::cout, processorRank == 0);
  dealii::TimerOutput timerOutput(mpiCommunicator, pCout, dealii::TimerOutput::summary, dealii::TimerOutput::wall_times);
  dealii::TimerOutput::Scope timer_section(timerOutput, "Simulation");
  try
  {
    Simulate(&timerOutput, pCout);
  }
  catch (dealii::ExceptionBase &exception)
  {
    pCout << "Exception thrown to main!!" << std::endl;
    pCout << exception.what() << std::endl;
    pCout << "Stack trace should be printed below!" << std::endl;
    exception.print_stack_trace(std::cout);
    pCout << "End program" << std::endl;
  }

  return 0;
}


[deal.II] No printing of TimerOutput after catch of exception in MPI based code

2022-05-09 Thread 'maurice rohracker' via deal.II User Group
Dear all,

In the attached MWE we are facing the issue that the TimerOutput object 
does not print the resulting table to the screen when an exception is 
thrown and caught within an MPI-based code.

We are aware that some collective routine might be needed.

However, is there a way to handle this easily, such that the TimerOutput 
object prints the resulting table to the screen even if an exception is 
thrown, or are we missing something?

We were already able to do so in the serial case.

Thanks in advance.

Best regards,
Maurice

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/exceptions.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/timer.h>
#include <deal.II/base/utilities.h>
#include <iostream>

using UnsignedIntType = unsigned int;


class MyException : public dealii::ExceptionBase
{
public:
  MyException(const std::string &failureLocation)
    : failureLoc_(failureLocation)
  {
    if ((failureLoc_ != "pf") && (failureLoc_ != "disp") &&
        (failureLoc_ != "outer"))
      Assert(false, dealii::ExcMessage("Invalid failure location string!"));
  }

  virtual ~MyException() noexcept = default;

  virtual void
  print_info(std::ostream &outStream) const
  {
    if (failureLoc_ == "pf")
      outStream << "Non convergence in the pf loop" << std::endl;
    else if (failureLoc_ == "disp")
      outStream << "Non convergence in the disp loop" << std::endl;
    else if (failureLoc_ == "outer")
      outStream << "Non convergence in the outer loop" << std::endl;
  }

  std::string failureLoc_;
};

void Solve_timeStep(dealii::ConditionalOStream &outStream)
{
  static UnsignedIntType timeStepCtr = 0;

  // do some nonlinear Newton iteration
  outStream << "local time-step counter: " << timeStepCtr << std::endl;

  // the procedure fails for the 4-th time-step
  if (timeStepCtr == 4)
  {
    ++timeStepCtr;
    AssertThrow(false, MyException("pf"));
  }

  //if (timeStepCtr == 6)
  //  Assert(false, MyException("disp"));

  ++timeStepCtr;
}

void Simulate(dealii::ConditionalOStream &outStream)
{
  const UnsignedIntType nTimeSteps = 10;
  for (UnsignedIntType timeStepCtr = 0; timeStepCtr < nTimeSteps; ++timeStepCtr)
    Solve_timeStep(outStream);
}

SET(TARGET "mwe-timerOutput-mpi")
SET(TARGET_SRC ${TARGET}.cc)

cmake_minimum_required(VERSION 3.16)

FIND_PACKAGE(deal.II 9.0.0 QUIET HINTS ${deal.II_DIR} ${DEAL_II_DIR} 
$ENV{DEAL_II_DIR})

DEAL_II_INITIALIZE_CACHED_VARIABLES()

PROJECT(${TARGET})
DEAL_II_INVOKE_AUTOPILOT()



[deal.II] Refinement on a parallel distributed triangulation with periodic BC

2020-11-16 Thread 'maurice rohracker' via deal.II User Group
Dear deal.II community,

For a software project at FAU Erlangen-Nuremberg, I implemented a 
distributed memory parallel version of a homogenization problem.

Since we handle periodicity with an offset value, I implemented the 
periodic BC in the following manner:


   1. Read mesh
   2. Collect periodic faces (triangulation)
   3. Add periodic faces to triangulation
   4. Collect periodic faces (dofHandler)
   5. Make periodic constraints and set the inhomogeneity on the periodic 
   DOFs at the boundary.

The next step would be to add mesh refinement. My first try was adding 
`triangulation.refine_global(nGlobalRefinement);` after the third step.

Unfortunately, when we try to access cells with `!cell.is_artificial()` 
from the resulting vector of `collect_periodic_faces()` to get all locally 
relevant boundary cells to add the inhomogeneity, the following assertion 
is raised:

The violated condition was: 
this->active()
Additional information: 
is_artificial() can only be called on active cells!

Looking at the documentation of `collect_periodic_faces` ("This function 
will collect periodic face pairs on the coarsest mesh level of the given 
mesh (a Triangulation or DoFHandler) and add them to the vector 
matched_pairs leaving the original contents intact."), the resulting vector 
will only contain parent cells, on which the call `is_artificial()` is not 
possible.

So we thought: if we change the `cell_iterator` in `collect_periodic_faces` 
to `active_cell_iterator`, would we get a vector containing the child cells 
(on which `is_artificial()` is allowed)? We would assume that child cells 
are active, and therefore our previous approach would work again. But this 
did not help either.

If this question is a bit too complex, I can also prepare a minimal 
example.

Is there a workaround to get results similar to those of 
`collect_periodic_faces`, i.e. a vector containing all refined periodic 
cells? (See the sketch below.)
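
A hedged sketch of such a workaround, walking the active cells after 
refinement instead of relying on the coarse-level pairs (`dim` and the 
name `dof_handler` are assumptions):

#include <deal.II/base/geometry_info.h>

for (const auto &cell : dof_handler.active_cell_iterators())
  if (!cell->is_artificial())
    for (unsigned int f = 0; f < dealii::GeometryInfo<dim>::faces_per_cell; ++f)
      if (cell->has_periodic_neighbor(f))
        {
          const auto neighbor = cell->periodic_neighbor(f);
          // cell->face(f) and the matching face of `neighbor` form a
          // refined periodic pair; collect DoFs or geometry from both
          // sides here.
        }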

Thanks for your support in advance.

Best, Maurice




Re: [deal.II] Periodic BC in parallel distributed homogenization problem

2020-10-23 Thread 'maurice rohracker' via deal.II User Group
Dear Daniel,

Thanks for your reply. 

We have some offsets at the periodic boundary and therefore set the 
constraints ourselves.

Adding the periodicity with 
`triangulation.add_periodicity(periodicFacePairVector)` before distributing 
the DoFs, as in step-45, helped.

We are using `DoFTools::make_periodicity_constraints()` later on and merge 
the new constraints into the existing ones. (See the sketch below.)
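
For reference, the order of operations as in step-45 (a sketch; the 
boundary ids 0/1, direction 0, and the names `fe`, `dof_handler`, and 
`constraints` are assumed placeholders):

std::vector<dealii::GridTools::PeriodicFacePair<
  typename dealii::parallel::distributed::Triangulation<dim>::cell_iterator>>
  periodicity_vector;

// Match the periodic faces on the coarse mesh and tell the triangulation
// about them before any DoFs are distributed.
dealii::GridTools::collect_periodic_faces(
  triangulation, /*b_id1=*/0, /*b_id2=*/1, /*direction=*/0, periodicity_vector);
triangulation.add_periodicity(periodicity_vector);

dof_handler.distribute_dofs(fe); // only after add_periodicity()

dealii::DoFTools::make_periodicity_constraints(
  dof_handler, /*b_id1=*/0, /*b_id2=*/1, /*direction=*/0, constraints);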

Best,
Maurice

d.arnd...@gmail.com wrote on Wednesday, 21 October 2020 at 18:04:57 UTC+2:

> Maurice,
>
> Is there any reason that you don't use 
> DoFTools::make_periodicity_constraints() as shown in 
> https://www.dealii.org/current/doxygen/deal.II/step_45.html?
>
> Best,
> Daniel
>
> On Wed, 21 Oct 2020 at 11:16, 'maurice rohracker' via deal.II User Group 
> wrote:
>
>> Dear all,
>>
>>
>> For a project, I try to implement a parallel distributed homogenization 
>> problem with periodic BC. A serial version is already implemented.
>>
>> The feature of the periodic BC in the serial case is that it is not 
>> purely periodic, but periodic with some offset.
>>
>>
>> Therefore the periodic boundary pairs are collected 
>> (`dealii::GridTools::collect_periodic_faces(doFHandler, ...)`) and then 
>> stored together with the DOF indices to handle the periodic BC by 
>> oneself afterwards.
>>
>>
>> For the parallel implementation, I go through the following steps:
>>
>>    1. gather periodic face pairs for all directions using 
>>    `dealii::GridTools::collect_periodic_faces(triangulation, ..., 
>>    periodic_vector)`
>>    2. add periodicity by `triangulation.add_periodicity(periodic_vector)`
>>    3. collect periodic boundary pairs using 
>>    `dealii::GridTools::collect_periodic_faces(doFHandler, ..., 
>>    periodic_vector2)`
>>    4. store the pairs with the DOFs by looping over `periodic_vector2`
>>
>>
>>    - to access the DOFs I tried the following:
>>
>> for (const auto &facePair : periodic_vector2)
>> {
>>   (...)
>>
>>   // loop over the DOFs of the boundary periodic pair cells
>>   if (facePair.cell[0]->is_locally_owned() && facePair.cell[1]->is_locally_owned())
>>   {
>>     facePair.cell[0]->get_dof_indices(localDOFIndicesPlus);
>>     facePair.cell[1]->get_dof_indices(localDOFIndicesMinus);
>>   }
>>   else if (facePair.cell[0]->is_locally_owned())
>>   {
>>     facePair.cell[0]->get_dof_indices(localDOFIndicesPlus);
>>     (facePair.cell[0]->periodic_neighbor(facePair.face_idx[0]))
>>       ->get_dof_indices(localDOFIndicesMinus);
>>   }
>>   else if (facePair.cell[1]->is_locally_owned())
>>   {
>>     (facePair.cell[1]->periodic_neighbor(facePair.face_idx[1]))
>>       ->get_dof_indices(localDOFIndicesPlus);
>>     facePair.cell[1]->get_dof_indices(localDOFIndicesMinus);
>>   }
>>
>>   (...) // store plus and minus point of the boundary together with the
>>         // DOF if the periodicity of plus and minus point is fulfilled
>> }
>>
>> Unfortunately, the resulting localDOFIndicesPlus/Minus of type 
>> `std::vector<dealii::types::global_dof_index> localDOFIndicesPlus(nDOFsPerCell);` 
>> only contain garbage values (very high numbers).
>>
>>
>> My first question would be, is it possible to access the DOF indices of 
>> ghost cells, as I did?
>>
>> Is our procedure enough to ensure that the periodic boundary cells of a 
>> triangulation owned by one processor are ghost cells of the corresponding 
>> triangulation owned by another processor?
>>
>>
>> Thanks in advance for your help. If anything in my explanation is 
>> unclear, feel free to ask.
>>
>>
>> Best, Maurice
>>
>



[deal.II] Periodic BC in parallel distributed homogenization problem

2020-10-21 Thread 'maurice rohracker' via deal.II User Group


Dear all,


For a project, I try to implement a parallel distributed homogenization 
problem with periodic BC. A serial version is already implemented.

The feature of the periodic BC in the serial case is that it is not purely 
periodic, but periodic with some offset.


Therefore the periodic boundary pairs are collected 
(`dealii::GridTools::collect_periodic_faces(doFHandler, ...)`) and then 
stored together with the DOF indices to handle the periodic BC by oneself 
afterwards.


For the parallel implementation, I go through the following steps:

   1. gather periodic face pairs for all directions using 
   `dealii::GridTools::collect_periodic_faces(triangulation, ..., 
   periodic_vector)`
   2. add periodicity by `triangulation.add_periodicity(periodic_vector)`
   3. collect periodic boundary pairs using 
   `dealii::GridTools::collect_periodic_faces(doFHandler, ..., 
   periodic_vector2)`
   4. store the pairs with the DOFs by looping over `periodic_vector2`


   - to access the DOFs I tried the following:

for (const auto &facePair : periodic_vector2)
{
  (...)

  // loop over the DOFs of the boundary periodic pair cells
  if (facePair.cell[0]->is_locally_owned() && facePair.cell[1]->is_locally_owned())
  {
    facePair.cell[0]->get_dof_indices(localDOFIndicesPlus);
    facePair.cell[1]->get_dof_indices(localDOFIndicesMinus);
  }
  else if (facePair.cell[0]->is_locally_owned())
  {
    facePair.cell[0]->get_dof_indices(localDOFIndicesPlus);
    (facePair.cell[0]->periodic_neighbor(facePair.face_idx[0]))
      ->get_dof_indices(localDOFIndicesMinus);
  }
  else if (facePair.cell[1]->is_locally_owned())
  {
    (facePair.cell[1]->periodic_neighbor(facePair.face_idx[1]))
      ->get_dof_indices(localDOFIndicesPlus);
    facePair.cell[1]->get_dof_indices(localDOFIndicesMinus);
  }

  (...) // store plus and minus point of the boundary together with the
        // DOF if the periodicity of plus and minus point is fulfilled
}

Unfortunately, the resulting localDOFIndicesPlus/Minus of type 
`std::vector<dealii::types::global_dof_index> localDOFIndicesPlus(nDOFsPerCell);` 
only contain garbage values (very high numbers).


My first question would be, is it possible to access the DOF indices of 
ghost cells, as I did?

Is our procedure enough to ensure that the periodic boundary cells of a 
triangulation owned by one processor are ghost cells of the corresponding 
triangulation owned by another processor?


Thanks in advance for your help. If anything in my explanation is unclear, 
feel free to ask.


Best, Maurice



[deal.II] Re: Broadcasting packed objects

2020-06-09 Thread Maurice Rohracker
Thanks, Peter, that helped so far!

Another question: how would the serialize function look if a class has 
another class as a member? In this case, how would the serialize function 
look for a class House that has two members of type Room? Serialization of 
nested objects, so to speak. (See the sketch below.)
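
A sketch of how this could look, assuming Room provides a serialize() 
function as in the attached example (the class House and its member names 
are hypothetical): Boost.Serialization recurses through members 
automatically.

class House
{
public:
  Room livingRoom;
  Room bedRoom;

  template <class Archive>
  void serialize(Archive &ar, const unsigned int /*version*/)
  {
    // Serializing a member that itself has a serialize() function
    // recursively invokes Room::serialize().
    ar &livingRoom;
    ar &bedRoom;
  }
};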


On Monday, 8 June 2020 at 14:19:25 UTC+2, peterrum wrote:
>
> Dear Maurice,
>
> The problem is that the size of `auto buffer = 
> dealii::Utilities::pack(r1);` is not the same on all processes, which is a 
> requirement if you use `MPI_Bcast`. My suggestion would be to split the 
> procedure into two steps: 1) broadcast the buffer size from the root rank; 
> 2) broadcast the actual data.
>
> Peter
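
A minimal sketch of the two-step approach Peter describes (root rank 0, 
`r1`, and `mpi_communicator` as in the attached example are assumed):

#include <cstdint>
#include <vector>

std::vector<char> buffer;
if (dealii::Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
  buffer = dealii::Utilities::pack(r1);

// Step 1: agree on the buffer size across all ranks.
std::uint64_t size = buffer.size();
MPI_Bcast(&size, 1, MPI_UINT64_T, 0, mpi_communicator);
buffer.resize(size);

// Step 2: broadcast the actual payload.
MPI_Bcast(buffer.data(), size, MPI_CHAR, 0, mpi_communicator);

if (dealii::Utilities::MPI::this_mpi_process(mpi_communicator) != 0)
  r1 = dealii::Utilities::unpack<Room>(buffer);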
>
> On Monday, 8 June 2020 12:46:53 UTC+2, Maurice Rohracker wrote:
>>
>> Dear deal.II community,
>>
>> For a distributed implementation, we would like to broadcast packed data 
>> objects.
>>
>> For a start, we would like to understand how the serialize function for 
>> packing an object works in the deal.II way, before things get more 
>> complicated.
>>
>> I created a small test with a class which should be broadcast. As a 
>> comparison, I took dealii::Point, which can obviously be packed and 
>> broadcast without any issues.
>>
>> Unfortunately, the broadcast for our class does not work. We assume that 
>> our serialize function is not written in the correct manner. Is it 
>> possible to pack our own objects using the `dealii::Utilities::pack()` 
>> function? What is the proper way to define the serialize function?
>>
>> Attached you'll find our small example code. We are using deal.II 9.0.1.
>>
>> Thank you very much in advance.
>>
>> Best regards,
>> Maurice Rohracker
>> Master Student Computational Engineering
>> FAU Erlange-Nürnberg
>>
>



[deal.II] Broadcasting packed objects

2020-06-08 Thread Maurice Rohracker
Dear deal.II community,

For a distributed implementation, we would like to broadcast packed data 
objects.

For a start, we would like to understand how the serialize function for 
packing an object works in the deal.II way, before things get more 
complicated.

I created a small test with a class which should be broadcast. As a 
comparison, I took dealii::Point, which can obviously be packed and 
broadcast without any issues.

Unfortunately, the broadcast for our class does not work. We assume that 
our serialize function is not written in the correct manner. Is it possible 
to pack our own objects using the `dealii::Utilities::pack()` function? 
What is the proper way to define the serialize function?

Attached you'll find our small example code. We are using deal.II 9.0.1.

Thank you very much in advance.

Best regards,
Maurice Rohracker
Master Student Computational Engineering
FAU Erlangen-Nürnberg

cmake_minimum_required(VERSION 3.10)
set(CMAKE_BUILD_TYPE Release)

find_package(MPI)
include_directories(SYSTEM ${MPI_INCLUDE_PATH})

# Ensure dealii is present
SET(DEAL_II_DIR /usr/local/bin/dealii/)
FIND_PACKAGE(deal.II 9.0.0 QUIET
HINTS ${deal.II_DIR} ${DEAL_II_DIR} $ENV{DEAL_II_DIR}
)
IF(NOT ${deal.II_FOUND})
MESSAGE(FATAL_ERROR "\n"
"*** Could not locate a (sufficiently recent) version of deal.II. 
***\n\n"
"You may want to either pass a flag -DDEAL_II_DIR=/path/to/deal.II 
to cmake\n"
"or set an environment variable \"DEAL_II_DIR\" that contains this 
path."
)
ENDIF()
SET(CMAKE_CXX_COMPILER mpicxx)

add_executable(serializationTest serializationTestMain.cpp)

target_link_libraries(serializationTest ${MPI_CXX_LIBRARIES})
DEAL_II_SETUP_TARGET(serializationTest)
//
// Created by Maurice on 04.06.2020.
//

#ifndef SERIALIZATIONTEST_ROOM_H
#define SERIALIZATIONTEST_ROOM_H

#include <string>

class Room {
private:
    int nWindows;
    int nDoors;

public:
    Room() : nWindows(0), nDoors(0) { }

    Room(int nW, int nD) : nWindows(nW), nDoors(nD) { }

    ~Room() { }

    Room(const Room &other) : nWindows(other.nWindows), nDoors(other.nDoors) { }

    Room& operator=(const Room &other) {
        if (this != &other) {
            nWindows = other.nWindows;
            nDoors = other.nDoors;
        }
        return *this;
    }

    void setWindows(int n) {
        nWindows = n;
    }

    void setDoors(int n) {
        nDoors = n;
    }

    std::string print() const {
        return "number of windows: " + std::to_string(nWindows) + " | number of doors: " + std::to_string(nDoors);
    }

    template<class Archive>
    void serialize(Archive &ar, const unsigned int version);
};

template<class Archive>
inline
void
Room::serialize(Archive &ar, const unsigned int version) {
    ar & nWindows;
    ar & nDoors;
}

#endif //SERIALIZATIONTEST_ROOM_H
//
// Created by Maurice on 03.06.2020.
//

#include "mpi.h"
#include <iostream>
#include <deal.II/base/mpi.h>
#include <deal.II/base/point.h>
#include <deal.II/base/utilities.h>
#include "Room.h"


int main(int argc, char *argv[]) {
  dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
  MPI_Comm const & mpi_communicator(MPI_COMM_WORLD);

  Room r1;
  const unsigned dim = 5;
  dealii::Point<dim> p1;
  if (dealii::Utilities::MPI::this_mpi_process(mpi_communicator) == 0) {
  r1.setWindows(4);
  r1.setDoors(6);
  for(int i = 0; i < dim; ++i)
  p1[i] = i + 2;
}

  // pack objects
  auto buffer = dealii::Utilities::pack(r1);
  auto bufferPoint = dealii::Utilities::pack(p1);

  // broadcast objects
  MPI_Bcast(&buffer[0], buffer.size(), MPI_CHAR, 0, mpi_communicator); // when this line is commented out, the code runs without errors
  MPI_Bcast(&bufferPoint[0], bufferPoint.size(), MPI_CHAR, 0, mpi_communicator);


  // unpack objects (if not root process)
  if (dealii::Utilities::MPI::this_mpi_process(mpi_communicator) != 0) {
  r1 = dealii::Utilities::unpack<Room>(buffer);
  p1 = dealii::Utilities::unpack<dealii::Point<dim> >(bufferPoint);
}

  // print to see if all worked
  std::cout << "Hello from "
            << dealii::Utilities::MPI::this_mpi_process(mpi_communicator)
            << " of "
            << dealii::Utilities::MPI::n_mpi_processes(mpi_communicator)
            << std::endl;
  std::cout << "Process: "
            << dealii::Utilities::MPI::this_mpi_process(mpi_communicator)
            << " | " << r1.print()  // (tail reconstructed: the archived attachment is truncated here)
            << " | point: " << p1 << std::endl;

  return 0;
}