Re: [DuMux] Step post-processing in MPI mode

2021-11-18 Thread Dmitry Pavlov

Timo,

Thank you very much, that worked like a charm.

On another note, did you have a chance to look into MR 2914 and my 
message from October 28?


Best regards,

Dmitry



Re: [DuMux] Step post-processing in MPI mode

2021-11-18 Thread Timo Koch
Hi Dmitry,

you want to iterate only over the interior cells so that nothing is summed up 
twice on the process boundary; to this end, use

    for (const auto& element : elements(this->gridGeometry().gridView(),
                                        Dune::Partitions::interior))

After the loop you then have to sum up over all processes if you want the 
global result, e.g. try

    const auto& comm = this->gridGeometry().gridView().comm();
    for (int i = 0; i < FluidSystem::numComponents; ++i)
        fluxes[i] = comm.sum(fluxes[i]);

Best wishes
Timo
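
Putting the two suggestions together, a minimal sketch of the adapted 
post-processing loop could look like the following. It keeps the member names 
and the marker value 888 from the original code further down; the only changes 
are the restriction to the interior partition and the final sum over all ranks 
via the grid view's collective communication object.

    // Sketch: flux post-processing adapted for parallel (MPI) runs.
    Scalar fluxes[FluidSystem::numComponents] = {0};

    // Visit interior elements only, so that elements duplicated in the
    // overlap/ghost region at a process boundary are not counted twice.
    for (const auto& element : elements(this->gridGeometry().gridView(),
                                        Dune::Partitions::interior))
    {
        auto fvGeometry = localView(this->gridGeometry());
        fvGeometry.bindElement(element);

        auto elemVolVars = localView(gridVariables.curGridVolVars());
        elemVolVars.bind(element, fvGeometry, curSol);

        auto elemFluxVarsCache = localView(gridVariables.gridFluxVarsCache());

        for (auto&& scvf : scvfs(fvGeometry))
        {
            if (!scvf.boundary())
                continue;

            const auto boundaryMarkerId = gridDataPtr_->getBoundaryDomainMarker(scvf.boundaryFlag());
            if (boundaryMarkerId == 888)
            {
                auto neumannFluxes = neumann(element, fvGeometry, elemVolVars, elemFluxVarsCache, scvf);
                neumannFluxes *= scvf.area() * elemVolVars[scvf.insideScvIdx()].extrusionFactor();

                for (int i = 0; i < FluidSystem::numComponents; ++i)
                    fluxes[i] += neumannFluxes[i];
            }
        }
    }

    // Each rank now holds its partial sums; combine them into the global result.
    const auto& comm = this->gridGeometry().gridView().comm();
    for (int i = 0; i < FluidSystem::numComponents; ++i)
        fluxes[i] = comm.sum(fluxes[i]);

Dune::Partitions and the partition-aware elements() range live in dune-grid 
(dune/grid/common/partitionset.hh and dune/grid/common/rangegenerators.hh) and 
should already be available through the usual DuMux grid geometry headers.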




[DuMux] Step post-processing in MPI mode

2021-11-18 Thread Dmitry Pavlov

Hello,

I have the following code to compute and add up the fluxes through boundary 
cells of a certain type after each step:



   Scalar fluxes[FluidSystem::numComponents] = {0};

   for (const auto& element : elements(this->gridGeometry().gridView()))
   {
       auto fvGeometry = localView(this->gridGeometry());
       fvGeometry.bindElement(element);

       auto elemVolVars = localView(gridVariables.curGridVolVars());
       elemVolVars.bind(element, fvGeometry, curSol);

       auto elemFluxVarsCache = localView(gridVariables.gridFluxVarsCache());

       for (auto&& scvf : scvfs(fvGeometry))
       {
           if (scvf.boundary())
           {
               const auto boundaryMarkerId = gridDataPtr_->getBoundaryDomainMarker(scvf.boundaryFlag());

               if (boundaryMarkerId == 888)
               {
                   auto neumannFluxes = neumann(element, fvGeometry, elemVolVars, elemFluxVarsCache, scvf);
                   neumannFluxes *= scvf.area() * elemVolVars[scvf.insideScvIdx()].extrusionFactor();

                   for (int i = 0; i < FluidSystem::numComponents; i++)
                       fluxes[i] += neumannFluxes[i];
               }
           }
       }
   }


It works in single-process mode but not in multi-process mode. I suspect this 
has to do with the grid being divided between processes. How do I adapt this 
post-processing code to that situation?



Best regards,

Dmitry


___
DuMux mailing list
DuMux@listserv.uni-stuttgart.de
https://listserv.uni-stuttgart.de/mailman/listinfo/dumux