Re: [DuMux] DuMux Digest, Vol 128, Issue 12

2021-11-22 Thread Timo Koch


> On 22. Nov 2021, at 19:09, Ana Carolina Loyola wrote:
> 
> Dear Timo Koch,
> 
> Yes, this example (test_1p_incompressible_tpfa_quad) runs fine here. I
> don't know if this issue is related to the Box method, but so far, when I run
> a test using tpfa or mpfa, the BlockDiagAMGBiCGSTABSolver works fine.
> 
> I am also not confident that the problem is precision-related. But when I add
> up the flux (residuals) at the boundary nodes, I get a sum that is much
> smaller than 1e-16. It's when I try to evaluate the pressure gradients at the
> boundary faces and then use them to obtain the boundary flux that the sum
> becomes much larger.

Hi Ana,

Do you evaluate boundary fluxes via gradients at boundary faces next to
Dirichlet nodes with Box? I'm not sure what you can expect from fluxes computed
that way at Dirichlet nodes.
You will only get local mass conservation at inner nodes/boxes/vertices, but not
necessarily for the (boundary) box/control volume around Dirichlet nodes.
Maybe this is an issue here?

Best wishes
Timo


> And as I need directional flow information, I think I have to work at the
> faces. In another test, I observed that as the fracture permeability gets
> closer to the matrix permeability, this precision improves, so this might be
> a precision issue, since the pressure gradient at the fracture-matrix
> interface is supposed to be very small (because the fracture is much more
> permeable). Well, if I can make this solver work, I can verify this
> possibility.
> 
> Thanks for your help
> 
> Best regards
> 
> 
> Ana 


Re: [DuMux] DuMux Digest, Vol 128, Issue 12

2021-11-22 Thread Ana Carolina Loyola
Hello Dennis,

Yes, I have it installed.

libsuitesparse-dev is already the newest version (1:5.7.1+dfsg-2).

I can run some test examples using BlockDiagAMGBiCGSTABSolver when the
discretization scheme is either tpfa or mpfa; so far, these crashes have
happened only when using the Box method. I wonder why.

Best wishes

Ana


Re: [DuMux] DuMux Digest, Vol 128, Issue 12

2021-11-22 Thread Timo Koch
Dear Ana,

Can you try the 1p quad precision test first?
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/blob/master/test/porousmediumflow/1p/incompressible/CMakeLists.txt

In the dumux build folder (e.g. dumux/build-cmake), run
make test_1p_incompressible_tpfa_quad
ctest -R test_1p_incompressible_tpfa_quad -VV

BTW I don't suspect that your original problem is related to precision (but it
could be). Double precision does indeed give only about 15 significant digits,
but that doesn't mean that we can't compute with much smaller or much bigger
floating-point numbers at all. A problem only occurs if you, for example, add
1.0 and 1e-16, because then the precision is not enough to represent the
difference between 1.0 and 1.0 + 1e-16.
As we don't usually do something like that with permeability, it should be
fine. But of course I don't know all of the code.
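
To make that concrete, here is a tiny standalone C++ snippet (not DuMux code,
just an illustration of the absorption effect):

#include <iostream>

int main()
{
    const double a = 1.0;
    const double b = 1e-16;
    // 1e-16 is perfectly representable on its own, but it is smaller than
    // half the machine epsilon of double (~2.2e-16), so adding it to 1.0
    // rounds straight back to 1.0.
    std::cout << std::boolalpha
              << (b == 0.0) << '\n'    // prints: false
              << (a + b == a) << '\n'; // prints: true
}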

It's quite difficult to help with the original issue, as I'm not familiar with
the paper and don't know exactly what the setup looks like.

Best wishes
Timo


> On 22. Nov 2021, at 17:30, Ana Carolina Loyola wrote:
> 
> Following up on my question on the precision error of my boundary fluxes, I
> am trying to introduce quad precision for my multidomain problem with facets.
> For that, I have to use another linear solver. Following the test case in
> dumux/test/multidomain/facet/1p_1p/analytical, I tried to use
> BlockDiagAMGBiCGSTABSolver, but the following error occurs:
> 
> Solve: M deltax^k = r
> Newton: Caught exception: "NumericalProblem
> [solveLinearSystem:/home/analoyola/dumux/dumux/dumux/nonlinear/newtonsolver.hh:383]:
> Linear solver did not converge"
> Dune reported error: NumericalProblem
> [.../dumux/dumux/dumux/nonlinear/newtonsolver.hh:260]: Newton solver didn't
> converge after 0 iterations.
>  ---> Abort!
> 
> When I try to run the test in dumux/test/multidomain/facet/1p_1p/analytical 
> for the box method, a crash also occurs:
> 
> Solve: M deltax^k = r
> Newton: Caught exception: "SolverAbort
> [apply:/home/analoyola/dumux3.4/dune-istl/dune/istl/solvers.hh:494]:
> breakdown in BiCGSTAB - rho 0 <= EPSILON 1e-80 after 93.5 iterations"
> terminate called after throwing an instance of 'Dumux::NumericalProblem'
>   what():  NumericalProblem [.../dumux/dumux/nonlinear/newtonsolver.hh:397]:
> Newton solver didn't converge after 0 iterations.
> 
> The same does not occur for the cell-centered schemes.
> 
> Any idea on what may be causing the crash and how to solve it?
> 
> Thanks a lot
> 
> Ana 
> 
> On Fri, 19 Nov 2021 at 00:02, dumux-requ...@listserv.uni-stuttgart.de wrote:
> 
> Message: 1
> Date: Thu, 18 Nov 2021 23:07:15 +0100
> From: Christoph Grüninger
> Subject: Re: [DuMux] Integration of boundary flux for upscaling: "errors"
> 
> Hi Ana,
> I am not sure whether you are right that 1e-14 to 1e-16 should not be
> considered as zero. It is the expected precision of a double. In
> general, you cannot expect it to be better just because, in the regular
> case, it by luck is actually better.
> 
> Maybe you can run your simulation with quad precision. That might help
> to get more knowledge regarding the limits of double precision.
> 
> Bye
> Christoph
> 
> 
> On 17.11.21 at 11:31, Ana Carolina Loyola wrote:
> > Hello,
> > 
> > I have been working on the upscaling of the 2D permeability tensor of
> > fractured media with the Box Method using the multidomain module of
> > Dumux 3.2. The code attached has worked well when compared to the
> > analytical solutions of perpendicular and equally spaced fractures.
> > I apply linear pressure boundary conditions and integrate flow at the
> > boundaries using the following equation*
> > image.png
> > And for that, I created a "boundary flux" function (at main.cc), which
> > calls the computeFlux function for all the faces that are located at the
> > boundary of the domain.
> > 
> > The reason I send this message is that I have noticed some precision
> > errors that concern me when running another simple test case (one
> > horizontal and non-persistent fracture, meshes are attached). It is
> > expected that I have kxy and kyx equal to 0, which seems to work fine
> > when I use a symmetric mesh (indicated with -sym in the .msh files),
> > since I get kxy and kyx in t
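
(Aside: a rough sketch of the kind of boundary loop described in the quoted
message, in DuMux 3.x style. The name sumBoundaryFluxes and the FluxEvaluator
callback are made up for illustration; the actual flux evaluation is
problem-specific and omitted, and the usual DuMux includes are assumed.)

// Sketch only: iterate all elements, bind the local finite-volume geometry,
// and accumulate a user-supplied flux over the sub-control-volume faces
// that lie on the domain boundary.
template<class GridGeometry, class FluxEvaluator>
double sumBoundaryFluxes(const GridGeometry& gridGeometry,
                         FluxEvaluator&& computeFluxOnFace)
{
    double flux = 0.0;
    auto fvGeometry = localView(gridGeometry);
    for (const auto& element : elements(gridGeometry.gridView()))
    {
        fvGeometry.bind(element);
        for (const auto& scvf : scvfs(fvGeometry))
            if (scvf.boundary())
                flux += computeFluxOnFace(element, fvGeometry, scvf);
    }
    return flux;
}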

Re: [DuMux] DuMux Digest, Vol 128, Issue 12

2021-11-22 Thread Dennis Gläser

Hi Ana,

do you have UMFPack (a direct solver) installed on your system? As far
as I know, it is used under the hood for the coarse-grid solve in the
AMGSolver, but there is an iterative fallback in case it is not found.

On Ubuntu, you can install it with 'apt install libsuitesparse-dev'.

After installing, you may need to remove your cmake caches and rerun 
dunecontrol.
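
A possible sequence (just a sketch; the exact build directory and opts file
depend on your setup):

sudo apt install libsuitesparse-dev
rm -rf dumux/build-cmake    # or delete the CMake caches in your build folders
./dune-common/bin/dunecontrol --opts=dumux/cmake.opts all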


Let me know if this helps!

Best wishes,
Dennis


Re: [DuMux] Dispersion in radial flow

2021-11-22 Thread Timo Koch
Hi Dmitry,

Dumux differentiates between diffusion (molecular diffusion) and dispersion (a 
scale effect).

(Dispersion used to be implemented in DuMux 2, but it had some problems and
was not reimplemented after 3.0, since no one had an immediate use case.)

However, you are asking at the right time, because
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/merge_requests/2921
is about to be merged, which adds dispersion back in. Maybe you can have a look
at whether it would be suitable to model what you are looking for.

Best wishes,
Timo



[DuMux] Dispersion in radial flow

2021-11-22 Thread Dmitry Pavlov

Hello,

It is well known that DuMux models (e.g. 1pnc) include a diffusion term. If
the velocity of the flow is constant, diffusion and dispersion can be modeled
simultaneously via this one term. But in the case of radial flow, the velocity
decreases with distance from the source, so the dispersion near the source is
bigger than far from the source.
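
(For background: the standard Bear/Scheidegger form makes this quantitative. I
don't know whether this is exactly what any DuMux implementation uses, but the
mechanical dispersion tensor is usually written as

    D_mech = alpha_T |v| I + (alpha_L - alpha_T) v v^T / |v|,

with longitudinal/transverse dispersivities alpha_L and alpha_T, identity I,
and Darcy velocity v. Every term scales with |v|, and |v| falls off like 1/r in
radial flow from a line source, so dispersion indeed decays away from the
source.)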


Is there a way for the user to include, e.g., the pressure gradient in the
calculation of the diffusion term in 1pnc or 2pnc?


Best regards,

Dmitry

