Re: [DuMuX] Memory Problem with AMG in parallel computation in version 2.10

2017-02-08 Thread Etienne Ahusborde

Hi Bernd,

I reply instead of my colleague Mustapha.

1- Yes, it is the result of a sequential run.

2- Yes, it is the result of the unmodified CO2 test 
(test/porousmediumflow/co2/implicit/test_ccco2.cc). We have only increased 
the duration of the simulation (Tend = 1e6 s to 1e7 s) to get a better view 
of the problem.


The increase in memory consumption is easily visible with the top command.
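
For reference, here is a minimal Linux-only sketch that logs the same 
number from inside the program (printRSS is a made-up helper, not part of 
the test; one would call it once per time step):

#include <fstream>
#include <iostream>
#include <string>

// Print the resident set size (VmRSS) of the current process by parsing
// /proc/self/status (Linux-only). Called once per time step, this yields
// the same numbers one would otherwise watch in top.
void printRSS(int timeStepIndex)
{
    std::ifstream status("/proc/self/status");
    for (std::string line; std::getline(status, line); )
        if (line.rfind("VmRSS:", 0) == 0) // line starts with "VmRSS:"
            std::cout << "time step " << timeStepIndex << ": " << line << '\n';
}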

Best regards

Etienne and Mustapha

On 08/02/2017 at 13:06, Bernd Flemisch wrote:

Hi Mustapha,

as Christoph says, we should try to make the example as minimal as 
possible.


On 02/08/2017 09:56 AM, Mustapha El Ossmani wrote:


We have run a simple test (the CO2 model in DuMuX) for some iterations with 
valgrind, and we obtained the messages that can be found in the enclosed 
file, which confirms that there is a loss of memory.



Is this the result of a sequential run? If not, can you also post that?
Is it the result of an unmodified test from 
test/porousmediumflow/co2/implicit? If not, do you see the same problem 
for such an unmodified test? If again no, can you share the 
modifications you made that trigger the behavior?


Kind regards
Bernd


Have you ever encountered this issue?

Best regards

M. El Ossmani


On 07/02/2017 at 06:22, Christoph Grüninger wrote:

Hi Mustapha,
I have never seen such problems, but I am no solver expert. What you 
can do:

* Do you see this problem only in parallel or also in sequential code? 
Only with AMG or also with GMRES and ILU?
* Reduce the problem as much as you can. The best would be a minimal 
piece of software that only depends on dune-istl, reads in your matrix, 
and repeatedly solves your linear system with the observed undesired 
memory consumption; a sketch of such a reproducer follows after this 
list. I am not sure whether this is possible and whether such a minimal 
setup would still exhibit the problem at all. When you reduce your 
current problem, the problem might vanish, too. This can help to find 
the cause of your issue.
* Analyze the problem with Valgrind or AddressSanitizer. Having the 
problem reduced might be beneficial.
* Turning on all compiler warnings and carefully evaluating them might 
help. There will be false positives, but it can help.
* Maybe it's worth repeating your question on the Dune mailing list 
(d...@dune-project.org), as there are more users and developers of 
dune-istl there.
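
A minimal sketch of such a dune-istl-only reproducer, assuming the matrix 
has been dumped in MatrixMarket format (the file name matrix.mm, the 
scalar block type, and all solver parameters are placeholder assumptions, 
not taken from the actual test):

#include <dune/common/fmatrix.hh>
#include <dune/common/fvector.hh>
#include <dune/istl/bcrsmatrix.hh>
#include <dune/istl/bvector.hh>
#include <dune/istl/matrixmarket.hh>
#include <dune/istl/operators.hh>
#include <dune/istl/preconditioners.hh>
#include <dune/istl/solvers.hh>
#include <dune/istl/paamg/amg.hh>

int main()
{
    // Scalar blocks keep the sketch small; the CO2 test uses larger blocks.
    typedef Dune::BCRSMatrix<Dune::FieldMatrix<double, 1, 1> > Matrix;
    typedef Dune::BlockVector<Dune::FieldVector<double, 1> > Vector;

    // Matrix dumped from the failing run, e.g. via Dune::storeMatrixMarket.
    Matrix A;
    Dune::loadMatrixMarket(A, "matrix.mm"); // placeholder file name

    Vector b(A.N()), x(A.M());
    b = 1.0;

    typedef Dune::MatrixAdapter<Matrix, Vector, Vector> Operator;
    typedef Dune::SeqILU0<Matrix, Vector, Vector> Smoother;
    typedef Dune::Amg::CoarsenCriterion<
        Dune::Amg::SymmetricCriterion<Matrix, Dune::Amg::FirstDiagonal> > Criterion;

    Operator op(A);
    Criterion criterion(15, 2000); // placeholder max level and coarsen target
    Dune::Amg::SmootherTraits<Smoother>::Arguments smootherArgs;
    smootherArgs.iterations = 1;

    // Rebuild the AMG hierarchy and solve over and over, as the time loop
    // of the simulation would, and watch the process size for steady growth.
    for (int step = 0; step < 100; ++step)
    {
        Dune::Amg::AMG<Operator, Vector, Smoother> amg(op, criterion, smootherArgs);
        Dune::BiCGSTABSolver<Vector> solver(op, amg, 1e-8, 200, 0);
        Dune::InverseOperatorResult result;
        x = 0.0;
        Vector rhs(b); // the solver overwrites the right-hand side
        solver.apply(x, rhs, result);
    }
    return 0;
}

Running this under valgrind --leak-check=full, or simply watching the 
process in top across the loop iterations, should show whether the growth 
is reproducible outside DuMuX.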


Bye,
Christoph

On 03.02.2017 at 11:14, Mustapha El Ossmani wrote:




Dear DuMu^X developers,

We are performing parallel computations with the AMG solver. Due to 
convergence problems in Newton's method, in amgproperties.hh we changed 
the preconditioner from Dune::SeqSSOR to Dune::SeqILU0:


typedef Dune::BlockPreconditioner<VType, VType, Comm,
    Dune::SeqILU0<MType, VType, VType> > Smoother;
// typedef Dune::BlockPreconditioner<VType, VType, Comm,
//     Dune::SeqSSOR<MType, VType, VType> > Smoother;


It seems that there is some memory loss with the ILU0 preconditioner. 
Indeed, we can see that the memory consumption is continually 
increasing, until the computation stops with the following error 
message:


Solve: M deltax^k = r
slurmstepd: Job 936902 exceeded memory limit (41146808 > 41058304), being killed
slurmstepd: Exceeded job memory limit

We notice that this problem does not occur with SSOR as the preconditioner.


Have you ever encountered this issue?

Best regards

M. El Ossmani

University of Pau







--
___

Bernd Flemisch                 phone: +49 711 685 69162
IWS, Universität Stuttgart     fax:   +49 711 685 60430
Pfaffenwaldring 61             email: be...@iws.uni-stuttgart.de
D-70569 Stuttgart              url:   www.hydrosys.uni-stuttgart.de
___



Re: [DuMuX] Memory Problem with AMG in parallel computation in version 2.10

2017-02-08 Thread Timo Koch

Hi Mustapha,


if I try running the pamgtest (dune-istl/dune/istl/paamg/test/pamgtest) 
with valgrind, I get very similar memory leaks. I can't confirm, though, 
that they differ depending on the smoother (ILU0, SSOR) of the AMG 
preconditioner. The test is a good starting point for a minimal example.



If you can confirm the leaks there, please also report them on the Dune 
mailing list (d...@dune-project.org).

Note that some of the leaks could also be false positives from valgrind 
(http://valgrind.org/docs/manual/mc-manual.html#mc-manual.mpiwrap) when 
debugging MPI-parallel programs.

Kind regards
Timo

On 08.02.2017 13:06, Bernd Flemisch wrote:
[...]



--

Timo Koch                      phone: +49 711 685 64676
IWS, Universität Stuttgart     fax:   +49 711 685 60430
Pfaffenwaldring 61             email: timo.k...@iws.uni-stuttgart.de
D-70569 Stuttgart

Re: [DuMuX] Memory Problem with AMG in parallel computation in version 2.10

2017-02-08 Thread Mustapha El Ossmani

Hi Martin,

Thank you for your reply,

Yes, I tried this, but I still have the same problem.


Cheers,

Mustapha


On 08/02/2017 at 10:20, Martin Schneider wrote:


Hi Mustapha,

did you also try changing the CoarsenCriterion? You could try to 
replace "Dune::Amg::SymmetricCriterion" with 
"Dune::Amg::UnSymmetricCriterion". There might be a reason why a 
symmetric preconditioner (SSOR) is used in the case of a symmetric 
CoarsenCriterion.

Regards,
Martin

On 02/08/2017 09:56 AM, Mustapha El Ossmani wrote:
[...]


Re: [DuMuX] Memory Problem with AMG in parallel computation in version 2.10

2017-02-08 Thread Martin Schneider

Hi Mustapha,

did you also try changing the CoarsenCriterion? You could try to 
replace "Dune::Amg::SymmetricCriterion" with 
"Dune::Amg::UnSymmetricCriterion". There might be a reason why a 
symmetric preconditioner (SSOR) is used in the case of a symmetric 
CoarsenCriterion.
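
Concretely, the swap might look like this in amgproperties.hh (a sketch 
only: MType is written out here with an assumed block size of 3, whereas 
the real matrix type comes from the DuMuX property system):

#include <dune/common/fmatrix.hh>
#include <dune/istl/bcrsmatrix.hh>
#include <dune/istl/paamg/amg.hh>

// Assumed stand-in for the matrix type used in amgproperties.hh.
typedef Dune::BCRSMatrix<Dune::FieldMatrix<double, 3, 3> > MType;

// Unsymmetric coarsening criterion, matching the unsymmetric ILU0 smoother:
typedef Dune::Amg::CoarsenCriterion<
    Dune::Amg::UnSymmetricCriterion<MType, Dune::Amg::FirstDiagonal> > Criterion;
// instead of the symmetric variant used together with SSOR:
// typedef Dune::Amg::CoarsenCriterion<
//     Dune::Amg::SymmetricCriterion<MType, Dune::Amg::FirstDiagonal> > Criterion;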

Regards,
Martin

On 02/08/2017 09:56 AM, Mustapha El Ossmani wrote:
[...]


Re: [DuMuX] Memory Problem with AMG in parallel computation in version 2.10

2017-02-08 Thread Mustapha El Ossmani

Hi Christoph,

Thank you for your reply. Here are some clarifications:

* Do you see this problem only in parallel or also in sequential code? 
Only with AMG or also with GMRES and ILU?

We have this problem in both cases (sequential and parallel code), 
*only* when we compile with *MPI*, and only with the *AMG* solver and 
Dune::*SeqILU0* as the preconditioner. Conversely, this problem does 
not occur with SSOR as the preconditioner.


* Analyze the problem with Valgrind or AddressSanitizer. Having the 
problem reduced might be beneficial.


We have run a simple test (the CO2 model in DuMuX) for some iterations 
with valgrind, and we obtained the messages that can be found in the 
enclosed file, which confirms that there is a loss of memory.


Have you ever encountered this issue?

Best regards

M. El Ossmani


On 07/02/2017 at 06:22, Christoph Grüninger wrote:
[...]

==8173== Memcheck, a memory error detector
==8173== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==8173== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==8173== Command: ./test_ccco2
==8173== Parent PID: 22281
==8173== 
==8173== 
==8173== HEAP SUMMARY:
==8173==     in use at exit: 6,160,374 bytes in 203 blocks
==8173==   total heap usage: 2,040,274 allocs, 2,040,071 frees, 455,504,626 bytes allocated
==8173==
==8173== 1 bytes in 1 blocks are definitely lost in loss record 1 of 169
==8173==    at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==8173==    by 0xAA9F72C: ???
==8173==    by 0x70AECC6: opal_db_base_store (in /usr/lib/openmpi/lib/libopen-pal.so.13.0.2)
==8173==    by 0x4E7D702: ompi_modex_send_string (in /usr/lib/openmpi/lib/libmpi.so.12.0.2)
==8173==    by 0x4E7AAED: ompi_mpi_init (in /usr/lib/openmpi/lib/libmpi.so.12.0.2)
==8173==    by 0x4E9954C: PMPI_Init (in /usr/lib/openmpi/lib/libmpi.so.12.0.2)
==8173==    by 0x9D8C68: Dune::MPIHelper::MPIHelper(int&, char**&) (mpihelper.hh:247)
==8173==    by 0x9D8B5F: Dune::MPIHelper::instance(int&, char**&) (mpihelper.hh:220)
==8173==    by 0xA1645E: int Dumux::start_(int, char**, void (*)(char const*, std::__cxx11::basic_string const&)) (start.hh:310)
==8173==    by 0xA0BF1E: int Dumux::start(int, char**, void (*)(char const*, std::__cxx11::basic_string const&)) (start.hh:510)
==8173==    by 0x9D0AA7: main (test_ccco2.cc:79)
==8173==
==8173== 21 bytes in 1 blocks are definitely lost in loss record 4 of 169
==8173==

[DuMuX] Memory Problem with AMG in parallel computation in version 2.10

2017-02-03 Thread Mustapha El Ossmani


Dear DuMu^X developers,

We are performing parallel computations with the AMG solver. Due to 
convergence problems in Newton's method, in amgproperties.hh we changed 
the preconditioner from Dune::SeqSSOR to Dune::SeqILU0:


typedef Dune::BlockPreconditioner<VType, VType, Comm,
    Dune::SeqILU0<MType, VType, VType> > Smoother;
// typedef Dune::BlockPreconditioner<VType, VType, Comm,
//     Dune::SeqSSOR<MType, VType, VType> > Smoother;
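
For context, this is roughly how such a smoother typedef plugs into the 
parallel AMG types of dune-istl (a sketch: the type abbreviations MType, 
VType, Comm and the block size of 3 are assumptions, since the exact 
definitions in DuMuX 2.10 come from the property system; an MPI-enabled 
Dune build is required):

#include <dune/common/fmatrix.hh>
#include <dune/common/fvector.hh>
#include <dune/istl/bcrsmatrix.hh>
#include <dune/istl/bvector.hh>
#include <dune/istl/owneroverlapcopy.hh>
#include <dune/istl/schwarz.hh>
#include <dune/istl/preconditioners.hh>
#include <dune/istl/paamg/amg.hh>

typedef Dune::BCRSMatrix<Dune::FieldMatrix<double, 3, 3> > MType;
typedef Dune::BlockVector<Dune::FieldVector<double, 3> > VType;
typedef Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int> Comm;
typedef Dune::OverlappingSchwarzOperator<MType, VType, VType, Comm> LinearOperator;
typedef Dune::BlockPreconditioner<VType, VType, Comm,
    Dune::SeqILU0<MType, VType, VType> > Smoother;

// The smoother is a template argument of the parallel AMG preconditioner,
// which is then constructed per linear solve roughly as
// AMGType amg(linearOperator, criterion, smootherArgs, comm);
typedef Dune::Amg::AMG<LinearOperator, VType, Smoother, Comm> AMGType;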


It seems that there is some memory loss with the ILU0 preconditioner. 
Indeed, we can see that the memory consumption is continually 
increasing, until the computation stops with the following error 
message:

Solve: M deltax^k = r
slurmstepd: Job 936902 exceeded memory limit (41146808 > 41058304), being killed
slurmstepd: Exceeded job memory limit

We notice that this problem does not occur with SSOR as the preconditioner.

Have you ever encountered this issue?

Best regards

M. El Ossmani

University of Pau

___
Dumux mailing list
Dumux@listserv.uni-stuttgart.de
https://listserv.uni-stuttgart.de/mailman/listinfo/dumux