Can you please post the output of "make VERBOSE=1 test_box2p" after performing a "make clean"?

In any case, you have to use the AMGBackend as the linear solver. This is the default, but in 2.12 the linear solver is set to ILU0BiCGSTABBackend in lensproblem.hh, which isn't parallel. So you have to replace the line

    SET_TYPE_PROP(LensBoxProblem, LinearSolver, ILU0BiCGSTABBackend<TypeTag>);

by

    SET_TYPE_PROP(LensBoxProblem, LinearSolver, AMGBackend<TypeTag>);

and #include <dumux/linear/amgbackend.hh> in that file.
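(For reference, a minimal sketch of how the two changes fit together in lensproblem.hh; the two SET_TYPE_PROP lines and the header include are quoted from the message above, while the placement within the file and the comments are only illustrative:)

    // lensproblem.hh (DuMuX 2.12) -- surrounding code omitted
    #include <dumux/linear/amgbackend.hh>

    // before (sequential solver, not usable for parallel runs):
    // SET_TYPE_PROP(LensBoxProblem, LinearSolver, ILU0BiCGSTABBackend<TypeTag>);

    // after (parallel AMG-preconditioned solver):
    SET_TYPE_PROP(LensBoxProblem, LinearSolver, AMGBackend<TypeTag>);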
Anyway, that's not the problem that you are encountering here. Apparently, MPI is detected properly, but you are not employing it.

Kind regards
Bernd

--
_______________________________________________________________
Bernd Flemisch                    phone: +49 711 685 69162
IWS, Universität Stuttgart        fax:   +49 711 685 60430
Pfaffenwaldring 61                email: [email protected]
D-70569 Stuttgart                 url:   www.hydrosys.uni-stuttgart.de
_______________________________________________________________

________________________________
From: lc <[email protected]>
Sent: Wednesday, 30 January 2019 13:19:47
To: Flemisch, Bernd; DuMuX User Forum; Koch, Timo
Subject: Re: AW: [DuMuX] set_singular_limit and parallel execution

Here is the log.

Thank you,
Lorenzo

On 30.01.2019 14:53, Bernd Flemisch wrote:

Are you sure that dunecontrol finds MPI? Can you post the output of dunecontrol?

Bernd

On 01/30/2019 12:46 PM, lc wrote:

Good morning,

I updated the virtual machine, as you may see from the attached lscpu_new file. Then I tried to run both the sequential 2p test_impesadaptive and the implicit 2p test_box2p (input file attached) with 1, 4 and 8 cores, straight from the DuMuX 2.12 test suite without any other modification (just the grid in the input file). I still get the same behaviour:

    1 core  -> 302 s
    4 cores -> 337 s (output attached)
    8 cores -> 470 s

It seems that, as you noted, these processes do not communicate with each other. Do you agree? These test cases should be fully parallel, so what's wrong?

Thank you,

P.S. I also have other pending questions ...

Kind regards,
Lorenzo

On 24.01.2019 14:42, Flemisch, Bernd wrote:

Timo is of course right. Nevertheless, your parallel output shows that the program isn't executed in a distributed way. It simply executes the full sequential code on four processes that don't communicate with each other. The four processes occupy two physical cores, resulting in a degradation in performance.

The sequential (in the sense of temporal discretization and coupling) 2p2c model is not parallel. The fully implicit models are parallel, and the sequential 2p model is also parallel. Let us know if you experience the same problems with a truly parallel model. If you do, please consider Timo's comment and choose a number of processes that is less than or equal to the number of physical cores on your machine.

Kind regards
Bernd
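(A quick way to check Bernd's point that the four processes don't communicate: DuMuX programs initialize MPI through Dune::MPIHelper, and if MPI was not found at configure time, or the binary simply runs as independent sequential copies, each process reports itself as rank 0 of 1. A minimal self-contained sketch, assuming only a standard Dune installation:)

    #include <iostream>
    #include <dune/common/parallel/mpihelper.hh>

    int main(int argc, char** argv)
    {
        // initializes MPI if Dune was configured with it;
        // otherwise a sequential dummy helper is used
        auto& mpiHelper = Dune::MPIHelper::instance(argc, argv);

        // under "mpirun -np 4" a truly parallel build prints size 4;
        // four non-communicating sequential runs each print size 1
        std::cout << "rank " << mpiHelper.rank()
                  << " of " << mpiHelper.size() << " process(es)" << std::endl;
        return 0;
    }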
________________________________
From: Dumux <[email protected]> on behalf of Timo Koch <[email protected]>
Sent: Thursday, 24 January 2019 11:01:56
To: DuMuX User Forum; lc
Subject: Re: [DuMuX] set_singular_limit and parallel execution

Hi Lorenzo,

you can't scale well to 4 cores because your CPU only has two cores. The two additional ones are only hyperthreads. Please repeat the same numerical experiment on a machine with more cores.

Timo

On 24.01.19 10:37, lc wrote:

On 18.01.2019 21:52, Flemisch, Bernd wrote:

Can you please post the complete output for the run with 4 cores? Plus the initial vtk files? And, pretty please with sugar on top, the number of physical cores on your machine?

Good morning,

I enclose the requested files for the sequential (1 processor) and parallel (4 processors) simulations, plus some details on my machine.

Thanks for the help,
Lorenzo

--
_______________________________________________________________
Timo Koch                         phone: +49 711 685 64676
IWS, Universität Stuttgart        fax:   +49 711 685 60430
Pfaffenwaldring 61                email: [email protected]
D-70569 Stuttgart                 url:   www.hydrosys.uni-stuttgart.de
_______________________________________________________________
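(Timo's remark about hyperthreads can also be seen from within C++: std::thread::hardware_concurrency() counts logical processors, so with hyperthreading enabled it reports twice the number of physical cores, i.e. 4 on the machine discussed above. An illustrative sketch; the halving in the comment assumes two hardware threads per core:)

    #include <iostream>
    #include <thread>

    int main()
    {
        // number of *logical* processors; with hyperthreading enabled
        // this is twice the number of physical cores
        const unsigned logical = std::thread::hardware_concurrency();
        std::cout << logical << " logical processors reported\n";
        // rule of thumb from the thread above: use at most one MPI
        // process per *physical* core, i.e. logical / 2 here
        return 0;
    }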
_______________________________________________
Dumux mailing list
[email protected]
https://listserv.uni-stuttgart.de/mailman/listinfo/dumux
