Hi Ali,

the AMG uses the sequential SuperLU as a coarse solver, so that is the version you should install. All degrees of freedom on the coarsest level are gathered on a single process. If you want to go really massively parallel (well over 100 processes), you will want to avoid that; otherwise it is fine.
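For reference, wiring the sequential SuperLU into a DUNE 2.2-era autotools build can look roughly like the sketch below. The version number, paths, and opts-file name are examples only, not taken from this thread, and the exact configure flag may differ between DUNE releases:

```shell
# Sketch, assuming SuperLU 4.x and the classic dunecontrol/autotools build.
# 1. Build the *sequential* SuperLU (no MPI required):
tar xzf superlu_4.3.tar.gz
cd SuperLU_4.3
# adapt make.inc to your compiler and BLAS, then:
make

# 2. In the opts file passed to dunecontrol, point configure at the
#    SuperLU directory, e.g.:
#    CONFIGURE_FLAGS="... --with-superlu=/path/to/SuperLU_4.3"

# 3. Rebuild the DUNE/DuMuX modules so dune-istl picks SuperLU up:
dunecontrol --opts=my.opts all
```

After a successful rebuild, the "AMG backend used without SuperLU" warning mentioned below should disappear.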
Thank you for the good wishes. All the best to you, too!

Kind regards
Bernd

On Tue, 17 Dec 2013 18:20:34 +0000 <[email protected]> wrote:
>Dear DuMux,
>
>Thanks a lot for your quick answer.
>I would like to ask for your confirmation regarding SuperLU.
>I receive a warning each time I use the AMG backend, because it is used without
>SuperLU.
>We would like to install SuperLU, but we found three versions (sequential,
>shared-memory parallel, and distributed-memory parallel).
>Since we perform parallel computations, we assume that we should use SuperLU_DIST.
>Can you please confirm our assumption? Is SuperLU_DIST compatible with DuMux?
>
>We wish you and the whole DuMux team a great, prosperous, blissful, healthy,
>energetic, and extremely happy new year 2014.
>
>Best regards
>Ali NOWAMOOZ
>
>
>From: Bernd Flemisch [mailto:[email protected]]
>Sent: Monday, 16 December 2013 02:03
>To: [email protected]; NOWAMOOZ Ali
>Subject: Re: [DuMuX] Error using Alu2dGrid and AMG solver
>
>Hi Ali,
>
>the AMGBackend only works for parallel grids, namely UG, Yasp, and 3d-Alu. It
>does not matter whether you actually run it sequentially or in parallel.
>
>For all sequential runs, and in particular for all sequential grids, you can
>use SeqAMGBackend by changing the LinearSolver property in your problem file.
>
>Kind regards
>Bernd
>
>
>[email protected]<mailto:[email protected]> wrote:
>Dear DuMux,
>
>In test 1p with AMG, I tried to use a 2D ALUGrid (typedef Dune::ALUGrid<2, 2,
>Dune::cube, Dune::conforming> type;) with an AMG backend linear solver, but I
>received an error message (please see below or the attached file for the
>complete terminal output). I also had the same problem with typedef
>Dune::SGrid<2, 2> type;. I did not receive this error when I used typedef
>Dune::YaspGrid<2> type; or in 3D; it occurs only with 2D ALUGrid and SGrid.
>
>Could you please help me with this problem?
>
>Best regards
>Ali NOWAMOOZ
>
>test_box1pwithamg.cc:69:58: required from here
>/rap/jda-332-aa/apps/dumux-2.4.0-2/dune-pdelab-1.1.0/dune/pdelab/backend/novlpistlsolverbackend.hh:1324:77: error: no matching function for call to ‘Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>::OwnerOverlapCopyCommunication(const CollectiveCommunication&, Dune::SolverCategory::Category)’
>/rap/jda-332-aa/apps/dumux-2.4.0-2/dune-pdelab-1.1.0/dune/pdelab/backend/novlpistlsolverbackend.hh:1324:77: note: candidates are:
>In file included from /rap/jda-332-aa/apps/dumux-2.4.0-2/dune-pdelab-1.1.0/dune/pdelab/backend/novlpistlsolverbackend.hh:15:0,
>                 from ../../../dumux/linear/amgbackend.hh:30,
>                 from 1ptestproblem.hh:47,
>                 from test_box1pwithamg.cc:31:
>/rap/jda-332-aa/apps/dumux-2.4.0-2/dune-istl-2.2.1/dune/istl/owneroverlapcopy.hh:668:5: note: Dune::OwnerOverlapCopyCommunication<T1, T2>::OwnerOverlapCopyCommunication(const Dune::OwnerOverlapCopyCommunication<T1, T2>&) [with GlobalIdType = Dune::bigunsignedint<96>; LocalIdType = int; Dune::OwnerOverlapCopyCommunication<T1, T2> = Dune::OwnerOverlapCopyCommunication<Dune::bigunsignedint<96>, int>]
>/rap/jda-332-aa/apps/dumux-2.4.0-2/dune-istl-2.2.1/dune/istl/owneroverlapcopy.hh:668:5: note: candidate expects 1 argument, 2 provided
>/rap/jda-332-aa/apps/dumux-2.4.0-2/dune-istl-2.2.1/dune/istl/owneroverlapcopy.hh:589:2: note: Dune::OwnerOverlapCopyCommunication<T1, T2>::OwnerOverlapCopyCommunication(const Dune::IndexInfoFromGrid<GlobalIdType, LocalIdType>&, MPI_Comm, Dune::SolverCategory::Category, bool) [with GlobalIdType = Dune::bigunsignedint<96>; LocalIdType = int; MPI_Comm = ompi_communicator_t*]
>/rap/jda-332-aa/apps/dumux-2.4.0-2/dune-istl-2.2.1/dune/istl/owneroverlapcopy.hh:589:2: note: no known conversion for argument 1 from ‘const CollectiveCommunication {aka const Dune::CollectiveCommunication<Dune::ALU2dGrid<2, 2, (ALU2DGrid::ElementType)1u> >}’ to ‘const Dune::IndexInfoFromGrid<Dune::bigunsignedint<96>, int>&’
>/rap/jda-332-aa/apps/dumux-2.4.0-2/dune-istl-2.2.1/dune/istl/owneroverlapcopy.hh:575:5: note: Dune::OwnerOverlapCopyCommunication<T1, T2>::OwnerOverlapCopyCommunication(Dune::SolverCategory::Category) [with GlobalIdType = Dune::bigunsignedint<96>; LocalIdType = int]
>/rap/jda-332-aa/apps/dumux-2.4.0-2/dune-istl-2.2.1/dune/istl/owneroverlapcopy.hh:575:5: note: candidate expects 1 argument, 2 provided
>/rap/jda-332-aa/apps/dumux-2.4.0-2/dune-istl-2.2.1/dune/istl/owneroverlapcopy.hh:557:5: note: Dune::OwnerOverlapCopyCommunication<T1, T2>::OwnerOverlapCopyCommunication(MPI_Comm, Dune::SolverCategory::Category, bool) [with GlobalIdType = Dune::bigunsignedint<96>; LocalIdType = int; MPI_Comm = ompi_communicator_t*]
>/rap/jda-332-aa/apps/dumux-2.4.0-2/dune-istl-2.2.1/dune/istl/owneroverlapcopy.hh:557:5: note: no known conversion for argument 1 from ‘const CollectiveCommunication {aka const Dune::CollectiveCommunication<Dune::ALU2dGrid<2, 2, (ALU2DGrid::ElementType)1u> >}’ to ‘MPI_Comm {aka ompi_communicator_t*}’
>make: *** [test_box1pwithamg.o] Error 1

_______________________________________________________________
!!!! CMWR 2014: 10th - 13th June 2014 in Stuttgart !!!!
Please visit www.cmwr14.de
_______________________________________________________________
Bernd Flemisch                 phone: +49 711 685 69162
IWS, Universitaet Stuttgart    fax:   +49 711 685 67020
Pfaffenwaldring 61             email: [email protected]
D-70569 Stuttgart              url:   www.hydrosys.uni-stuttgart.de
_______________________________________________________________

_______________________________________________
Dumux mailing list
[email protected]
https://listserv.uni-stuttgart.de/mailman/listinfo/dumux
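Bernd's suggested fix for the compile error above, switching the LinearSolver property to SeqAMGBackend in the problem file, would look roughly like this in the DuMuX 2.x property system. This is a sketch: the type-tag name OnePTestProblem and the exact header layout are assumptions based on the 1p test mentioned in the thread, and macro spellings can differ between DuMuX releases:

```cpp
// Sketch for 1ptestproblem.hh (DuMuX 2.x property system).
#include <dumux/linear/amgbackend.hh>

namespace Dumux {
namespace Properties {

// Replace the parallel AMGBackend with the sequential variant, so that
// sequential grids such as 2D ALUGrid or SGrid compile and run:
SET_TYPE_PROP(OnePTestProblem, LinearSolver,
              Dumux::SeqAMGBackend<TypeTag>);

} // namespace Properties
} // namespace Dumux
```

This sidesteps the OwnerOverlapCopyCommunication constructor mismatch in the error log, which only arises when the parallel backend is instantiated for a grid without parallel communication support.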
