On Fri, 26 Feb 2010, Satish Balay wrote:

> I think eliminating mpi.h, mpi.mod etc is not a good change. It's
> likely to break users' codes. I suspect it breaks FACETS [and perhaps
> Flotran].
>
> With this change we will continue to have MPI symbols [esp. from
> Fortran] in libpetsc.a. So this is not really a clean absorption of
> mpiuni into PETSc. And I don't think we can avoid having these MPI
> symbols in -lpetsc.
>
> The fact that MPI API symbols exist in libpetsc.a makes PETSc not
> really a pure plant. It's still a plant/animal - so the goal of this
> code reorganization is still not met.
I guess one way to make this a pure 'plant' is to have '#define
MPI_Comm_size()' etc in mpiuni/mpif.h as well [but we'll have to somehow
figure out how to do this for the all-caps and all-lowercase usages that
the language allows - a rough sketch of what I mean is at the end of
this mail]. But I'll have to figure out a different model for FACETS to
interact with PETSc [with uni]. Maybe it's possible [don't know as of
now..]

Satish

> [I might not have raised this issue earlier wrt merging mpiuni
> completely into petsc - that's my omission, sorry about that.]
>
> I suspect the following usage in FACETS/UEDGE will break with this
> change:
>
> If you have package-A doing the same type of thing for its sequential
> implementation - it will have its own mpi_comm_size() etc internally.
> With this - mixing package-A with petsc-sequential will cause a
> duplicate symbol conflict.
>
> The way I'm currently resolving this issue is: only one package should
> provide [internal] MPI - the other package is compiled with
> --with-mpi=enabled.
>
> I.e. PETSc compiled with MPIUNI [says MPI_ENABLED]. Package-A is now
> compiled with MPI-ENABLED - but links with PETSc - which provides
> MPIUNI - as MPI.
>
> Also due to this - we added more stuff to MPIUNI to cover all MPI
> symbol usage from FACETS.
>
> So - the previous MPIUNI implementation scheme, even though slightly
> inconsistent, provided a good user interface - with minimal
> maintenance overhead. Matt's scheme makes everything consistent - but
> with extra maintenance overhead.
>
> So I still prefer the previous scheme. If that is not acceptable -
> then I guess we have to go with Matt's scheme [split up mpiuni into a
> different package, add build/make support to it - and deal with all
> the petsc-maint that creates..]. But the current change really breaks
> things - so we should not do this.
>
> Satish
>
> On Thu, 25 Feb 2010, Barry Smith wrote:
>
> > After listening to our discussion of the half-plant/half-animal
> > handling of MPIUni I have adopted the plant :-) model.
> >
> > Background: All the packages in PETSc/packages and
> > BuildSystem/config/packages have the following paradigm except MPI
> > and BlasLapack.
> >
> > 1) PETSc can be built --with-package or --with-package=0
> > 2) If --with-package=0 then there is no PETSC_HAVE_package defined,
> >    no extra libraries to link against and no extra include paths to
> >    look in
> >
> > BlasLapack breaks this paradigm in only one way! You cannot use
> > --with-blaslapack=0
> >
> > MPI breaks the paradigm more completely. You can use --with-mpi=0
> > BUT if you use --with-mpi=0 then PETSC_HAVE_MPI is STILL
> > defined!!!!!! There is an extra library to link against and an extra
> > include path to look in.
> >
> > The two possible solutions to resolve this perverse beast are
> > 1) make mpiuni be a --download-mpiuni replacement for MPI, as we do
> >    --download-c-blaslapack (this would replace the current
> >    --with-mpi=0 support).
> > 2) better supporting --with-mpi=0 without breaking the paradigm
> >
> > I agree with Matt that 1 is the more elegant solution, since it fits
> > the paradigm perfectly. But having --with-mpi=0 is easier and more
> > understandable to the user than explaining about downloading a dummy
> > MPI.
> >
> > Thus I have implemented and pushed 2). When you use --with-mpi=0
> > 1) the macro PETSC_HAVE_MPI is not set
> > 2) the list of include directories is not added to
> > 3) the list of libraries linked against is not added to.
> >
> > I have implemented 2) and 3) by having in petsc.h (Fortran also)
> >
> >   #if defined(PETSC_HAVE_MPI)
> >   #include "mpi.h"
> >   #else
> >   #include "mpiuni/mpi.h"
> >   #endif
> >
> > and putting the dummy MPI stubs always into the PETSc libraries for
> > both single-library and multiple-library PETSc installs.
> >
> > Note: this means one cannot have an #include "mpi.h" in the uni
> > case, which bothered me initially, but then Lisandro convinced me it
> > was not a bad thing.
> >
> > The actual code changes to implement this were, of course, tiny. It
> > is not perfect (only --download-mpiuni would be perfect :-), but it
> > is better than before.
> >
> > Sorry Matt,
> >
> >   Barry
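
PS: here is the kind of thing I have in mind for the 'pure plant' via
#defines - just a rough, untested sketch; the PetscMPIUni_* names are
placeholders I made up, not anything that exists in petsc today:

  /* mpiuni/mpi.h [sketch only] */
  typedef int MPI_Comm;
  #define MPI_COMM_WORLD 1
  #define MPI_SUCCESS    0

  /* map the MPI API onto PETSc-prefixed functions at compile time, so
     libpetsc.a never exports a symbol actually named MPI_Comm_size etc */
  #define MPI_Comm_size(comm,size) PetscMPIUni_Comm_size(comm,size)
  #define MPI_Comm_rank(comm,rank) PetscMPIUni_Comm_rank(comm,rank)

  int PetscMPIUni_Comm_size(MPI_Comm comm,int *size); /* always sets *size = 1 */
  int PetscMPIUni_Comm_rank(MPI_Comm comm,int *rank); /* always sets *rank = 0 */

The C side is straightforward; the headache is mpiuni/mpif.h - cpp is
case-sensitive, so a single #define catches only one of MPI_COMM_SIZE /
mpi_comm_size / MPI_Comm_size, while Fortran source is free to use any
spelling.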
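
And to make the package-A/petsc-sequential duplicate-symbol problem from
my earlier mail concrete - a minimal made-up illustration (pkgA_mpistub.c
is not a real file):

  /* pkgA_mpistub.c - package-A's own serial MPI stub [made-up example] */
  int MPI_Comm_size(int comm, int *size) { *size = 1; return 0; }

A libpetsc.a built with mpiuni also defines MPI_Comm_size(), so a static
link that pulls in both definitions typically fails with a
multiple-definition error from the linker - hence the rule that only one
package can provide the [internal] MPI.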
