On Tue, 29 Mar 2022, Junchao Zhang wrote:

> On Tue, Mar 29, 2022 at 4:59 PM Satish Balay via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
>
> > We do have such builds in CI - don't know why CI didn't catch it.
> >
> > $ grep with-64-bit-indices=1 *.py
> > arch-ci-freebsd-cxx-cmplx-64idx-dbg.py: '--with-64-bit-indices=1',
> > arch-ci-linux-cuda-double-64idx.py: '--with-64-bit-indices=1',
> > arch-ci-linux-cxx-cmplx-pkgs-64idx.py: '--with-64-bit-indices=1',
> > arch-ci-linux-pkgs-64idx.py: '--with-64-bit-indices=1',
> > arch-ci-opensolaris-misc.py: '--with-64-bit-indices=1',
>
> It implies these CI jobs do not have a recent MPI (like MPICH 4.x) that
> supports MPI-4 large counts? It looks like we need to add one.
And a Mac. I can't reproduce on linux [even with latest clang].

Satish

> > Satish
> >
> > On Tue, 29 Mar 2022, Fande Kong wrote:
> >
> > > OK, I attached the configure log here so that we have more information.
> > >
> > > I feel like we should do
> > >
> > > typedef MPI_Count PetscSFCount
> > >
> > > Do we have the target of 64-bit-indices with C++ in CI? I was
> > > surprised that I am the only guy who saw this issue.
> > >
> > > Thanks,
> > >
> > > Fande
> > >
> > > On Tue, Mar 29, 2022 at 2:50 PM Satish Balay <ba...@mcs.anl.gov> wrote:
> > >
> > > > What MPI is this? How to reproduce?
> > > >
> > > > Perhaps it's best if you can send the relevant logs.
> > > >
> > > > The likely trigger code in sfneighbor.c:
> > > >
> > > > >>>>
> > > > /* A convenience temporary type */
> > > > #if defined(PETSC_HAVE_MPI_LARGE_COUNT) && defined(PETSC_USE_64BIT_INDICES)
> > > > typedef PetscInt PetscSFCount;
> > > > #else
> > > > typedef PetscMPIInt PetscSFCount;
> > > > #endif
> > > >
> > > > This change is at https://gitlab.com/petsc/petsc/-/commit/c87b50c4628
> > > >
> > > > Hm - if MPI supported LARGE_COUNT - perhaps it also provides a type that
> > > > should go with it which we could use - instead of PetscInt?
> > > >
> > > > Perhaps it should be: "typedef long PetscSFCount;"
> > > >
> > > > Satish
> > > >
> > > > On Tue, 29 Mar 2022, Fande Kong wrote:
> > > >
> > > > > It seems correct according to
> > > > >
> > > > > #define PETSC_SIZEOF_LONG 8
> > > > >
> > > > > #define PETSC_SIZEOF_LONG_LONG 8
> > > > >
> > > > > Cannot convert from "non-constant" to "constant"?
> > > > >
> > > > > Fande
> > > > >
> > > > > On Tue, Mar 29, 2022 at 2:22 PM Fande Kong <fdkong...@gmail.com> wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > When building PETSc with 64-bit indices, it seems that PetscSFCount
> > > > > > is a 64-bit integer while MPI_Count is still 32 bit.
> > > > > >
> > > > > > typedef long MPI_Count;
> > > > > >
> > > > > > typedef PetscInt PetscSFCount;
> > > > > >
> > > > > > I had the following errors. Do I have a bad MPI?
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Fande
> > > > > >
> > > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:171:18:
> > > > > > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > > > >   PetscCallMPI(MPIU_Ineighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,distcomm,req));
> > > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> > > > > > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> > > > > >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j) MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> > > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32:
> > > > > > note: expanded from macro 'PetscCallMPI'
> > > > > >   PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5:
> > > > > > note: candidate function not viable: no known conversion from
> > > > > > 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *' (aka
> > > > > > 'const long *') for 2nd argument
> > > > > >   int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > > >
> > > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:195:18:
> > > > > > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > > > >   PetscCallMPI(MPIU_Ineighbor_alltoallv(leafbuf,dat->leafcounts,dat->leafdispls,unit,rootbuf,dat->rootcounts,dat->rootdispls,unit,distcomm,req));
> > > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> > > > > > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> > > > > >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j) MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> > > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32:
> > > > > > note: expanded from macro 'PetscCallMPI'
> > > > > >   PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5:
> > > > > > note: candidate function not viable: no known conversion from
> > > > > > 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *' (aka
> > > > > > 'const long *') for 2nd argument
> > > > > >   int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],
> > > > > >
> > > > > > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:240:18:
> > > > > > error: no matching function for call to 'MPI_Neighbor_alltoallv_c'
> > > > > >   PetscCallMPI(MPIU_Neighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,comm));
> > > > > > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:96:79:
> > > > > > note: expanded from macro 'MPIU_Neighbor_alltoallv'
> > > > > >   #define MPIU_Neighbor_alltoallv(a,b,c,d,e,f,g,h,i) MPI_Neighbor_alltoallv_c(a,b,c,d,e,f,g,h,i)
> > > > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32:
> > > > > > note: expanded from macro 'PetscCallMPI'
> > > > > >   PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:1001:5:
> > > > > > note: candidate function not viable: no known conversion from
> > > > > > 'PetscSFCount *' (aka 'long long *') to 'const MPI_Count *' (aka
> > > > > > 'const long *') for 2nd argument
> > > > > >   int MPI_Neighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[],