Re: [petsc-users] A bad commit affects MOOSE

2018-04-03 Thread Kong, Fande
On Tue, Apr 3, 2018 at 11:29 AM, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:

>
>   Fande,
>
>  The reason for MPI_Comm_dup() and the inner communicator is that this
> communicator is used by hypre and so cannot "just" be a PETSc communicator.
> We cannot have PETSc and hypre using the same communicator since they may
> capture each others messages etc.
>
>   See my pull request that I think should resolve the issue in the
> short term,
>
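For context, a minimal MPI-only sketch (not PETSc's actual code) of the isolation Barry describes: MPI_Comm_dup() gives the external library the same group of processes but a separate communication context, so its messages can never match receives posted on the original communicator.

/* Minimal sketch, plain MPI (assumed illustration, not PETSc code): give an
   external library its own duplicated communicator so its traffic cannot be
   matched by receives posted on the caller's communicator. */
#include <mpi.h>

int main(int argc, char **argv)
{
  MPI_Comm user_comm, lib_comm;

  MPI_Init(&argc, &argv);
  user_comm = MPI_COMM_WORLD;

  /* Same process group, distinct communication context: a message sent on
     lib_comm can never be received on user_comm, and vice versa. */
  MPI_Comm_dup(user_comm, &lib_comm);

  /* ... hand lib_comm to the external package here ... */

  MPI_Comm_free(&lib_comm);
  MPI_Finalize();
  return 0;
}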

Yes, it helps as well.

So the question becomes: we cannot have more than 2000 AMG solvers in one
application because each hypre solver owns its own communicator. Is there no way
to have all AMG solvers share the same HYPRE-side communicator, just like what we
are doing for PETSc objects?
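The limit comes from the MPI implementation running out of communicator context ids (MPICH-based implementations typically allow on the order of 2000 communicators per process). A hypothetical stand-alone test, not taken from this thread, that shows where plain MPI_Comm_dup() stops working:

/* Hypothetical stand-alone test (assumed illustration): MPI communicators are a
   finite resource, so creating one per solver eventually fails. */
#include <mpi.h>
#include <stdio.h>

#define NTRY 5000

int main(int argc, char **argv)
{
  MPI_Comm dups[NTRY];
  int      i, err;

  MPI_Init(&argc, &argv);
  /* Return errors instead of aborting so the failure point is visible. */
  MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
  for (i = 0; i < NTRY; i++) {
    err = MPI_Comm_dup(MPI_COMM_WORLD, &dups[i]);
    if (err != MPI_SUCCESS) {
      printf("MPI_Comm_dup() failed after %d communicators\n", i);
      break;
    }
  }
  while (i-- > 0) MPI_Comm_free(&dups[i]);   /* release what was created */
  MPI_Finalize();
  return 0;
}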


Fande,



>
> Barry
>
>
> > On Apr 3, 2018, at 11:21 AM, Kong, Fande <fande.k...@inl.gov> wrote:
> >
> > Figured out:
> >
> > The reason is that in MatCreate_HYPRE(Mat B) we call MPI_Comm_dup() instead of
> > PetscCommDuplicate(). PetscCommDuplicate() is better because it does not
> > actually create a new communicator if the communicator is already known to
> > PETSc.
> >
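A small sketch of the difference being described (assumed usage of PETSc's public PetscCommDuplicate()/PetscCommDestroy(); error checking abbreviated): repeated calls with the same communicator hand back the cached inner communicator instead of allocating a new one.

/* Sketch only: PetscCommDuplicate() caches an inner communicator as an attribute
   on the user communicator, so a second call on the same comm reuses it rather
   than creating another MPI communicator the way MPI_Comm_dup() would. */
#include <petscsys.h>

int main(int argc, char **argv)
{
  MPI_Comm       inner1, inner2;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = PetscCommDuplicate(PETSC_COMM_WORLD, &inner1, NULL);CHKERRQ(ierr);
  ierr = PetscCommDuplicate(PETSC_COMM_WORLD, &inner2, NULL);CHKERRQ(ierr);
  /* inner1 and inner2 refer to the same cached inner communicator. */
  ierr = PetscCommDestroy(&inner1);CHKERRQ(ierr);
  ierr = PetscCommDestroy(&inner2);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}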
> > Furthermore, I do not think we should keep a comm in
> >
> > typedef struct {
> >   HYPRE_IJMatrix ij;
> >   HYPRE_IJVector x;
> >   HYPRE_IJVector b;
> >   MPI_Comm   comm;
> > } Mat_HYPRE;
> >
> > It is internal data of the Mat, and it should already have the same comm as
> > the Mat. I do not understand why the internal data has its own comm.
> >
> > The following patch fixed the issue (just deleted this extra comm).
> >
> > diff --git a/src/mat/impls/hypre/mhypre.c b/src/mat/impls/hypre/mhypre.c
> > index dc19892..d8cfe3d 100644
> > --- a/src/mat/impls/hypre/mhypre.c
> > +++ b/src/mat/impls/hypre/mhypre.c
> > @@ -74,7 +74,7 @@ static PetscErrorCode MatHYPRE_CreateFromMat(Mat A, Mat_HYPRE *hA)
> >    rend   = A->rmap->rend;
> >    cstart = A->cmap->rstart;
> >    cend   = A->cmap->rend;
> > -  PetscStackCallStandard(HYPRE_IJMatrixCreate,(hA->comm,rstart,rend-1,cstart,cend-1,&hA->ij));
> > +  PetscStackCallStandard(HYPRE_IJMatrixCreate,(PetscObjectComm((PetscObject)A),rstart,rend-1,cstart,cend-1,&hA->ij));
> >    PetscStackCallStandard(HYPRE_IJMatrixSetObjectType,(hA->ij,HYPRE_PARCSR));
> >    {
> >      PetscBool  same;
> > @@ -434,7 +434,7 @@ PetscErrorCode MatDestroy_HYPRE(Mat A)
> >    if (hA->x) PetscStackCallStandard(HYPRE_IJVectorDestroy,(hA->x));
> >    if (hA->b) PetscStackCallStandard(HYPRE_IJVectorDestroy,(hA->b));
> >    if (hA->ij) PetscStackCallStandard(HYPRE_IJMatrixDestroy,(hA->ij));
> > -  if (hA->comm) { ierr = MPI_Comm_free(&hA->comm);CHKERRQ(ierr);}
> > +  /*if (hA->comm) { ierr = MPI_Comm_free(&hA->comm);CHKERRQ(ierr);}*/
> >    ierr = PetscObjectComposeFunction((PetscObject)A,"MatConvert_hypre_aij_C",NULL);CHKERRQ(ierr);
> >    ierr = PetscFree(A->data);CHKERRQ(ierr);
> >    PetscFunctionReturn(0);
> > @@ -500,7 +500,8 @@ PETSC_EXTERN PetscErrorCode MatCreate_HYPRE(Mat B)
> >    B->ops->destroy       = MatDestroy_HYPRE;
> >    B->ops->assemblyend   = MatAssemblyEnd_HYPRE;
> >
> > -  ierr = MPI_Comm_dup(PetscObjectComm((PetscObject)B),&hB->comm);CHKERRQ(ierr);
> > +  /*ierr = MPI_Comm_dup(PetscObjectComm((PetscObject)B),&hB->comm);CHKERRQ(ierr);*/
> > +  /*ierr = PetscCommDuplicate(PetscObjectComm((PetscObject)B),&hB->comm,NULL);CHKERRQ(ierr);*/
> >    ierr = PetscObjectChangeTypeName((PetscObject)B,MATHYPRE);CHKERRQ(ierr);
> >    ierr = PetscObjectComposeFunction((PetscObject)B,"MatConvert_hypre_aij_C",MatConvert_HYPRE_AIJ);CHKERRQ(ierr);
> >    PetscFunctionReturn(0);
> > diff --git a/src/mat/impls/hypre/mhypre.h b/src/mat/impls/hypre/mhypre.h
> > index 3d9ddd2..1189020 100644
> > --- a/src/mat/impls/hypre/mhypre.h
> > +++ b/src/mat/impls/hypre/mhypre.h
> > @@ -10,7 +10,7 @@ typedef struct {
> >    HYPRE_IJMatrix ij;
> >    HYPRE_IJVector x;
> >    HYPRE_IJVector b;
> > -  MPI_Comm       comm;
> > +  /*MPI_Comm       comm;*/
> >  } Mat_HYPRE;
> >
> >
> >
> > Fande,
> >
> >
> >
> >
> > On Tue, Apr 3, 2018 at 10:35 AM, Satish Balay <ba...@mcs.anl.gov> wrote:
> > On Tue, 3 Apr 2018, Satish Balay wrote:
> >
> > > On Tue, 3 Apr 2018, Derek Gaston wrote:
> > >
> > > > One thing I want to be clear of here is that we're not trying to solve
> > > > this particular problem (where we're creating 1000 instances o

Re: [petsc-users] A bad commit affects MOOSE

2018-04-03 Thread Kong, Fande
I think we could add an inner comm for each external package. If the same comm
is passed in again, we just retrieve the same communicator for that external
package instead of calling MPI_Comm_dup() (at least the HYPRE team claimed this
would be fine). I have not seen any issue with this idea so far.
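A rough sketch of that idea (hypothetical helper GetPackageComm(), plain MPI, not an existing PETSc or hypre API): cache one duplicated package communicator as an attribute on the user's communicator and hand the cached one back on every later request. A real version would also free the cached communicator through a keyval delete callback.

/* Hypothetical sketch of the proposal above, not an existing API: one duplicated
   "package" communicator is cached as an MPI attribute on the user communicator
   and reused for every subsequent solver created on that communicator. */
#include <mpi.h>
#include <stdlib.h>

static int pkg_keyval = MPI_KEYVAL_INVALID;

static int GetPackageComm(MPI_Comm user, MPI_Comm *pkg)
{
  void     *attr;
  int       found;
  MPI_Comm *dup;

  if (pkg_keyval == MPI_KEYVAL_INVALID) {
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN, &pkg_keyval, NULL);
  }
  MPI_Comm_get_attr(user, pkg_keyval, &attr, &found);
  if (found) {                      /* same comm passed in again: reuse it */
    *pkg = *(MPI_Comm*)attr;
    return 0;
  }
  dup = (MPI_Comm*)malloc(sizeof(MPI_Comm));
  MPI_Comm_dup(user, dup);          /* first request on this comm: duplicate once */
  MPI_Comm_set_attr(user, pkg_keyval, dup);
  *pkg = *dup;
  return 0;
}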

I might be missing something here


Fande,

On Tue, Apr 3, 2018 at 1:45 PM, Satish Balay  wrote:

> On Tue, 3 Apr 2018, Smith, Barry F. wrote:
>
> >
> >
> > > On Apr 3, 2018, at 11:59 AM, Balay, Satish  wrote:
> > >
> > > On Tue, 3 Apr 2018, Smith, Barry F. wrote:
> > >
> > >>   Note that PETSc does one MPI_Comm_dup() for each hypre matrix.
> Internally hypre does at least one MPI_Comm_create() per hypre boomerAMG
> solver. So even if PETSc does not do the MPI_Comm_dup() you will still be
> limited due to hypre's MPI_Comm_create.
> > >>
> > >>I will compose an email to hypre cc:ing everyone to get
> information from them.
> > >
> > > Actually I don't see any calls to MPI_Comm_dup() in hypre sources
> [there are stubs for it for non-mpi build]
> > >
> > > There was that call to MPI_Comm_create() in the stack trace [via
> hypre_BoomerAMGSetup]
> >
> >This is what I said. The MPI_Comm_create() is called for each solver
> and hence uses a slot for each solver.
>
> Oops, sorry - misread the text.
>
> Satish
>


Re: [petsc-users] A bad commit affects MOOSE

2018-04-03 Thread Kong, Fande
It looks good to me.

Fande,

On Tue, Apr 3, 2018 at 3:04 PM, Stefano Zampini <stefano.zamp...@gmail.com>
wrote:

> What about
>
> PetscCommGetPkgComm(MPI_Comm comm ,const char* package, MPI_Comm* pkgcomm)
>
> with a key for each of the external packages PETSc can use?
>
>
> On Apr 3, 2018, at 10:56 PM, Kong, Fande <fande.k...@inl.gov> wrote:
>
> I think we could add an inner comm for external package. If the same comm
> is passed in again, we just retrieve the same communicator, instead of
> MPI_Comm_dup(), for that external package (at least HYPRE team claimed
> this will be fine).   I did not see any issue with this idea so far.
>
> I might be missing something here
>
>
> Fande,
>
> On Tue, Apr 3, 2018 at 1:45 PM, Satish Balay <ba...@mcs.anl.gov> wrote:
>
>> On Tue, 3 Apr 2018, Smith, Barry F. wrote:
>>
>> >
>> >
>> > > On Apr 3, 2018, at 11:59 AM, Balay, Satish <ba...@mcs.anl.gov> wrote:
>> > >
>> > > On Tue, 3 Apr 2018, Smith, Barry F. wrote:
>> > >
>> > >>   Note that PETSc does one MPI_Comm_dup() for each hypre matrix.
>> Internally hypre does at least one MPI_Comm_create() per hypre boomerAMG
>> solver. So even if PETSc does not do the MPI_Comm_dup() you will still be
>> limited due to hypre's MPI_Comm_create.
>> > >>
>> > >>I will compose an email to hypre cc:ing everyone to get
>> information from them.
>> > >
>> > > Actually I don't see any calls to MPI_Comm_dup() in hypre sources
>> [there are stubs for it for non-mpi build]
>> > >
>> > > There was that call to MPI_Comm_create() in the stack trace [via
>> hypre_BoomerAMGSetup]
>> >
>> >This is what I said. The MPI_Comm_create() is called for each solver
>> and hence uses a slot for each solver.
>>
>> Oops, sorry - misread the text.
>>
>> Satish
>>
>
>
>


[petsc-users] slepc-master does not configure correctly

2018-03-21 Thread Kong, Fande
Hi All,

~/projects/slepc]> PETSC_ARCH=arch-darwin-c-debug-master ./configure

Checking environment...
Traceback (most recent call last):
  File "./configure", line 10, in <module>
    execfile(os.path.join(os.path.dirname(__file__), 'config', 'configure.py'))
  File "./config/configure.py", line 206, in <module>
    log.write('PETSc install directory: '+petsc.destdir)
AttributeError: PETSc instance has no attribute 'destdir'



SLEPc may need to be synchronized with recent changes in PETSc.

Thanks,

Fande Kong

