Re: [petsc-users] error: Petsc has generated inconsistent data, MPI_Allreduce() called in different locations (code lines) on different processors

2019-04-05 Thread Smith, Barry F. via petsc-users


  Eda,

   Can you send us your code (and any needed data files)? We certainly expect 
PETSc to perform correctly if the size of the matrix cannot be divided by the 
number of processors. It is possible the problem is due to bugs either in 
MatStashScatterBegin_BTS() or in your code.

   Thanks

Barry


> On Apr 5, 2019, at 7:20 AM, Matthew Knepley via petsc-users 
>  wrote:
> 
> On Fri, Apr 5, 2019 at 3:20 AM Eda Oktay via petsc-users 
>  wrote:
> Hello,
> 
> I am trying to calculate the unweighted Laplacian of a matrix using 2 cores. 
> If the size of the matrix is an even number, my program works. However, when 
> I try to use a matrix whose size is an odd number, I get the following error, 
> I guess because the rows of the matrix cannot be divided evenly among the processors:
> 
> [0]PETSC ERROR: - Error Message 
> --
> [1]PETSC ERROR: - Error Message 
> --
> [1]PETSC ERROR: Petsc has generated inconsistent data
> [1]PETSC ERROR: MPI_Allreduce() called in different locations (code lines) on 
> different processors
> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [1]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018 
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: MPI_Allreduce() called in different locations (code lines) on 
> different processors
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018 
> [0]PETSC ERROR: ./SON_YENI_DENEME_TEMIZ_ENYENI_FINAL on a arch-linux2-c-debug 
> named dfa.wls.metu.edu.tr by edaoktay Fri Apr  5 09:50:54 2019
> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
> --with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas 
> --download-metis --download-parmetis --download-superlu_dist --download-slepc 
> --download-mpich
> [1]PETSC ERROR: ./SON_YENI_DENEME_TEMIZ_ENYENI_FINAL on a arch-linux2-c-debug 
> named dfa.wls.metu.edu.tr by edaoktay Fri Apr  5 09:50:54 2019
> [1]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
> --with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas 
> --download-metis --download-parmetis --download-superlu_dist --download-slepc 
> --download-mpich
> [1]PETSC ERROR: [0]PETSC ERROR: #1 MatSetOption() line 5505 in 
> /home/edaoktay/petsc-3.10.3/src/mat/interface/matrix.c
> #1 MatStashScatterBegin_BTS() line 843 in 
> /home/edaoktay/petsc-3.10.3/src/mat/utils/matstash.c
> [1]PETSC ERROR: [0]PETSC ERROR: #2 MatSetOption() line 5505 in 
> /home/edaoktay/petsc-3.10.3/src/mat/interface/matrix.c
> #2 MatStashScatterBegin_BTS() line 843 in 
> /home/edaoktay/petsc-3.10.3/src/mat/utils/matstash.c
> [1]PETSC ERROR: #3 MatStashScatterBegin_Private() line 462 in 
> /home/edaoktay/petsc-3.10.3/src/mat/utils/matstash.c
> [0]PETSC ERROR: #3 main() line 164 in 
> /home/edaoktay/petsc-3.10.3/arch-linux2-c-debug/share/slepc/examples/src/eda/SON_YENI_DENEME_TEMIZ_ENYENI_FINAL.c
> [1]PETSC ERROR: #4 MatAssemblyBegin_MPIAIJ() line 774 in 
> /home/edaoktay/petsc-3.10.3/src/mat/impls/aij/mpi/mpiaij.c
> [1]PETSC ERROR: [0]PETSC ERROR: PETSc Option Table entries:
> [0]PETSC ERROR: -f 
> /home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/binary_files/airfoil1_binary
> #5 MatAssemblyBegin() line 5251 in 
> /home/edaoktay/petsc-3.10.3/src/mat/interface/matrix.c
> [1]PETSC ERROR: [0]PETSC ERROR: -mat_partitioning_type parmetis
> [0]PETSC ERROR: -unweighted
> #6 main() line 169 in 
> /home/edaoktay/petsc-3.10.3/arch-linux2-c-debug/share/slepc/examples/src/eda/SON_YENI_DENEME_TEMIZ_ENYENI_FINAL.c
> [1]PETSC ERROR: PETSc Option Table entries:
> [1]PETSC ERROR: [0]PETSC ERROR: End of Error Message 
> ---send entire error message to petsc-ma...@mcs.anl.gov--
> -f 
> /home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/binary_files/airfoil1_binary
> [1]PETSC ERROR: -mat_partitioning_type parmetis
> [1]PETSC ERROR: -unweighted
> [1]PETSC ERROR: application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
> End of Error Message ---send entire error message to 
> petsc-ma...@mcs.anl.gov--
> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
> 
> 
>  where line 164 in my main program is MatSetOption and line 169 is 
> MatAssemblyBegin. I am new to MPI usage, so I do not understand why 
> MPI_Allreduce() causes a problem or how I can fix it.
> 
> You have to call collective methods on all processes in the same order. This 
> is not happening in your code. Beyond that,
> there is no way for us to tell how this happened.
> 
>   Thanks,
> 
>  Matt
>  
> Thank you,
> 
> Eda
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener

Re: [petsc-users] Estimate memory needs for large grids

2019-04-05 Thread Jed Brown via petsc-users
Memory use will depend on the preconditioner.  This will converge very
slowly (i.e., never) without multigrid unless time steps are small.
Depending on how rough the coefficients are, you may be able to use
geometric multigrid, which has pretty low setup costs and memory
requirements.

To estimate memory with an arbitrary preconditioner, I would run a
smaller problem using the desired preconditioner and check its memory
use with -log_view.  From that you can estimate total memory
requirements for the target job.
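
A back-of-the-envelope floor for the AIJ part is nnz*(sizeof(PetscScalar)+sizeof(PetscInt))
plus a few PetscInts per row, but measuring is more reliable.  A minimal sketch of
such a measurement, assuming PetscMemorySetGetMaximumUsage()/PetscMemoryGetMaximumUsage()
are available in your build (the -memory_view option prints a similar summary):

```
/* Hedged sketch: measure per-process memory on the smaller run and
   extrapolate to the target grid. */
PetscLogDouble current = 0, maximum = 0;

ierr = PetscMemorySetGetMaximumUsage();CHKERRQ(ierr); /* call early, right after PetscInitialize() */
/* ... create the Mat/Vecs, set up the preconditioner, do one solve ... */
ierr = PetscMemoryGetCurrentUsage(&current);CHKERRQ(ierr);
ierr = PetscMemoryGetMaximumUsage(&maximum);CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_WORLD,"memory on rank 0: current %g MB, max %g MB\n",
                   current/1048576.0,maximum/1048576.0);CHKERRQ(ierr);
```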

Sajid Ali via petsc-users  writes:

> Hi,
>
> I'm solving a simple linear equation [ u_t = A*u_xx + A*u_yy + F_t*u ] on
> a grid of size 55296x55296. I'm reading a vector of that size from an HDF5
> file and have the Jacobian matrix as a modified 5-point stencil, which is
> preallocated with the following:
> ```
>   ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
>   ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,M,M);CHKERRQ(ierr);
>   ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
>   ierr = MatSetFromOptions(A);CHKERRQ(ierr);
>   ierr = MatMPIAIJSetPreallocation(A,5,NULL,5,NULL);CHKERRQ(ierr);
>   ierr = MatSeqAIJSetPreallocation(A,5,NULL);CHKERRQ(ierr);
> ```
> Total number of elements is ~3e9 and the matrix size is ~9e9 (but only 5
> diagonals are nonzero). I'm reading F_t, which has ~3e9 elements. I'm
> using double complex numbers and I've compiled with int64 indices.
>
> Thus, for the vector I need 55296x55296x2x8 bytes ~ 50 GB, and for the F
> vector, another 50 GB. For the matrix I need ~250 GB plus some overhead for
> the solver.
>
> How do I estimate this overhead (and estimate how many nodes I would need
> to run this given the maximum memory per node (as specified by slurm's
> --mem option)) ?
>
> Thanks in advance for the help!
>
> -- 
> Sajid Ali
> Applied Physics
> Northwestern University


[petsc-users] Estimate memory needs for large grids

2019-04-05 Thread Sajid Ali via petsc-users
Hi,

I'm solving a simple linear equation [ u_t = A*u_xx + A*u_yy + F_t*u ] on
a grid of size 55296x55296. I'm reading a vector of that size from an HDF5
file and have the Jacobian matrix as a modified 5-point stencil, which is
preallocated with the following:
```
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,M,M);CHKERRQ(ierr);
  ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A,5,NULL,5,NULL);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(A,5,NULL);CHKERRQ(ierr);
```
Total number of elements is ~3e9 and the matrix size is ~9e9 (but only 5
diagonals are nonzero). I'm reading F_t, which has ~3e9 elements. I'm
using double complex numbers and I've compiled with int64 indices.

Thus, for the vector I need 55296x55296x2x8 bytes ~ 50 GB, and for the F
vector, another 50 GB. For the matrix I need ~250 GB plus some overhead for
the solver.

How do I estimate this overhead (and estimate how many nodes I would need
to run this given the maximum memory per node (as specified by slurm's
--mem option)) ?

Thanks in advance for the help!

-- 
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Strange compiling error in DMPlexDistribute after updating PETSc to V3.11.0

2019-04-05 Thread Matthew Knepley via petsc-users
On Fri, Apr 5, 2019 at 4:44 PM Danyang Su via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hi All,
>
> I got a strange error in calling DMPlexDistribute after updating PETSc
> to V3.11.0. There sounds no change in the interface of DMPlexDistribute
> as documented in
>
>
> https://www.mcs.anl.gov/petsc/petsc-3.10/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
>
>
> https://www.mcs.anl.gov/petsc/petsc-3.11/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
>
> The code section is shown below.
>
>!c distribute mesh over processes
>call
> DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
>
> When I use PETSc V3.10 and earlier versions, it works fine. After
> updating to latest PETSc V3.11.0, I got the following error during
> compiling
>
>call
> DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
>   1
> Error: Non-variable expression in variable definition context (actual
> argument to INTENT = OUT/INOUT) at (1)
> /home/dsu/Soft/PETSc/petsc-3.11.0/linux-gnu-opt/lib/petsc/conf/petscrules:31:
>
> recipe for target '../../solver/solver_ddmethod.o' failed
>
> The fortran example
> /home/dsu/Soft/PETSc/petsc-3.11.0/src/dm/label/examples/tutorials/ex1f90.F90
>
> which also uses DMPlexDistribute can be compiled without problem. Are 
> there any updates to the compiler flags that I need to change?
>

What includes do you have? It looks like it does not understand
PETSC_NULL_SF.

  Thanks,

Matt


> Thanks,
>
> Danyang
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Solution Diverging

2019-04-05 Thread Mark Adams via petsc-users
On Fri, Apr 5, 2019 at 5:27 PM Maahi Talukder  wrote:

> Hi,
> Thank you for your reply.
>
> Well, I have verified the solution on large problems, and I have converged
> the solution to machine accuracy using different solvers. So I guess the
> setup is okay in that respect.
>

I am suspicious that it "worked" on large problems but failed on small. But
you never know.


>
> The BC I am using is of Dirichlet type.
>
>
>
> On Fri, Apr 5, 2019 at 5:21 PM Mark Adams  wrote:
>
>> So you have a 2D Poisson solver and it is working on large problems but
>> the linear solver is diverging on small problems.
>>
>> I would verify that the solution is good on the large problem.
>>
>> You might have a problem with boundary conditions. What BCs do you think
>> you are using?
>>
>> On Fri, Apr 5, 2019 at 4:33 PM Maahi Talukder via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>>
>>> Dear All,
>>>
>>> I am solving a linear system arising from a 9-point discretization of the
>>> Poisson equation to generate a grid. KSP solvers give a nicely converged
>>> result when I am dealing with a large matrix. But when I try to use the
>>> KSP solvers on a smaller matrix whose values are also smaller,
>>> the solution diverges.
>>> Any suggestions how to get around this problem?
>>>
>>>
>>> Regards,
>>> Maahi Talukder
>>>
>>>
>>>
>>>
>>>


Re: [petsc-users] Solution Diverging

2019-04-05 Thread Mark Adams via petsc-users
So you have a 2D Poisson solver and it is working on large problems but the
linear solver is diverging on small problems.

I would verify that the solution is good on the large problem.

You might have a problem with boundary conditions. What BCs do you think
you are using?

On Fri, Apr 5, 2019 at 4:33 PM Maahi Talukder via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Dear All,
>
> I am solving a linear system arising from a 9-point discretization of the
> Poisson equation to generate a grid. KSP solvers give a nicely converged
> result when I am dealing with a large matrix. But when I try to use the
> KSP solvers on a smaller matrix whose values are also smaller,
> the solution diverges.
> Any suggestions how to get around this problem?
>
>
> Regards,
> Maahi Talukder
>
>
>
>
>


Re: [petsc-users] Strange compiling error in DMPlexDistribute after updating PETSc to V3.11.0

2019-04-05 Thread Balay, Satish via petsc-users
A complete simple code showing this error would be useful.

wrt 'Error: Non-variable expression in variable definition context' google 
gives:

https://www.queryxchange.com/q/27_45961166/non-variable-expression-in-variable-definition-context-compilation-error/

But I don't see an expression in the code snippet here..

There must be some difference between your usage of this function vs the petsc 
example..

Satish

On Fri, 5 Apr 2019, Danyang Su via petsc-users wrote:

> Hi Satish,
> 
> Because of the intent(out) declaration, I use a temporary solution of
> passing a PetscSF variable to DMPlexDistribute instead of passing
> PETSC_NULL_SF. But I am still confused about how the example works without problems.
> 
> Thanks,
> 
> Danyang
> 
> On 2019-04-05 2:00 p.m., Balay, Satish wrote:
> > Ah - the message about distributed_dm - not PETSC_NULL_SF. So I'm off base
> > here..
> >
> > Satish
> >
> >
> > On Fri, 5 Apr 2019, Balay, Satish via petsc-users wrote:
> >
> >> A Fortran interface definition was added in petsc-3.11, so the compiler now
> >> checks whether it is used correctly.
> >>
> >> http://bitbucket.org/petsc/petsc/commits/fdb49207a8b58c421782c7e45b1394c0a6567048
> >>
> >> +  PetscSF, intent(out) :: sf
> >>
> >> So this should be inout?
> >>
> >> [All auto-generated stubs don't quantify intent()]
> >>
> >> Satish
> >>
> >> On Fri, 5 Apr 2019, Danyang Su via petsc-users wrote:
> >>
> >>> Hi All,
> >>>
> >>> I got a strange error in calling DMPlexDistribute after updating PETSc to
> >>> V3.11.0. There seems to be no change in the interface of DMPlexDistribute as
> >>> documented in
> >>>
> >>> https://www.mcs.anl.gov/petsc/petsc-3.10/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> >>>
> >>> https://www.mcs.anl.gov/petsc/petsc-3.11/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> >>>
> >>> The code section is shown below.
> >>>
> >>>    !c distribute mesh over processes
> >>>    call
> >>> DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
> >>>
> >>> When I use PETSc V3.10 and earlier versions, it works fine. After updating
> >>> to
> >>> latest PETSc V3.11.0, I got the following error during compiling
> >>>
> >>>   call
> >>> DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
> >>>  1
> >>> Error: Non-variable expression in variable definition context (actual
> >>> argument
> >>> to INTENT = OUT/INOUT) at (1)
> >>> /home/dsu/Soft/PETSc/petsc-3.11.0/linux-gnu-opt/lib/petsc/conf/petscrules:31:
> >>> recipe for target '../../solver/solver_ddmethod.o' failed
> >>>
> >>> The fortran example
> >>> /home/dsu/Soft/PETSc/petsc-3.11.0/src/dm/label/examples/tutorials/ex1f90.F90
> >>> which also uses DMPlexDistribute can be compiled without problem. Are there
> >>> any
> >>> updates to the compiler flags that I need to change?
> >>>
> >>> Thanks,
> >>>
> >>> Danyang
> >>>
> >>>
> >>>
> 


Re: [petsc-users] Strange compiling error in DMPlexDistribute after updating PETSc to V3.11.0

2019-04-05 Thread Balay, Satish via petsc-users
Ah - the message about distributed_dm - not PETSC_NULL_SF. So I'm off base 
here..

Satish


On Fri, 5 Apr 2019, Balay, Satish via petsc-users wrote:

> A Fortran interface definition was added in petsc-3.11, so the compiler now 
> checks whether it is used correctly.
> 
> http://bitbucket.org/petsc/petsc/commits/fdb49207a8b58c421782c7e45b1394c0a6567048
> 
> +  PetscSF, intent(out) :: sf
> 
> So this should be inout?
> 
> [All auto-generated stubs don't quantify intent()]
> 
> Satish
> 
> On Fri, 5 Apr 2019, Danyang Su via petsc-users wrote:
> 
> > Hi All,
> > 
> > I got a strange error in calling DMPlexDistribute after updating PETSc to
> > V3.11.0. There seems to be no change in the interface of DMPlexDistribute as
> > documented in
> > 
> > https://www.mcs.anl.gov/petsc/petsc-3.10/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> > 
> > https://www.mcs.anl.gov/petsc/petsc-3.11/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> > 
> > The code section is shown below.
> > 
> >   !c distribute mesh over processes
> >   call 
> > DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
> > 
> > When I use PETSc V3.10 and earlier versions, it works fine. After updating 
> > to
> > latest PETSc V3.11.0, I got the following error during compiling
> > 
> >   call
> > DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
> >  1
> > Error: Non-variable expression in variable definition context (actual 
> > argument
> > to INTENT = OUT/INOUT) at (1)
> > /home/dsu/Soft/PETSc/petsc-3.11.0/linux-gnu-opt/lib/petsc/conf/petscrules:31:
> > recipe for target '../../solver/solver_ddmethod.o' failed
> > 
> > The fortran example
> > /home/dsu/Soft/PETSc/petsc-3.11.0/src/dm/label/examples/tutorials/ex1f90.F90
> > which also uses DMPlexDistribute can be compiled without problem. Are there 
> > any
> > updates to the compiler flags that I need to change?
> > 
> > Thanks,
> > 
> > Danyang
> > 
> > 
> > 
> 

Re: [petsc-users] Strange compiling error in DMPlexDistribute after updating PETSc to V3.11.0

2019-04-05 Thread Balay, Satish via petsc-users
A Fortran interface definition was added in petsc-3.11, so the compiler now 
checks whether it is used correctly.

http://bitbucket.org/petsc/petsc/commits/fdb49207a8b58c421782c7e45b1394c0a6567048

+  PetscSF, intent(out) :: sf

So this should be inout?

[All auto-generated stubs don't quantify intent()]

Satish

On Fri, 5 Apr 2019, Danyang Su via petsc-users wrote:

> Hi All,
> 
> I got a strange error in calling DMPlexDistribute after updating PETSc to
> V3.11.0. There seems to be no change in the interface of DMPlexDistribute as
> documented in
> 
> https://www.mcs.anl.gov/petsc/petsc-3.10/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> 
> https://www.mcs.anl.gov/petsc/petsc-3.11/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> 
> The code section is shown below.
> 
>   !c distribute mesh over processes
>   call 
> DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
> 
> When I use PETSc V3.10 and earlier versions, it works fine. After updating to
> latest PETSc V3.11.0, I got the following error during compiling
> 
>   call
> DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
>  1
> Error: Non-variable expression in variable definition context (actual argument
> to INTENT = OUT/INOUT) at (1)
> /home/dsu/Soft/PETSc/petsc-3.11.0/linux-gnu-opt/lib/petsc/conf/petscrules:31:
> recipe for target '../../solver/solver_ddmethod.o' failed
> 
> The fortran example
> /home/dsu/Soft/PETSc/petsc-3.11.0/src/dm/label/examples/tutorials/ex1f90.F90
> which also uses DMPlexDistribute can be compiled without problem. Are there any
> updates to the compiler flags that I need to change?
> 
> Thanks,
> 
> Danyang
> 
> 
> 


[petsc-users] Strange compiling error in DMPlexDistribute after updating PETSc to V3.11.0

2019-04-05 Thread Danyang Su via petsc-users

Hi All,

I got a strange error in calling DMPlexDistribute after updating PETSc 
to V3.11.0. There seems to be no change in the interface of DMPlexDistribute 
as documented in


https://www.mcs.anl.gov/petsc/petsc-3.10/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute

https://www.mcs.anl.gov/petsc/petsc-3.11/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute

The code section is shown below.

  !c distribute mesh over processes
  call 
DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)


When I use PETSc V3.10 and earlier versions, it works fine. After 
updating to latest PETSc V3.11.0, I got the following error during compiling


  call 
DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)

 1
Error: Non-variable expression in variable definition context (actual 
argument to INTENT = OUT/INOUT) at (1)
/home/dsu/Soft/PETSc/petsc-3.11.0/linux-gnu-opt/lib/petsc/conf/petscrules:31: 
recipe for target '../../solver/solver_ddmethod.o' failed


The fortran example 
/home/dsu/Soft/PETSc/petsc-3.11.0/src/dm/label/examples/tutorials/ex1f90.F90 
which also uses DMPlexDistribute can be compiled without problem. Are 
there any updates to the compiler flags that I need to change?


Thanks,

Danyang




Re: [petsc-users] ASCIIRead error for multiple processors

2019-04-05 Thread Yuyun Yang via petsc-users
Ok great, thanks for the explanation!

Best,
Yuyun

From: Matthew Knepley 
Sent: Friday, April 5, 2019 7:31 AM
To: Yuyun Yang 
Cc: Smith, Barry F. ; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] ASCIIRead error for multiple processors

On Fri, Apr 5, 2019 at 10:27 AM Yuyun Yang 
<yyan...@stanford.edu> wrote:
Hmm ok. Then should I use this function or not when I'm reading the input? It's 
probably still going to give me the same error and be unable to proceed?

I'd like to know if I should use something else to work around this problem.

No, what you do is

  if (!rank) {
    PetscViewerASCIIRead();
    MPI_Bcast();
  } else {
    MPI_Bcast();
  }

  Thanks,

 Matt

Thanks,
Yuyun

Get Outlook for iOS

From: Matthew Knepley <knep...@gmail.com>
Sent: Friday, April 5, 2019 5:16:12 AM
To: Yuyun Yang
Cc: Smith, Barry F.; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] ASCIIRead error for multiple processors

On Thu, Apr 4, 2019 at 10:56 PM Yuyun Yang 
<yyan...@stanford.edu> wrote:
So do I call MPI_Bcast right after I call PetscViewerASCIIRead? Is that going 
to prevent the other processors from trying to read the same file but were 
unable to?

No, all this does is replicate data from process 0 on the other processes.

   Matt

Thanks,
Yuyun

Get Outlook for iOS

From: Matthew Knepley <knep...@gmail.com>
Sent: Thursday, April 4, 2019 7:30:20 PM
To: Yuyun Yang
Cc: Smith, Barry F.; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] ASCIIRead error for multiple processors

On Thu, Apr 4, 2019 at 9:19 PM Yuyun Yang via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
We are probably not going to use hundreds of processors, but I think it would 
be good to just have processor 0 read the input and broadcast that to all the 
other processors. Would that be a possible fix? And what would you suggest to 
work around this problem for now?

Explicitly call MPI_Bcast().

   Matt

Thanks!
Yuyun

Get Outlook for iOS

From: Smith, Barry F. <bsm...@mcs.anl.gov>
Sent: Thursday, April 4, 2019 3:07:37 PM
To: Yuyun Yang
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] ASCIIRead error for multiple processors


   Currently PetscViewerFileSetName_ASCII() only opens the file on process 0 
(for read or write) thus when you call PetscViewerASCIIRead() from any process 
but the first it will be reading from an fd that has not been set and you could 
get unpredictable results.

   The implementation and documentation for PetscViewerASCIIRead() is buggy.

   There are two possible fixes we could make

1) have PetscViewerFileSetName_ASCII()  open the file for reading on all 
processes or
2) have PetscViewerASCIIRead() generate an error if the process is not rank == 0

   Barry

Note that using PetscViewerASCIIRead() from a handful of processes is probably 
fine but having hundreds or thousands of processes open the same ASCII file and 
reading from it will likely not be scalable.




> On Apr 4, 2019, at 3:15 PM, Yuyun Yang via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
>
> Hello team,
>
> I’m trying to use PetscViewerASCIIRead() to read in a single integer/scalar 
> value from an input file. It works for one processor. However, when running 
> on multiple processors, I’m getting the below error:
>
> [1]PETSC ERROR: Invalid argument
> [1]PETSC ERROR: Insufficient data, read only 0 < 1 items
> [1]PETSC ERROR: #1 PetscViewerASCIIRead() line 1054 in 
> /usr/local/CLAB-2/petsc-3.6/src/sys/classes/viewer/impls/ascii/filev.c
>
> Is there something wrong with how I’m implementing this, or ASCIIRead does 
> not work with multiple processors?
>
> Thanks,
> Yuyun


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


Re: [petsc-users] ASCIIRead error for multiple processors

2019-04-05 Thread Matthew Knepley via petsc-users
On Fri, Apr 5, 2019 at 10:27 AM Yuyun Yang  wrote:

> Hmm ok. Then should I use this function or not when I'm reading the input?
> It's probably still going to give me the same error and be unable to proceed?
>
> I'd like to know if I should use something else to work around this
> problem.
>

No, what you do is

  if (!rank) {
    PetscViewerASCIIRead();
    MPI_Bcast();
  } else {
    MPI_Bcast();
  }
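
Filled in with arguments, a hedged version of that pattern for reading a single
integer might look like the sketch below (the viewer is assumed to already exist,
e.g. from PetscViewerASCIIOpen(); since MPI_Bcast() is called identically on every
rank it can simply sit outside the if):

```
/* Hedged sketch: rank 0 reads one integer from the ASCII viewer and
   broadcasts it to everyone else; variable names are illustrative. */
PetscInt    value = 0, nread = 0;
PetscMPIInt rank;

ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
if (!rank) {
  ierr = PetscViewerASCIIRead(viewer,&value,1,&nread,PETSC_INT);CHKERRQ(ierr);
}
ierr = MPI_Bcast(&value,1,MPIU_INT,0,PETSC_COMM_WORLD);CHKERRQ(ierr);
```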

  Thanks,

 Matt


> Thanks,
> Yuyun
>
> Get Outlook for iOS 
> --
> *From:* Matthew Knepley 
> *Sent:* Friday, April 5, 2019 5:16:12 AM
> *To:* Yuyun Yang
> *Cc:* Smith, Barry F.; petsc-users@mcs.anl.gov
> *Subject:* Re: [petsc-users] ASCIIRead error for multiple processors
>
> On Thu, Apr 4, 2019 at 10:56 PM Yuyun Yang  wrote:
>
>> So do I call MPI_Bcast right after I call PetscViewerASCIIRead? Is that
>> going to prevent the other processors from trying to read the same file but
>> were unable to?
>>
>
> No, all this does is replicate data from process 0 on the other processes.
>
>Matt
>
>
>> Thanks,
>> Yuyun
>>
>> Get Outlook for iOS 
>> --
>> *From:* Matthew Knepley 
>> *Sent:* Thursday, April 4, 2019 7:30:20 PM
>> *To:* Yuyun Yang
>> *Cc:* Smith, Barry F.; petsc-users@mcs.anl.gov
>> *Subject:* Re: [petsc-users] ASCIIRead error for multiple processors
>>
>> On Thu, Apr 4, 2019 at 9:19 PM Yuyun Yang via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>>
>>> We are probably not going to use hundreds of processors, but I think it
>>> would be good to just have processor 0 read the input and broadcast that to
>>> all the other processors. Would that be a possible fix? And what would you
>>> suggest to work around this problem for now?
>>>
>>
>> Explicitly call MPI_Bcast().
>>
>>Matt
>>
>>
>>> Thanks!
>>> Yuyun
>>>
>>> Get Outlook for iOS 
>>> --
>>> *From:* Smith, Barry F. 
>>> *Sent:* Thursday, April 4, 2019 3:07:37 PM
>>> *To:* Yuyun Yang
>>> *Cc:* petsc-users@mcs.anl.gov
>>> *Subject:* Re: [petsc-users] ASCIIRead error for multiple processors
>>>
>>>
>>>Currently PetscViewerFileSetName_ASCII() only opens the file on
>>> process 0 (for read or write) thus when you call PetscViewerASCIIRead()
>>> from any process but the first it will be reading from an fd that has not
>>> been set and you could get unpredictable results.
>>>
>>>The implementation and documentation for PetscViewerASCIIRead() is
>>> buggy.
>>>
>>>There are two possible fixes we could make
>>>
>>> 1) have PetscViewerFileSetName_ASCII()  open the file for reading on all
>>> processes or
>>> 2) have PetscViewerASCIIRead() generate an error if the process is not
>>> rank == 0
>>>
>>>Barry
>>>
>>> Note that using PetscViewerASCIIRead() from a handful of processes is
>>> probably fine but having hundreds or thousands of processes open the same
>>> ASCII file and reading from it will likely not be scalable.
>>>
>>>
>>>
>>>
>>> > On Apr 4, 2019, at 3:15 PM, Yuyun Yang via petsc-users <
>>> petsc-users@mcs.anl.gov> wrote:
>>> >
>>> > Hello team,
>>> >
>>> > I’m trying to use PetscViewerASCIIRead() to read in a single
>>> integer/scalar value from an input file. It works for one processor.
>>> However, when running on multiple processors, I’m getting the below error:
>>> >
>>> > [1]PETSC ERROR: Invalid argument
>>> > [1]PETSC ERROR: Insufficient data, read only 0 < 1 items
>>> > [1]PETSC ERROR: #1 PetscViewerASCIIRead() line 1054 in
>>> /usr/local/CLAB-2/petsc-3.6/src/sys/classes/viewer/impls/ascii/filev.c
>>> >
>>> > Is there something wrong with how I’m implementing this, or ASCIIRead
>>> does not work with multiple processors?
>>> >
>>> > Thanks,
>>> > Yuyun
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] ASCIIRead error for multiple processors

2019-04-05 Thread Yuyun Yang via petsc-users
Hmm ok. Then should I use this function or not when I'm reading the input? It's 
probably still going to give me the same error and be unable to proceed?

I'd like to know if I should use something else to work around this problem.

Thanks,
Yuyun

Get Outlook for iOS

From: Matthew Knepley 
Sent: Friday, April 5, 2019 5:16:12 AM
To: Yuyun Yang
Cc: Smith, Barry F.; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] ASCIIRead error for multiple processors

On Thu, Apr 4, 2019 at 10:56 PM Yuyun Yang 
<yyan...@stanford.edu> wrote:
So do I call MPI_Bcast right after I call PetscViewerASCIIRead? Is that going 
to prevent the other processors from trying to read the same file but were 
unable to?

No, all this does is replicate data from process 0 on the other processes.

   Matt

Thanks,
Yuyun

Get Outlook for iOS

From: Matthew Knepley <knep...@gmail.com>
Sent: Thursday, April 4, 2019 7:30:20 PM
To: Yuyun Yang
Cc: Smith, Barry F.; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] ASCIIRead error for multiple processors

On Thu, Apr 4, 2019 at 9:19 PM Yuyun Yang via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
We are probably not going to use hundreds of processors, but I think it would 
be good to just have processor 0 read the input and broadcast that to all the 
other processors. Would that be a possible fix? And what would you suggest to 
work around this problem for now?

Explicitly call MPI_Bcast().

   Matt

Thanks!
Yuyun

Get Outlook for iOS

From: Smith, Barry F. <bsm...@mcs.anl.gov>
Sent: Thursday, April 4, 2019 3:07:37 PM
To: Yuyun Yang
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] ASCIIRead error for multiple processors


   Currently PetscViewerFileSetName_ASCII() only opens the file on process 0 
(for read or write) thus when you call PetscViewerASCIIRead() from any process 
but the first it will be reading from an fd that has not been set and you could 
get unpredictable results.

   The implementation and documentation for PetscViewerASCIIRead() is buggy.

   There are two possible fixes we could make

1) have PetscViewerFileSetName_ASCII()  open the file for reading on all 
processes or
2) have PetscViewerASCIIRead() generate an error if the process is not rank == 0

   Barry

Note that using PetscViewerASCIIRead() from a handful of processes is probably 
fine but having hundreds or thousands of processes open the same ASCII file and 
reading from it will likely not be scalable.




> On Apr 4, 2019, at 3:15 PM, Yuyun Yang via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
>
> Hello team,
>
> I’m trying to use PetscViewerASCIIRead() to read in a single integer/scalar 
> value from an input file. It works for one processor. However, when running 
> on multiple processors, I’m getting the below error:
>
> [1]PETSC ERROR: Invalid argument
> [1]PETSC ERROR: Insufficient data, read only 0 < 1 items
> [1]PETSC ERROR: #1 PetscViewerASCIIRead() line 1054 in 
> /usr/local/CLAB-2/petsc-3.6/src/sys/classes/viewer/impls/ascii/filev.c
>
> Is there something wrong with how I’m implementing this, or ASCIIRead does 
> not work with multiple processors?
>
> Thanks,
> Yuyun



--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


Re: [petsc-users] PetscSFReduceBegin can not handle MPI_CHAR?

2019-04-05 Thread Jed Brown via petsc-users
Junchao's PR has been merged to 'master'.

https://bitbucket.org/petsc/petsc/pull-requests/1511/add-signed-char-unsigned-char-and-char
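
With that support in place, the reduction from the code quoted below should be
expressible with one of the explicitly signed/unsigned types Lisandro mentions
rather than MPI_CHAR; a hedged sketch reusing the identifiers from the quoted
snippet:

```
/* Hedged sketch: sum-reduce signed bytes over the star forest.  Plain
   MPI_CHAR is a "printable character" type and may not be used in
   reductions, per the MPI standard text Junchao quotes below. */
ierr = PetscSFReduceBegin(ptap->sf,MPI_SIGNED_CHAR,rmtspace,space,MPI_SUM);CHKERRQ(ierr);
ierr = PetscSFReduceEnd(ptap->sf,MPI_SIGNED_CHAR,rmtspace,space,MPI_SUM);CHKERRQ(ierr);
```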

Fande Kong via petsc-users  writes:

> Thanks for the reply.  It is not necessary for me to use MPI_SUM.  I think 
> the better choice is MPIU_REPLACE. Doesn’t MPIU_REPLACE work for any 
> mpi_datatype?
>
> Fande 
>
>
>> On Apr 3, 2019, at 9:15 PM, Zhang, Junchao  wrote:
>> 
>> 
>>> On Wed, Apr 3, 2019 at 3:41 AM Lisandro Dalcin via petsc-users 
>>>  wrote:
>>> IIRC, MPI_CHAR is for ASCII text data. Also, remember that in C the 
>>> signedness of plain `char` is implementation (or platform?) dependent.
>>>  I'm not sure MPI_Reduce() is supposed to / should  handle MPI_CHAR, you 
>>> should use MPI_{SIGNED|UNSIGNED}_CHAR for that. Note however that 
>>> MPI_SIGNED_CHAR is from MPI 2.0.
>> 
>> MPI standard chapter 5.9.3, says "MPI_CHAR, MPI_WCHAR, and MPI_CHARACTER 
>> (which represent printable characters) cannot be used in reduction 
>> operations"
>> So Fande's code and Jed's branch have problems. To fix that, we have to add 
>> support for signed char, unsigned char, and char in PetscSF.  The first two 
>> types support add, mult, logical and bitwise operations. The last is a dumb 
>> type, only supports pack/unpack. With this fix, PetscSF/MPI would raise 
>> error on Fande's code. I can come up with a fix tomorrow.
>>  
>>> 
 On Wed, 3 Apr 2019 at 07:01, Fande Kong via petsc-users 
  wrote:
 Hi All,
 
 There were some error messages when using PetscSFReduceBegin with 
 MPI_CHAR. 
 
 ierr = 
 PetscSFReduceBegin(ptap->sf,MPI_CHAR,rmtspace,space,MPI_SUM);CHKERRQ(ierr);
 
 
 My question would be: Does PetscSFReduceBegin suppose work with MPI_CHAR? 
 If not, should we document somewhere?
 
 Thanks
 
 Fande,
 
 
 [0]PETSC ERROR: - Error Message 
 --
 [0]PETSC ERROR: No support for this operation for this object type
 [0]PETSC ERROR: No support for type size not divisible by 4
 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html 
 for trouble shooting.
 [0]PETSC ERROR: Petsc Development GIT revision: v3.10.4-1989-gd816d1587e  
 GIT Date: 2019-04-02 17:37:18 -0600
 [0]PETSC ERROR: [1]PETSC ERROR: - Error Message 
 --
 [1]PETSC ERROR: No support for this operation for this object type
 [1]PETSC ERROR: No support for type size not divisible by 4
 [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html 
 for trouble shooting.
 [1]PETSC ERROR: Petsc Development GIT revision: v3.10.4-1989-gd816d1587e  
 GIT Date: 2019-04-02 17:37:18 -0600
 [1]PETSC ERROR: ./ex90 on a arch-linux2-c-dbg-feature-ptap-all-at-once 
 named fn605731.local by kongf Tue Apr  2 21:48:41 2019
 [1]PETSC ERROR: Configure options --download-hypre=1 --with-debugging=yes 
 --with-shared-libraries=1 --download-fblaslapack=1 --download-metis=1 
 --download-parmetis=1 --download-superlu_dist=1 
 PETSC_ARCH=arch-linux2-c-dbg-feature-ptap-all-at-once --download-ptscotch 
 --download-party --download-chaco --with-cxx-dialect=C++11
 [1]PETSC ERROR: #1 PetscSFBasicPackTypeSetup() line 678 in 
 /Users/kongf/projects/petsc/src/vec/is/sf/impls/basic/sfbasic.c
 [1]PETSC ERROR: #2 PetscSFBasicGetPack() line 804 in 
 /Users/kongf/projects/petsc/src/vec/is/sf/impls/basic/sfbasic.c
 [1]PETSC ERROR: #3 PetscSFReduceBegin_Basic() line 1024 in 
 /Users/kongf/projects/petsc/src/vec/is/sf/impls/basic/sfbasic.c
 ./ex90 on a arch-linux2-c-dbg-feature-ptap-all-at-once named 
 fn605731.local by kongf Tue Apr  2 21:48:41 2019
 [0]PETSC ERROR: Configure options --download-hypre=1 --with-debugging=yes 
 --with-shared-libraries=1 --download-fblaslapack=1 --download-metis=1 
 --download-parmetis=1 --download-superlu_dist=1 
 PETSC_ARCH=arch-linux2-c-dbg-feature-ptap-all-at-once --download-ptscotch 
 --download-party --download-chaco --with-cxx-dialect=C++11
 [0]PETSC ERROR: #1 PetscSFBasicPackTypeSetup() line 678 in 
 /Users/kongf/projects/petsc/src/vec/is/sf/impls/basic/sfbasic.c
 [0]PETSC ERROR: #2 PetscSFBasicGetPack() line 804 in 
 /Users/kongf/projects/petsc/src/vec/is/sf/impls/basic/sfbasic.c
 [0]PETSC ERROR: #3 PetscSFReduceBegin_Basic() line 1024 in 
 /Users/kongf/projects/petsc/src/vec/is/sf/impls/basic/sfbasic.c
 [0]PETSC ERROR: #4 PetscSFReduceBegin() line 1208 in 
 /Users/kongf/projects/petsc/src/vec/is/sf/interface/sf.c
 [0]PETSC ERROR: #5 MatPtAPNumeric_MPIAIJ_MPIAIJ_allatonce() line 850 in 
 /Users/kongf/projects/petsc/src/mat/impls/aij/mpi/mpiptap.c
 [0]PETSC ERROR: #6 MatPtAP_MPIAIJ_MPIAIJ() line 202 in 
 /Users/kongf/projects/petsc/src/mat/impls/aij/mpi/mpiptap.c
 [0]

Re: [petsc-users] Constructing a MATNEST with blocks defined in different procs

2019-04-05 Thread Mark Adams via petsc-users
On Fri, Apr 5, 2019 at 7:19 AM Diogo FERREIRA SABINO via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hi,
> I'm new to PETSc and I'm trying to construct a MATNEST on two procs, by
> setting each block of the nested matrix with a MATMPIAIJ matrix defined in
> each proc.
> I'm trying to use MatCreateNest or MatNestSetSubMats, but I'm not
> able to do it.
> Using MatNestSetSubMats, I'm trying to construct the MATNEST, giving a
> pointer to the correct matrices depending on the MPIrank of that proc.
> I'm obtaining the error message for the line
> :MatNestSetSubMats(AfullNEST,1,&IS_ROW,2,&IS_COL,AfullNESTpointer);
> [0]PETSC ERROR: Invalid argument
> [0]PETSC ERROR: Wrong type of object: Parameter # 5
>
> Is there a way of doing it, or all the blocks of the MATNEST have to exist
> in the same communicator as the MATNEST matrix?
>

Yes, they must all have the same communicator. A matrix can be empty on a
process, so you just create them with the global communicator, set the
local sizes that you want (e.g., 0 on some procs).
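
A minimal sketch of that layout (illustrative only, reusing MPIrank/ierr from the
test below and assuming MatCreateAIJ/MatCreateNest with NULL index sets is
acceptable here):

```
/* Hedged sketch: both diagonal blocks live on PETSC_COMM_WORLD, but each
   one keeps all of its rows on a single rank by giving the other rank a
   local size of 0. */
Mat      A0, A1, blocks[4], Anest;
PetscInt m0 = (MPIrank == 0) ? 2 : 0;   /* block (0,0) owned by rank 0 */
PetscInt m1 = (MPIrank == 1) ? 2 : 0;   /* block (1,1) owned by rank 1 */

ierr = MatCreateAIJ(PETSC_COMM_WORLD,m0,m0,2,2,2,NULL,0,NULL,&A0);CHKERRQ(ierr);
ierr = MatCreateAIJ(PETSC_COMM_WORLD,m1,m1,2,2,2,NULL,0,NULL,&A1);CHKERRQ(ierr);
/* ... MatSetValues() on the owning rank, then MatAssemblyBegin/End() on all ranks ... */

blocks[0] = A0;   blocks[1] = NULL;     /* nest row 0: [ A0   0 ] */
blocks[2] = NULL; blocks[3] = A1;       /* nest row 1: [  0  A1 ] */
ierr = MatCreateNest(PETSC_COMM_WORLD,2,NULL,2,NULL,blocks,&Anest);CHKERRQ(ierr);
```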


> A simple test is given below, launching it with: mpirun -n 2
> ./Main_petsc.exe
>
> static char help[] = "Create MPI Nest";
> #include 
>
> #undef __FUNCT__
> #define __FUNCT__ "main"
> int main(int argc,char **argv)
> {
>   PetscInitialize(&argc,&argv,(char*)0,help);
>
> //
>   PetscErrorCode  ierr;
>   PetscMPIInt MPIrank,MPIsize;
>   MPI_Comm_rank(PETSC_COMM_WORLD,&MPIrank);
>   MPI_Comm_size(PETSC_COMM_WORLD,&MPIsize);
>
>      Create Each
> Matrix:
>   Mat Adiag;
>
>   //Create a Adiag different on each proc:
>   ierr = MatCreate(PETSC_COMM_SELF,&Adiag);
>  CHKERRQ(ierr);
>   ierr = MatSetSizes(Adiag,2,2,PETSC_DECIDE,PETSC_DECIDE);
> CHKERRQ(ierr);
>   ierr = MatSetType(Adiag,MATMPIAIJ);
>  CHKERRQ(ierr);
>   ierr = MatSetFromOptions(Adiag);
> CHKERRQ(ierr);
>   ierr = MatMPIAIJSetPreallocation(Adiag,2,NULL,2,NULL);
> CHKERRQ(ierr);
>
>   MatSetValue(Adiag,0,0,(MPIrank+5),INSERT_VALUES);
>   MatSetValue(Adiag,0,1,(MPIrank+10),INSERT_VALUES);
>   MatSetValue(Adiag,1,0,(MPIrank+15),INSERT_VALUES);
>   MatSetValue(Adiag,1,1,(MPIrank+20),INSERT_VALUES);
>   MatAssemblyBegin(Adiag,MAT_FINAL_ASSEMBLY);
>  MatAssemblyEnd(Adiag,MAT_FINAL_ASSEMBLY);
>
>   ///   Create
> Nest:
>   MPI_Barrier(PETSC_COMM_WORLD);
>   Mat   AfullNEST, *AfullNESTpointer;
>
>   PetscMalloc1(2,&AfullNESTpointer);
>   AfullNESTpointer[0]=NULL;
>   AfullNESTpointer[1]=NULL;
>   AfullNESTpointer[MPIrank]=Adiag;
>   // Rank=0 --> AfullNESTpointer[0]=Adiag; AfullNESTpointer[1]=NULL;
>   // Rank=1 --> AfullNESTpointer[0]=NULL;  AfullNESTpointer[1]=Adiag;
>
>   ISIS_ROW,IS_COL;
>   ISCreateStride(PETSC_COMM_SELF,1,MPIrank,0,&IS_ROW);
>   ISCreateStride(PETSC_COMM_SELF,2,0,1,&IS_COL);
>   // Rank=0 --> IS_ROW= [ 0 ] ; IS_COL= [ 0, 1 ] ;
>   // Rank=1 --> IS_ROW= [ 1 ] ; IS_COL= [ 0, 1 ] ;
>
>   MatCreate(PETSC_COMM_WORLD,&AfullNEST);
>   MatSetSizes(AfullNEST,2,2,PETSC_DECIDE,PETSC_DECIDE);
>   // MatSetSizes(AfullNEST,PETSC_DECIDE,PETSC_DECIDE,4,4);
>   //
> MatCreateNest(PETSC_COMM_WORLD,1,&IS_ROW,1,&IS_COL,AfullNESTpointer,&AfullNEST);
> ierr =
> MatNestSetSubMats(AfullNEST,1,&IS_ROW,2,&IS_COL,AfullNESTpointer);
> CHKERRQ(ierr);
>
>   ierr = PetscFinalize(); CHKERRQ(ierr);
>   return 0;
> }
>


Re: [petsc-users] error: Petsc has generated inconsistent data, MPI_Allreduce() called in different locations (code lines) on different processors

2019-04-05 Thread Matthew Knepley via petsc-users
On Fri, Apr 5, 2019 at 3:20 AM Eda Oktay via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hello,
>
> I am trying to calculate the unweighted Laplacian of a matrix using 2
> cores. If the size of the matrix is an even number, my program works.
> However, when I try to use a matrix whose size is an odd number, I get
> the following error, I guess because the rows of the matrix cannot be
> divided evenly among the processors:
>
> [0]PETSC ERROR: - Error Message
> --
> [1]PETSC ERROR: - Error Message
> --
> [1]PETSC ERROR: Petsc has generated inconsistent data
> [1]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> on different processors
> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [1]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> on different processors
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
> [0]PETSC ERROR: ./SON_YENI_DENEME_TEMIZ_ENYENI_FINAL on a
> arch-linux2-c-debug named dfa.wls.metu.edu.tr by edaoktay Fri Apr  5
> 09:50:54 2019
> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
> --with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas
> --download-metis --download-parmetis --download-superlu_dist
> --download-slepc --download-mpich
> [1]PETSC ERROR: ./SON_YENI_DENEME_TEMIZ_ENYENI_FINAL on a
> arch-linux2-c-debug named dfa.wls.metu.edu.tr by edaoktay Fri Apr  5
> 09:50:54 2019
> [1]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
> --with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas
> --download-metis --download-parmetis --download-superlu_dist
> --download-slepc --download-mpich
> [1]PETSC ERROR: [0]PETSC ERROR: #1 MatSetOption() line 5505 in
> /home/edaoktay/petsc-3.10.3/src/mat/interface/matrix.c
> #1 MatStashScatterBegin_BTS() line 843 in
> /home/edaoktay/petsc-3.10.3/src/mat/utils/matstash.c
> [1]PETSC ERROR: [0]PETSC ERROR: #2 MatSetOption() line 5505 in
> /home/edaoktay/petsc-3.10.3/src/mat/interface/matrix.c
> #2 MatStashScatterBegin_BTS() line 843 in
> /home/edaoktay/petsc-3.10.3/src/mat/utils/matstash.c
> [1]PETSC ERROR: #3 MatStashScatterBegin_Private() line 462 in
> /home/edaoktay/petsc-3.10.3/src/mat/utils/matstash.c
> [0]PETSC ERROR: #3 main() line 164 in
> /home/edaoktay/petsc-3.10.3/arch-linux2-c-debug/share/slepc/examples/src/eda/SON_YENI_DENEME_TEMIZ_ENYENI_FINAL.c
> [1]PETSC ERROR: #4 MatAssemblyBegin_MPIAIJ() line 774 in
> /home/edaoktay/petsc-3.10.3/src/mat/impls/aij/mpi/mpiaij.c
> [1]PETSC ERROR: [0]PETSC ERROR: PETSc Option Table entries:
> [0]PETSC ERROR: -f
> /home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/binary_files/airfoil1_binary
> #5 MatAssemblyBegin() line 5251 in
> /home/edaoktay/petsc-3.10.3/src/mat/interface/matrix.c
> [1]PETSC ERROR: [0]PETSC ERROR: -mat_partitioning_type parmetis
> [0]PETSC ERROR: -unweighted
> #6 main() line 169 in
> /home/edaoktay/petsc-3.10.3/arch-linux2-c-debug/share/slepc/examples/src/eda/SON_YENI_DENEME_TEMIZ_ENYENI_FINAL.c
> [1]PETSC ERROR: PETSc Option Table entries:
> [1]PETSC ERROR: [0]PETSC ERROR: End of Error Message
> ---send entire error message to petsc-ma...@mcs.anl.gov--
> -f
> /home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/binary_files/airfoil1_binary
> [1]PETSC ERROR: -mat_partitioning_type parmetis
> [1]PETSC ERROR: -unweighted
> [1]PETSC ERROR: application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
> End of Error Message ---send entire error message to
> petsc-ma...@mcs.anl.gov--
> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
>
>
>  where line 164 in my main program is MatSetOption and line 169
> is MatAssemblyBegin. I am new to MPI usage, so I do not understand why
> MPI_Allreduce() causes a problem or how I can fix it.
>

You have to call collective methods on all processes in the same order.
This is not happening in your code. Beyond that,
there is no way for us to tell how this happened.
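
As a hedged illustration of the failure mode (not taken from your code), a
rank-dependent guard around a collective call is enough to produce exactly the
error reported above; the matrix and rank variables here are illustrative:

```
/* WRONG: MatSetOption() and MatAssemblyBegin() are collective; if only
   rank 0 reaches MatSetOption(), the ranks hit MPI_Allreduce() from
   different code lines and the debug check reported above fires. */
if (!rank) {
  ierr = MatSetOption(A,MAT_SYMMETRIC,PETSC_TRUE);CHKERRQ(ierr);
}
ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

/* RIGHT: every rank makes the same collective calls in the same order. */
ierr = MatSetOption(A,MAT_SYMMETRIC,PETSC_TRUE);CHKERRQ(ierr);
ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
```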

  Thanks,

 Matt


> Thank you,
>
> Eda
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] Constructing a MATNEST with blocks defined in different procs

2019-04-05 Thread Diogo FERREIRA SABINO via petsc-users
Hi,
I'm new to PETSc and I'm trying to construct a MATNEST on two procs, by setting 
each block of the nested matrix with a MATMPIAIJ matrix defined in each proc.
I'm trying to use MatCreateNest or MatNestSetSubMats, but I'm not able to 
do it.
Using MatNestSetSubMats, I'm trying to construct the MATNEST, giving a pointer 
to the correct matrices depending on the MPIrank of that proc.
I'm obtaining the error message for the line 
:MatNestSetSubMats(AfullNEST,1,&IS_ROW,2,&IS_COL,AfullNESTpointer);
[0]PETSC ERROR: Invalid argument
[0]PETSC ERROR: Wrong type of object: Parameter # 5

Is there a way of doing it, or all the blocks of the MATNEST have to exist in 
the same communicator as the MATNEST matrix?
A simple test is given below, launching it with: mpirun -n 2 ./Main_petsc.exe

static char help[] = "Create MPI Nest";
#include 

#undef __FUNCT__
#define __FUNCT__ "main"
int main(int argc,char **argv)
{
  PetscInitialize(&argc,&argv,(char*)0,help);
  //
  PetscErrorCode  ierr;
  PetscMPIInt MPIrank,MPIsize;
  MPI_Comm_rank(PETSC_COMM_WORLD,&MPIrank);
  MPI_Comm_size(PETSC_COMM_WORLD,&MPIsize);

     Create Each Matrix:
  Mat Adiag;

  //Create a Adiag different on each proc:
  ierr = MatCreate(PETSC_COMM_SELF,&Adiag); CHKERRQ(ierr);
  ierr = MatSetSizes(Adiag,2,2,PETSC_DECIDE,PETSC_DECIDE);  CHKERRQ(ierr);
  ierr = MatSetType(Adiag,MATMPIAIJ);   CHKERRQ(ierr);
  ierr = MatSetFromOptions(Adiag);  CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(Adiag,2,NULL,2,NULL);CHKERRQ(ierr);

  MatSetValue(Adiag,0,0,(MPIrank+5),INSERT_VALUES);
  MatSetValue(Adiag,0,1,(MPIrank+10),INSERT_VALUES);
  MatSetValue(Adiag,1,0,(MPIrank+15),INSERT_VALUES);
  MatSetValue(Adiag,1,1,(MPIrank+20),INSERT_VALUES);
  MatAssemblyBegin(Adiag,MAT_FINAL_ASSEMBLY); 
MatAssemblyEnd(Adiag,MAT_FINAL_ASSEMBLY);

  ///   Create Nest:
  MPI_Barrier(PETSC_COMM_WORLD);
  Mat   AfullNEST, *AfullNESTpointer;

  PetscMalloc1(2,&AfullNESTpointer);
  AfullNESTpointer[0]=NULL;
  AfullNESTpointer[1]=NULL;
  AfullNESTpointer[MPIrank]=Adiag;
  // Rank=0 --> AfullNESTpointer[0]=Adiag; AfullNESTpointer[1]=NULL;
  // Rank=1 --> AfullNESTpointer[0]=NULL;  AfullNESTpointer[1]=Adiag;

  ISIS_ROW,IS_COL;
  ISCreateStride(PETSC_COMM_SELF,1,MPIrank,0,&IS_ROW);
  ISCreateStride(PETSC_COMM_SELF,2,0,1,&IS_COL);
  // Rank=0 --> IS_ROW= [ 0 ] ; IS_COL= [ 0, 1 ] ;
  // Rank=1 --> IS_ROW= [ 1 ] ; IS_COL= [ 0, 1 ] ;

  MatCreate(PETSC_COMM_WORLD,&AfullNEST);
  MatSetSizes(AfullNEST,2,2,PETSC_DECIDE,PETSC_DECIDE);
  // MatSetSizes(AfullNEST,PETSC_DECIDE,PETSC_DECIDE,4,4);
  // 
MatCreateNest(PETSC_COMM_WORLD,1,&IS_ROW,1,&IS_COL,AfullNESTpointer,&AfullNEST);
ierr = MatNestSetSubMats(AfullNEST,1,&IS_ROW,2,&IS_COL,AfullNESTpointer); 
CHKERRQ(ierr);

  ierr = PetscFinalize(); CHKERRQ(ierr);
  return 0;
}


Re: [petsc-users] max_it for EPS minres preconditioner

2019-04-05 Thread Jose E. Roman via petsc-users
I have made a fix in branch jose/maint/change-maxit-lobpcg
It will be included in maint once the nightly tests are clean.
Thanks for reporting this.
Jose
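
Until that makes it into maint, the workaround Pieter confirms below would look
roughly like this (a hedged fragment reusing the variable names and values from
his snippet):

```
/* Hedged sketch: EPSSetUp() is where the default ST/KSP settings
   (including maxit=5) get applied, so set the tolerances afterwards. */
ierr = EPSSetOperators(eps,mat,NULL);CHKERRQ(ierr);
ierr = EPSSetUp(eps);CHKERRQ(ierr);
ierr = EPSGetST(eps,&st);CHKERRQ(ierr);
ierr = STGetKSP(st,&ksp);CHKERRQ(ierr);
ierr = KSPSetTolerances(ksp,1e-10,PETSC_DEFAULT,PETSC_DEFAULT,7);CHKERRQ(ierr);
ierr = EPSSolve(eps);CHKERRQ(ierr);
```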

> On 4 Apr 2019, at 23:37, Pieter Ghysels  wrote:
> 
> Dear Jose,
> 
> It indeed works correctly when I set maxit after calling EPSSetUp.
> Thanks!
> 
> Pieter
> 
> On Thu, Apr 4, 2019 at 12:30 PM Jose E. Roman  wrote:
> Oops, the intention was to set 5 as the default value, but it is set at 
> EPSSetUp() so changing it before is useless. This is wrong behaviour. I will 
> try to fix it tomorrow.
> Jose
> 
> 
> > On 4 Apr 2019, at 20:45, Pieter Ghysels via petsc-users 
> >  wrote:
> > 
> > Hi,
> > 
> > I'm trying to set the maximum number of iterations in a minres 
> > preconditioner for the lobpcg eigensolver from SLEPc.
> > Using KSPSetTolerances, I can change the minres tolerance, but not maxit 
> > (it's always 5).
> > 
> >   ierr = EPSCreate( PETSC_COMM_WORLD , &eps ) ;  CHKERRQ( ierr 
> > ) ;
> >   ...
> > 
> >   ST st;
> >   KSP ksp;
> >   PC pc;
> >   ierr = EPSGetST( eps , &st ) ; CHKERRQ( ierr 
> > ) ;
> >   ierr = STSetType( st, STPRECOND ) ;CHKERRQ( ierr 
> > ) ;
> >   ierr = STGetKSP( st , &ksp ) ; CHKERRQ( ierr 
> > ) ;
> >   ierr = KSPSetType( ksp , KSPMINRES ) ; CHKERRQ( ierr 
> > ) ;
> >   ierr = KSPGetPC( ksp , &pc ) ; CHKERRQ( ierr 
> > ) ;
> >   ierr = PCSetType( pc , PCNONE ) ;  CHKERRQ( ierr 
> > ) ;
> >   ierr = KSPSetTolerances
> > ( ksp , /*tol_prec*/ 1e-10 , PETSC_DEFAULT , PETSC_DEFAULT , 
> > /*maxit_prec*/ 7 ) ;
> >   CHKERRQ( ierr ) ;
> >   ierr = KSPSetFromOptions( ksp ) ;  CHKERRQ( ierr 
> > ) ;
> >   ierr = STSetFromOptions( st ) ;CHKERRQ( ierr 
> > ) ;
> >   ierr = EPSSetFromOptions( eps ) ;  CHKERRQ( ierr 
> > ) ;
> > 
> >   ...
> >  ierr = EPSSetOperators( eps , mat , NULL ) ; CHKERRQ( ierr ) ;
> > 
> >  ierr = EPSSolve( eps ) ; CHKERRQ( ierr ) ;
> > 
> > 
> > 
> > When I run with -eps_view, I see:
> > 
> > ...
> > EPS Object: 4 MPI processes
> >   type: lobpcg
> > ...
> > ST Object: 4 MPI processes
> >   type: precond
> >   shift: 0.
> >   number of matrices: 1
> >   KSP Object: (st_) 4 MPI processes
> > type: minres
> > maximum iterations=5, initial guess is zero
> > tolerances:  relative=1e-10, absolute=1e-50, divergence=1.
> > left preconditioning
> > using PRECONDITIONED norm type for convergence test
> >   PC Object: (st_) 4 MPI processes
> > type: none
> > ...
> > 
> 



[petsc-users] error: Petsc has generated inconsistent data, MPI_Allreduce() called in different locations (code lines) on different processors

2019-04-05 Thread Eda Oktay via petsc-users
Hello,

I am trying to calculate the unweighted Laplacian of a matrix using 2 cores.
If the size of the matrix is an even number, my program works. However,
when I try to use a matrix whose size is an odd number, I get the
following error, I guess because the rows of the matrix cannot be divided
evenly among the processors:

[0]PETSC ERROR: - Error Message
--
[1]PETSC ERROR: - Error Message
--
[1]PETSC ERROR: Petsc has generated inconsistent data
[1]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
on different processors
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[1]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
[0]PETSC ERROR: Petsc has generated inconsistent data
[0]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
on different processors
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
[0]PETSC ERROR: ./SON_YENI_DENEME_TEMIZ_ENYENI_FINAL on a
arch-linux2-c-debug named dfa.wls.metu.edu.tr by edaoktay Fri Apr  5
09:50:54 2019
[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
--with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas
--download-metis --download-parmetis --download-superlu_dist
--download-slepc --download-mpich
[1]PETSC ERROR: ./SON_YENI_DENEME_TEMIZ_ENYENI_FINAL on a
arch-linux2-c-debug named dfa.wls.metu.edu.tr by edaoktay Fri Apr  5
09:50:54 2019
[1]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
--with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas
--download-metis --download-parmetis --download-superlu_dist
--download-slepc --download-mpich
[1]PETSC ERROR: [0]PETSC ERROR: #1 MatSetOption() line 5505 in
/home/edaoktay/petsc-3.10.3/src/mat/interface/matrix.c
#1 MatStashScatterBegin_BTS() line 843 in
/home/edaoktay/petsc-3.10.3/src/mat/utils/matstash.c
[1]PETSC ERROR: [0]PETSC ERROR: #2 MatSetOption() line 5505 in
/home/edaoktay/petsc-3.10.3/src/mat/interface/matrix.c
#2 MatStashScatterBegin_BTS() line 843 in
/home/edaoktay/petsc-3.10.3/src/mat/utils/matstash.c
[1]PETSC ERROR: #3 MatStashScatterBegin_Private() line 462 in
/home/edaoktay/petsc-3.10.3/src/mat/utils/matstash.c
[0]PETSC ERROR: #3 main() line 164 in
/home/edaoktay/petsc-3.10.3/arch-linux2-c-debug/share/slepc/examples/src/eda/SON_YENI_DENEME_TEMIZ_ENYENI_FINAL.c
[1]PETSC ERROR: #4 MatAssemblyBegin_MPIAIJ() line 774 in
/home/edaoktay/petsc-3.10.3/src/mat/impls/aij/mpi/mpiaij.c
[1]PETSC ERROR: [0]PETSC ERROR: PETSc Option Table entries:
[0]PETSC ERROR: -f
/home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/binary_files/airfoil1_binary
#5 MatAssemblyBegin() line 5251 in
/home/edaoktay/petsc-3.10.3/src/mat/interface/matrix.c
[1]PETSC ERROR: [0]PETSC ERROR: -mat_partitioning_type parmetis
[0]PETSC ERROR: -unweighted
#6 main() line 169 in
/home/edaoktay/petsc-3.10.3/arch-linux2-c-debug/share/slepc/examples/src/eda/SON_YENI_DENEME_TEMIZ_ENYENI_FINAL.c
[1]PETSC ERROR: PETSc Option Table entries:
[1]PETSC ERROR: [0]PETSC ERROR: End of Error Message
---send entire error message to petsc-ma...@mcs.anl.gov--
-f
/home/edaoktay/petsc-3.10.3/share/petsc/datafiles/matrices/binary_files/airfoil1_binary
[1]PETSC ERROR: -mat_partitioning_type parmetis
[1]PETSC ERROR: -unweighted
[1]PETSC ERROR: application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
End of Error Message ---send entire error message to
petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1


 where line 164 in my main program is MatSetOption and line 169
is MatAssemblyBegin. I am new to MPI usage, so I do not understand why
MPI_Allreduce() causes a problem or how I can fix it.

Thank you,

Eda