Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-21 Thread Zhang, Hong
It is not a problem with MatLoad() being called twice. The file has one matrix, but it is loaded 
twice.

Replacing pc with ksp, the code runs fine. 
The error occurs when PCSetUp_LU() is called with SAME_NONZERO_PATTERN.
I'll look at it further later.

Hong

From: Zhang, Hong
Sent: Friday, October 21, 2016 8:18 PM
To: Barry Smith; petsc-users
Subject: RE: [petsc-users] SuperLU_dist issue in 3.7.4

I am investigating it. The file has two matrices. The code takes the following 
steps:

PCCreate(PETSC_COMM_WORLD, &pc);

MatCreate(PETSC_COMM_WORLD,&A);
MatLoad(A,fd);
PCSetOperators(pc,A,A);
PCSetUp(pc);

MatCreate(PETSC_COMM_WORLD,&A);
MatLoad(A,fd);
PCSetOperators(pc,A,A);
PCSetUp(pc);  //crash here with np=2, superlu_dist, not with mumps/superlu or superlu_dist np=1

Hong


From: Barry Smith [bsm...@mcs.anl.gov]
Sent: Friday, October 21, 2016 5:59 PM
To: petsc-users
Cc: Zhang, Hong
Subject: Re: [petsc-users] SuperLU_dist issue in 3.7.4

> On Oct 21, 2016, at 5:16 PM, Satish Balay  wrote:
>
> The issue with this test code is - using MatLoad() twice [with the
> same object - without destroying it]. Not sure if that's supposed to
> work..

   If the file has two matrices in it then yes a second call to MatLoad() with 
the same matrix should just load in the second matrix from the file correctly. 
Perhaps we need a test in our test suite just to make sure that works.

  Barry



>
> Satish
>
> On Fri, 21 Oct 2016, Hong wrote:
>
>> I can reproduce the error on a linux machine with petsc-maint. It crashes
>> at 2nd solve, on both processors:
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> 0x7f051dc835bd in pdgsequ (A=0x1563910, r=0x176dfe0, c=0x178f7f0,
>>rowcnd=0x7fffcb8dab30, colcnd=0x7fffcb8dab38, amax=0x7fffcb8dab40,
>>info=0x7fffcb8dab4c, grid=0x1563858)
>>at
>> /sandbox/hzhang/petsc/arch-linux-gcc-gfortran/externalpackages/git.superlu_dist/SRC/pdgsequ.c:182
>> 182 c[jcol] = SUPERLU_MAX( c[jcol], fabs(Aval[j]) * r[irow]
>> );
>>
>> The version of superlu_dist:
>> commit 0b5369f304507f1c7904a913f4c0c86777a60639
>> Author: Xiaoye Li 
>> Date:   Thu May 26 11:33:19 2016 -0700
>>
>>rename 'struct pair' to 'struct superlu_pair'.
>>
>> Hong
>>
>> On Fri, Oct 21, 2016 at 5:36 AM, Anton Popov  wrote:
>>
>>>
>>> On 10/19/2016 05:22 PM, Anton Popov wrote:
>>>
>>> I looked at each valgrind-complained item in your email dated Oct. 11.
>>> Those reports are really superficial; I don't see anything  wrong with
>>> those lines (mostly uninitialized variables) singled out.  I did a few
>>> tests with the latest version in github,  all went fine.
>>>
>>> Perhaps you can print your matrix that caused the problem; I can run it using
>>> your matrix.
>>>
>>> Sherry
>>>
>>> Hi Sherry,
>>>
>>> I finally figured out a minimalistic setup (attached) that reproduces the
>>> problem.
>>>
>>> I use petsc-maint:
>>>
>>> git clone -b maint https://bitbucket.org/petsc/petsc.git
>>>
>>> and configure it in the debug mode without optimization using the options:
>>>
>>> --download-superlu_dist=1 \
>>> --download-superlu_dist-commit=origin/maint \
>>>
>>> Compile the test, assuming PETSC_DIR points to the described petsc
>>> installation:
>>>
>>> make ex16
>>>
>>> Run with:
>>>
>>> mpirun -n 2 ./ex16 -f binaryoutput -pc_type lu
>>> -pc_factor_mat_solver_package superlu_dist
>>>
>>> Matrix partitioning between the processors will be completely the same as
>>> in our code (hard-coded).
>>>
>>> I factorize the same matrix twice with the same PC object. Remarkably it
>>> runs fine for the first time, but fails for the second.
>>>
>>> Thank you very much for looking into this problem.
>>>
>>> Cheers,
>>> Anton
>>>
>>
>



Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-21 Thread Zhang, Hong
I am investigating it. The file has two matrices. The code takes the following 
steps:

PCCreate(PETSC_COMM_WORLD, &pc);

MatCreate(PETSC_COMM_WORLD,&A);
MatLoad(A,fd);
PCSetOperators(pc,A,A);
PCSetUp(pc);

MatCreate(PETSC_COMM_WORLD,&A);
MatLoad(A,fd);
PCSetOperators(pc,A,A);
PCSetUp(pc);  //crash here with np=2, superlu_dist, not with mumps/superlu or superlu_dist np=1

Hong


From: Barry Smith [bsm...@mcs.anl.gov]
Sent: Friday, October 21, 2016 5:59 PM
To: petsc-users
Cc: Zhang, Hong
Subject: Re: [petsc-users] SuperLU_dist issue in 3.7.4

> On Oct 21, 2016, at 5:16 PM, Satish Balay  wrote:
>
> The issue with this test code is - using MatLoad() twice [with the
> same object - without destroying it]. Not sure if that's supposed to
> work..

   If the file has two matrices in it then yes a second call to MatLoad() with 
the same matrix should just load in the second matrix from the file correctly. 
Perhaps we need a test in our test suite just to make sure that works.

  Barry



>
> Satish
>
> On Fri, 21 Oct 2016, Hong wrote:
>
>> I can reproduce the error on a linux machine with petsc-maint. It crashes
>> at 2nd solve, on both processors:
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> 0x7f051dc835bd in pdgsequ (A=0x1563910, r=0x176dfe0, c=0x178f7f0,
>>rowcnd=0x7fffcb8dab30, colcnd=0x7fffcb8dab38, amax=0x7fffcb8dab40,
>>info=0x7fffcb8dab4c, grid=0x1563858)
>>at
>> /sandbox/hzhang/petsc/arch-linux-gcc-gfortran/externalpackages/git.superlu_dist/SRC/pdgsequ.c:182
>> 182 c[jcol] = SUPERLU_MAX( c[jcol], fabs(Aval[j]) * r[irow]
>> );
>>
>> The version of superlu_dist:
>> commit 0b5369f304507f1c7904a913f4c0c86777a60639
>> Author: Xiaoye Li 
>> Date:   Thu May 26 11:33:19 2016 -0700
>>
>>rename 'struct pair' to 'struct superlu_pair'.
>>
>> Hong
>>
>> On Fri, Oct 21, 2016 at 5:36 AM, Anton Popov  wrote:
>>
>>>
>>> On 10/19/2016 05:22 PM, Anton Popov wrote:
>>>
>>> I looked at each valgrind-complained item in your email dated Oct. 11.
>>> Those reports are really superficial; I don't see anything  wrong with
>>> those lines (mostly uninitialized variables) singled out.  I did a few
>>> tests with the latest version in github,  all went fine.
>>>
>>> Perhaps you can print your matrix that caused the problem; I can run it using
>>> your matrix.
>>>
>>> Sherry
>>>
>>> Hi Sherry,
>>>
>>> I finally figured out a minimalistic setup (attached) that reproduces the
>>> problem.
>>>
>>> I use petsc-maint:
>>>
>>> git clone -b maint https://bitbucket.org/petsc/petsc.git
>>>
>>> and configure it in the debug mode without optimization using the options:
>>>
>>> --download-superlu_dist=1 \
>>> --download-superlu_dist-commit=origin/maint \
>>>
>>> Compile the test, assuming PETSC_DIR points to the described petsc
>>> installation:
>>>
>>> make ex16
>>>
>>> Run with:
>>>
>>> mpirun -n 2 ./ex16 -f binaryoutput -pc_type lu
>>> -pc_factor_mat_solver_package superlu_dist
>>>
>>> Matrix partitioning between the processors will be completely the same as
>>> in our code (hard-coded).
>>>
>>> I factorize the same matrix twice with the same PC object. Remarkably it
>>> runs fine for the first time, but fails for the second.
>>>
>>> Thank you very much for looking into this problem.
>>>
>>> Cheers,
>>> Anton
>>>
>>
>



Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-21 Thread Satish Balay
On Fri, 21 Oct 2016, Barry Smith wrote:

> 
>   valgrind first

balay@asterix /home/balay/download-pine/x/superlu_dist_test
$ mpiexec -n 2 $VG ./ex16 -f ~/datafiles/matrices/small
First MatLoad! 
Mat Object: 2 MPI processes
  type: mpiaij
row 0: (0, 4.)  (1, -1.)  (6, -1.) 
row 1: (0, -1.)  (1, 4.)  (2, -1.)  (7, -1.) 
row 2: (1, -1.)  (2, 4.)  (3, -1.)  (8, -1.) 
row 3: (2, -1.)  (3, 4.)  (4, -1.)  (9, -1.) 
row 4: (3, -1.)  (4, 4.)  (5, -1.)  (10, -1.) 
row 5: (4, -1.)  (5, 4.)  (11, -1.) 
row 6: (0, -1.)  (6, 4.)  (7, -1.)  (12, -1.) 
row 7: (1, -1.)  (6, -1.)  (7, 4.)  (8, -1.)  (13, -1.) 
row 8: (2, -1.)  (7, -1.)  (8, 4.)  (9, -1.)  (14, -1.) 
row 9: (3, -1.)  (8, -1.)  (9, 4.)  (10, -1.)  (15, -1.) 
row 10: (4, -1.)  (9, -1.)  (10, 4.)  (11, -1.)  (16, -1.) 
row 11: (5, -1.)  (10, -1.)  (11, 4.)  (17, -1.) 
row 12: (6, -1.)  (12, 4.)  (13, -1.)  (18, -1.) 
row 13: (7, -1.)  (12, -1.)  (13, 4.)  (14, -1.)  (19, -1.) 
row 14: (8, -1.)  (13, -1.)  (14, 4.)  (15, -1.)  (20, -1.) 
row 15: (9, -1.)  (14, -1.)  (15, 4.)  (16, -1.)  (21, -1.) 
row 16: (10, -1.)  (15, -1.)  (16, 4.)  (17, -1.)  (22, -1.) 
row 17: (11, -1.)  (16, -1.)  (17, 4.)  (23, -1.) 
row 18: (12, -1.)  (18, 4.)  (19, -1.)  (24, -1.) 
row 19: (13, -1.)  (18, -1.)  (19, 4.)  (20, -1.)  (25, -1.) 
row 20: (14, -1.)  (19, -1.)  (20, 4.)  (21, -1.)  (26, -1.) 
row 21: (15, -1.)  (20, -1.)  (21, 4.)  (22, -1.)  (27, -1.) 
row 22: (16, -1.)  (21, -1.)  (22, 4.)  (23, -1.)  (28, -1.) 
row 23: (17, -1.)  (22, -1.)  (23, 4.)  (29, -1.) 
row 24: (18, -1.)  (24, 4.)  (25, -1.)  (30, -1.) 
row 25: (19, -1.)  (24, -1.)  (25, 4.)  (26, -1.)  (31, -1.) 
row 26: (20, -1.)  (25, -1.)  (26, 4.)  (27, -1.)  (32, -1.) 
row 27: (21, -1.)  (26, -1.)  (27, 4.)  (28, -1.)  (33, -1.) 
row 28: (22, -1.)  (27, -1.)  (28, 4.)  (29, -1.)  (34, -1.) 
row 29: (23, -1.)  (28, -1.)  (29, 4.)  (35, -1.) 
row 30: (24, -1.)  (30, 4.)  (31, -1.) 
row 31: (25, -1.)  (30, -1.)  (31, 4.)  (32, -1.) 
row 32: (26, -1.)  (31, -1.)  (32, 4.)  (33, -1.) 
row 33: (27, -1.)  (32, -1.)  (33, 4.)  (34, -1.) 
row 34: (28, -1.)  (33, -1.)  (34, 4.)  (35, -1.) 
row 35: (29, -1.)  (34, -1.)  (35, 4.) 
Second MatLoad! 
Mat Object: 2 MPI processes
  type: mpiaij
==4592== Invalid read of size 4
==4592==at 0x5814014: MatView_MPIAIJ_ASCIIorDraworSocket (mpiaij.c:1402)
==4592==by 0x5814A75: MatView_MPIAIJ (mpiaij.c:1440)
==4592==by 0x53373D7: MatView (matrix.c:989)
==4592==by 0x40107E: main (ex16.c:30)
==4592==  Address 0xa47b460 is 20 bytes after a block of size 28 alloc'd
==4592==at 0x4C2FF83: memalign (vg_replace_malloc.c:858)
==4592==by 0x4FD121A: PetscMallocAlign (mal.c:28)
==4592==by 0x5842C70: MatSetUpMultiply_MPIAIJ (mmaij.c:41)
==4592==by 0x5809943: MatAssemblyEnd_MPIAIJ (mpiaij.c:747)
==4592==by 0x536B299: MatAssemblyEnd (matrix.c:5298)
==4592==by 0x5829C05: MatLoad_MPIAIJ (mpiaij.c:3032)
==4592==by 0x5337FEA: MatLoad (matrix.c:1101)
==4592==by 0x400D9F: main (ex16.c:22)
==4592== 
==4591== Invalid read of size 4
==4591==at 0x5814014: MatView_MPIAIJ_ASCIIorDraworSocket (mpiaij.c:1402)
==4591==by 0x5814A75: MatView_MPIAIJ (mpiaij.c:1440)
==4591==by 0x53373D7: MatView (matrix.c:989)
==4591==by 0x40107E: main (ex16.c:30)
==4591==  Address 0xa482958 is 24 bytes before a block of size 7 alloc'd
==4591==at 0x4C2FF83: memalign (vg_replace_malloc.c:858)
==4591==by 0x4FD121A: PetscMallocAlign (mal.c:28)
==4591==by 0x4F31FB5: PetscStrallocpy (str.c:197)
==4591==by 0x4F0D3F5: PetscClassRegLogRegister (classlog.c:253)
==4591==by 0x4EF96E2: PetscClassIdRegister (plog.c:2053)
==4591==by 0x51FA018: VecInitializePackage (dlregisvec.c:165)
==4591==by 0x51F6DE9: VecCreate (veccreate.c:35)
==4591==by 0x51C49F0: VecCreateSeq (vseqcr.c:37)
==4591==by 0x5843191: MatSetUpMultiply_MPIAIJ (mmaij.c:104)
==4591==by 0x5809943: MatAssemblyEnd_MPIAIJ (mpiaij.c:747)
==4591==by 0x536B299: MatAssemblyEnd (matrix.c:5298)
==4591==by 0x5829C05: MatLoad_MPIAIJ (mpiaij.c:3032)
==4591==by 0x5337FEA: MatLoad (matrix.c:1101)
==4591==by 0x400D9F: main (ex16.c:22)
==4591== 
[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Argument out of range
[0]PETSC ERROR: Column too large: col 96 max 35
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.7.4-1729-g4c4de23  GIT Date: 
2016-10-20 22:22:58 +
[0]PETSC ERROR: ./ex16 on a arch-idx64-slu named asterix by balay Fri Oct 21 
18:47:51 2016
[0]PETSC ERROR: Configure options --download-metis --download-parmetis 
--download-superlu_dist PETSC_ARCH=arch-idx64-slu
[0]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 585 in 
/home/balay/petsc/src/mat/impls/aij/mpi/mpiaij.c
[0]PETSC ERROR: #2 MatAssemblyEnd_MPIAIJ() line 724 in 
/home/balay/petsc/src/mat/impls/aij/mpi/mpiaij.c
[0]PETSC ERROR: #3 

Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-21 Thread Barry Smith

  valgrind first

> On Oct 21, 2016, at 6:33 PM, Satish Balay  wrote:
> 
> On Fri, 21 Oct 2016, Barry Smith wrote:
> 
>> 
>>> On Oct 21, 2016, at 5:16 PM, Satish Balay  wrote:
>>> 
>>> The issue with this test code is - using MatLoad() twice [with the
>>> same object - without destroying it]. Not sure if that's supposed to
>>> work..
>> 
>>   If the file has two matrices in it then yes a second call to MatLoad() 
>> with the same matrix should just load in the second matrix from the file 
>> correctly. Perhaps we need a test in our test suite just to make sure that 
>> works.
> 
> This test code crashes with:
> 
> MatLoad()
> MatView()
> MatLoad()
> MatView()
> 
> Satish
> 
> 
> 
> balay@asterix /home/balay/download-pine/x/superlu_dist_test
> $ cat ex16.c
> static char help[] = "Reads matrix and debug solver\n\n";
> #include <petscmat.h>
> #undef __FUNCT__
> #define __FUNCT__ "main"
> int main(int argc,char **args)
> {
>   Mat            A;
>   PetscViewer    fd;                        /* viewer */
>   char           file[PETSC_MAX_PATH_LEN];  /* input file name */
>   PetscErrorCode ierr;
>   PetscBool      flg;
> 
>   PetscInitialize(&argc,&args,(char*)0,help);
> 
>   ierr = PetscOptionsGetString(NULL,NULL,"-f",file,PETSC_MAX_PATH_LEN,&flg);CHKERRQ(ierr);
>   if (!flg) SETERRQ(PETSC_COMM_WORLD,1,"Must indicate binary file with the -f option");
> 
>   ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
> 
>   ierr = PetscPrintf(PETSC_COMM_WORLD, "First MatLoad! \n");CHKERRQ(ierr);
>   ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,file,FILE_MODE_READ,&fd);CHKERRQ(ierr);
>   ierr = MatLoad(A,fd);CHKERRQ(ierr);
>   ierr = PetscViewerDestroy(&fd);CHKERRQ(ierr);
>   ierr = MatView(A,0);CHKERRQ(ierr);
> 
>   ierr = PetscPrintf(PETSC_COMM_WORLD, "Second MatLoad! \n");CHKERRQ(ierr);
>   ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,file,FILE_MODE_READ,&fd);CHKERRQ(ierr);
>   ierr = MatLoad(A,fd);CHKERRQ(ierr);
>   ierr = PetscViewerDestroy(&fd);CHKERRQ(ierr);
>   ierr = MatView(A,0);CHKERRQ(ierr);
> 
>   ierr = MatDestroy(&A);CHKERRQ(ierr);
>   ierr = PetscFinalize();
>   return 0;
> }
> 
> balay@asterix /home/balay/download-pine/x/superlu_dist_test
> $ make ex16
> mpicc -o ex16.o -c -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing 
> -Wno-unknown-pragmas -fvisibility=hidden -g3   -I/home/balay/petsc/include 
> -I/home/balay/petsc/arch-idx64-slu/include`pwd`/ex16.c
> mpicc -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas 
> -fvisibility=hidden -g3   -o ex16 ex16.o 
> -Wl,-rpath,/home/balay/petsc/arch-idx64-slu/lib 
> -L/home/balay/petsc/arch-idx64-slu/lib  -lpetsc 
> -Wl,-rpath,/home/balay/petsc/arch-idx64-slu/lib -lsuperlu_dist -llapack 
> -lblas -lparmetis -lmetis -lX11 -lpthread -lm 
> -Wl,-rpath,/home/balay/soft/mpich-3.1.4/lib 
> -L/home/balay/soft/mpich-3.1.4/lib 
> -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/6.2.1 
> -L/usr/lib/gcc/x86_64-redhat-linux/6.2.1 -lmpifort -lgfortran -lm -lgfortran 
> -lm -lquadmath -lm -lmpicxx -lstdc++ 
> -Wl,-rpath,/home/balay/soft/mpich-3.1.4/lib 
> -L/home/balay/soft/mpich-3.1.4/lib 
> -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/6.2.1 
> -L/usr/lib/gcc/x86_64-redhat-linux/6.2.1 -ldl 
> -Wl,-rpath,/home/balay/soft/mpich-3.1.4/lib -lmpi -lgcc_s -ldl 
> /usr/bin/rm -f ex16.o
> balay@asterix /home/balay/download-pine/x/superlu_dist_test
> $ mpiexec -n 2 ./ex16 -f ~/datafiles/matrices/small
> First MatLoad! 
> Mat Object: 2 MPI processes
>  type: mpiaij
> row 0: (0, 4.)  (1, -1.)  (6, -1.) 
> row 1: (0, -1.)  (1, 4.)  (2, -1.)  (7, -1.) 
> row 2: (1, -1.)  (2, 4.)  (3, -1.)  (8, -1.) 
> row 3: (2, -1.)  (3, 4.)  (4, -1.)  (9, -1.) 
> row 4: (3, -1.)  (4, 4.)  (5, -1.)  (10, -1.) 
> row 5: (4, -1.)  (5, 4.)  (11, -1.) 
> row 6: (0, -1.)  (6, 4.)  (7, -1.)  (12, -1.) 
> row 7: (1, -1.)  (6, -1.)  (7, 4.)  (8, -1.)  (13, -1.) 
> row 8: (2, -1.)  (7, -1.)  (8, 4.)  (9, -1.)  (14, -1.) 
> row 9: (3, -1.)  (8, -1.)  (9, 4.)  (10, -1.)  (15, -1.) 
> row 10: (4, -1.)  (9, -1.)  (10, 4.)  (11, -1.)  (16, -1.) 
> row 11: (5, -1.)  (10, -1.)  (11, 4.)  (17, -1.) 
> row 12: (6, -1.)  (12, 4.)  (13, -1.)  (18, -1.) 
> row 13: (7, -1.)  (12, -1.)  (13, 4.)  (14, -1.)  (19, -1.) 
> row 14: (8, -1.)  (13, -1.)  (14, 4.)  (15, -1.)  (20, -1.) 
> row 15: (9, -1.)  (14, -1.)  (15, 4.)  (16, -1.)  (21, -1.) 
> row 16: (10, -1.)  (15, -1.)  (16, 4.)  (17, -1.)  (22, -1.) 
> row 17: (11, -1.)  (16, -1.)  (17, 4.)  (23, -1.) 
> row 18: (12, -1.)  (18, 4.)  (19, -1.)  (24, -1.) 
> row 19: (13, -1.)  (18, -1.)  (19, 4.)  (20, -1.)  (25, -1.) 
> row 20: (14, -1.)  (19, -1.)  (20, 4.)  (21, -1.)  (26, -1.) 
> row 21: (15, -1.)  (20, -1.)  (21, 4.)  (22, -1.)  (27, -1.) 
> row 22: (16, -1.)  (21, -1.)  (22, 4.)  (23, -1.)  (28, -1.) 
> row 23: (17, -1.)  (22, -1.)  (23, 4.)  (29, -1.) 
> row 24: (18, -1.)  (24, 4.)  (25, -1.)  (30, -1.) 
> row 25: (19, -1.)  (24, -1.)  (25, 4.)  (26, -1.)  (31, -1.) 
> row 26: (20, -1.)  (25, -1.)  (26, 4.)  (27, -1.)  (32, 

Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-21 Thread Satish Balay
On Fri, 21 Oct 2016, Barry Smith wrote:

> 
> > On Oct 21, 2016, at 5:16 PM, Satish Balay  wrote:
> > 
> > The issue with this test code is - using MatLoad() twice [with the
> > same object - without destroying it]. Not sure if that's supposed to
> > work..
> 
>If the file has two matrices in it then yes a second call to MatLoad() 
> with the same matrix should just load in the second matrix from the file 
> correctly. Perhaps we need a test in our test suite just to make sure that 
> works.

This test code crashes with:

MatLoad()
MatView()
MatLoad()
MatView()

Satish



balay@asterix /home/balay/download-pine/x/superlu_dist_test
$ cat ex16.c
static char help[] = "Reads matrix and debug solver\n\n";
#include <petscmat.h>
#undef __FUNCT__
#define __FUNCT__ "main"
int main(int argc,char **args)
{
  Mat            A;
  PetscViewer    fd;                        /* viewer */
  char           file[PETSC_MAX_PATH_LEN];  /* input file name */
  PetscErrorCode ierr;
  PetscBool      flg;

  PetscInitialize(&argc,&args,(char*)0,help);

  ierr = PetscOptionsGetString(NULL,NULL,"-f",file,PETSC_MAX_PATH_LEN,&flg);CHKERRQ(ierr);
  if (!flg) SETERRQ(PETSC_COMM_WORLD,1,"Must indicate binary file with the -f option");

  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);

  ierr = PetscPrintf(PETSC_COMM_WORLD, "First MatLoad! \n");CHKERRQ(ierr);
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,file,FILE_MODE_READ,&fd);CHKERRQ(ierr);
  ierr = MatLoad(A,fd);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&fd);CHKERRQ(ierr);
  ierr = MatView(A,0);CHKERRQ(ierr);

  ierr = PetscPrintf(PETSC_COMM_WORLD, "Second MatLoad! \n");CHKERRQ(ierr);
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,file,FILE_MODE_READ,&fd);CHKERRQ(ierr);
  ierr = MatLoad(A,fd);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&fd);CHKERRQ(ierr);
  ierr = MatView(A,0);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

balay@asterix /home/balay/download-pine/x/superlu_dist_test
$ make ex16
mpicc -o ex16.o -c -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing 
-Wno-unknown-pragmas -fvisibility=hidden -g3   -I/home/balay/petsc/include 
-I/home/balay/petsc/arch-idx64-slu/include`pwd`/ex16.c
mpicc -fPIC  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas 
-fvisibility=hidden -g3   -o ex16 ex16.o 
-Wl,-rpath,/home/balay/petsc/arch-idx64-slu/lib 
-L/home/balay/petsc/arch-idx64-slu/lib  -lpetsc 
-Wl,-rpath,/home/balay/petsc/arch-idx64-slu/lib -lsuperlu_dist -llapack -lblas 
-lparmetis -lmetis -lX11 -lpthread -lm 
-Wl,-rpath,/home/balay/soft/mpich-3.1.4/lib -L/home/balay/soft/mpich-3.1.4/lib 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/6.2.1 
-L/usr/lib/gcc/x86_64-redhat-linux/6.2.1 -lmpifort -lgfortran -lm -lgfortran 
-lm -lquadmath -lm -lmpicxx -lstdc++ 
-Wl,-rpath,/home/balay/soft/mpich-3.1.4/lib -L/home/balay/soft/mpich-3.1.4/lib 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/6.2.1 
-L/usr/lib/gcc/x86_64-redhat-linux/6.2.1 -ldl 
-Wl,-rpath,/home/balay/soft/mpich-3.1.4/lib -lmpi -lgcc_s -ldl 
/usr/bin/rm -f ex16.o
balay@asterix /home/balay/download-pine/x/superlu_dist_test
$ mpiexec -n 2 ./ex16 -f ~/datafiles/matrices/small
First MatLoad! 
Mat Object: 2 MPI processes
  type: mpiaij
row 0: (0, 4.)  (1, -1.)  (6, -1.) 
row 1: (0, -1.)  (1, 4.)  (2, -1.)  (7, -1.) 
row 2: (1, -1.)  (2, 4.)  (3, -1.)  (8, -1.) 
row 3: (2, -1.)  (3, 4.)  (4, -1.)  (9, -1.) 
row 4: (3, -1.)  (4, 4.)  (5, -1.)  (10, -1.) 
row 5: (4, -1.)  (5, 4.)  (11, -1.) 
row 6: (0, -1.)  (6, 4.)  (7, -1.)  (12, -1.) 
row 7: (1, -1.)  (6, -1.)  (7, 4.)  (8, -1.)  (13, -1.) 
row 8: (2, -1.)  (7, -1.)  (8, 4.)  (9, -1.)  (14, -1.) 
row 9: (3, -1.)  (8, -1.)  (9, 4.)  (10, -1.)  (15, -1.) 
row 10: (4, -1.)  (9, -1.)  (10, 4.)  (11, -1.)  (16, -1.) 
row 11: (5, -1.)  (10, -1.)  (11, 4.)  (17, -1.) 
row 12: (6, -1.)  (12, 4.)  (13, -1.)  (18, -1.) 
row 13: (7, -1.)  (12, -1.)  (13, 4.)  (14, -1.)  (19, -1.) 
row 14: (8, -1.)  (13, -1.)  (14, 4.)  (15, -1.)  (20, -1.) 
row 15: (9, -1.)  (14, -1.)  (15, 4.)  (16, -1.)  (21, -1.) 
row 16: (10, -1.)  (15, -1.)  (16, 4.)  (17, -1.)  (22, -1.) 
row 17: (11, -1.)  (16, -1.)  (17, 4.)  (23, -1.) 
row 18: (12, -1.)  (18, 4.)  (19, -1.)  (24, -1.) 
row 19: (13, -1.)  (18, -1.)  (19, 4.)  (20, -1.)  (25, -1.) 
row 20: (14, -1.)  (19, -1.)  (20, 4.)  (21, -1.)  (26, -1.) 
row 21: (15, -1.)  (20, -1.)  (21, 4.)  (22, -1.)  (27, -1.) 
row 22: (16, -1.)  (21, -1.)  (22, 4.)  (23, -1.)  (28, -1.) 
row 23: (17, -1.)  (22, -1.)  (23, 4.)  (29, -1.) 
row 24: (18, -1.)  (24, 4.)  (25, -1.)  (30, -1.) 
row 25: (19, -1.)  (24, -1.)  (25, 4.)  (26, -1.)  (31, -1.) 
row 26: (20, -1.)  (25, -1.)  (26, 4.)  (27, -1.)  (32, -1.) 
row 27: (21, -1.)  (26, -1.)  (27, 4.)  (28, -1.)  (33, -1.) 
row 28: (22, -1.)  (27, -1.)  (28, 4.)  (29, -1.)  (34, -1.) 
row 29: (23, -1.)  (28, -1.)  (29, 4.)  (35, -1.) 
row 30: (24, -1.)  (30, 4.)  (31, -1.) 
row 31: (25, -1.)  (30, -1.)  (31, 4.)  (32, -1.) 
row 32: (26, -1.)  (31, 

Re: [petsc-users] matrix preallocation

2016-10-21 Thread Barry Smith

   We don't currently have a MatReset() (corresponding to PCReset() etc.), but it 
is the right thing for you in this situation, I think.

   A shallow MatReset() would destroy all the matrix data structures but not 
the layout information (likely you want this one), while a deep reset would even 
get rid of the size information and leave the matrix as if it had just come from 
MatCreate().

If you want to start on a MatReset() and post a pull request, we can get it in. 
Note that you will need a MatReset_SeqAIJ() and a MatReset_MPIAIJ() to start 
with.
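
   A minimal sketch of what such a shallow reset might look like (MatReset() does 
not exist yet; the dispatch through A->ops->reset and the MatReset_SeqAIJ()/
MatReset_MPIAIJ() implementations named above are only an illustration of the 
proposal, not current PETSc API):

   PetscErrorCode MatReset(Mat A)
   {
     PetscErrorCode ierr;

     PetscFunctionBegin;
     PetscValidHeaderSpecific(A,MAT_CLASSID,1);
     if (A->ops->reset) {   /* hypothetical slot, filled by MatReset_SeqAIJ(), MatReset_MPIAIJ(), ... */
       ierr = (*A->ops->reset)(A);CHKERRQ(ierr);   /* free the per-type data structures */
     }
     A->assembled    = PETSC_FALSE;   /* shallow variant: keep A->rmap/A->cmap (the PetscLayout) */
     A->preallocated = PETSC_FALSE;
     PetscFunctionReturn(0);
   }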

Barry

> On Oct 21, 2016, at 4:51 PM, Kong, Fande  wrote:
> 
> Hi,
> 
> For mechanics problems, the contact surface changes during each nonlinear 
> iteration. Therefore, the sparsity of the matrix also changes during each 
> nonlinear iteration. We know that preallocation is important for performance.
> 
> My question is: is it possible to re-allocate memory during each nonlinear 
> iteration?
> 
> Fande



Re: [petsc-users] matrix preallocation

2016-10-21 Thread Jed Brown
"Kong, Fande"  writes:

> Hi,
>
> For mechanics problems, the contact surface changes during each nonlinear
> iteration. Therefore, the sparsity of the matrix also changes during each
> nonlinear iteration. We know that preallocation is important for performance.
>
> My question is: is it possible to re-allocate memory during each nonlinear
> iteration?

Sure, call MatXAIJSetPreallocation inside your SNES Jacobian function.
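
A minimal sketch of that pattern, assuming a hypothetical application context 
(AppCtx) and a placeholder user routine ComputeCurrentSparsity() that recounts 
the nonzeros for the current contact configuration; only MatXAIJSetPreallocation() 
and the SNES Jacobian callback signature are real API here:

  #include <petscsnes.h>

  typedef struct {
    PetscInt *dnnz, *onnz;   /* per-row diagonal / off-diagonal nonzero counts */
  } AppCtx;

  extern PetscErrorCode ComputeCurrentSparsity(AppCtx*,PetscInt**,PetscInt**); /* placeholder user routine */

  PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat P, void *ctx)
  {
    AppCtx        *user = (AppCtx*)ctx;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    /* recount the sparsity for the new contact state */
    ierr = ComputeCurrentSparsity(user, &user->dnnz, &user->onnz);CHKERRQ(ierr);
    /* re-preallocate the Jacobian with the new per-row counts */
    ierr = MatXAIJSetPreallocation(P, 1, user->dnnz, user->onnz, NULL, NULL);CHKERRQ(ierr);
    /* ... MatSetValues(P,...) for the new entries ... */
    ierr = MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

The callback would be registered in the usual way with SNESSetJacobian(snes, J, P, FormJacobian, &user).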




Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-21 Thread Barry Smith

> On Oct 21, 2016, at 5:16 PM, Satish Balay  wrote:
> 
> The issue with this test code is - using MatLoad() twice [with the
> same object - without destroying it]. Not sure if that's supposed to
> work..

   If the file has two matrices in it then yes a second call to MatLoad() with 
the same matrix should just load in the second matrix from the file correctly. 
Perhaps we need a test in our test suite just to make sure that works.

  Barry



> 
> Satish
> 
> On Fri, 21 Oct 2016, Hong wrote:
> 
>> I can reproduce the error on a linux machine with petsc-maint. It crashes
>> at 2nd solve, on both processors:
>> 
>> Program received signal SIGSEGV, Segmentation fault.
>> 0x7f051dc835bd in pdgsequ (A=0x1563910, r=0x176dfe0, c=0x178f7f0,
>>rowcnd=0x7fffcb8dab30, colcnd=0x7fffcb8dab38, amax=0x7fffcb8dab40,
>>info=0x7fffcb8dab4c, grid=0x1563858)
>>at
>> /sandbox/hzhang/petsc/arch-linux-gcc-gfortran/externalpackages/git.superlu_dist/SRC/pdgsequ.c:182
>> 182 c[jcol] = SUPERLU_MAX( c[jcol], fabs(Aval[j]) * r[irow]
>> );
>> 
>> The version of superlu_dist:
>> commit 0b5369f304507f1c7904a913f4c0c86777a60639
>> Author: Xiaoye Li 
>> Date:   Thu May 26 11:33:19 2016 -0700
>> 
>>rename 'struct pair' to 'struct superlu_pair'.
>> 
>> Hong
>> 
>> On Fri, Oct 21, 2016 at 5:36 AM, Anton Popov  wrote:
>> 
>>> 
>>> On 10/19/2016 05:22 PM, Anton Popov wrote:
>>> 
>>> I looked at each valgrind-complained item in your email dated Oct. 11.
>>> Those reports are really superficial; I don't see anything  wrong with
>>> those lines (mostly uninitialized variables) singled out.  I did a few
>>> tests with the latest version in github,  all went fine.
>>> 
>>> Perhaps you can print your matrix that caused the problem; I can run it using
>>> your matrix.
>>> 
>>> Sherry
>>> 
>>> Hi Sherry,
>>> 
>>> I finally figured out a minimalistic setup (attached) that reproduces the
>>> problem.
>>> 
>>> I use petsc-maint:
>>> 
>>> git clone -b maint https://bitbucket.org/petsc/petsc.git
>>> 
>>> and configure it in the debug mode without optimization using the options:
>>> 
>>> --download-superlu_dist=1 \
>>> --download-superlu_dist-commit=origin/maint \
>>> 
>>> Compile the test, assuming PETSC_DIR points to the described petsc
>>> installation:
>>> 
>>> make ex16
>>> 
>>> Run with:
>>> 
>>> mpirun -n 2 ./ex16 -f binaryoutput -pc_type lu
>>> -pc_factor_mat_solver_package superlu_dist
>>> 
>>> Matrix partitioning between the processors will be completely the same as
>>> in our code (hard-coded).
>>> 
>>> I factorize the same matrix twice with the same PC object. Remarkably it
>>> runs fine for the first time, but fails for the second.
>>> 
>>> Thank you very much for looking into this problem.
>>> 
>>> Cheers,
>>> Anton
>>> 
>> 
> 



Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-21 Thread Satish Balay
The issue with this test code is - using MatLoad() twice [with the
same object - without destroying it]. Not sure if that's supposed to
work..

Satish

On Fri, 21 Oct 2016, Hong wrote:

> I can reproduce the error on a linux machine with petsc-maint. It crashes
> at 2nd solve, on both processors:
> 
> Program received signal SIGSEGV, Segmentation fault.
> 0x7f051dc835bd in pdgsequ (A=0x1563910, r=0x176dfe0, c=0x178f7f0,
> rowcnd=0x7fffcb8dab30, colcnd=0x7fffcb8dab38, amax=0x7fffcb8dab40,
> info=0x7fffcb8dab4c, grid=0x1563858)
> at
> /sandbox/hzhang/petsc/arch-linux-gcc-gfortran/externalpackages/git.superlu_dist/SRC/pdgsequ.c:182
> 182 c[jcol] = SUPERLU_MAX( c[jcol], fabs(Aval[j]) * r[irow]
> );
> 
> The version of superlu_dist:
> commit 0b5369f304507f1c7904a913f4c0c86777a60639
> Author: Xiaoye Li 
> Date:   Thu May 26 11:33:19 2016 -0700
> 
> rename 'struct pair' to 'struct superlu_pair'.
> 
> Hong
> 
> On Fri, Oct 21, 2016 at 5:36 AM, Anton Popov  wrote:
> 
> >
> > On 10/19/2016 05:22 PM, Anton Popov wrote:
> >
> > I looked at each valgrind-complained item in your email dated Oct. 11.
> > Those reports are really superficial; I don't see anything  wrong with
> > those lines (mostly uninitialized variables) singled out.  I did a few
> > tests with the latest version in github,  all went fine.
> >
> > Perhaps you can print your matrix that caused the problem; I can run it using
> > your matrix.
> >
> > Sherry
> >
> > Hi Sherry,
> >
> > I finally figured out a minimalistic setup (attached) that reproduces the
> > problem.
> >
> > I use petsc-maint:
> >
> > git clone -b maint https://bitbucket.org/petsc/petsc.git
> >
> > and configure it in the debug mode without optimization using the options:
> >
> > --download-superlu_dist=1 \
> > --download-superlu_dist-commit=origin/maint \
> >
> > Compile the test, assuming PETSC_DIR points to the described petsc
> > installation:
> >
> > make ex16
> >
> > Run with:
> >
> > mpirun -n 2 ./ex16 -f binaryoutput -pc_type lu
> > -pc_factor_mat_solver_package superlu_dist
> >
> > Matrix partitioning between the processors will be completely the same as
> > in our code (hard-coded).
> >
> > I factorize the same matrix twice with the same PC object. Remarkably it
> > runs fine for the first time, but fails for the second.
> >
> > Thank you very much for looking into this problem.
> >
> > Cheers,
> > Anton
> >
> 



[petsc-users] matrix preallocation

2016-10-21 Thread Kong, Fande
Hi,

For mechanics problems, the contact surface changes during each nonlinear
iteration. Therefore, the sparsity of the matrix also changes during each
nonlinear iteration. We know that preallocation is important for performance.

My question is: is it possible to re-allocate memory during each nonlinear
iteration?

Fande


Re: [petsc-users] Column #j is wrong in parallel from message "Inserting a new nonzero (i, j) into matrix"

2016-10-21 Thread Dave May
On 21 October 2016 at 18:55, Eric Chamberland <
eric.chamberl...@giref.ulaval.ca> wrote:

> Hi,
>
> I am on a new issue with a message:
> [1]PETSC ERROR: - Error Message
> --
> [1]PETSC ERROR: Argument out of range
> [1]PETSC ERROR: New nonzero at (374328,1227) caused a malloc
> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn
> off this check
> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [1]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016
> [1]PETSC ERROR: /pmi/ericc/projetm4/depots_prepush/BIB/bin/BIBMEF.opt on
> a arch-linux2-c-debug named lorien by eric Fri Oct 21 13:46:51 2016
> [1]PETSC ERROR: Configure options 
> --prefix=/opt/petsc-3.7.2_debug_matmatmult_mpi
> --with-mpi-compilers=1 --with-make-np=12 --with-shared-libraries=1
> --with-mpi-dir=/opt/openmpi-1.10.2 --with-debugging=yes
> --with-mkl_pardiso=1 --with-mkl_pardiso-dir=/opt/intel/composerxe/mkl
> --download-ml=yes --download-mumps=yes --download-superlu=yes
> --download-superlu_dist=yes --download-parmetis=yes --download-ptscotch=yes
> --download-metis=yes --download-suitesparse=yes --download-hypre=yes
> --with-scalapack=1 --with-scalapack-include=/opt/intel/composerxe/mkl/include
> --with-scalapack-lib="-L/opt/intel/composerxe/mkl/lib/intel64
> -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64"
> --with-blas-lapack-dir=/opt/intel/composerxe/mkl/lib/intel64
> [1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 616 in
> /groshd/ericc/petsc-3.7.2-debug/src/mat/impls/aij/mpi/mpiaij.c
> [1]PETSC ERROR: #2 MatAssemblyEnd_MPIAIJ() line 724 in
> /groshd/ericc/petsc-3.7.2-debug/src/mat/impls/aij/mpi/mpiaij.c
> [1]PETSC ERROR: #3 MatAssemblyEnd() line 5194 in
> /groshd/ericc/petsc-3.7.2-debug/src/mat/interface/matrix.c
>
> I am starting to debug, but I just want to be sure that the indices 374328
> and 1227 are both global indices...
>

They are.
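
(As a tiny sketch, assuming an assembled MPIAIJ matrix A: plain MatSetValues() 
always takes global indices, so the pair reported in the error message can be 
used directly; only MatSetValuesLocal() goes through the mapping set with 
MatSetLocalToGlobalMapping().)

  PetscInt    row = 374328, col = 1227;   /* global indices, as in the message */
  PetscScalar v   = 1.0;
  ierr = MatSetValues(A, 1, &row, 1, &col, &v, ADD_VALUES);CHKERRQ(ierr);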


>
> re-reading the thread makes me think yes... but I am not 100% sure...
>
> Thanks,
>
> Eric
>
>
>
> On 26/03/15 09:52 PM, Barry Smith wrote:
>
>>
>>   Eric,
>>
>>   I have now updated all the standard MPI matrix types AIJ, BAIJ, SBAIJ
>> to print the correct global indices in the error messages when a new
>> nonzero location is generated thus making debugging this issue easier. In
>> the branches barry/fix-inserting-new-nonzero-column-location, next and
>> the next release.
>>
>>    Thanks for pushing on this. The previous code was too "developer
>> centric" and not "user centric" enough.
>>
>>Barry
>>
>> On Mar 25, 2015, at 1:03 PM, Eric Chamberland <
>>> eric.chamberl...@giref.ulaval.ca> wrote:
>>>
>>> Hi,
>>>
>>> while looking for where in the world do I insert the (135,9) entry in my
>>> matrix, I have discovered that the column # shown is wrong in parallel!
>>>
>>> I am using PETsc 3.5.3.
>>>
>>> The full error message is:
>>>
>>> [0]PETSC ERROR: MatSetValues_MPIAIJ() line 564 in
>>> /home/mefpp_ericc/petsc-3.5.3/src/mat/impls/aij/mpi/mpiaij.c Inserting
>>> a new nonzero (135, 9) into matrix
>>>
>>> This line code is a call to a #defined macro:
>>>
>>> MatSetValues_SeqAIJ_B_Private(row,col,value,addv);
>>>
>>> where the "col" parameter is not equal to "in[j]"!!!
>>>
>>> in gdb, printing "in[j]" gave me:
>>>
>>> print in[j]
>>> $6 = 537
>>>
>>> while "col" is:
>>>
>>> print col
>>> $7 = 9
>>>
>>> So, I expected to have a message telling me that (135,537), and not
>>> (135,9), is the new matrix entry!!!
>>>
>>> Would it be a big work to fix this so that the col # displayed is
>>> correct?
>>>
>>> Thanks!
>>>
>>> Eric
>>>
>>


Re: [petsc-users] Column #j is wrong in parallel from message "Inserting a new nonzero (i, j) into matrix"

2016-10-21 Thread Eric Chamberland

Hi,

I am on a new issue with a message:
[1]PETSC ERROR: - Error Message 
--

[1]PETSC ERROR: Argument out of range
[1]PETSC ERROR: New nonzero at (374328,1227) caused a malloc
Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn 
off this check
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html 
for trouble shooting.

[1]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016
[1]PETSC ERROR: /pmi/ericc/projetm4/depots_prepush/BIB/bin/BIBMEF.opt on 
a arch-linux2-c-debug named lorien by eric Fri Oct 21 13:46:51 2016
[1]PETSC ERROR: Configure options 
--prefix=/opt/petsc-3.7.2_debug_matmatmult_mpi --with-mpi-compilers=1 
--with-make-np=12 --with-shared-libraries=1 
--with-mpi-dir=/opt/openmpi-1.10.2 --with-debugging=yes 
--with-mkl_pardiso=1 --with-mkl_pardiso-dir=/opt/intel/composerxe/mkl 
--download-ml=yes --download-mumps=yes --download-superlu=yes 
--download-superlu_dist=yes --download-parmetis=yes 
--download-ptscotch=yes --download-metis=yes --download-suitesparse=yes 
--download-hypre=yes --with-scalapack=1 
--with-scalapack-include=/opt/intel/composerxe/mkl/include 
--with-scalapack-lib="-L/opt/intel/composerxe/mkl/lib/intel64 
-lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64" 
--with-blas-lapack-dir=/opt/intel/composerxe/mkl/lib/intel64
[1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 616 in 
/groshd/ericc/petsc-3.7.2-debug/src/mat/impls/aij/mpi/mpiaij.c
[1]PETSC ERROR: #2 MatAssemblyEnd_MPIAIJ() line 724 in 
/groshd/ericc/petsc-3.7.2-debug/src/mat/impls/aij/mpi/mpiaij.c
[1]PETSC ERROR: #3 MatAssemblyEnd() line 5194 in 
/groshd/ericc/petsc-3.7.2-debug/src/mat/interface/matrix.c


I am starting to debug, but I just want to be sure that the indices 
374328 and 1227 are both global indices...


re-reading the thread makes me think yes... but I am not 100% sure...

Thanks,

Eric


On 26/03/15 09:52 PM, Barry Smith wrote:


  Eric,

  I have now updated all the standard MPI matrix types AIJ, BAIJ, SBAIJ to 
print the correct global indices in the error messages when a new nonzero 
location is generated thus making debugging this issue easier. In the branches 
barry/fix-inserting-new-nonzero-column-location, next and the next release.

   Thanks for pushing on this. The previous code was too "developer centric" and not 
"user centric" enough.

   Barry


On Mar 25, 2015, at 1:03 PM, Eric Chamberland 
 wrote:

Hi,

while looking for where in the world do I insert the (135,9) entry in my 
matrix, I have discovered that the column # shown is wrong in parallel!

I am using PETsc 3.5.3.

The full error message is:

[0]PETSC ERROR: MatSetValues_MPIAIJ() line 564 in 
/home/mefpp_ericc/petsc-3.5.3/src/mat/impls/aij/mpi/mpiaij.c Inserting a new 
nonzero (135, 9) into matrix

This line code is a call to a #defined macro:

MatSetValues_SeqAIJ_B_Private(row,col,value,addv);

where the "col" parameter is not equal to "in[j]"!!!

in gdb, printing "in[j]" gave me:

print in[j]
$6 = 537

while "col" is:

print col
$7 = 9

So, I expected to have a message telling me that (135,537), and not (135,9), is 
the new matrix entry!!!

Would it be a big work to fix this so that the col # displayed is correct?

Thanks!

Eric


Re: [petsc-users] Looking for a quick example of a symmetric KKT system

2016-10-21 Thread Jed Brown
Why doesn't a Stokes problem fulfill your needs?
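
(For context: a discretized Stokes problem already has the symmetric indefinite 
KKT/saddle-point structure, which is presumably the point of the suggestion, with 
the SPD viscous block A playing the role of the Hessian and the discrete 
divergence B that of the constraint Jacobian, and the problem size scales with 
the mesh:

  \begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
  \begin{pmatrix} u \\ p \end{pmatrix} =
  \begin{pmatrix} f \\ g \end{pmatrix} .)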

Patrick Sanan  writes:

> Yes, but AFAIK that example produces a 2x2 system - I was hoping for
> something with a variable problem size, ideally with some sort of
> physics motivating the underlying optimization problem.
>
> On Fri, Oct 21, 2016 at 7:23 PM, Justin Chang  wrote:
>> Something like this?
>>
>> http://www.mcs.anl.gov/petsc/petsc-current/src/tao/constrained/examples/tutorials/toy.c.html
>>
>>
>> On Friday, October 21, 2016, Patrick Sanan  wrote:
>>>
>>> Are there any examples already in PETSc or TAO that assemble such a
>>> system (which could thus be dumped)? SNES example ex73f90t assembles a
>>> non-symmetric KKT system.




Re: [petsc-users] Looking for a quick example of a symmetric KKT system

2016-10-21 Thread Patrick Sanan
Yes, but AFAIK that example produces a 2x2 system - I was hoping for
something with a variable problem size, ideally with some sort of
physics motivating the underlying optimization problem.

On Fri, Oct 21, 2016 at 7:23 PM, Justin Chang  wrote:
> Something like this?
>
> http://www.mcs.anl.gov/petsc/petsc-current/src/tao/constrained/examples/tutorials/toy.c.html
>
>
> On Friday, October 21, 2016, Patrick Sanan  wrote:
>>
>> Are there any examples already in PETSc or TAO that assemble such a
>> system (which could thus be dumped)? SNES example ex73f90t assembles a
>> non-symmetric KKT system.


Re: [petsc-users] Looking for a quick example of a symmetric KKT system

2016-10-21 Thread Justin Chang
Something like this?

http://www.mcs.anl.gov/petsc/petsc-current/src/tao/constrained/examples/tutorials/toy.c.html

On Friday, October 21, 2016, Patrick Sanan  wrote:

> Are there any examples already in PETSc or TAO that assemble such a
> system (which could thus be dumped)? SNES example ex73f90t assembles a
> non-symmetric KKT system.
>


[petsc-users] Looking for a quick example of a symmetric KKT system

2016-10-21 Thread Patrick Sanan
Are there any examples already in PETSc or TAO that assemble such a
system (which could thus be dumped)? SNES example ex73f90t assembles a
non-symmetric KKT system.


Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-21 Thread Hong
I can reproduce the error on a linux machine with petsc-maint. It crashes
at 2nd solve, on both processors:

Program received signal SIGSEGV, Segmentation fault.
0x7f051dc835bd in pdgsequ (A=0x1563910, r=0x176dfe0, c=0x178f7f0,
rowcnd=0x7fffcb8dab30, colcnd=0x7fffcb8dab38, amax=0x7fffcb8dab40,
info=0x7fffcb8dab4c, grid=0x1563858)
at
/sandbox/hzhang/petsc/arch-linux-gcc-gfortran/externalpackages/git.superlu_dist/SRC/pdgsequ.c:182
182 c[jcol] = SUPERLU_MAX( c[jcol], fabs(Aval[j]) * r[irow]
);

The version of superlu_dist:
commit 0b5369f304507f1c7904a913f4c0c86777a60639
Author: Xiaoye Li 
Date:   Thu May 26 11:33:19 2016 -0700

rename 'struct pair' to 'struct superlu_pair'.

Hong

On Fri, Oct 21, 2016 at 5:36 AM, Anton Popov  wrote:

>
> On 10/19/2016 05:22 PM, Anton Popov wrote:
>
> I looked at each valgrind-complained item in your email dated Oct. 11.
> Those reports are really superficial; I don't see anything  wrong with
> those lines (mostly uninitialized variables) singled out.  I did a few
> tests with the latest version in github,  all went fine.
>
> Perhaps you can print your matrix that caused the problem; I can run it using
> your matrix.
>
> Sherry
>
> Hi Sherry,
>
> I finally figured out a minimalistic setup (attached) that reproduces the
> problem.
>
> I use petsc-maint:
>
> git clone -b maint https://bitbucket.org/petsc/petsc.git
>
> and configure it in the debug mode without optimization using the options:
>
> --download-superlu_dist=1 \
> --download-superlu_dist-commit=origin/maint \
>
> Compile the test, assuming PETSC_DIR points to the described petsc
> installation:
>
> make ex16
>
> Run with:
>
> mpirun -n 2 ./ex16 -f binaryoutput -pc_type lu
> -pc_factor_mat_solver_package superlu_dist
>
> Matrix partitioning between the processors will be completely the same as
> in our code (hard-coded).
>
> I factorize the same matrix twice with the same PC object. Remarkably it
> runs fine for the first time, but fails for the second.
>
> Thank you very much for looking into this problem.
>
> Cheers,
> Anton
>


[petsc-users] How to scatter values

2016-10-21 Thread 丁老师
Dear professor:
 I partitioned my 2D Cartesian grid over 4 rows x 4 cols of CPUs:

   12  13  14  15
    8   9  10  11
    4   5   6   7
    0   1   2   3

   Now I need to scatter the values belonging to CPU 5 to every CPU along the x 
direction (4, 5, 6, 7); which function should I use?



Regards

Re: [petsc-users] PetscFE questions

2016-10-21 Thread Matthew Knepley
On Fri, Oct 21, 2016 at 2:26 AM, Julian Andrej  wrote:

> On Thu, Oct 20, 2016 at 5:18 PM, Matthew Knepley 
> wrote:
> > On Thu, Oct 20, 2016 at 9:42 AM, Julian Andrej 
> wrote:
> >>
> >> Thanks for the suggestion. I guess DMCreateSubDM can work, but is
> >> cumbersome to handle for the normal solution process since the mass
> >> matrix for example is not a seperate field.
> >
> >
> > I did not understand what you meant by "parts of the physics". If you
> just
> > want to make a different operator, then swap out the PetscDS from the DM.
> > That holds the pointwise functions and discretizations.
> >
>
> Yes, it's basically a different operator! That's a really smart design;
> I can just create different PetscDS objects and stick them in to
> assemble the operator.
>
>   /* Assemble mass operator */
>   DMSetDS(dm, ds_mass);
>   DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->M, ctx->M, NULL);
>   /* Assemble laplacian operator */
>   DMSetDS(dm, ds_laplacian);
>   DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->J, ctx->J, NULL);
>
> There is one thing that bothers me just a bit. Every time you call
> DMSetDS, the old PetscDS object is destroyed and you have to recreate
> the object in case you want to reassemble that operator.
>
> src/dm/interface/dm.c:3889:  ierr = PetscDSDestroy(&dm->prob);CHKERRQ(ierr);
>
> Maybe it is just my specific use case but something to think about.


If you want to keep them around, you should do this:

DMGetDS(dm, &oldds);
PetscObjectReference((PetscObject) oldds);
DMSetDS(dm, newds);

DMSetDS(dm, oldds);
PetscObjectDereference((PetscObject) oldds);

  Thanks,

 Matt


> >>
> >> src/snes/examples/tutorials/ex77 handles a seperate field for the
> >> nullspace, if anyone is interested in that.
> >>
> >> An intuitive way was just copying the DM and describing a new problem on
> >> it.
> >>
> >>   DM dm_mass;
> >>   PetscDS ds_mass;
> >>   Vec dummy;
> >>   PetscInt id = 1;
> >>   petsc_call(DMCreateGlobalVector(dm, &dummy));
> >>   petsc_call(DMClone(ctx->dm, &dm_mass));
> >>   petsc_call(DMGetDS(dm_mass, &ds_mass));
> >>   petsc_call(PetscDSSetDiscretization(ds_mass, 0, (PetscObject)fe));
> >>   petsc_call(PetscDSSetJacobian(ds_mass, 0, 0, mass_kernel, NULL, NULL, NULL));
> >>   petsc_call(PetscDSAddBoundary(ds_mass, PETSC_TRUE, "wall", "marker",
> >> 0, 0, NULL, (void (*)())ctx->exact_funcs[0], 1, &id, ctx));
> >>   petsc_call(DMCreateMatrix(dm_mass, &ctx->M));
> >>   petsc_call(DMPlexSNESComputeJacobianFEM(dm_mass, dummy, ctx->M, ctx->M, NULL));
> >>
> >> is this an intended way to assemble a jacobian based on a weak form?
> >> The memory overhead for a DM copy isn't huge on the first sight.
> >
> >
> > Its O(1).
> >
> >>
> >> And a much more important question. Is there any mathematical
> >> description how exactly you handle dirichlet boundary conditions here?
> >
> >
> > Right now, you can do two things:
> >
> >   1) Handle it yourself
> >
> > or
> >
> >   2) eliminate particular dofs
> >
> > If you use 2), these dofs are eliminated from the global vector. They
> remain
> > in the
> > local vector, and boundary values are inserted before local vectors are
> > passed to
> > assembly routines.
> >
> >Matt
> >
>
> Thank you again for your help and suggestions.
>
> Regards
> Julian
>
> >>
> >> On first sight it looks like condensing the nodes only to
> >> non-essential nodes and then projecting them back in the solution
> >> vector. If thats teh case I don't understand how you "augment" the
> >> solution with the boundary nodes.
> >>
> >> Regards
> >> Julian
> >>
> >>
> >> On Wed, Oct 19, 2016 at 11:51 AM, Matthew Knepley 
> >> wrote:
> >> > On Tue, Oct 18, 2016 at 7:38 AM, Julian Andrej 
> >> > wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> i have general question about PetscFE. When i want to assemble
> certain
> >> >> parts of physics separately, how can i do that? I basically want to
> >> >> assemble matrices/vectors from the weak forms on the same DM (and
> >> >> avoid copying the DM) and use them afterwards. Is there a convenient
> >> >> way for doing that?
> >> >>
> >> >> The "workflow" i'm approaching is something like:
> >> >>
> >> >> - Setup the DM
> >> >> - Setup discretization (spaces and quadrature) for each weak form i
> >> >> want to compute
> >> >> - Compute just the weak form i want right now for a specific
> >> >> discretization and field.
> >> >>
> >> >> The reason is i need certain parts of the "complete" Jacobian for
> >> >> computations of eigenproblems and like to avoid computing those more
> >> >> often than needed.
> >> >
> >> >
> >> > The way I envision this working is to use DMCreateSubDM(). It should
> >> > extract
> >> > everything correctly for the subset of fields you select. However, I
> >> > have
> >> > not
> >> > extensively tested, so if something is wrong let me know.
> >> >
> >> >   Thanks,
> >> >
> >> >  Matt
> >> >
> >> >>
> >> >> Regards
> >> >> Julian
> >> >
> >> >
> >> >
> >> >
> >> 

Re: [petsc-users] PetscFE questions

2016-10-21 Thread Julian Andrej
Yeah, thanks for pointing out my mistake. Next time I'm going to think
one more time before writing ;)

On Fri, Oct 21, 2016 at 12:17 PM, Lawrence Mitchell
 wrote:
>
>> On 21 Oct 2016, at 08:26, Julian Andrej  wrote:
>>
>> On Thu, Oct 20, 2016 at 5:18 PM, Matthew Knepley  wrote:
>>> On Thu, Oct 20, 2016 at 9:42 AM, Julian Andrej  wrote:

 Thanks for the suggestion. I guess DMCreateSubDM can work, but is
 cumbersome to handle for the normal solution process since the mass
 matrix for example is not a seperate field.
>>>
>>>
>>> I did not understand what you meant by "parts of the physics". If you just
>>> want to make a different operator, then swap out the PetscDS from the DM.
>>> That holds the pointwise functions and discretizations.
>>>
>>
>> Yes, it's basically a different operator! That's a really smart design;
>> I can just create different PetscDS objects and stick them in to
>> assemble the operator.
>>
>>  /* Assemble mass operator */
>>  DMSetDS(dm, ds_mass);
>>  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->M, ctx->M, NULL);
>>  /* Assemble laplacian operator */
>>  DMSetDS(dm, ds_laplacian);
>>  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->J, ctx->J, NULL);
>>
>> There is one thing that bothers me just a bit. Every time you call
>> DMSetDS, the old PetscDS object is destroyed and you have to recreate
>> the object in case you want to reassemble that operator.
>>
>> src/dm/interface/dm.c:3889:  ierr = PetscDSDestroy(&dm->prob);CHKERRQ(ierr);
>
> All objects in PETSc are refcounted.  So this just drops the reference that 
> the DM is holding to the DS.  As long as you're still holding a reference in 
> your code (you haven't called PetscDSDestroy) then this does not actually 
> deallocate the DS, just decrements the refcount.
>
> Lawrence


Re: [petsc-users] PetscFE questions

2016-10-21 Thread Lawrence Mitchell

> On 21 Oct 2016, at 08:26, Julian Andrej  wrote:
> 
> On Thu, Oct 20, 2016 at 5:18 PM, Matthew Knepley  wrote:
>> On Thu, Oct 20, 2016 at 9:42 AM, Julian Andrej  wrote:
>>> 
>>> Thanks for the suggestion. I guess DMCreateSubDM can work, but is
>>> cumbersome to handle for the normal solution process since the mass
>>> matrix for example is not a seperate field.
>> 
>> 
>> I did not understand what you meant by "parts of the physics". If you just
>> want to make a different operator, then swap out the PetscDS from the DM.
>> That holds the pointwise functions and discretizations.
>> 
> 
> Yes, it's basically a different operator! That's a really smart design;
> I can just create different PetscDS objects and stick them in to
> assemble the operator.
> 
>  /* Assemble mass operator */
>  DMSetDS(dm, ds_mass);
>  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->M, ctx->M, NULL);
>  /* Assemble laplacian operator */
>  DMSetDS(dm, ds_laplacian);
>  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->J, ctx->J, NULL);
> 
> There is one thing that bothers me just a bit. Every time you call
> DMSetDS, the old PetscDS object is destroyed and you have to recreate
> the object in case you want to reassemble that operator.
> 
> src/dm/interface/dm.c:3889:  ierr = PetscDSDestroy(&dm->prob);CHKERRQ(ierr);

All objects in PETSc are refcounted.  So this just drops the reference that the 
DM is holding to the DS.  As long as you're still holding a reference in your 
code (you haven't called PetscDSDestroy) then this does not actually deallocate 
the DS, just decrements the refcount.

Lawrence

Re: [petsc-users] PetscFE questions

2016-10-21 Thread Julian Andrej
On Thu, Oct 20, 2016 at 5:18 PM, Matthew Knepley  wrote:
> On Thu, Oct 20, 2016 at 9:42 AM, Julian Andrej  wrote:
>>
>> Thanks for the suggestion. I guess DMCreateSubDM can work, but is
>> cumbersome to handle for the normal solution process since the mass
>> matrix for example is not a seperate field.
>
>
> I did not understand what you meant by "parts of the physics". If you just
> want to make a different operator, then swap out the PetscDS from the DM.
> That holds the pointwise functions and discretizations.
>

Yes, it's basically a different operator! That's a really smart design;
I can just create different PetscDS objects and stick them in to
assemble the operator.

  /* Assemble mass operator */
  DMSetDS(dm, ds_mass);
  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->M, ctx->M, NULL);
  /* Assemble laplacian operator */
  DMSetDS(dm, ds_laplacian);
  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->J, ctx->J, NULL);

There is one thing that bothers me just a bit. Every time you call
DMSetDS, the old PetscDS object is destroyed and you have to recreate
the object in case you want to reassemble that operator.

src/dm/interface/dm.c:3889:  ierr = PetscDSDestroy(&dm->prob);CHKERRQ(ierr);

Maybe it is just my specific use case but something to think about.

>>
>> src/snes/examples/tutorials/ex77 handles a seperate field for the
>> nullspace, if anyone is interested in that.
>>
>> An intuitive way was just copying the DM and describing a new problem on
>> it.
>>
>>   DM dm_mass;
>>   PetscDS ds_mass;
>>   Vec dummy;
>>   PetscInt id = 1;
>>   petsc_call(DMCreateGlobalVector(dm, &dummy));
>>   petsc_call(DMClone(ctx->dm, &dm_mass));
>>   petsc_call(DMGetDS(dm_mass, &ds_mass));
>>   petsc_call(PetscDSSetDiscretization(ds_mass, 0, (PetscObject)fe));
>>   petsc_call(PetscDSSetJacobian(ds_mass, 0, 0, mass_kernel, NULL, NULL, NULL));
>>   petsc_call(PetscDSAddBoundary(ds_mass, PETSC_TRUE, "wall", "marker",
>> 0, 0, NULL, (void (*)())ctx->exact_funcs[0], 1, &id, ctx));
>>   petsc_call(DMCreateMatrix(dm_mass, &ctx->M));
>>   petsc_call(DMPlexSNESComputeJacobianFEM(dm_mass, dummy, ctx->M, ctx->M, NULL));
>>
>> is this an intended way to assemble a jacobian based on a weak form?
>> The memory overhead for a DM copy isn't huge on the first sight.
>
>
> Its O(1).
>
>>
>> And a much more important question. Is there any mathematical
>> description how exactly you handle dirichlet boundary conditions here?
>
>
> Right now, you can do two things:
>
>   1) Handle it yourself
>
> or
>
>   2) eliminate particular dofs
>
> If you use 2), these dofs are eliminated from the global vector. They remain
> in the
> local vector, and boundary values are inserted before local vectors are
> passed to
> assembly routines.
>
>Matt
>

Thank you again for your help and suggestions.

Regards
Julian

>>
>> On first sight it looks like condensing the nodes only to
>> non-essential nodes and then projecting them back in the solution
>> vector. If thats teh case I don't understand how you "augment" the
>> solution with the boundary nodes.
>>
>> Regards
>> Julian
>>
>>
>> On Wed, Oct 19, 2016 at 11:51 AM, Matthew Knepley 
>> wrote:
>> > On Tue, Oct 18, 2016 at 7:38 AM, Julian Andrej 
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> i have general question about PetscFE. When i want to assemble certain
>> >> parts of physics separately, how can i do that? I basically want to
>> >> assemble matrices/vectors from the weak forms on the same DM (and
>> >> avoid copying the DM) and use them afterwards. Is there a convenient
>> >> way for doing that?
>> >>
>> >> The "workflow" i'm approaching is something like:
>> >>
>> >> - Setup the DM
>> >> - Setup discretization (spaces and quadrature) for each weak form i
>> >> want to compute
>> >> - Compute just the weak form i want right now for a specific
>> >> discretization and field.
>> >>
>> >> The reason is i need certain parts of the "complete" Jacobian for
>> >> computations of eigenproblems and like to avoid computing those more
>> >> often than needed.
>> >
>> >
>> > The way I envision this working is to use DMCreateSubDM(). It should
>> > extract
>> > everything correctly for the subset of fields you select. However, I
>> > have
>> > not
>> > extensively tested, so if something is wrong let me know.
>> >
>> >   Thanks,
>> >
>> >  Matt
>> >
>> >>
>> >> Regards
>> >> Julian
>> >
>> >
>> >
>> >
>> > --
>> > What most experimenters take for granted before they begin their
>> > experiments
>> > is infinitely more interesting than any results to which their
>> > experiments
>> > lead.
>> > -- Norbert Wiener
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener