Re: [petsc-users] VecDuplicate for FFTW-Vec causes VecDestroy to fail conditionally on VecLoad

2019-11-01 Thread Sajid Ali via petsc-users
 Hi Junchao/Barry,

It doesn't really matter what the h5 file contains, so I'm attaching a
lightly edited version of src/vec/vec/examples/tutorials/ex10.c which should
produce a vector to be used as input for the above test case. (I'm working
with `--with-scalar-type=complex`.)

Now that I think of it, fixing this bug is not important: I can work around
the issue by creating a new vector with VecCreateMPI and accept the small
loss in performance of VecPointwiseMult due to misaligned layouts. If it's
a small fix it may be worth the time, but fixing this is not a big priority
right now. If it's a complicated fix, this issue can serve as a note to
future users.


Thank You,
Sajid Ali
Applied Physics
Northwestern University
s-sajid-ali.github.io


ex10.c
Description: Binary data


Re: [petsc-users] VecView output to HDF5 in 3.12.0 broken ?

2019-10-16 Thread Sajid Ali via petsc-users
 Hi Barry,

Looking at the current code, am I right in assuming that the change is only
in naming conventions and not in logic? I'll make an MR soon.

Thank You,
Sajid Ali
Applied Physics
Northwestern University
s-sajid-ali.github.io


Re: [petsc-users] VecView output to HDF5 in 3.12.0 broken ?

2019-10-15 Thread Sajid Ali via petsc-users
Hi PETSc-developers,

I think I’ve found the commit that broke this. In MR-1706,
the definition of PETSC_HDF5_INT_MAX was changed from 2147483647
to (~(hsize_t)0).

This new definition sets PETSC_HDF5_INT_MAX to 18446744073709551615 thereby
changing the thresholds in the chunking logic at
src/vec/vec/impls/mpi/pdvec.c (which causes the error I’m observing).

I’m not sure where the number 2147483647 (2^31 - 1, the largest signed
32-bit integer) comes from, but I tried looking at the older commits only to
realize that include/petscviewerhdf5.h has always had this number (ever
since this definition was moved over from include/petscviewer.h).

Snippet to check value of (~(hsize_t)0) :

(ipy3) [sajid@xrmlite misc]$ cat ex.c
#include "hdf5.h"
#include <stdio.h>

int main() {
  printf("ref=%llu \n", (unsigned long long)(~(hsize_t)0));
  size_t size = sizeof(hsize_t);
  printf("size = %zu\n", size);
  return 0;
}
(ipy3) [sajid@xrmlite misc]$ h5cc ex.c
(ipy3) [sajid@xrmlite misc]$ ./a.out
ref=18446744073709551615
size = 8


Thank You,
Sajid Ali
Applied Physics
Northwestern University
s-sajid-ali.github.io


Re: [petsc-users] VecView output to HDF5 in 3.12.0 broken ?

2019-10-11 Thread Sajid Ali via petsc-users
Also, both versions of PETSc were built with ^hdf5@1.10.5 ^mpich@3.3
%gcc@8.3.0 so the error is most likely not from hdf5.


Re: [petsc-users] VecView output to HDF5 in 3.12.0 broken ?

2019-10-11 Thread Sajid Ali via petsc-users
Hi Stefano/PETSc Developers,

The chunk size is indeed limited to 4 GB, as per this page:
https://portal.hdfgroup.org/pages/viewpage.action?pageId=48808714.

With a (complex) DMDA vector of size (16384,16384,2) I see that PETSc saves
it as an HDF5 file with chunks of size (1024,1024,2). But with a non-DMDA
vector I don't see any chunking happening. I tried examining the chunk size
after running the same example as above and increasing the size of the
vector until it fails to write.

The output of the following case (the first size to fail) is attached: mpirun
-np 16 ./ex10 -m 134217728 &> log. There's a slightly different error here,
which states:
```
minor: Some MPI function failed
  #018: H5Dchunk.c line 4706 in H5D__chunk_collective_fill(): Invalid
argument, error stack:
PMPI_Type_create_hvector(125): MPI_Type_create_hvector(count=0,
blocklength=-2147483648, stride=0, MPI_BYTE, newtype=0x7ffc177d5bf8) failed
PMPI_Type_create_hvector(80).: Invalid value for blocklen, must be
non-negative but is -2147483648
major: Internal error (too specific to document in detail)
```

Strangely the same case works with 3.11.1 and the dataset has 4 chunks. I'm
not sure how, but it looks like the chunking logic somehow got broken in
3.12.
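
For what it's worth, here is a tiny standalone illustration (my own sketch,
not the actual pdvec.c code) of the capping that the old 2147483647 threshold
effectively provided; the numbers correspond to the failing -m 134217728 run
above, where one unchunked complex write is exactly 2^31 bytes:

```
#include <stdio.h>
#include <stdint.h>

/* Hypothetical sketch (not the PETSc source): cap the chunk length along the
 * leading dimension so that a single chunk stays at or below the old
 * PETSC_HDF5_INT_MAX of 2147483647 bytes (2^31 - 1).  That keeps a chunk
 * under both the 4 GB HDF5 chunk limit and the 32-bit range that the MPI
 * blocklength in H5D__chunk_collective_fill() appears to require.           */
static uint64_t capped_chunk_len(uint64_t n0, uint64_t rowbytes)
{
  const uint64_t limit = 2147483647ULL;   /* old PETSC_HDF5_INT_MAX */
  uint64_t       len   = n0;
  if (len*rowbytes > limit) len = limit/rowbytes;
  if (!len) len = 1;
  return len;
}

int main(void)
{
  /* -m 134217728 complex entries written as (n, 2) doubles: 16 bytes per row */
  printf("chunk length = %llu\n",
         (unsigned long long)capped_chunk_len(134217728ULL, 16ULL));
  return 0;
}
```

With the old threshold this caps the chunk at 134217727 rows (just under 2^31
bytes), whereas with PETSC_HDF5_INT_MAX = (~(hsize_t)0) the cap never kicks
in.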


Thank You,
Sajid Ali
Applied Physics
Northwestern University
s-sajid-ali.github.io


log
Description: Binary data


Re: [petsc-users] Question about TSComputeRHSJacobianConstant

2019-09-30 Thread Sajid Ali via petsc-users
Hi PETSc-developers,

Has this bug been fixed in the new 3.12 release ?

Thank You,
Sajid Ali
Applied Physics
Northwestern University
s-sajid-ali.github.io


Re: [petsc-users] MPI-FFTW example crashes

2019-06-02 Thread Sajid Ali via petsc-users
@Barry : Perhaps config/BuildSystem/config/packages/fftw.py should use the
--host option when PETSc's configure is run with --with-batch=1.

If anyone here knows what --host must be set to for KNL, I'd appreciate it.

PS: I know that Intel MKL provides an FFTW API. If I wanted to try this, is
there a way to tell PETSc to pick up the FFTW functions from MKL?


Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] MPI-FFTW example crashes

2019-06-02 Thread Sajid Ali via petsc-users
 Hi Barry,

fftw's configure fails on the login node; I'm attaching the error message at
the bottom of this email. I tried requesting 1 hour on a compute node to
compile fftw there, but for some reason 1 hour is not enough, so I was forced
to use cray-fftw-3.3.8.1, which I have no control over (though the manpage
gives a lower limit on compiler/MPT/craype versions, and I'm not violating
it).


```
sajid@thetalogin5:~/packages/petsc> python complex_int_64_fftw_debug.py
===
 Configuring PETSc to compile on your system
===
===
  It appears you do not have valgrind installed on your system.
  We HIGHLY recommend you install it from www.valgrind.org
  Or install valgrind-devel or equivalent using your package manager.
  Then rerun ./configure
===
===
  Trying to download http://www.fftw.org/fftw-3.3.8.tar.gz for FFTW
===
===
  Running configure on FFTW; this may take several minutes
===
***
 UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
---
Error running configure on FFTW: Could not execute "['./configure
--prefix=/gpfs/mira-home/sajid/packages/petsc/complex_int_64_fftw_debug
MAKE=/usr/bin/gmake
--libdir=/gpfs/mira-home/sajid/packages/petsc/complex_int_64_fftw_debug/lib
CC="cc" CFLAGS="-fPIC -xMIC-AVX512 -O3"
AR="/opt/cray/pe/cce/8.7.6/binutils/x86_64/x86_64-pc-linux-gnu/bin/ar"
ARFLAGS="cr" LDFLAGS="-dynamic" CXX="CC" CXXFLAGS="-xMIC-AVX512 -O3 -fPIC"
F90="ftn" F90FLAGS="-fPIC -xMIC-AVX512 -O3" F77="ftn" FFLAGS="-fPIC
-xMIC-AVX512 -O3" FC="ftn" FCFLAGS="-fPIC -xMIC-AVX512 -O3"
--enable-shared MPICC="cc" --enable-mpi']":

checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether /usr/bin/gmake sets $(MAKE)... yes
checking whether /usr/bin/gmake supports nested variables... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking for gcc... cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... configure: error: in
`/gpfs/mira-home/sajid/packages/petsc/complex_int_64_fftw_debug/externalpackages/fftw-3.3.8':
configure: error: cannot run C compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details
***
```





Thank You,
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] MPI-FFTW example crashes

2019-06-02 Thread Sajid Ali via petsc-users
Hi PETSc-developers,

I'm trying to run ex143 on a cluster (alcf-theta). I compiled PETSc on
login node with cray-fftw-3.3.8.1 and there was no error in either
configure or make.

When I try running ex143 with 1 MPI rank on compute node, everything works
fine but with 2 MPI ranks, it crashes due to illegal instruction due to
memory corruption. I tried running it with valgrind but the available
valgrind module on theta gives the error `valgrind: failed to start tool
'memcheck' for platform 'amd64-linux': No such file or directory`.

To get around this, I tried running it with gdb4hpc and I attached the
backtrace which shows that there is some error with mpi-fftw being called.
I also attach the output with -start_in_debugger command option.

What could possibly cause this error and how do I fix it ?

Thank You,
Sajid Ali
Applied Physics
Northwestern University
sajid@thetamom1:/gpfs/mira-home/sajid/sajid_proj/test_fftw> aprun -n 2 --cc depth -d 1 -j 1 -r 1 ./ex143 -start_in_debugger -log_view &> out
sajid@thetamom1:/gpfs/mira-home/sajid/sajid_proj/test_fftw> cat out
PETSC: Attaching gdb to ./ex143 of pid 62260 on display :0.0 on machine nid03832
PETSC: Attaching gdb to ./ex143 of pid 62259 on display :0.0 on machine nid03832
xterm: xterm: Xt error: Can't open display: :0.0
Xt error: Can't open display: :0.0
xterm: xterm: DISPLAY is not set
DISPLAY is not set
Use PETSc-FFTW interface...1-DIM: 30
[1]PETSC ERROR: [0]PETSC ERROR:
[0]PETSC ERROR: [1]PETSC ERROR: Caught signal number 4 Illegal instruction: Likely due to memory corruption
Caught signal number 4 Illegal instruction: Likely due to memory corruption
[0]PETSC ERROR: [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[1]PETSC ERROR: [0]PETSC ERROR: likely location of problem given in stack below
likely location of problem given in stack below
[1]PETSC ERROR: [0]PETSC ERROR: -  Stack Frames
-  Stack Frames
[0]PETSC ERROR: [1]PETSC ERROR: Note: The EXACT line

Re: [petsc-users] Question about TSComputeRHSJacobianConstant

2019-05-22 Thread Sajid Ali via petsc-users
Hi Matt,

Thanks for the explanation. That makes sense since I'd get reasonably close
convergence with preonly sometimes and not at other times which was
confusing.

Anyway, since there's no pc_tol (analogous to ksp_rtol/ksp_atol, etc.), I'd
have to set the gamg preconditioner options more carefully to ensure that it
converges in a single application, and since there's no guarantee that what
works for one problem will work for another (or for the same problem at a
different grid size), I'll stick with GMRES+gamg for now.

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about TSComputeRHSJacobianConstant

2019-05-22 Thread Sajid Ali via petsc-users
Hi Hong,

Looks like this is my fault, since I'm using -ksp_type preonly -pc_type
gamg. If I use the default KSP (GMRES) then everything works fine on a
smaller problem.

Just to confirm, -ksp_type preonly is to be used only with direct-solve
preconditioners like LU and Cholesky, right?

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about TSComputeRHSJacobianConstant

2019-05-17 Thread Sajid Ali via petsc-users
Hi Hong,

The solution has the right characteristics but it's off by many orders of
magnitude. It is ~3.5x faster than before.

Am I supposed to keep the TSRHSJacobianSetReuse function or not?


Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about TSComputeRHSJacobianConstant

2019-05-16 Thread Sajid Ali via petsc-users
While there is a ~3.5X speedup, deleting the aforementioned 20 lines also
causes the new version of petsc to give the wrong solution (off by orders of
magnitude for the same program).

I tried switching over to the IFunction/IJacobian interface as per the
manual (page 146), which says that the following lines:
```
TSSetProblemType(ts,TSLINEAR);
TSSetRHSFunction(ts,NULL,TSComputeRHSFunctionLinear,NULL);
TSSetRHSJacobian(ts,A,A,TSComputeRHSJacobianConstant,NULL);
```
are equivalent to :
```
TSSetProblemType(ts,TSLINEAR);
TSSetIFunction(ts,NULL,TSComputeIFunctionLinear,NULL);
TSSetIJacobian(ts,A,A,TSComputeIJacobianConstant,NULL);
```
But the example at src/ts/examples/tutorials/ex3.c employs a strategy of
setting a shift flag to prevent re-computation for time-independent
problems. Moreover, the docs say "using this function
(TSComputeIFunctionLinear) is NOT equivalent to using
TSComputeRHSFunctionLinear()" and now I'm even more confused.
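
To check my understanding of the shift-flag strategy, here is a hedged sketch
(my reading of the manual, untested; the context struct, the
caching-on-shift-change idea and all names are mine) of what an IJacobian for
a constant operator u_t = A u would look like, written via F(t,u,u_t) = u_t -
A u so that the required Jacobian is shift*I - A:

```
#include <petscts.h>

typedef struct {
  Mat       A;          /* the constant operator, assembled once               */
  PetscReal last_shift; /* initialize to something impossible, e.g. -1         */
} AppCtx;

/* Sketch: rebuild P = shift*I - A only when the shift changes; P is assumed
 * to be a MatDuplicate of A so the nonzero patterns match.                    */
static PetscErrorCode MyIJacobian(TS ts, PetscReal t, Vec U, Vec Udot,
                                  PetscReal shift, Mat J, Mat P, void *ctx)
{
  AppCtx        *user = (AppCtx*)ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  if (shift != user->last_shift) {
    ierr = MatCopy(user->A, P, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = MatScale(P, -1.0);CHKERRQ(ierr);   /* P = -A           */
    ierr = MatShift(P, shift);CHKERRQ(ierr);  /* P = shift*I - A  */
    user->last_shift = shift;
  }
  if (J != P) {
    ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}
```

If that is indeed the pattern, my guess is that it is also why the docs warn
the two interfaces are not equivalent: the shift has to be folded back in
whenever the time step changes, so the "constant" Jacobian can't simply be
left untouched.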

PS : Doing the simple switch is as slow as the original code and the answer
is wrong as well.

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about TSComputeRHSJacobianConstant

2019-05-16 Thread Sajid Ali via petsc-users
Hi Barry,

Thanks a lot for pointing this out. I'm seeing a ~3X speedup in runtime!

Attached are the new log files. Does everything look right?

Thank You,
Sajid Ali
Applied Physics
Northwestern University


out_50
Description: Binary data


out_100
Description: Binary data


[petsc-users] Question about TSComputeRHSJacobianConstant

2019-05-16 Thread Sajid Ali via petsc-users
Hi PETSc developers,

I have a question about TSComputeRHSJacobianConstant. If I create a TS (of
type linear) for a problem where the Jacobian does not change with time (set
with the aforementioned option) and run it for different numbers of time
steps, why does the time it takes to evaluate the Jacobian change (as
indicated by TSJacobianEval)?

To clarify, I run the example with different TSSetTimeStep values, but the
same Jacobian matrix. I see that the time spent in KSPSolve increases with
an increasing number of steps (which is expected, as this is a KSPOnly SNES
solver). But surprisingly, the time spent in TSJacobianEval also increases
with decreasing time-step (or increasing number of steps).

For reference, I attach the log files for two cases which were run with
different time steps and the source code.

Thank You,
Sajid Ali
Applied Physics
Northwestern University


ex_dmda.c
Description: Binary data


out_50
Description: Binary data


out_100
Description: Binary data


Re: [petsc-users] Quick question about ISCreateGeneral

2019-05-01 Thread Sajid Ali via petsc-users
Hi Barry,

I've written a simple program that does a scatter and reverses the order of
data between two vectors with locally generated index sets, and it works.
While I'd have expected that I would need to concatenate the index sets
before calling VecScatter, the program works without doing so (hopefully
making it more efficient). Does calling VecScatter on each rank with the
local index set take care of the necessary communication behind the scenes,
then?
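
In case it helps, here is a stripped-down version of the pattern I'm
describing (not the attached program itself; sizes and names are made up, and
only the index sets and the scatter are shown, not the data fill):

```
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x, y;
  IS             from, to;
  VecScatter     sc;
  PetscInt       N = 16, lo, hi, i, nloc, *idx_from, *idx_to;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, N, &x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &y);CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(x, &lo, &hi);CHKERRQ(ierr);
  nloc = hi - lo;

  /* each rank only lists its own entries; the scatter handles the rest */
  ierr = PetscMalloc2(nloc, &idx_from, nloc, &idx_to);CHKERRQ(ierr);
  for (i = 0; i < nloc; i++) {
    idx_from[i] = lo + i;            /* my global indices in x            */
    idx_to[i]   = N - 1 - (lo + i);  /* where they land in the reversed y */
  }
  ierr = ISCreateGeneral(PETSC_COMM_WORLD, nloc, idx_from, PETSC_COPY_VALUES, &from);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_WORLD, nloc, idx_to,   PETSC_COPY_VALUES, &to);CHKERRQ(ierr);

  ierr = VecScatterCreate(x, from, y, to, &sc);CHKERRQ(ierr);
  ierr = VecScatterBegin(sc, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(sc, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

  ierr = PetscFree2(idx_from, idx_to);CHKERRQ(ierr);
  ierr = ISDestroy(&from);CHKERRQ(ierr);
  ierr = ISDestroy(&to);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&sc);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```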

Thank You,
Sajid Ali
Applied Physics
Northwestern University


ex_modify.c
Description: Binary data


[petsc-users] Quick question about ISCreateGeneral

2019-04-30 Thread Sajid Ali via petsc-users
Hi PETSc Developers,

I see that in the examples for ISCreateGeneral, the index sets are created
by copying values from int arrays (which were created by PetscMalloc1, which
is not collective).

If ISCreateGeneral is called with PETSC_COMM_WORLD and the int arrays on
each rank are independently created, does the created index set concatenate
all the int arrays into one? If not, what needs to be done to get such an
index set?

PS: For context, I want to write an fftshift convenience function (like
numpy's or MATLAB's) but for large distributed vectors. I thought that I
could do this with VecScatter and two index sets, one shifted and one
un-shifted.

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Possible Bug ?

2019-04-22 Thread Sajid Ali via petsc-users
This can be tracked down to n vs N being used. The vector in the second
loop is created using N, while MatCreateVecsFFTW uses n (for real numbers).
n != N, and hence the error.

If lines 50/51 and line 91 are switched to MatCreateVecsFFTW instead of
MatGetVecs and VecCreateSeq respectively, the example would work for real
numbers as well.


Re: [petsc-users] Possible Bug ?

2019-04-22 Thread Sajid Ali via petsc-users
Hi Barry,

I'm not sure why MatCreateVecsFFTW is not used at lines 50/51.

The error occurs at line 94 because in the second loop, the example manually
creates the x vector instead of using the one created from the A matrix. For
complex numbers this is not an issue, but for real numbers the dimensions
don't match: MatCreateVecsFFTW creates a vector of size 20 for N=10, but
VecCreateSeq creates a vector of size 10. I'm not sure what the rationale
behind this test is.


Re: [petsc-users] Possible Bug ?

2019-04-22 Thread Sajid Ali via petsc-users
Apologies for the post. I didn't see that it was for complex vectors only.

On Mon, Apr 22, 2019 at 5:00 PM Sajid Ali 
wrote:

> Hi,
>
> I see that src/mat/examples/tests/ex112.c is failing for petsc@3.11.1
> configured without complex scalars. With complex scalars everything works
> fine.
>
> The error I see is :
> ```
> [sajid@xrmlite bugfix]$
> ./ex112
>
> No protocol
> specified
>
>
>
>  1-D: FFTW on vector of size
> 10
>
>   Error norm of |x - z|
> 5.37156
>
>   Error norm of |x - z|
> 5.90871
>
>   Error norm of |x - z|
> 5.96243
>
> [0]PETSC ERROR: - Error Message
> --
>
> [0]PETSC ERROR: Nonconforming object
> sizes
>
> [0]PETSC ERROR: Mat mat,Vec x: global dim 20
> 10
>
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12,
> 2019
>
> [0]PETSC ERROR: ./ex112 on a  named xrmlite.phys.northwestern.edu by
> sajid Mon Apr 22 16:58:41
> 2019
> [0]PETSC ERROR: Configure options
> --prefix=/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/petsc-3.11.1-5bdbcozu3labtbbi7gtq4xa
> knay24lo6 --with-ssl=0 --download-c2html=0 --download-sowing=0
> --download-hwloc=0 CFLAGS="-O2 -march=native" FFLAGS="-O2 -march=native"
> CXXFLAGS="-
> O2 -march=native"
> --with-cc=/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/mpich-3.3-ig4cr2xw2x63bqs5rnmhfshln4iv7av5/bin/mpic
> c
> --with-cxx=/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/mpich-3.3-ig4cr2xw2x63bqs5rnmhfshln4iv7av5/bin/mpic++
> --with-fc=/h
> ome/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/mpich-3.3-ig4cr2xw2x63bqs5rnmhfshln4iv7av5/bin/mpif90
> --with-precision=double --w
> ith-scalar-type=real --with-shared-libraries=1 --with-debugging=1
> --with-64-bit-indices=0
> --with-blaslapack-lib="/home/sajid/packages/spack/opt/spa
>
> ck/linux-centos7-x86_64/gcc-8.3.0/intel-mkl-2019.3.199-kzcly5rtcjbkwtnm6tri6kkexnwoat5m/compilers_and_libraries_2019.3.199/linux/mkl/lib/intel64/li
> bmkl_intel_lp64.so
> /home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/intel-mkl-2019.3.199-kzcly5rtcjbkwtnm6tri6kkexnwoat5m/compil
> ers_and_libraries_2019.3.199/linux/mkl/lib/intel64/libmkl_sequential.so
> /home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/intel-m
> kl-2019.3.199-kzcly5rtcjbkwtnm6tri6kkexnwoat5m/compilers_and_libraries_2019.3.199/linux/mkl/lib/intel64/libmkl_core.so
> /lib64/libpthread.so /lib64/
> libm.so /lib64/libdl.so" --with-x=0 --with-clanguage=C --with-scalapack=0
> --with-metis=0 --with-hdf5=0 --with-hypre=0 --with-parmetis=0 --with-mump
> s=0 --with-trilinos=0 --with-fftw=1
> --with-fftw-dir=/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/fftw-3.3.8-mkj4ho2jp6xfrnkl
> mrvhdfh73woer2s7 --with-superlu_dist=0 --with-suitesparse=0
> --with-zlib-include=/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0
> /zlib-1.2.11-jqrcjdjnrxvouufhjtxbfvfms23fsqpx/include
> --with-zlib-lib="-L/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/zlib-1
> .2.11-jqrcjdjnrxvouufhjtxbfvfms23fsqpx/lib -lz"
> --with-zlib=1
>
> [0]PETSC ERROR: #1 MatMult() line 2385 in
> /tmp/sajid/spack-stage/spack-stage-mTD0AV/petsc-3.11.1/src/mat/interface/matrix.c
>
> [0]PETSC ERROR: #2 main() line 94 in
> /home/sajid/packages/aux_xwp_petsc/bugfix/ex112.c
>
> [0]PETSC ERROR: No PETSc Option Table
> entries
>
> [0]PETSC ERROR: End of Error Message ---send entire
> error message to petsc-ma...@mcs.anl.gov--
>
> application called MPI_Abort(MPI_COMM_WORLD, 60) - process
> 0
>
> [unset]: write_line error; fd=-1 buf=:cmd=abort
> exitcode=60
>
> :
>
> system msg for write_line failure : Bad file
> descriptor
>
>
> ```
>
> I came across this because I saw that MatMult was failing for a new test
> related to a PR I was working on. Is this a bug ?
>
> Thank You,
> Sajid Ali
> Applied Physics
> Northwestern University
>


-- 
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] Possible Bug ?

2019-04-22 Thread Sajid Ali via petsc-users
Hi,

I see that src/mat/examples/tests/ex112.c is failing for petsc@3.11.1
configured without complex scalars. With complex scalars everything works
fine.

The error I see is :
```
[sajid@xrmlite bugfix]$
./ex112

No protocol
specified



 1-D: FFTW on vector of size
10

  Error norm of |x - z|
5.37156

  Error norm of |x - z|
5.90871

  Error norm of |x - z|
5.96243

[0]PETSC ERROR: - Error Message
--

[0]PETSC ERROR: Nonconforming object
sizes

[0]PETSC ERROR: Mat mat,Vec x: global dim 20
10

[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12,
2019

[0]PETSC ERROR: ./ex112 on a  named xrmlite.phys.northwestern.edu by sajid
Mon Apr 22 16:58:41 2019
[0]PETSC ERROR: Configure options
--prefix=/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/petsc-3.11.1-5bdbcozu3labtbbi7gtq4xa
knay24lo6 --with-ssl=0 --download-c2html=0 --download-sowing=0
--download-hwloc=0 CFLAGS="-O2 -march=native" FFLAGS="-O2 -march=native"
CXXFLAGS="-
O2 -march=native"
--with-cc=/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/mpich-3.3-ig4cr2xw2x63bqs5rnmhfshln4iv7av5/bin/mpic
c
--with-cxx=/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/mpich-3.3-ig4cr2xw2x63bqs5rnmhfshln4iv7av5/bin/mpic++
--with-fc=/h
ome/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/mpich-3.3-ig4cr2xw2x63bqs5rnmhfshln4iv7av5/bin/mpif90
--with-precision=double --w
ith-scalar-type=real --with-shared-libraries=1 --with-debugging=1
--with-64-bit-indices=0
--with-blaslapack-lib="/home/sajid/packages/spack/opt/spa
ck/linux-centos7-x86_64/gcc-8.3.0/intel-mkl-2019.3.199-kzcly5rtcjbkwtnm6tri6kkexnwoat5m/compilers_and_libraries_2019.3.199/linux/mkl/lib/intel64/li
bmkl_intel_lp64.so
/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/intel-mkl-2019.3.199-kzcly5rtcjbkwtnm6tri6kkexnwoat5m/compil
ers_and_libraries_2019.3.199/linux/mkl/lib/intel64/libmkl_sequential.so
/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/intel-m
kl-2019.3.199-kzcly5rtcjbkwtnm6tri6kkexnwoat5m/compilers_and_libraries_2019.3.199/linux/mkl/lib/intel64/libmkl_core.so
/lib64/libpthread.so /lib64/
libm.so /lib64/libdl.so" --with-x=0 --with-clanguage=C --with-scalapack=0
--with-metis=0 --with-hdf5=0 --with-hypre=0 --with-parmetis=0 --with-mump
s=0 --with-trilinos=0 --with-fftw=1
--with-fftw-dir=/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/fftw-3.3.8-mkj4ho2jp6xfrnkl
mrvhdfh73woer2s7 --with-superlu_dist=0 --with-suitesparse=0
--with-zlib-include=/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0
/zlib-1.2.11-jqrcjdjnrxvouufhjtxbfvfms23fsqpx/include
--with-zlib-lib="-L/home/sajid/packages/spack/opt/spack/linux-centos7-x86_64/gcc-8.3.0/zlib-1
.2.11-jqrcjdjnrxvouufhjtxbfvfms23fsqpx/lib -lz"
--with-zlib=1

[0]PETSC ERROR: #1 MatMult() line 2385 in
/tmp/sajid/spack-stage/spack-stage-mTD0AV/petsc-3.11.1/src/mat/interface/matrix.c

[0]PETSC ERROR: #2 main() line 94 in
/home/sajid/packages/aux_xwp_petsc/bugfix/ex112.c

[0]PETSC ERROR: No PETSc Option Table
entries

[0]PETSC ERROR: End of Error Message ---send entire
error message to petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_WORLD, 60) - process
0

[unset]: write_line error; fd=-1 buf=:cmd=abort
exitcode=60

:

system msg for write_line failure : Bad file
descriptor


```

I came across this because I saw that MatMult was failing for a new test
related to a PR I was working on. Is this a bug ?

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Error with VecDestroy_MPIFFTW+0x61

2019-04-17 Thread Sajid Ali via petsc-users
Hi Matt/Barry,

I've implemented this for 1D-complex-mpi vec and tested it.

Here is the modified source file ->
https://bitbucket.org/sajid__ali/petsc/src/86fb19b57a7c4f8f42644e5160d2afbdc5e03639/src/mat/impls/fft/fftw/fftw.c

Functions definitions at
https://bitbucket.org/sajid__ali/petsc/src/86fb19b57a7c4f8f42644e5160d2afbdc5e03639/src/mat/impls/fft/fftw/fftw.c#lines-395

New op at
https://bitbucket.org/sajid__ali/petsc/src/86fb19b57a7c4f8f42644e5160d2afbdc5e03639/src/mat/impls/fft/fftw/fftw.c#lines-514

If this looks good, I can extend it to all cases (1/2/3 dims +
real/complex) and add a VecDuplicate/VecDestroy pair in the tests.


Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Sajid Ali via petsc-users
>Perhaps if spack had an easier mechanism to allow the user to "point to"
local git clones it could get closer to the best of both worlds. Maybe
spack could support a list of local repositories and branches in the yaml
file.

I wonder if a local git clone of petsc can become a "mirror" for the petsc
spack package, though this is not the intended use of mirrors. Refer to
https://spack.readthedocs.io/en/latest/mirrors.html


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Sajid Ali via petsc-users
> develop > 3.11.99 > 3.10.xx > maint (or other strings)
Just discovered this issue when trying to build with my fork of spack at
[1].

So, ideally each developer has to have their develop version point to the
branch they want to build? That would make communication a little confusing,
since spack's develop version is some package's master, and now everyone
wants a different develop so as to not let spack apply any patches meant for
string versions sorted lower than the lowest numeric version.

>Even if you change commit from 'abc' to 'def' spack won't recognize this
change and use the cached tarball.
True, but since the checksum changes and the user has to constantly zip and
unzip, I personally find git cloning easier to deal with, so it's just a
matter of preference.


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Sajid Ali via petsc-users
@Barry: Thanks for the bugfix!

@Satish: Thanks for pointing out this method!

My preferred way previously was to download the source code, unzip, edit,
and zip, then ask spack to not checksum (because my edit has changed things)
and build. Lately, spack has added git support, so now I create a branch of
spack where I add my bugfix branch as the default git repo to build instead
of master, to avoid the checksum headaches.


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Sajid Ali via petsc-users
Quick question: to drop a print statement at the required location, I need
to modify the source code, build petsc from source, and compile against this
new version of petsc, right? Or is there an easier way? (Just to confirm
before putting in the effort.)

On Tue, Apr 16, 2019 at 8:42 PM Smith, Barry F.  wrote:

>
>   Dang, I  ranted too soon.
>
>   I built mpich  using spack (master branch) and a very old Gnu C compiler
> and it produced valgrind clean code. Spack definitely is not passing the
> --enable-g=meminit to MPICH ./configure so this version of MPICH valgrind
> must be clean by default? MPICH's ./configure has
>
> meminit  - Preinitialize memory associated structures and unions to
>eliminate access warnings from programs like valgrind
>
> The default for enable-g is most and
>
> most|yes)
> perform_memtracing=yes
> enable_append_g=yes
> perform_meminit=yes
> perform_dbgmutex=yes
> perform_mutexnesting=yes
> perform_handlealloc=yes
> perform_handle=yes
>
> So it appears that at least some releases of MPICH are suppose to be
> valgrind clean by default ;).
>
> Looking back at Sajid's valgrind output more carefully
>
> Conditional jump or move depends on uninitialised value(s)
> ==15359==at 0x1331069A: __intel_sse4_strncmp (in
> /opt/intel/compilers_and_libraries_2019.1.144/linux/compiler/lib/intel64_lin/libintlc.so.5)
>
> is the only valgrind error. Which I remember seeing from using Intel
> compilers for a long time, nothing to do with MPICH
>
> Thus I conclude that Sajid's code is actually valgrind clean; and I
> withdraw my rant about MPICH/spack
>
> Barry
>
>
>
> > On Apr 16, 2019, at 5:13 PM, Smith, Barry F.  wrote:
> >
> >
> >  So valgrind is printing all kinds of juicy information about
> uninitialized values but it is all worthless because MPICH was not built by
> spack to be valgrind clean. We can't know if any of the problems valgrind
> flags are real. MPICH needs to be configured with the option
> --enable-g=meminit to be valgrind clean. PETSc's --download-mpich always
> installs a valgrind clean MPI.
> >
> > It is unfortunate Spack doesn't provide a variant of MPICH that is
> valgrind clean; actually it should default to valgrind clean MPICH.
> >
> >  Barry
> >
> >
> >
> >
> >> On Apr 16, 2019, at 2:43 PM, Sajid Ali via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> >>
> >> So, I tried running the debug version with valgrind to see if I can
> find the chunk size that's being set but I don't see it. Is there a better
> way to do it ?
> >>
> >> `$ mpirun -np 32 valgrind ./ex_ms -prop_steps 1 -info &> out`. [The out
> file is attached.]
> >> 
> >
>
>

-- 
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Error with VecDestroy_MPIFFTW+0x61

2019-04-16 Thread Sajid Ali via petsc-users
Hi Barry/Matt,

Since VecDuplicate calls v->ops->duplicate, can't we just add custom
duplicate ops to the (f_in/f_out/b_out) vectors when they are created via
MatCreateFFTW? (just like the custom destroy ops are defined)
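
Concretely, I was imagining something along these lines (a rough sketch on my
part, not the existing PETSc code; it glosses over the extra local padding
FFTW wants for in-place real transforms):

```
#include <petscvec.h>
#include <petsc/private/vecimpl.h>   /* to reach v->ops */
#include <fftw3.h>

/* Rough sketch: duplicate an FFTW-backed MPI vector by allocating the new
 * array with fftw_malloc (so it gets the same SIMD alignment) and then
 * wrapping it with VecCreateMPIWithArray.                                  */
static PetscErrorCode VecDuplicate_FFTW_sketch(Vec win, Vec *V)
{
  PetscInt       n, N;
  PetscScalar   *array;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr  = VecGetLocalSize(win, &n);CHKERRQ(ierr);
  ierr  = VecGetSize(win, &N);CHKERRQ(ierr);
  array = (PetscScalar*)fftw_malloc(sizeof(PetscScalar)*(size_t)n);
  ierr  = VecCreateMPIWithArray(PetscObjectComm((PetscObject)win), 1, n, N, array, V);CHKERRQ(ierr);
  (*V)->ops->duplicate = win->ops->duplicate; /* so duplicates of duplicates work      */
  (*V)->ops->destroy   = win->ops->destroy;   /* the custom destroy that fftw_free's it */
  PetscFunctionReturn(0);
}
```

The idea would be to assign this to the duplicate op of f_in/f_out/b_out at
the same spot where their destroy op is currently overridden.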

Also, what is the PetscObjectStateIncrease function doing ?

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Sajid Ali via petsc-users
Hi Matt,

I tried running the same example with a smaller grid on a workstation and I
see that for a grid size of 8192x8192 (vector write dims 67108864, 2), the
output file has a chunk size of (16777215, 2).

I can’t see HDF5_INT_MAX in the spack build-log (which includes configure).
Is there a better way to look it up?

[sajid@xrmlite .spack]$ cat build.out | grep "HDF"
#define PETSC_HAVE_HDF5 1
#define PETSC_HAVE_LIBHDF5HL_FORTRAN 1
#define PETSC_HAVE_LIBHDF5 1
#define PETSC_HAVE_LIBHDF5_HL 1
#define PETSC_HAVE_LIBHDF5_FORTRAN 1
#define PETSC_HAVE_HDF5_RELEASE_VERSION 5
#define PETSC_HAVE_HDF5_MINOR_VERSION 10
#define PETSC_HAVE_HDF5_MAJOR_VERSION 1

Thank You,
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Sajid Ali via petsc-users
Hi PETSc developers,

I’m trying to write a large vector created with VecCreateMPI (size
32768x32768) concurrently from 4 nodes (32 tasks per node, 128 mpi ranks in
total) and I see the following (indicative) error. [The full error log is
here: https://file.io/CdjUfe]

HDF5-DIAG: Error detected in HDF5 (1.10.5) MPI-process 52:
  #000: H5D.c line 145 in H5Dcreate2(): unable to create dataset
major: Dataset
minor: Unable to initialize object
  #001: H5Dint.c line 329 in H5D__create_named(): unable to create and
link to dataset
major: Dataset
minor: Unable to initialize object
  #002: H5L.c line 1557 in H5L_link_object(): unable to create new
link to object
major: Links
minor: Unable to initialize object
  #003: H5L.c line 1798 in H5L__create_real(): can't insert link
major: Links
minor: Unable to insert object
  #004: H5Gtraverse.c line 851 in H5G_traverse(): internal path traversal failed
major: Symbol table
HDF5-DIAG: Error detected in HDF5 (1.10.5) MPI-process 59:
  #000: H5D.c line 145 in H5Dcreate2(): unable to create dataset
major: Dataset
minor: Unable to initialize object
  #001: H5Dint.c line 329 in H5D__create_named(): unable to create and
link to dataset
major: Dataset
minor: Unable to initialize object
  #002: H5L.c line 1557 in H5L_link_object(): unable to create new
link to object
major: Links
minor: Unable to initialize object
  #003: H5L.c line 1798 in H5L__create_real(): can't insert link
major: Links
minor: Unable to insert object
  #004: H5Gtraverse.c line 851 in H5G_traverse(): internal path
traversal failed
major: Symbol table
minor: Object not found
  #005: H5Gtraverse.c line 627 in H5G__traverse_real(): traversal
operator failed
major: Symbol table
minor: Callback failed
  #006: H5L.c line 1604 in H5L__link_cb(): unable to create object
major: Links
minor: Unable to initialize object
  #007: H5Oint.c line 2453 in H5O_obj_create(): unable to open object
major: Object header
minor: Can't open object
  #008: H5Doh.c line 300 in H5O__dset_create(): unable to create
dataset
minor: Object not found
  #005: H5Gtraverse.c line 627 in H5G__traverse_real(): traversal
operator failed
major: Symbol table
minor: Callback failed
  #006: H5L.c line 1604 in H5L__link_cb(): unable to create object
major: Links
minor: Unable to initialize object
  #007: H5Oint.c line 2453 in H5O_obj_create(): unable to open object
major: Object header
minor: Can't open object
  #008: H5Doh.c line 300 in H5O__dset_create(): unable to create
dataset
major: Dataset
minor: Unable to initialize object
  #009: H5Dint.c line 1274 in H5D__create(): unable to construct
layout information
major: Dataset
minor: Unable to initialize object
  #010: H5Dchunk.c line 872 in H5D__chunk_construct(): unable to set chunk sizes
major: Dataset
minor: Bad value
  #011: H5Dchunk.c line 831 in H5D__chunk_set_sizes(): chunk size must be < 4GB
major: Dataset
minor: Unable to initialize object
major: Dataset
minor: Unable to initialize object
  #009: H5Dint.c line 1274 in H5D__create(): unable to construct
layout information
major: Dataset
minor: Unable to initialize object
  #010: H5Dchunk.c line 872 in H5D__chunk_construct(): unable to set chunk sizes
major: Dataset
minor: Bad value
  #011: H5Dchunk.c line 831 in H5D__chunk_set_sizes(): chunk size must be < 4GB
major: Dataset
minor: Unable to initialize object
...

I spoke to Barry last evening, who said that this is a known error that was
fixed for DMDA vecs but is still broken for non-DMDA vecs.

Could this be fixed?

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Error with VecDestroy_MPIFFTW+0x61

2019-04-15 Thread Sajid Ali via petsc-users
Hi Barry & Matt,

I'd be happy to contribute a patch once I understand what's going on.

@Matt, where is the padding occurring? In the VecCreateFFTW I see that each
process looks up the dimension of the array it's supposed to hold and asks
for memory to hold it via fftw_malloc (which, as you say, is just a wrapper
around SIMD-aligned malloc). Is the crash occurring because the first vector
was created via fftw_malloc and duplicated via PetscMalloc, and the two
happen to have different alignment sizes (FFTW was compiled with simd=avx2
since I'm using a Broadwell Xeon, while PetscMalloc aligns to
PETSC_MEMALIGN)?

PS: I've only ever used FFTW via the python interface (and automated the
build, but couldn't automate testing of pyfftw-mpi since cython coverage
reporting is confusing).

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Error with VecDestroy_MPIFFTW+0x61

2019-04-14 Thread Sajid Ali via petsc-users
Thanks for the temporary fix.

(PS: I was wondering if it would be trivial to just extend the code to have
four mallocs and create a new function but it looks like the logic is much
more complicated.)


Re: [petsc-users] Error with VecDestroy_MPIFFTW+0x61

2019-04-14 Thread Sajid Ali via petsc-users
 Hi Matt,

While in theory that sounds perfect, I still get the same error. I'm
attaching a minimal test program which creates 3 vectors x, y, z via the
petsc-fftw interface and a test vector via VecDuplicate, and then destroys
all the vectors. Without the test vector everything works fine.

Thank You,
Sajid Ali
Applied Physics
Northwestern University


ex_modify.c
Description: Binary data


[petsc-users] Error with VecDestroy_MPIFFTW+0x61

2019-04-14 Thread Sajid Ali via petsc-users
Hi PETSc Developers,

I happen to be a user who needs 4 vectors whose layout is aligned with
FFTW. The usual MatCreateVecsFFTW allows one to make 3 such vectors. To get
around this I call the function twice, once with three vectors, once with
one vector (and 2x NULL). This causes a strange segfault when freeing the
memory with VecDestroy (for the lone vector I'm guessing).

My program runs with no issues if I create the fourth vector with
VecCreateMPI (I need the 4th vector for a point-wise multiply, so this would
be inefficient).

To get around this problem, is there a way to ask for 4 vectors aligned to
the FFTW matrix? If not, is there a way to get the intended behavior from
VecCreateMPI (perhaps by using a helper function to determine the data
layout/alignment and pass that to it instead of using PETSC_DECIDE)?
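
Something like this is what I meant by the VecCreateMPI route (a sketch with
toy sizes; it only copies the parallel layout of an FFTW vector, not FFTW's
internal alignment or padding, which is exactly the inefficiency I'm worried
about):

```
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, y, z, w;
  PetscInt       dim[2] = {64, 64}, n;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MatCreateFFT(PETSC_COMM_WORLD, 2, dim, MATFFTW, &A);CHKERRQ(ierr);
  ierr = MatCreateVecsFFTW(A, &x, &y, &z);CHKERRQ(ierr); /* the three FFTW-aligned vectors */

  /* fourth vector: copy the parallel layout of x instead of using PETSC_DECIDE */
  ierr = VecGetLocalSize(x, &n);CHKERRQ(ierr);
  ierr = VecCreateMPI(PETSC_COMM_WORLD, n, PETSC_DECIDE, &w);CHKERRQ(ierr);

  ierr = VecDestroy(&w);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = VecDestroy(&z);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```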

I'm attaching my code just in case what I'm thinking is wrong and anyone
would be kind enough to point it out to me. The issue is at line 87/88.
With 87, the program crashes, with 88 it works fine.

Thanks in advance for the help!

-- 
Sajid Ali
Applied Physics
Northwestern University


ex_ms.c
Description: Binary data


Re: [petsc-users] How to build FFTW3 interface?

2019-04-12 Thread Sajid Ali via petsc-users
Hi Balay,

Confirming that the spack variant works. Thanks for adding it.


[petsc-users] How to build FFTW3 interface?

2019-04-11 Thread Sajid Ali via petsc-users
Hi PETSc Developers,

To run an example that involves the petsc-fftw interface, I loaded both
petsc and fftw modules (linked, of course, to the same mpi) but the compiler
complains of having no knowledge of functions like MatCreateVecsFFTW, which
happens to be defined (in the source repo) at petsc/src/mat/impls/fft/fftw.c.
I don't see a corresponding definition in the install folder (I may be wrong,
but I just did a simple grep to find the definition of the function I'm
looking for and didn't find it, while it was present in the header and
example files).

From previous threads on this list-serv I see that the developers asked
users to use --download-fftw at configure time, but for users that already
have an fftw installed, is there an option to ask petsc to build the
interfaces as well (I didn't see any such option listed either here:
https://www.mcs.anl.gov/petsc/documentation/installation.html or as a
variant in spack)?

Also, could the fftw version to download be bumped to 3.3.8 (here:
petsc/config/BuildSystem/config/packages/fftw.py), since 3.3.7 gives
erroneous results with gcc-8?

Bug in fftw-3.3.7+gcc-8 :
https://github.com/FFTW/fftw3/commit/19eeeca592f63413698f23dd02b9961f22581803


Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Argument out of range error only in certain mpi sizes

2019-04-11 Thread Sajid Ali via petsc-users
One last question I have is: does PETSc automatically choose a good chunk
size based on the size of the vector it has and use it to write the dataset?
Or is this something I shouldn't really worry about (not that it affects me
now, but it would be good to not have a slow read from a python script for
post-processing)?


Re: [petsc-users] Argument out of range error only in certain mpi sizes

2019-04-10 Thread Sajid Ali via petsc-users
Thanks a lot for the advice Matt and Barry.

One thing I wanted to confirm is that when I change from using a regular
Vec to a Vec created using DMDACreateGlobalVector, to fill these with data
from hdf5, I have to change the dimensions of hdf5 vectors from
(dim_x*dim_y) to (dim_x,dim_y), right?

Because I see that if I write to hdf5 from a complex vector created using
DMDA, I get a vector that has dimensions (dim_x,dim_y,2) but before I saw
the dimension of the same to be (dim_x*dim_y,2).

-- 
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] Argument out of range error only in certain mpi sizes

2019-04-10 Thread Sajid Ali via petsc-users
Hi PETSc developers,

I wanted to convert my code, in which I was using general Vec/Mat, to
DMDA-based grid management (nothing fancy, just a 5-point complex stencil).
For this, I created a DA object and created the global solution vector using
it. This worked fine.

Now, I created the matrix using DMCreateMatrix and filled it using the
regular function. I get no error when I run the problem using mpirun -np 1,
so I thought my matrix-filling logic aligns with the DM non-zero locations
(star stencil, width 1). With mpirun -np 2, no errors either. But with
mpirun -np 4 or 8, I get errors which say: "Argument out of range/
Inserting a new nonzero at global row/column ... into matrix".

I would switch over to the logic provided in KSP/ex46.c for filling the
matrix via MatSetValuesStencil (a sketch of what I mean is below), but I
wanted to know what I was doing wrong in my current code, since I've never
seen an error that depends on the number of mpi processes.
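
For concreteness, this is the ex46.c-style fill I mean (a small
self-contained sketch with made-up coefficients and a periodic grid so no
boundary special-casing is needed; it is not my actual code):

```
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da;
  Mat            A;
  PetscInt       i, j, xs, ys, xm, ym;
  MatStencil     row, col[5];
  PetscScalar    v[5];
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  /* small periodic grid, star stencil of width 1 */
  ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_PERIODIC, DM_BOUNDARY_PERIODIC,
                      DMDA_STENCIL_STAR, 16, 16, PETSC_DECIDE, PETSC_DECIDE,
                      1, 1, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = DMCreateMatrix(da, &A);CHKERRQ(ierr);

  /* loop only over the locally owned part; the DM maps (i,j) to global rows */
  ierr = DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL);CHKERRQ(ierr);
  for (j = ys; j < ys + ym; j++) {
    for (i = xs; i < xs + xm; i++) {
      row.i = i; row.j = j;
      col[0].i = i;     col[0].j = j;     v[0] = -4.0;  /* center of the 5-point stencil */
      col[1].i = i - 1; col[1].j = j;     v[1] =  1.0;
      col[2].i = i + 1; col[2].j = j;     v[2] =  1.0;
      col[3].i = i;     col[3].j = j - 1; v[3] =  1.0;
      col[4].i = i;     col[4].j = j + 1; v[4] =  1.0;
      ierr = MatSetValuesStencil(A, 1, &row, 5, col, v, INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```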

I'm attaching the code (which works if the matrix is created without using
the DA, i.e. comment out line 159 and uncomment 161/162; I'm doing this on a
small grid to catch errors).

Thanks in advance for the help.


-- 
Sajid Ali
Applied Physics
Northwestern University


ex_dmda.c
Description: Binary data


[petsc-users] Estimate memory needs for large grids

2019-04-05 Thread Sajid Ali via petsc-users
Hi,

I'm solving a simple linear equation [ u_t = A*u_xx + A*u_yy + F_t*u ] on a
grid of size 55296x55296. I'm reading a vector of that size from an hdf5
file and have the jacobian matrix as a modified 5-point stencil which is
preallocated with the following:
```
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,M,M);CHKERRQ(ierr);
  ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A,5,NULL,5,NULL);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(A,5,NULL);CHKERRQ(ierr);
```
The total number of elements is ~3e9 and the matrix size is ~9e9 (but only
5 diagonals are nonzero). I'm reading F_t, which has ~3e9 elements. I'm
using double complex numbers and I've compiled with int64 indices.

Thus, for the vector I need 55296x55296x2x8 bytes ~ 50 GB, and for the F
vector another 50 GB. For the matrix I need ~250 GB, plus some overhead for
the solver.

How do I estimate this overhead (and estimate how many nodes I would need
to run this given the maximum memory per node (as specified by slurm's
--mem option)) ?

Thanks in advance for the help!

-- 
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] Converting complex PDE to real for KNL performance ?

2019-03-27 Thread Sajid Ali via petsc-users
 Hi,

I'm able to solve the following equation using complex numbers (with
ts_type cn and pc_type gamg) :
  u_t = A*u'' + F_t*u;
(where A = -1j/(2k) and u'' refers to u_xx+u_yy, implemented with the
familiar 5-point stencil)

Now, I want to solve the same problem using real numbers. The equivalent
equations are:
u_t_real   =  1/(2k) * u''_imag + F_real*u_real   - F_imag*u_imag
u_t_imag = -1/(2k) * u''_real   + F_imag*u_real - F_real*u_imag

Thus, if we now take our new u vector to have twice the length of the
problem we're solving, keeping the first half as real and the second half
as imaginary, we'd get a matrix that had matrices computing the laplacian
via the 5-point stencil in the top-right and bottom-left corners and a
diagonal [F_real+F_imag, F_real-F_imag] term.

I tried doing this and the gamg preconditioner complains about an
unsymmetric matrix. If I use the default preconditioner, I get
DIVERGED_NONLINEAR_SOLVE.

Is there a way to better organize the matrix ?

PS: I'm trying to do this using only real numbers because I realized that
the optimized avx-512 kernels for KNL are not implemented for complex
numbers. Would that be implemented soon ?

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Direct PETSc to use MCDRAM on KNL and other optimizations for KNL

2019-03-01 Thread Sajid Ali via petsc-users
Hi Hong,

So, the speedup was coming from increased DRAM bandwidth and not the usage
of MCDRAM.

There is moderate MPI imbalance, a large number of back-end stalls, and
good vectorization.

I'm attaching my submit script, PETSc log file and Intel APS summary (all
as non-HTML text). I can give more detailed analysis via Intel Vtune if
needed.


Thank You,
Sajid Ali
Applied Physics
Northwestern University


submit_script
Description: Binary data


intel_aps_report
Description: Binary data


knl_petsc
Description: Binary data


Re: [petsc-users] Direct PETSc to use MCDRAM on KNL and other optimizations for KNL

2019-02-28 Thread Sajid Ali via petsc-users
Hi Hong,

Thanks for the advice. I see that the example takes ~180 seconds to run but
I can't see the DRAM vs MCDRAM info from Intel APS. I'll try to fix the
profiling and get back with further questions.

Also, the intel-mpi manpages say that the use of tmi is now deprecated :
https://software.intel.com/en-us/mpi-developer-guide-linux-fabrics-control


Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Direct PETSc to use MCDRAM on KNL and other optimizations for KNL

2019-02-27 Thread Sajid Ali via petsc-users
Hi Junchao,

I’m confused with the syntax. If I submit the following as my job script, I
get an error :

#!/bin/bash
#SBATCH --job-name=petsc_test
#SBATCH -N 1
#SBATCH -C knl,quad,flat
#SBATCH -p apsxrmd
#SBATCH --time=1:00:00

module load intel/18.0.3-d6gtsxs
module load intel-parallel-studio/cluster.2018.3-xvnfrfz
module load numactl-2.0.12-intel-18.0.3-wh44iog
srun -n 64 -c 64 --cpu_bind=cores numactl -m 1 aps ./ex_modify
-ts_type cn -prop_steps 25 -pc_type gamg -ts_monitor -log_view

The error is :
srun: cluster configuration lacks support for cpu binding
srun: error: Unable to create step for job 916208: More processors
requested than permitted

I’m following the advice as given at slide 33 of
https://www.nersc.gov/assets/Uploads/02-using-cori-knl-nodes-20170609.pdf

For further info, I’m using LCRC at ANL.

Thank You,
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] Link to the latest tutorials broken

2019-02-07 Thread Sajid Ali via petsc-users
Hi,

The links to the Jan 2019 presentations at
https://www.mcs.anl.gov/petsc/documentation/tutorials/index.html are
broken. Could these be fixed ?

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] petsc4py for numpy array <-> MATDENSE binary

2019-02-01 Thread Sajid Ali via petsc-users
The vector is essentially snapshots in time of a data array. I should
probably store this as a 2D dense matrix of dimensions (dim_x*dim_y) x
dim_z. Then I can pick one column at a time and use it for my TS Jacobian.
Apologies for being a little unclear.
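
Roughly what I have in mind, with toy sizes (a sketch; filling the matrix
from the data is elided, and MatGetColumnVector is what pulls out one
snapshot as a Vec):

```
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            F;
  Vec            col;
  PetscInt       nx = 8, ny = 8, nz = 5, step = 2;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  /* (nx*ny) x nz dense matrix: one column per time snapshot */
  ierr = MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nx*ny, nz, NULL, &F);CHKERRQ(ierr);
  /* ... MatSetValues(F, ...) with the snapshot data would go here ... */
  ierr = MatAssemblyBegin(F, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(F, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* pull out the snapshot for time step `step` as a Vec for the TS Jacobian */
  ierr = MatCreateVecs(F, NULL, &col);CHKERRQ(ierr);
  ierr = MatGetColumnVector(F, col, step);CHKERRQ(ierr);

  ierr = VecDestroy(&col);CHKERRQ(ierr);
  ierr = MatDestroy(&F);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```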

-- 
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Reading a complex vector from HDF5

2019-02-01 Thread Sajid Ali via petsc-users
I think I understand what's happening here: when I look at a data file
created by the aforementioned example, I see a complex=1 attribute that I'm
missing when I make my own hdf5 file.
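
For the record, this is roughly how I plan to tag my own file so it matches
(a sketch using the HDF5 C API; the dataset name "/vec" and the integer
attribute type are my guesses, only the complex=1 name/value comes from the
PETSc-written file):

```
#include "hdf5.h"

int main(void)
{
  hid_t file, dset, space, attr;
  int   one = 1;

  /* open my existing file/dataset and attach a scalar integer attribute */
  file  = H5Fopen("input.h5", H5F_ACC_RDWR, H5P_DEFAULT);
  dset  = H5Dopen2(file, "/vec", H5P_DEFAULT);
  space = H5Screate(H5S_SCALAR);
  attr  = H5Acreate2(dset, "complex", H5T_NATIVE_INT, space, H5P_DEFAULT, H5P_DEFAULT);
  H5Awrite(attr, H5T_NATIVE_INT, &one);
  H5Aclose(attr); H5Sclose(space); H5Dclose(dset); H5Fclose(file);
  return 0;
}
```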


Re: [petsc-users] Reading a complex vector from HDF5

2019-02-01 Thread Sajid Ali via petsc-users
Column 1 contains the real value and column 2 contains the imaginary value,
correct?

I did that last time as well (and opened it using h5py just to be sure that
the shape is indeed dim x 2 and the datatype is f8),  yet I get the error.

The error comes from these lines in PETSc :

#if defined(PETSC_USE_COMPLEX)
if (!h->complexVal) {
H5T_class_t clazz = H5Tget_class(datatype);
if (clazz == H5T_FLOAT) SETERRQ(PetscObjectComm((PetscObject)viewer),
PETSC_ERR_SUP,"File contains real numbers but PETSc is configured for
complex. The conversion is not yet implemented. Configure with
--with-scalar-type=real.");
}

Am I setting the dtype incorrectly?


[petsc-users] Reading a complex vector from HDF5

2019-02-01 Thread Sajid Ali via petsc-users
Hi,

I'm trying to load a complex vector from hdf5 and I get and I get the
following error:

[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: File contains real numbers but PETSc is configured for
complex. The conversion is not yet implemented. Configure with
--with-scalar-type=real.
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision:
ba892ce6f06cbab637e31ab2bf33fd996c92d19d  GIT Date: 2019-01-29 05:55:16
+0100
[0]PETSC ERROR: ./ex_modify on a  named xrm by sajid Fri Feb  1 14:18:30
2019
[0]PETSC ERROR: Configure options
--prefix=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/petsc-develop-gh5lollh4dzklppm7kb7wpjtyh4gi4mi
--with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0
CFLAGS="-march=native -O2" FFLAGS="-march=native -O2"
CXXFLAGS="-march=native -O2"
--with-cc=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/mpich-3.3-x4tesuo4sxsomfxns5i26vco7ywojdmz/bin/mpicc
--with-cxx=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/mpich-3.3-x4tesuo4sxsomfxns5i26vco7ywojdmz/bin/mpic++
--with-fc=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/mpich-3.3-x4tesuo4sxsomfxns5i26vco7ywojdmz/bin/mpif90
--with-precision=double --with-scalar-type=complex
--with-shared-libraries=1 --with-debugging=0 --with-64-bit-indices=0
COPTFLAGS= FOPTFLAGS= CXXOPTFLAGS=
--with-blaslapack-lib=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/openblas-0.3.5-heqggft2eii2xktdbyp2zovez2yj6ugd/lib/libopenblas.so
--with-x=1 --with-clanguage=C --with-scalapack=0 --with-metis=1
--with-metis-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/metis-5.1.0-mugo6fn7u2nlz72vo4ajlhbba5q7a5jn
--with-hdf5=1
--with-hdf5-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/hdf5-1.10.4-pes54zx5fsbgdg2nr7reyopc4zfc43wo
--with-hypre=0 --with-parmetis=1
--with-parmetis-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/parmetis-4.0.3-kizvrkoos4ddlfc7jsnatvh3mdk3lgvg
--with-mumps=0 --with-trilinos=0 --with-cxx-dialect=C++11
--with-superlu_dist-include=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/superlu-dist-develop-d6dqq3ribpkyoawnwuckbtydgddijj6j/include
--with-superlu_dist-lib=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/superlu-dist-develop-d6dqq3ribpkyoawnwuckbtydgddijj6j/lib/libsuperlu_dist.a
--with-superlu_dist=1 --with-suitesparse=0
--with-zlib-include=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/zlib-1.2.11-v4lhujkxdy6ukr5ymjxgxcfb2qoo6vf3/include
--with-zlib-lib="-L/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-8.2.0/zlib-1.2.11-v4lhujkxdy6ukr5ymjxgxcfb2qoo6vf3/lib
-lz" --with-zlib=1
[0]PETSC ERROR: #1 PetscViewerHDF5Load() line 1348 in
/tmp/sajid/spack-stage/spack-stage-hy86Yb/petsc/src/sys/classes/viewer/impls/hdf5/hdf5v.c
[0]PETSC ERROR: #2 VecLoad_HDF5() line 170 in
/tmp/sajid/spack-stage/spack-stage-hy86Yb/petsc/src/vec/vec/utils/vecio.c
[0]PETSC ERROR: #3 VecLoad_Default() line 288 in
/tmp/sajid/spack-stage/spack-stage-hy86Yb/petsc/src/vec/vec/utils/vecio.c
[0]PETSC ERROR: #4 VecLoad() line 933 in
/tmp/sajid/spack-stage/spack-stage-hy86Yb/petsc/src/vec/vec/interface/vector.c
[0]PETSC ERROR: #5 main() line 132 in
/raid/home/sajid/packages/xwp_petsc/2d/matter_repeat/ex_modify.c
[0]PETSC ERROR: No PETSc Option Table entries
[0]PETSC ERROR: End of Error Message ---send entire
error message to petsc-ma...@mcs.anl.gov--

As per src/vec/vec/examples/tutorials/ex10.c, a complex 1D vector is saved
as a 2D real-valued vector of shape (dim, 2). I've done the same with my
data and yet I get the above error. Is there any special precaution I should
take when making the HDF5 file?

Thank You,
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] petsc4py for numpy array <-> MATDENSE binary

2019-01-31 Thread Sajid Ali via petsc-users
Hi,

I have a large (~25k x 25k x 50) 3D array that I want to store as a binary
PETSc-readable matrix in the MATDENSE format. I've currently achieved this
(for smaller arrays) by writing the matrix out to an ASCII file and
converting that ASCII file to binary, one number at a time, using a C
script.

Does petsc4py support converting a 3d numpy array (complex valued) to
MATDENSE (MPI) binary format directly ?

I saw some discussion on this topic back in 2012 on the mailing list but
it's not clear to me what happened after that.
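
(Not the petsc4py route I'm asking about, but for reference, a plain-C sketch
of writing a MATDENSE matrix straight to a PETSc binary file with a viewer,
skipping the ASCII intermediate; sizes are toy values and the MatSetValues
fill is elided.)

```
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            F;
  PetscViewer    viewer;
  PetscInt       nx = 16, ny = 16, nz = 4;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nx*ny, nz, NULL, &F);CHKERRQ(ierr);
  /* ... fill F with MatSetValues from the 3D array here ... */
  ierr = MatAssemblyBegin(F, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(F, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* dump to PETSc binary, readable later with MatLoad */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "F.dat", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = MatView(F, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  ierr = MatDestroy(&F);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```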

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about TSSetRHSJacobian for linear time dependent problem

2019-01-27 Thread Sajid Ali via petsc-users
The form is u_t = A(t)u.

On Sun, Jan 27, 2019 at 4:24 PM Smith, Barry F.  wrote:

>
>
> > On Jan 25, 2019, at 4:51 PM, Sajid Ali via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> >
> > Hi,
> >
> > If I have a linear time dependent equation I'm trying to solve using TS,
> I can use :
> > TSSetProblemType(ts,TS_LINEAR);
> > TSSetRHSFunction(ts,NULL,TSComputeRHSFunctionLinear,NULL);
> > TSSetRHSJacobian(ts,A,A,YourComputeRHSJacobian, );
> >
> > If the matrix that's being evaluated by YourComputeRHSJacobian is such
> that the non-zero structure stays the same and only the diagonal changes
> with time, is there a way to optimize the function so that it doesn't
> create the whole matrix from scratch each time ?
>
> If it is a linear PDE u_t = A u  then how can A change with time? It
> sounds like it really isn't a linear problem?
>
>Barry
>
> >
> > Naively I can make a dummy matrix and store the copy from t=0 and change
> the diagonal at each iteration but that unnecessarily doubles the memory
> consumption, is there a better way?
> >
> >
> > Thank You,
> > Sajid Ali
> > Applied Physics
> > Northwestern University
>
>

-- 
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] Question about TSSetRHSJacobian for linear time dependent problem

2019-01-25 Thread Sajid Ali via petsc-users
Hi,

If I have a linear time dependent equation I'm trying to solve using TS, I
can use :
TSSetProblemType(ts,TS_LINEAR);
TSSetRHSFunction(ts,NULL,TSComputeRHSFunctionLinear,NULL);
TSSetRHSJacobian(ts,A,A,YourComputeRHSJacobian, );

If the matrix that's being evaluated by YourComputeRHSJacobian is such that
the non-zero structure stays the same and only the diagonal changes with
time, is there a way to optimize the function so that it doesn't create the
whole matrix from scratch each time ?

Naively I can make a dummy matrix and store the copy from t=0 and change
the diagonal at each iteration but that unnecessarily doubles the memory
consumption, is there a better way?
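
To make the question concrete, here is a sketch of the kind of RHS Jacobian
I'd like to write (the time-dependent factor is a placeholder; it assumes the
off-diagonal part was assembled into the matrix once up front, and only a Vec
copy of the diagonal is kept around instead of a whole dummy matrix):

```
#include <petscts.h>

typedef struct {
  Vec diag0;   /* diagonal of the operator at t = 0      */
  Vec work;    /* scratch vector for the new diagonal    */
} AppCtx;

static PetscErrorCode MyRHSJacobian(TS ts, PetscReal t, Vec U, Mat A, Mat B, void *ctx)
{
  AppCtx        *user = (AppCtx*)ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* new diagonal = old diagonal scaled by some time-dependent factor;
     the (1 + t) below is a placeholder for the real physics */
  ierr = VecCopy(user->diag0, user->work);CHKERRQ(ierr);
  ierr = VecScale(user->work, 1.0 + t);CHKERRQ(ierr);

  /* overwrite just the diagonal entries of the preallocated matrix;
     off-diagonal entries from the initial assembly are untouched */
  ierr = MatDiagonalSet(B, user->work, INSERT_VALUES);CHKERRQ(ierr);
  if (A != B) {
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}
```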


Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] VecCopy fails after VecDuplicate

2019-01-17 Thread Sajid Ali via petsc-users
Never mind, it was my fault in thinking the error was with u_abs and not u.
I switched from local-array-based value setting for the initial conditions
to VecSetValues when converting the uniprocessor example to an MPI program.
While I removed VecRestoreArray and swapped the u_local[*ptr] assignments
with VecSetValues, I missed adding VecAssemblyBegin/End to compensate.
Thanks for pointing out that the error was with u and not u_abs.
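
For anyone who hits this later, the pattern I was missing, in minimal form:

```
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            u;
  PetscInt       i, lo, hi;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 256, &u);CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(u, &lo, &hi);CHKERRQ(ierr);
  for (i = lo; i < hi; i++) {
    PetscScalar val = (PetscScalar)i;        /* placeholder initial condition */
    ierr = VecSetValues(u, 1, &i, &val, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = VecAssemblyBegin(u);CHKERRQ(ierr);  /* the two calls I had left out */
  ierr = VecAssemblyEnd(u);CHKERRQ(ierr);
  ierr = VecDestroy(&u);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```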

On Thu, Jan 17, 2019 at 1:29 PM Sajid Ali 
wrote:

> As requested :
>
> [sajid@xrm free_space]$ ./ex_modify
> Solving a linear TS problem on 1 processor
> m : 256, slices : 1000.00, lambda : 1.239800e-10
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Object is in wrong state
> [0]PETSC ERROR: Not for unassembled vector
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision:
> 0cd88d33dca7e1f18a10cbb6fcb08f83d068c5f4  GIT Date: 2019-01-06 13:27:26
> -0600
> [0]PETSC ERROR: ./ex_modify on a  named xrm by sajid Thu Jan 17 13:29:12
> 2019
> [0]PETSC ERROR: Configure options
> --prefix=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj
> --with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0
> CFLAGS= FFLAGS= CXXFLAGS=
> --with-cc=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpicc
> --with-cxx=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpic++
> --with-fc=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpif90
> --with-precision=double --with-scalar-type=complex
> --with-shared-libraries=1 --with-debugging=1 --with-64-bit-indices=0
> --with-debugging=%s
> --with-blaslapack-lib="/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/intel-mkl-2019.0.117-wzqlcijwx7odz2x5chembudo5leqpfh2/compilers_and_libraries_2019.0.117/linux/mkl/lib/intel64/libmkl_intel_lp64.so
> /raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/intel-mkl-2019.0.117-wzqlcijwx7odz2x5chembudo5leqpfh2/compilers_and_libraries_2019.0.117/linux/mkl/lib/intel64/libmkl_sequential.so
> /raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/intel-mkl-2019.0.117-wzqlcijwx7odz2x5chembudo5leqpfh2/compilers_and_libraries_2019.0.117/linux/mkl/lib/intel64/libmkl_core.so
> /lib64/libpthread.so /lib64/libm.so /lib64/libdl.so" --with-x=1
> --with-clanguage=C --with-scalapack=0 --with-metis=1
> --with-metis-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/metis-5.1.0-nhgzn4kjskctzmzv35mstvd34nj2ugek
> --with-hdf5=1
> --with-hdf5-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/hdf5-1.10.4-ltstvsxvyjue2gxfegi4nvr6c5xg3zww
> --with-hypre=0 --with-parmetis=1
> --with-parmetis-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/parmetis-4.0.3-hw3j2ss7mjsc5x5f2gaflirnuufzptil
> --with-mumps=0 --with-trilinos=0 --with-cxx-dialect=C++11
> --with-superlu_dist-include=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/superlu-dist-develop-cpspq4ca2hnyvhx4mz7zsupbj3do6md3/include
> --with-superlu_dist-lib=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/superlu-dist-develop-cpspq4ca2hnyvhx4mz7zsupbj3do6md3/lib/libsuperlu_dist.a
> --with-superlu_dist=1 --with-suitesparse=0
> --with-zlib-include=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/zlib-1.2.11-ldu43taplg2nbkxtem346zq4ibhad64i/include
> --with-zlib-lib="-L/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/zlib-1.2.11-ldu43taplg2nbkxtem346zq4ibhad64i/lib
> -lz" --with-zlib=1
> [0]PETSC ERROR: #1 VecCopy() line 1571 in
> /tmp/sajid/spack-stage/spack-stage-nwxY3Q/petsc/src/vec/vec/interface/vector.c
> [0]PETSC ERROR: #2 Monitor() line 296 in
> /raid/home/sajid/packages/xwp_petsc/1d/free_space/ex_modify.c
> [0]PETSC ERROR: #3 TSMonitor() line 3929 in
> /tmp/sajid/spack-stage/spack-stage-nwxY3Q/petsc/src/ts/interface/ts.c
> [0]PETSC ERROR: #4 TSSolve() line 3843 in
> /tmp/sajid/spack-stage/spack-stage-nwxY3Q/petsc/src/ts/interface/ts.c
> [0]PETSC ERROR: #5 main() line 188 in
> /raid/home/sajid/packages/xwp_petsc/1d/free_space/ex_modify.c
> [0]PETSC ERROR: No PETSc Option Table entries
> [0]PETSC ERROR: End of Error Message ---send entire
> error message to petsc-ma...@mcs.anl.gov--
> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0
> [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=73
> :
> system msg for write_line failure : Bad file descriptor
>
>

-- 
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] VecCopy fails after VecDuplicate

2019-01-17 Thread Sajid Ali via petsc-users
As requested :

[sajid@xrm free_space]$ ./ex_modify
Solving a linear TS problem on 1 processor
m : 256, slices : 1000.00, lambda : 1.239800e-10
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Object is in wrong state
[0]PETSC ERROR: Not for unassembled vector
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision:
0cd88d33dca7e1f18a10cbb6fcb08f83d068c5f4  GIT Date: 2019-01-06 13:27:26
-0600
[0]PETSC ERROR: ./ex_modify on a  named xrm by sajid Thu Jan 17 13:29:12
2019
[0]PETSC ERROR: Configure options
--prefix=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj
--with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0
CFLAGS= FFLAGS= CXXFLAGS=
--with-cc=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpicc
--with-cxx=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpic++
--with-fc=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpif90
--with-precision=double --with-scalar-type=complex
--with-shared-libraries=1 --with-debugging=1 --with-64-bit-indices=0
--with-debugging=%s
--with-blaslapack-lib="/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/intel-mkl-2019.0.117-wzqlcijwx7odz2x5chembudo5leqpfh2/compilers_and_libraries_2019.0.117/linux/mkl/lib/intel64/libmkl_intel_lp64.so
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/intel-mkl-2019.0.117-wzqlcijwx7odz2x5chembudo5leqpfh2/compilers_and_libraries_2019.0.117/linux/mkl/lib/intel64/libmkl_sequential.so
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/intel-mkl-2019.0.117-wzqlcijwx7odz2x5chembudo5leqpfh2/compilers_and_libraries_2019.0.117/linux/mkl/lib/intel64/libmkl_core.so
/lib64/libpthread.so /lib64/libm.so /lib64/libdl.so" --with-x=1
--with-clanguage=C --with-scalapack=0 --with-metis=1
--with-metis-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/metis-5.1.0-nhgzn4kjskctzmzv35mstvd34nj2ugek
--with-hdf5=1
--with-hdf5-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/hdf5-1.10.4-ltstvsxvyjue2gxfegi4nvr6c5xg3zww
--with-hypre=0 --with-parmetis=1
--with-parmetis-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/parmetis-4.0.3-hw3j2ss7mjsc5x5f2gaflirnuufzptil
--with-mumps=0 --with-trilinos=0 --with-cxx-dialect=C++11
--with-superlu_dist-include=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/superlu-dist-develop-cpspq4ca2hnyvhx4mz7zsupbj3do6md3/include
--with-superlu_dist-lib=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/superlu-dist-develop-cpspq4ca2hnyvhx4mz7zsupbj3do6md3/lib/libsuperlu_dist.a
--with-superlu_dist=1 --with-suitesparse=0
--with-zlib-include=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/zlib-1.2.11-ldu43taplg2nbkxtem346zq4ibhad64i/include
--with-zlib-lib="-L/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/zlib-1.2.11-ldu43taplg2nbkxtem346zq4ibhad64i/lib
-lz" --with-zlib=1
[0]PETSC ERROR: #1 VecCopy() line 1571 in
/tmp/sajid/spack-stage/spack-stage-nwxY3Q/petsc/src/vec/vec/interface/vector.c
[0]PETSC ERROR: #2 Monitor() line 296 in
/raid/home/sajid/packages/xwp_petsc/1d/free_space/ex_modify.c
[0]PETSC ERROR: #3 TSMonitor() line 3929 in
/tmp/sajid/spack-stage/spack-stage-nwxY3Q/petsc/src/ts/interface/ts.c
[0]PETSC ERROR: #4 TSSolve() line 3843 in
/tmp/sajid/spack-stage/spack-stage-nwxY3Q/petsc/src/ts/interface/ts.c
[0]PETSC ERROR: #5 main() line 188 in
/raid/home/sajid/packages/xwp_petsc/1d/free_space/ex_modify.c
[0]PETSC ERROR: No PETSc Option Table entries
[0]PETSC ERROR: End of Error Message ---send entire
error message to petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=73
:
system msg for write_line failure : Bad file descriptor


[petsc-users] VecCopy fails after VecDuplicate

2019-01-17 Thread Sajid Ali via petsc-users
Hi,

I have the following 2 lines in a function in my code :

 ierr = VecDuplicate(u,_abs);CHKERRQ(ierr);
 ierr = VecCopy(u,u_abs);CHKERRQ(ierr);

The VecCopy fails with the error message :
[0]PETSC ERROR: Object is in wrong state
[0]PETSC ERROR: Not for unassembled vector

adding the following statements doesn't help either :
ierr = VecAssemblyBegin(u_abs);CHKERRQ(ierr);
ierr = VecAssemblyEnd(u_abs);CHKERRQ(ierr);

If needed, the entire file is at :
https://github.com/s-sajid-ali/xwp_petsc/blob/master/1d/free_space/ex_modify.c

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about correctly catching fp_trap

2019-01-07 Thread Sajid Ali via petsc-users
 >   Anyway the FPE occurs in MatSolve_SeqAIJ_NaturalOrdering which usually
indicates a zero on the diagonal of the matrix. Is that possible?
It looks like this is indeed the case here. Thanks for the hint.
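
For reference, a quick way to confirm a zero (or tiny) diagonal entry before
the solve (just a sketch, assuming the system matrix A is already assembled):

#include <petscmat.h>

static PetscErrorCode ReportSmallestDiagonal(Mat A)
{
  Vec            d;
  PetscReal      dmin;
  PetscInt       row;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreateVecs(A, NULL, &d);CHKERRQ(ierr);   /* vector with the row layout of A */
  ierr = MatGetDiagonal(A, d);CHKERRQ(ierr);
  ierr = VecAbs(d);CHKERRQ(ierr);
  ierr = VecMin(d, &row, &dmin);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "smallest |diagonal entry| = %g at row %D\n", (double)dmin, row);CHKERRQ(ierr);
  ierr = VecDestroy(&d);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}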

@Satish Balay  : I tried building with the patch and
don't see any difference. Do you want me to send you the config and build
logs to investigate further? Apart from the -g flag, as I've stated above
another bug is that petsc uses the system gdb (rhel-7) and not the gdb
associated with the gcc that was used to build petsc.


Re: [petsc-users] Question about correctly catching fp_trap

2019-01-04 Thread Sajid Ali via petsc-users
Trying it slightly differently, I do see that it's a SIGFPE (arithmetic
exception), but all it shows is that the error is in TSSolve and nothing more
specific than that.

[sajid@xrm free_space]$ gdb ex_modify
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from
/raid/home/sajid/packages/xwp_petsc/2d/free_space/ex_modify...done.
(gdb) run -ts_type cn --args -fp_trap
Starting program:
/raid/home/sajid/packages/xwp_petsc/2d/free_space/ex_modify -ts_type cn
--args -fp_trap
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
warning: File
"/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-4.8.5/gcc-7.3.0-qrjpi76aeo4bysagruwwfii6oneh56lj/lib64/libstdc++.
so.6.0.24-gdb.py" auto-loading has been declined by your `auto-load
safe-path' set to "$debugdir:$datadir/auto-load:/usr/bin/mono-gdb.py".
To enable execution of this file add
add-auto-load-safe-path
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-4.8.5/gcc-7.3.0-qrjpi76aeo4bysagruwwfii6oneh56lj/lib64/libstdc++.
so.6.0.24-gdb.py
line to your configuration file "/raid/home/sajid/.gdbinit".
To completely disable this security protection add
set auto-load safe-path /
line to your configuration file "/raid/home/sajid/.gdbinit".
For more information about this security protection see the
"Auto-loading safe path" section in the GDB manual.  E.g., run from the
shell:
info "(gdb)Auto-loading safe path"
Solving a linear TS problem on 1 processor
mx : 512, my: 512 lambda : 1.239840e-10

Program received signal SIGFPE, Arithmetic exception.
__muldc3 (a=-6.6364040265716871e-306, b=1.1689456061105587e-305,
c=-0.0024992568840190117, d=0.024886737403015963)
at
/raid/home/sajid/packages/spack/var/spack/stage/gcc-7.3.0-qrjpi76aeo4bysagruwwfii6oneh56lj/gcc-7.3.0/libgcc/libgcc2.c:1978
1978
/raid/home/sajid/packages/spack/var/spack/stage/gcc-7.3.0-qrjpi76aeo4bysagruwwfii6oneh56lj/gcc-7.3.0/libgcc/libgcc2.c:
No such file or directory.
Missing separate debuginfos, use: debuginfo-install blas-3.4.2-8.el7.x86_64
bzip2-libs-1.0.6-13.el7.x86_64 elfutils-libelf-0.168-8.el7.x86_64
elfutils-libs-0.168-8.el7.x86_64 glibc-2.17-196.el7.x86_64
lapack-3.4.2-8.el7.x86_64 libattr-2.4.46-12.el7.x86_64
libcap-2.22-9.el7.x86_64 libgfortran-4.8.5-16.el7.x86_64
libxml2-2.9.1-6.el7_2.3.x86_64 systemd-libs-219-42.el7_4.4.x86_64
xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0  __muldc3 (a=-6.6364040265716871e-306, b=1.1689456061105587e-305,
c=-0.0024992568840190117, d=0.024886737403015963)
at
/raid/home/sajid/packages/spack/var/spack/stage/gcc-7.3.0-qrjpi76aeo4bysagruwwfii6oneh56lj/gcc-7.3.0/libgcc/libgcc2.c:1978
#1  0x7630fe87 in MatSolve_SeqAIJ_NaturalOrdering ()
   from
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj/lib/libpetsc.so.3.010
#2  0x75ba61a8 in MatSolve ()
   from
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj/lib/libpetsc.so.3.010
#3  0x76bc8a55 in PCApply_ILU ()
   from
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj/lib/libpetsc.so.3.010
#4  0x76cde6eb in PCApply ()
   from
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj/lib/libpetsc.so.3.010
#5  0x76e3ad4a in KSP_PCApply ()
   from
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj/lib/libpetsc.so.3.010
#6  0x76e3bc36 in KSPInitialResidual ()
   from
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj/lib/libpetsc.so.3.010
#7  0x76dc0736 in KSPSolve_GMRES ()
   from
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj/lib/libpetsc.so.3.010
#8  0x76e1158e in KSPSolve ()
   from
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj/lib/libpetsc.so.3.010
#9  0x76fac311 in SNESSolve_KSPONLY ()
   from
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj/lib/libpetsc.so.3.010
#10 0x76f346c7 in SNESSolve ()
   from

Re: [petsc-users] Question about correctly catching fp_trap

2019-01-04 Thread Sajid Ali via petsc-users
Going by the advice given here : https://stackoverflow.com/a/2792735

I tried running the program, hoping to catch the FP exception and I got the
error I had before (which turned out to be due to an FP exception as per
previous petsc thread):

Missing separate debuginfos, use: debuginfo-install blas-3.4.2-8.el7.x86_64
bzip2-libs-1.0.6-13.el7.x86_64 elfutils-libelf-0.168-8.el7.x86_64
elfutils-libs-0.168-8.el7.x86_64 glibc-2.17-196.el7.x86_64
lapack-3.4.2-8.el7.x86_64 libattr-2.4.46-12.el7.x86_64
libcap-2.22-9.el7.x86_64 libgfortran-4.8.5-16.el7.x86_64
libxml2-2.9.1-6.el7_2.3.x86_64 systemd-libs-219-42.el7_4.4.x86_64
xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) run -ts_type cn
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Killed
[sajid@xrm free_space]$ Starting program:
/raid/home/sajid/packages/xwp_petsc/2d/free_space/./ex_modify -ts_type cn
[setting tty state failed in terminal_inferior: Input/output error]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
warning: File
"/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-4.8.5/gcc-7.3.0-qrjpi76aeo4bysagruwwfii6oneh56lj/lib64/libstdc++.
so.6.0.24-gdb.py" auto-loading has been declined by your `auto-load
safe-path' set to "$debugdir:$datadir/auto-load:/usr/bin/mono-gdb.py".
Solving a linear TS problem on 1 processor
mx : 512, my: 512 lambda : 1.239840e-10
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR:
[0]PETSC ERROR: TSStep has failed due to DIVERGED_NONLINEAR_SOLVE, increase
-ts_max_snes_failures or make negative to attempt recovery
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision:
bd27d3f284687498e4c4678d234c0e308a5bc236  GIT Date: 2019-01-02 11:00:20
-0600
[0]PETSC ERROR:
/raid/home/sajid/packages/xwp_petsc/2d/free_space/./ex_modify on a  named
xrm by sajid Fri Jan  4 14:47:56 2019
[0]PETSC ERROR: Configure options
--prefix=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-develop-2u6vuwagkoczyvnpsubzrubmtmpfhhkj
--with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0
CFLAGS= FFLAGS= CXXFLAGS= COPTFLAGS= FOPTFLAGS= CXXOPTFLAGS=
--with-cc=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpicc
--with-cxx=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpic++
--with-fc=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpif90
--with-precision=double --with-scalar-type=complex
--with-shared-libraries=1 --with-debugging=1 --with-64-bit-indices=0
--with-blaslapack-lib="/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/intel-mkl-2019.0.117-wzqlcijwx7odz2x5chembudo5leqpfh2/compilers_and_libraries_2019.0.117/linux/mkl/lib/intel64/libmkl_intel_lp64.so
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/intel-mkl-2019.0.117-wzqlcijwx7odz2x5chembudo5leqpfh2/compilers_and_libraries_2019.0.117/linux/mkl/lib/intel64/libmkl_sequential.so
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/intel-mkl-2019.0.117-wzqlcijwx7odz2x5chembudo5leqpfh2/compilers_and_libraries_2019.0.117/linux/mkl/lib/intel64/libmkl_core.so
/lib64/libpthread.so /lib64/libm.so /lib64/libdl.so" --with-x=1
--with-clanguage=C --with-scalapack=0 --with-metis=1
--with-metis-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/metis-5.1.0-nhgzn4kjskctzmzv35mstvd34nj2ugek
--with-hdf5=1
--with-hdf5-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/hdf5-1.10.4-ltstvsxvyjue2gxfegi4nvr6c5xg3zww
--with-hypre=0 --with-parmetis=1
--with-parmetis-dir=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/parmetis-4.0.3-hw3j2ss7mjsc5x5f2gaflirnuufzptil
--with-mumps=0 --with-trilinos=0 --with-cxx-dialect=C++11
--with-superlu_dist-include=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/superlu-dist-develop-cpspq4ca2hnyvhx4mz7zsupbj3do6md3/include
--with-superlu_dist-lib=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/superlu-dist-develop-cpspq4ca2hnyvhx4mz7zsupbj3do6md3/lib/libsuperlu_dist.a
--with-superlu_dist=1 --with-suitesparse=0
--with-zlib-include=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/zlib-1.2.11-ldu43taplg2nbkxtem346zq4ibhad64i/include
--with-zlib-lib="-L/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/zlib-1.2.11-ldu43taplg2nbkxtem346zq4ibhad64i/lib
-lz" --with-zlib=1
[0]PETSC ERROR: #1 TSStep() line 3673 in
/tmp/sajid/spack-stage/spack-stage-0boEG4/petsc/src/ts/interface/ts.c
[0]PETSC ERROR: #2 TSSolve() line 3847 in

Re: [petsc-users] Question about correctly catching fp_trap

2019-01-04 Thread Sajid Ali via petsc-users
I asked in the spack-slack channel and was told to explicitly link the
gfortran associated with the gcc version I built petsc with.
Following that advice, I'm using gcc@7.3.0, linking gcc@7.3.0/lib64 in
FFLAGS and setting -g option to CFLAGS in my makefile.

Yet, somehow the executable still picks up the system gfortran (It picks
both libgfortran.so.4 and libgfortran.so.3). Anything else I missed trying?

PS: My makefile looks like this:

[sajid@xrm free_space]$ cat makefile

ALL:

CFLAGS  = -g
FFLAGS  =
-L/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-4.8.5/gcc-7.3.0-qrjpi76aeo4bysagruwwfii6oneh56lj/lib64/
CPPFLAGS=
FPPFLAGS=
CLEANFILES  = ex_matlab

include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules

# your targets if any

include ${PETSC_DIR}/lib/petsc/conf/test

I've looked into superlu_dist and, judging from its build error, can confirm
that it is indeed linked to the system gfortran. Since it's a petsc
dependency, that might be where the system gfortran is coming in instead of
the newer gfortran associated with gcc-7.3.0, but I'm not sure.


Re: [petsc-users] Question about correctly catching fp_trap

2019-01-04 Thread Sajid Ali via petsc-users
Should gdb : https://spack.readthedocs.io/en/latest/package_list.html#gdb
be listed as an explicit dependency ?

Also, why is the error message asking me to re-install my system
blas/lapack ? I installed petsc with intel-mkl as blas/lapack provider and
the linking failed for superlu-dist. Is that carrying over here ?


Re: [petsc-users] Question about correctly catching fp_trap

2019-01-04 Thread Sajid Ali via petsc-users
Spack changes the configuration if debug variant is enabled as per this
line :

'--with-debugging=%s' % ('1' if '+debug' in spec else '0'),

I'm not sure what needs to be done to change this.

In the meantime, I added -g to CFLAGS in the makefile and now I have this :
https://pastebin.com/wxf05BH0


[petsc-users] Question about correctly catching fp_trap

2019-01-03 Thread Sajid Ali via petsc-users
Hi,

I've compiled a program using petsc with debugging enabled.
Confirmation of the same : https://pastebin.com/aa0XDheD (check if the
loaded module is indeed petsc+debug, compile)

When I run it in a debugger, I get the error as shown below:
https://pastebin.com/GJtB2Ghz

What am I missing?

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Set diagonals other than main diagonal

2019-01-03 Thread Sajid Ali via petsc-users
Got it. Thank you!

On Thu, Jan 3, 2019 at 11:46 AM Matthew Knepley  wrote:

> On Thu, Jan 3, 2019 at 12:24 PM Sajid Ali via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hi,
>>
>> Is there any simple way of setting minor diagonals. The main diagonal can
>> be set by MatDiagonalSet but there's no equivalent way of doing it for for
>> minor diagonals. Does the preferred way to do this involve using
>> MatSetValues or there a simpler way?
>>
>
> No, we do not have an interface for this. You really only see this
> structure in 1D problems, so we have
> not spent time on it.
>
>   Thanks.
>
>  Matt
>
>
>> Thank You,
>> Sajid Ali
>> Applied Physics
>> Northwestern University
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


-- 
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] Set diagonals other than main diagonal

2019-01-03 Thread Sajid Ali via petsc-users
Hi,

Is there any simple way of setting minor diagonals? The main diagonal can
be set by MatDiagonalSet but there's no equivalent way of doing it for
minor diagonals. Does the preferred way to do this involve using
MatSetValues or is there a simpler way?
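
A sketch of the MatSetValues route for, say, the first super- and
sub-diagonal of an n x n matrix (the values passed in are placeholders, and
each rank only touches its own row range):

#include <petscmat.h>

static PetscErrorCode SetFirstOffDiagonals(Mat A, PetscScalar upper, PetscScalar lower)
{
  PetscInt       i, rstart, rend, m, n;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatGetSize(A, &m, &n);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    PetscInt col;
    if (i + 1 < n)  { col = i + 1; ierr = MatSetValues(A, 1, &i, 1, &col, &upper, INSERT_VALUES);CHKERRQ(ierr); }
    if (i - 1 >= 0) { col = i - 1; ierr = MatSetValues(A, 1, &i, 1, &col, &lower, INSERT_VALUES);CHKERRQ(ierr); }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}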

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Some questions about TS/examples/tutorials/ex13

2018-12-22 Thread Sajid Ali via petsc-users
>   No, A(t) has five diagonals as indicated by the calls to MatSetValues()
> with five entries per row; they correspond to the usual 5 pt stencil in 2d.
This might not be the right place to ask this, but I'm new to PDEs and I
expected the matrix to be something like the one derived in slides 10-12 of
this presentation :
https://www.math.ust.hk/~mawang/teaching/math532/mgtut.pdf

Thanks Matt for the advice on the book. I'll have a look at it.


[petsc-users] Some questions about TS/examples/tutorials/ex13

2018-12-22 Thread Sajid Ali via petsc-users
Hi,

I have a few questions about ex13 from TS explaining how to solve the
following PDE in 2D :

u_t = uxx + uyy

1) Can I get a reference/explanation for how the RHSJacobian function is
derived ?

2) If the problem is linear, why is the problem type never set to TS_LINEAR via TSSetProblemType?

3) Since this is just a 2D version of ex3, can't this be set up by only
providing a function for the evaluation for TSSetRHSJacobian with the
TSSetRHSFunction set to TSComputeRHSFunctionLinear ?

If this is the case and a matrix could be written that equivalently states
the problem as u_t = A(t)*u, then how would one make such a matrix A(t) ?
(In essence, what I'm asking is how does petsc define 2D vectors. Would
A(t) be the familiar block tridiagonal matrix ?)

4) In lines 153/154, why does the outer loop iterate over columns and the
inner loop iterate over rows? Does this again have something to do with how
petsc stores 2d vectors?
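
Regarding question 3 above, a sketch of the kind of matrix involved: the
5-point Laplacian a*(u_xx + u_yy) on an mx x my grid in natural row-major
ordering, assembled row by row with MatSetValues. The grid spacings hx, hy
and the scaling a are hypothetical, boundary rows simply drop the missing
neighbours, and for simplicity every rank loops over every row (a real code
would restrict to the local row range).

#include <petscmat.h>

static PetscErrorCode AssembleLaplacian2D(Mat A, PetscInt mx, PetscInt my,
                                          PetscReal hx, PetscReal hy, PetscScalar a)
{
  PetscInt       i, j;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  for (j = 0; j < my; j++) {
    for (i = 0; i < mx; i++) {
      PetscInt    row = j*mx + i, col[5], n = 0;
      PetscScalar v[5];
      if (j > 0)      { col[n] = row - mx; v[n++] = a/(hy*hy); }      /* south  */
      if (i > 0)      { col[n] = row - 1;  v[n++] = a/(hx*hx); }      /* west   */
      col[n] = row; v[n++] = -2.0*a*(1.0/(hx*hx) + 1.0/(hy*hy));      /* centre */
      if (i < mx - 1) { col[n] = row + 1;  v[n++] = a/(hx*hx); }      /* east   */
      if (j < my - 1) { col[n] = row + mx; v[n++] = a/(hy*hy); }      /* north  */
      ierr = MatSetValues(A, 1, &row, n, col, v, INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}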

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about PetscViewerHDF5SetTimestep usage

2018-12-20 Thread Sajid Ali via petsc-users
I tried running the same with petsc@develop and it worked. But if the fix
was committed on Dec5, then it should in theory work with petsc@3.10.3 .
I'm not sure why it doesn't work either.


Re: [petsc-users] Question about PetscViewerHDF5SetTimestep usage

2018-12-20 Thread Sajid Ali via petsc-users
The example is the same as above. Link : https://pastebin.com/cAeZsUgA

Here's what happens when I try to compile and run using petsc@3.10.3:
https://pastebin.com/EGVSxJu5

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about PetscViewerHDF5SetTimestep usage

2018-12-20 Thread Sajid Ali via petsc-users
This error persists after updating petsc to 3.10.3.

On Fri, Dec 14, 2018 at 4:17 PM Matthew Knepley  wrote:

> I think this was the last commit
> (5eab803f81ff61e3e81a6486d2410623a017e36d) in the branch,
> but just using the current master should work.
>
>   Matt
>
> On Fri, Dec 14, 2018 at 4:04 PM Sajid Ali <
> sajidsyed2...@u.northwestern.edu> wrote:
>
>> Hi Matthew,
>>
>> Thanks for the info. Could you give a specific commit that I should
>> rebuild PETSc with ? I’m currently building with 
>> commit=83593104eb216a023c00ab8a3f9897d55e531412'.
>>
>>
>> On Fri, Dec 14, 2018 at 12:56 PM Matthew Knepley 
>> wrote:
>>
>>> I believe this is now fixed in 'master', It was a bug in HDF5 attribute
>>> creation.
>>>
>>>   Thanks,
>>>
>>> Matt
>>>
>>> On Fri, Dec 14, 2018 at 1:16 PM Sajid Ali via petsc-users <
>>> petsc-users@mcs.anl.gov> wrote:
>>>
>>>> Hi,
>>>>
>>>> I have a question about PetscViewerHDF5SetTimestep usage.
>>>>
>>>> I have tried the following minimal example which tries to write a
>>>> vector twice to an hdf5 file and it fails with the error message shown
>>>> below.
>>>>
>>>> Example : https://pastebin.com/cAeZsUgA
>>>>
>>>> Error :
>>>>
>>>> writing vector in hdf5 to vector.dat ...
>>>> HDF5-DIAG: Error detected in HDF5 (1.10.4) MPI-process 0:
>>>>   #000: H5A.c line 279 in H5Acreate2(): unable to create attribute
>>>> major: Attribute
>>>> minor: Unable to initialize object
>>>>   #001: H5Aint.c line 323 in H5A__create(): unable to create attribute
>>>> major: Attribute
>>>> minor: Unable to create file
>>>>   #002: H5Aint.c line 167 in H5A__create_common(): attribute already
>>>> exists
>>>> major: Attribute
>>>> minor: Object already exists
>>>> [0]PETSC ERROR: - Error Message
>>>> --
>>>> [0]PETSC ERROR: Error in external library
>>>> [0]PETSC ERROR: Error in HDF5 call H5Acreate2() Status -1
>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>>>> for trouble shooting.
>>>> [0]PETSC ERROR: Petsc Release Version 3.10.2, unknown
>>>>
>>>>
>>>> The error is related to the second VecView after setting the timestep
>>>> to 1. Am i doing something wrong ?
>>>>
>>>> Thank You,
>>>> Sajid Ali
>>>> Applied Physics
>>>> Northwestern University
>>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> <http://www.cse.buffalo.edu/~knepley/>
>>>
>>
>>
>> --
>> Sajid Ali
>> Applied Physics
>> Northwestern University
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


-- 
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about PetscViewerHDF5SetTimestep usage

2018-12-14 Thread Sajid Ali via petsc-users
Hi Matthew,

Thanks for the info. Could you give a specific commit that I should rebuild
PETSc with ? I’m currently building with
commit=83593104eb216a023c00ab8a3f9897d55e531412.


On Fri, Dec 14, 2018 at 12:56 PM Matthew Knepley  wrote:

> I believe this is now fixed in 'master', It was a bug in HDF5 attribute
> creation.
>
>   Thanks,
>
> Matt
>
> On Fri, Dec 14, 2018 at 1:16 PM Sajid Ali via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hi,
>>
>> I have a question about PetscViewerHDF5SetTimestep usage.
>>
>> I have tried the following minimal example which tries to write a vector
>> twice to an hdf5 file and it fails with the error message shown below.
>>
>> Example : https://pastebin.com/cAeZsUgA
>>
>> Error :
>>
>> writing vector in hdf5 to vector.dat ...
>> HDF5-DIAG: Error detected in HDF5 (1.10.4) MPI-process 0:
>>   #000: H5A.c line 279 in H5Acreate2(): unable to create attribute
>> major: Attribute
>> minor: Unable to initialize object
>>   #001: H5Aint.c line 323 in H5A__create(): unable to create attribute
>> major: Attribute
>> minor: Unable to create file
>>   #002: H5Aint.c line 167 in H5A__create_common(): attribute already
>> exists
>> major: Attribute
>> minor: Object already exists
>> [0]PETSC ERROR: - Error Message
>> --
>> [0]PETSC ERROR: Error in external library
>> [0]PETSC ERROR: Error in HDF5 call H5Acreate2() Status -1
>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> for trouble shooting.
>> [0]PETSC ERROR: Petsc Release Version 3.10.2, unknown
>>
>>
>> The error is related to the second VecView after setting the timestep to
>> 1. Am i doing something wrong ?
>>
>> Thank You,
>> Sajid Ali
>> Applied Physics
>> Northwestern University
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


-- 
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] Question about PetscViewerHDF5SetTimestep usage

2018-12-14 Thread Sajid Ali via petsc-users
Hi,

I have a question about PetscViewerHDF5SetTimestep usage.

I have tried the following minimal example which tries to write a vector
twice to an hdf5 file and it fails with the error message shown below.

Example : https://pastebin.com/cAeZsUgA

Error :

writing vector in hdf5 to vector.dat ...
HDF5-DIAG: Error detected in HDF5 (1.10.4) MPI-process 0:
  #000: H5A.c line 279 in H5Acreate2(): unable to create attribute
major: Attribute
minor: Unable to initialize object
  #001: H5Aint.c line 323 in H5A__create(): unable to create attribute
major: Attribute
minor: Unable to create file
  #002: H5Aint.c line 167 in H5A__create_common(): attribute already exists
major: Attribute
minor: Object already exists
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Error in external library
[0]PETSC ERROR: Error in HDF5 call H5Acreate2() Status -1
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.2, unknown


The error is related to the second VecView after setting the timestep to 1.
Am i doing something wrong ?
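
For reference, the pattern being exercised, as a standalone sketch (not the
pastebin itself; names and sizes are placeholders):

#include <petscvec.h>
#include <petscviewerhdf5.h>

int main(int argc, char **argv)
{
  Vec            x;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)x, "x");CHKERRQ(ierr);
  ierr = VecSetSizes(x, PETSC_DECIDE, 10);CHKERRQ(ierr);
  ierr = VecSetFromOptions(x);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);

  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "vector.dat", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = PetscViewerHDF5SetTimestep(viewer, 0);CHKERRQ(ierr);
  ierr = VecView(x, viewer);CHKERRQ(ierr);               /* timestep 0                            */
  ierr = PetscViewerHDF5SetTimestep(viewer, 1);CHKERRQ(ierr);
  ierr = VecView(x, viewer);CHKERRQ(ierr);               /* the second write is what failed here  */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}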

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] X-Viewer for complex numbers

2018-12-12 Thread Sajid Ali via petsc-users
Specifically in the context of drawing a vector using the X-viewer (
PetscDraw ).

On Wed, Dec 12, 2018 at 9:33 AM Sajid Ali 
wrote:

> Hi ,
>
> Does petsc automatically convert complex valued vectors to absolute value
> vectors before passing it to a viewer ? Is there a way to view the phase of
> a vector ?
>
> Thank You,
> Sajid Ali
> Applied Physics
> Northwestern University
>


-- 
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] X-Viewer for complex numbers

2018-12-12 Thread Sajid Ali via petsc-users
Hi ,

Does petsc automatically convert complex valued vectors to absolute value
vectors before passing it to a viewer ? Is there a way to view the phase of
a vector ?
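
One workaround, not a built-in PETSc feature, is to build real-valued
magnitude and phase vectors by hand and draw those instead. A sketch,
assuming a complex build with double-precision reals (the function name is
hypothetical):

#include <math.h>
#include <petscvec.h>

static PetscErrorCode DrawMagnitudeAndPhase(Vec u)
{
  Vec                mag, phase;
  const PetscScalar *ua;
  PetscScalar       *pa;
  PetscInt           i, n;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = VecDuplicate(u, &mag);CHKERRQ(ierr);
  ierr = VecCopy(u, mag);CHKERRQ(ierr);
  ierr = VecAbs(mag);CHKERRQ(ierr);                       /* |u_i| */
  ierr = VecDuplicate(u, &phase);CHKERRQ(ierr);
  ierr = VecGetLocalSize(u, &n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(u, &ua);CHKERRQ(ierr);
  ierr = VecGetArray(phase, &pa);CHKERRQ(ierr);
  for (i = 0; i < n; i++) pa[i] = atan2(PetscImaginaryPart(ua[i]), PetscRealPart(ua[i]));  /* arg(u_i) */
  ierr = VecRestoreArray(phase, &pa);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(u, &ua);CHKERRQ(ierr);
  ierr = VecView(mag, PETSC_VIEWER_DRAW_WORLD);CHKERRQ(ierr);
  ierr = VecView(phase, PETSC_VIEWER_DRAW_WORLD);CHKERRQ(ierr);
  ierr = VecDestroy(&mag);CHKERRQ(ierr);
  ierr = VecDestroy(&phase);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}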

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] PETSc binary write format

2018-12-07 Thread Sajid Ali via petsc-users
For future reference, attached is a snippet of code in C to convert ascii
numbers (integers and doubles) to petsc readable binary vectors.

-- 
Sajid Ali
Applied Physics
Northwestern University


ascii_to_bin.c
Description: Binary data


Re: [petsc-users] PETSc binary write format

2018-12-06 Thread Sajid Ali via petsc-users
I'm attaching the file vector.dat and the file petsc saves (vec_old.dat)
with this email.

Thank You,
Sajid Ali
Applied Physics
Northwestern University


vector.dat
Description: Binary data


vec_old.dat
Description: Binary data


Re: [petsc-users] PETSc binary write format

2018-12-05 Thread Sajid Ali via petsc-users
To clarify :
The data from xxd -b petsc_output_dat is what I've reordered. The binary
dump is confusing to read since it contains 6 bytes per line (alongside the
unnecessary ascii conversions) and the numbers start and end in odd places.
I wanted each line to have only one number and copy pasted the data into a
new file. Then I changed the spacing so that the first two lines have 4
bytes (corresponding to int-32) each and correspond to VEC_FILE_CLASSID and
num_elements (20 in this case). This is followed by 8 bytes (64-bit double)
per line which store the actual elements of the vector.


On Wed, Dec 5, 2018 at 12:57 PM Sajid Ali 
wrote:

> It's just a cleaner re-write of the petsc output that I can understand
> (which I intend to modify from python in the future). I tried removing the
> new line characters but that didn't work either.
>
> Looking at the first line of the binary dump from petsc output :
>
> [sajid@xrm temp]$ xxd -b vector.dat
> 000:  00010010 0011 01001110    ..{N..
>
>
> The first 4 bytes combine to give
>
> *00010010001101001110*
>
> Which is int-32 representation of VEC_FILE_CLASSID and this is the first
> line of my file as well.
>
>
> Thank You,
> Sajid Ali
> Applied Physics
> Northwestern University
>


-- 
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] PETSc binary write format

2018-12-05 Thread Sajid Ali via petsc-users
It's just a cleaner re-write of the petsc output that I can understand
(which I intend to modify from python in the future). I tried removing the
new line characters but that didn't work either.

Looking at the first line of the binary dump from petsc output :

[sajid@xrm temp]$ xxd -b vector.dat
000:  00010010 0011 01001110    ..{N..


The first 4 bytes combine to give

*00000000000100100111101101001110*

Which is int-32 representation of VEC_FILE_CLASSID and this is the first
line of my file as well.


Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] PETSc binary write format

2018-12-05 Thread Sajid Ali via petsc-users
Exactly.

When I run ex10 and inspect it with cat, I get garbage since the data is in
binary :

[sajid@xrm temp]$ cat vector.dat
{N?�▒@@ @"@$@&@(@*@,@.@0@1@2@3[sajid@xrm temp]$


But doing a binary dump shows the data :
[sajid@xrm temp]$ xxd -b vector.dat
000:  00010010 0011 01001110    ..{N..
006:  00010100      ..
00c:     0011   ?.
012:        ..
018: 0100       @.
01e:   0100 1000    ..@...
024:     0100 0001  @.
02a:        ..
030: 0100 00010100      @.
036:   0100 00011000    ..@...
03c:     0100 00011100  @.
042:        ..
048: 0100 0010      @ 
04e:   0100 00100010    ..@"..
054:     0100 00100100  @$
05a:        ..
060: 0100 00100110      @&
066:   0100 00101000    ..@(..
06c:     0100 00101010  @*
072:        ..
078: 0100 00101100      @,
07e:   0100 00101110    ..@...
084:     0100 0011  @0
08a:        ..
090: 0100 00110001      @1
096:   0100 00110010    ..@2..
09c:     0100 00110011  @3
0a2:        ..

I think that the special characters on the right are just an attempt at
converting the binary data to ascii so they're not relevant to us.

I've created the same file as per the format specified in this thread, i.e.
VEC_FILE_CLASSID, num_elements followed by the elements in ieee-754 64-bit
format.

I've saved everything as 1's and 0's in a file as shown above. If it were
indeed a binary file, doing a cat on it should output garbage as it did for
the file above. But it instead interprets each 0 and 1 as an ascii
character.

The specific error is :
[sajid@xrm temp]$ ./ex10 -binary
reading vector in binary from vector.dat ...
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Invalid argument
[0]PETSC ERROR: Not a vector next in file
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.2, unknown
[0]PETSC ERROR: ./ex10 on a  named xrm by sajid Wed Dec  5 12:39:40 2018
[0]PETSC ERROR: Configure options
--prefix=/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/petsc-3.10.2.1-xnahapbbgtr1
[0]PETSC ERROR: #1 PetscViewerBinaryReadVecHeader_Private() line 26 in
/tmp/sajid/spack-stage/spack-stage-7h2QI3/petsc/src/vec/vec/utils/vecc
[0]PETSC ERROR: #2 VecLoad_Binary() line 84 in
/tmp/sajid/spack-stage/spack-stage-7h2QI3/petsc/src/vec/vec/utils/vecio.c
[0]PETSC ERROR: #3 VecLoad_Default() line 516 in
/tmp/sajid/spack-stage/spack-stage-7h2QI3/petsc/src/vec/vec/utils/vecio.c
[0]PETSC ERROR: #4 VecLoad() line 933 in
/tmp/sajid/spack-stage/spack-stage-7h2QI3/petsc/src/vec/vec/interface/vector.c
[0]PETSC ERROR: #5 main() line 155 in
/raid/home/sajid/packages/xwp_petsc/temp/ex10.c
[0]PETSC ERROR: PETSc Option Table entries:
[0]PETSC ERROR: -binary
[0]PETSC ERROR: -vecload_block_size 1
[0]PETSC ERROR: End of Error Message ---send entire
error message to petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_WORLD, 62) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=62
:
system msg for write_line failure : Bad file descriptor


Re: [petsc-users] PETSc binary write format

2018-12-05 Thread Sajid Ali via petsc-users
I have created a file as per the specification as shown below

[sajid@xrm temp]$ cat vector.dat
00010010001101001110
00010100

0011
0100
01001000
0101
01010100
01011000
01011100
0110
01100010
01100100
01100110
01101000
01101010
01101100
01101110
0111
01110001
01110010
01110011

How do I save it as binary so that petsc can understand it?

xxd -b treats each character as an ascii '0' or '1' and converts that to
binary, and petsc doesn't understand the resulting file either.


Re: [petsc-users] PETSc binary write format

2018-12-04 Thread Sajid Ali via petsc-users
For future reference, the numbers do look like they're stored as per IEEE
754 64-bit convention.

I don't know what the special characters in the xxd output are but the size
of the file is consistent with 2*int_32+num_elements*double_64.
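
For reference, a standalone sketch that writes a Vec file in exactly this
layout by hand: VEC_FILE_CLASSID and the length as big-endian 32-bit
integers, then the entries as big-endian IEEE 754 doubles (a little-endian
host and real scalars are assumed; the entry values are placeholders):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define VEC_FILE_CLASSID 1211214

static void write_be32(FILE *f, uint32_t v)
{
  unsigned char b[4];
  int           i;
  for (i = 0; i < 4; i++) b[i] = (unsigned char)((v >> (24 - 8*i)) & 0xff);
  fwrite(b, 1, 4, f);
}

static void write_be64(FILE *f, double d)
{
  uint64_t      u;
  unsigned char b[8];
  int           i;
  memcpy(&u, &d, 8);                                    /* reinterpret the IEEE 754 bits */
  for (i = 0; i < 8; i++) b[i] = (unsigned char)((u >> (56 - 8*i)) & 0xff);
  fwrite(b, 1, 8, f);
}

int main(void)
{
  const uint32_t n = 20;
  uint32_t       i;
  FILE          *f = fopen("vector.dat", "wb");

  write_be32(f, VEC_FILE_CLASSID);                      /* header: class id, then length */
  write_be32(f, n);
  for (i = 0; i < n; i++) write_be64(f, (double)i);     /* placeholder entries 0..19     */
  fclose(f);
  return 0;
}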

Thank you !
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] PETSc binary write format

2018-12-03 Thread Sajid Ali via petsc-users
Apologies for the error on my part.

Does the vector binary format also work the same way :  VEC_FILE_CLASSID
(32 bit int), num_elements (32 bit int) , value of the elements in double
(num_elements*64 bit double) ?

Are the doubles stored in IEEE_754 format ?

On Mon, Dec 3, 2018 at 1:43 PM Smith, Barry F.  wrote:

>
>You saved a Vec to the file, not a Mat.
>
>
>
> > On Dec 3, 2018, at 1:38 PM, Sajid Ali via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> >
> > Hi,
> >
> > I ran ex10 from /vec/examples/tutorials and saved the matrix in binary
> format.
> >
> > Looking at the matrix in binary using xxd, I see
> >
> > [sajid@xrm temp]$ xxd -b vector.dat
> > 000:  00010010 0011 01001110    ..{N..
> > 006:  00010100      ..
> > 00c:     0011   ?.
> > 012:        ..
> > 018: 0100       @.
> > 01e:   0100 1000    ..@...
> > 024:     0100 0001  @.
> > 02a:        ..
> > 030: 0100 00010100      @.
> > 036:   0100 00011000    ..@...
> > 03c:     0100 00011100  @.
> > 042:        ..
> > 048: 0100 0010      @ 
> > 04e:   0100 00100010    ..@"..
> > 054:     0100 00100100  @$
> > 05a:        ..
> > 060: 0100 00100110      @&
> > 066:   0100 00101000    ..@(..
> > 06c:     0100 00101010  @*
> > 072:        ..
> > 078: 0100 00101100      @,
> > 07e:   0100 00101110    ..@...
> > 084:     0100 0011  @0
> > 08a:        ..
> > 090: 0100 00110001      @1
> > 096:   0100 00110010    ..@2..
> > 09c:     0100 00110011  @3
> > 0a2:        ..
> >
> >
> > The format of the binary file is supposed to be
> > MAT_FILE_CLASSID, num_rows, num_cols, -1 followed by data.
> >
> > Converting the first 4 bytes (32 bits) of binary to int, the number
> > 00010010001101001110 becomes 1211214, but the value of
> MAT_FILE_CLASSID according to the header file petscmat.h is 1211216 (
> petsc@3.10.2.1 built without complex support). What causes this
> discrepancy ?
> >
> > Also, what are the special characters at the end of every line ? Am i
> not reading the binary file correctly ?
> >
> >
> > Thank You,
> > Sajid Ali
> > Applied Physics
> > Northwestern University
>
>

-- 
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] PETSc binary write format

2018-12-03 Thread Sajid Ali via petsc-users
Hi,

I ran ex10 from /vec/examples/tutorials and saved the matrix in binary
format.

Looking at the matrix in binary using xxd, I see

[sajid@xrm temp]$ xxd -b vector.dat
000:  00010010 0011 01001110    ..{N..
006:  00010100      ..
00c:     0011   ?.
012:        ..
018: 0100       @.
01e:   0100 1000    ..@...
024:     0100 0001  @.
02a:        ..
030: 0100 00010100      @.
036:   0100 00011000    ..@...
03c:     0100 00011100  @.
042:        ..
048: 0100 0010      @ 
04e:   0100 00100010    ..@"..
054:     0100 00100100  @$
05a:        ..
060: 0100 00100110      @&
066:   0100 00101000    ..@(..
06c:     0100 00101010  @*
072:        ..
078: 0100 00101100      @,
07e:   0100 00101110    ..@...
084:     0100 0011  @0
08a:        ..
090: 0100 00110001      @1
096:   0100 00110010    ..@2..
09c:     0100 00110011  @3
0a2:        ..


The format of the binary file is supposed to be
MAT_FILE_CLASSID, num_rows, num_cols, -1 followed by data.

Converting the first 4 bytes (32 bits) of binary to int, the number
00000000000100100111101101001110 becomes 1211214, but the value of
MAT_FILE_CLASSID according to the header file petscmat.h is 1211216 (
petsc@3.10.2.1 built without complex support). What causes this discrepancy
?

Also, what are the special characters at the end of every line ? Am i not
reading the binary file correctly ?


Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Error in vec/vec/examples/tutorials/ex19.c

2018-11-30 Thread Sajid Ali via petsc-users
I thought I fixed it by using VecDuplicate instead of setting up x2 &
copying, but I got more confused as to why the VecView output had different
dimensions in each case.

Does blocking take an Nx1 vector and convert it to an N/BLOCK_SIZE x
BLOCK_SIZE vector (that's how it looked in the output hdf5 files) ?

Since I didn't find TimeStep to be a feature of HDF5, am I right in
assuming that it's a PETSc feature that creates an extra dimension to store
multiple vectors (with the new dimension being indexed by
PetscViewerHDF5SetTimestep) ?
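
The reply quoted below explains the underlying layout mismatch; a minimal
sketch that reproduces just the layout part (run with 2 ranks and a global
size of 6, as in ex19):

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x1, x2;
  PetscInt       n1, n2;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = VecCreate(PETSC_COMM_WORLD, &x1);CHKERRQ(ierr);
  ierr = VecSetSizes(x1, PETSC_DECIDE, 6);CHKERRQ(ierr);
  ierr = VecSetBlockSize(x1, 1);CHKERRQ(ierr);
  ierr = VecSetFromOptions(x1);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD, &x2);CHKERRQ(ierr);
  ierr = VecSetSizes(x2, PETSC_DECIDE, 6);CHKERRQ(ierr);
  ierr = VecSetBlockSize(x2, 2);CHKERRQ(ierr);          /* block size changes the split */
  ierr = VecSetFromOptions(x2);CHKERRQ(ierr);
  ierr = VecGetLocalSize(x1, &n1);CHKERRQ(ierr);
  ierr = VecGetLocalSize(x2, &n2);CHKERRQ(ierr);
  ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "local sizes: %D %D\n", n1, n2);CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);
  /* per the thread: x1 splits 3+3 while x2 splits 4+2, so VecCopy(x1,x2) fails */
  ierr = VecDestroy(&x1);CHKERRQ(ierr);
  ierr = VecDestroy(&x2);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}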

On Fri, Nov 30, 2018 at 5:17 PM Smith, Barry F.  wrote:

>
>
> > On Nov 30, 2018, at 1:30 PM, Sajid Ali via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> >
> > Hi,
> >
> > I tried running ex19.c and I get the following error (I added a small
> snippet to print the local size on rank0 as well) :
> >
> > [sajid@xrm temp]$ mpirun -np 2 ./ex19
> > local_sizes : 3 4
> > [1]PETSC ERROR: - Error Message
> --
> > [1]PETSC ERROR: Arguments are incompatible
> > [1]PETSC ERROR: Incompatible vector local lengths parameter # 1 local
> size 3 != parameter # 2 local size 2
> > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> > [1]PETSC ERROR: Petsc Release Version 3.10.2, unknown
> > [1]PETSC ERROR: ./ex19 on a  named xrm by sajid Fri Nov 30 13:28:36 2018
> > [1]PETSC ERROR: [0]PETSC ERROR: - Error Message
> --
> > [0]PETSC ERROR: Arguments are incompatible
> > [0]PETSC ERROR: Incompatible vector local lengths parameter # 1 local
> size 3 != parameter # 2 local size 4
> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> > [0]PETSC ERROR: Petsc Release Version 3.10.2, unknown
> > [0]PETSC ERROR: ./ex19 on a  named xrm by sajid Fri Nov 30 13:28:36 2018
> >
> > Does this mean that PETSC_DECIDE is inconsistent between two calls in
> the same file ?
>
>No. The reason for the error is a bug in the code. The x1 vector has a
> block size of 1 hence when it is divided among two processes it gets 3
> entries on each process. The x2 vector has a block size of 2 so when it is
> divided among two processes it gets 4 entries on the first process and 2 on
> the second. The call to VecCopy() does not work because the two vectors
> have different layouts.
>
>I have attached a fixed version of the code (that does not copy from x1
> to x2).
>
>  Barry
>
>
>
> >
> > Thank You,
> > Sajid Ali
> > Applied Physics
> > Northwestern University
>
>

-- 
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Load dense matrices from hdf5

2018-11-30 Thread Sajid Ali via petsc-users
I use spack variants which do the same thing behind the scenes. Thanks !

On Fri, Nov 30, 2018 at 5:10 PM Smith, Barry F.  wrote:

>
>I assume you are ./configure PETSc with --with-scalar-type=complex? If
> so, the values in the file are the real and imaginary parts interlaced:
> That is
> r0 i0 r1 i1    where r0 is the first matrix entry's real part and i0
> is the first matrix imaginary part.
>
> Barry
>
>
> > On Nov 30, 2018, at 5:04 PM, Sajid Ali 
> wrote:
> >
> > If the matrix is filled with complex numbers is each complex number
> stored as a sequences of doubles ? Or is it better to split the matrix into
> real/imaginary and store each part separately?
>
>

-- 
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Load dense matrices from hdf5

2018-11-30 Thread Sajid Ali via petsc-users
If the matrix is filled with complex numbers, is each complex number stored
as a sequence of doubles? Or is it better to split the matrix into
real/imaginary parts and store each part separately?


Re: [petsc-users] Load dense matrices from hdf5

2018-11-30 Thread Sajid Ali via petsc-users
Thanks for the advice! I'll look into using the binary format then.

On Fri, Nov 30, 2018 at 4:47 PM Smith, Barry F.  wrote:

>
> I would start by simply using the PETSc binary dense format for the
> matrices; it will work find for 1d (and 2d) development work with dozens of
> processes. Latter, if need be, you can switch to the HDF5, the only
> difference in your code would be the viewer you MatLoad() from will be an
> HDF5 file instead of a PETSc binary file.
>
> The format of a PETSc binary dense file is
>
> 4 integer values (the header) followed by the matrix as one contiguous
> array of doubles (stored by row). The header is of the form
>
> MAT_FILE_CLASSID, M, N, -1(where M and N are number of rows and
> columns in the matrix.)
>
> You can easily create this file from Python (which I assume is where
> you are getting your h(t) values from).
>
>     Barry
>
>
> > On Nov 30, 2018, at 4:27 PM, Sajid Ali via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> >
> > Hi,
> >
> > I'm trying to solve the Helmholtz equation in 1D for x-rays which
> roughly like :
> >
> > u_dot = a*u_xx + h(t)*u
> >
> > I've already implemented the case where h(t) is always zero (free-space)
> in PETSc as per the last box on page 156 of the manual.
> >
> > For the time dependent version of the same the next step is to modify
> the RHSMatrix by adding h(t) via MatDiagonalSet (With h(t)=0, the
> RHS matrix is just a (scaled) Laplacian matrix)
> >
> > The h(t) is related to x-ray refractive index at that time. These x-ray
> indices need to be (preferably) stored in matrix beforehand. Each column of
> the matrix would be the refractive index to be used in one iteration. Using
> hdf5 is something I'd like to do because I can make/view the grid using
> numpy in python and solve the PDE using PETSc in C.
> >
> > PS: I've seen past questions on this thread strongly discouraging users
> from storing dense matrices in hdf5 format.
> >
> > PPS : This 1D x-ray scattering problem is just a stepping stone to doing
> the same in 2D so if there's a better approach to adopt, I'm be open to new
> ideas.
> >
> > Thank You,
> > Sajid Ali
> > Applied Physics
> > Northwestern University
>
>

-- 
Sajid Ali
Applied Physics
Northwestern University
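
For reference, a standalone sketch of writing the dense binary layout Barry
describes above: a MAT_FILE_CLASSID, M, N, -1 header of 32-bit integers
followed by the entries in row-major order as IEEE 754 doubles, using the
big-endian byte order seen in the Vec dumps elsewhere in this archive (real
scalars assumed; for a complex build each entry would be the real part
followed by the imaginary part, as noted in this thread; the 2x3 matrix is a
placeholder):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define MAT_FILE_CLASSID 1211216

static void be32(FILE *f, int32_t v)
{
  uint32_t      u = (uint32_t)v;                        /* two's complement handles -1 */
  unsigned char b[4];
  int           i;
  for (i = 0; i < 4; i++) b[i] = (unsigned char)((u >> (24 - 8*i)) & 0xff);
  fwrite(b, 1, 4, f);
}

static void be64(FILE *f, double d)
{
  uint64_t      u;
  unsigned char b[8];
  int           i;
  memcpy(&u, &d, 8);
  for (i = 0; i < 8; i++) b[i] = (unsigned char)((u >> (56 - 8*i)) & 0xff);
  fwrite(b, 1, 8, f);
}

int main(void)
{
  const int32_t M = 2, N = 3;
  const double  vals[6] = {1, 2, 3, 4, 5, 6};           /* row-major entries */
  int           i;
  FILE         *f = fopen("dense.dat", "wb");

  be32(f, MAT_FILE_CLASSID); be32(f, M); be32(f, N); be32(f, -1);
  for (i = 0; i < M*N; i++) be64(f, vals[i]);
  fclose(f);
  return 0;
}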


[petsc-users] Load dense matrices from hdf5

2018-11-30 Thread Sajid Ali via petsc-users
Hi,

I'm trying to solve the Helmholtz equation in 1D for x-rays, which roughly
looks like :

u_dot = a*u_xx + h(t)*u

I've already implemented the case where h(t) is always zero (free-space) in
PETSc as per the last box on page 156 of the manual.

For the time dependent version of the same the next step is to modify the
RHSMatrix by adding h(t) via MatDiagonalSet (With h(t)=0, the RHS
matrix is just a (scaled) Laplacian matrix)

The h(t) is related to x-ray refractive index at that time. These x-ray
indices need to be (preferably) stored in matrix beforehand. Each column of
the matrix would be the refractive index to be used in one iteration. Using
hdf5 is something I'd like to do because I can make/view the grid using
numpy in python and solve the PDE using PETSc in C.

PS: I've seen past questions on this thread strongly discouraging users
from storing dense matrices in hdf5 format.

PPS : This 1D x-ray scattering problem is just a stepping stone to doing
the same in 2D, so if there's a better approach to adopt, I'm open to new
ideas.

Thank You,
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] Error in vec/vec/examples/tutorials/ex19.c

2018-11-30 Thread Sajid Ali via petsc-users
Hi,

I tried running ex19.c and I get the following error (I added a small
snippet to print the local size on rank0 as well) :

[sajid@xrm temp]$ mpirun -np 2 ./ex19
local_sizes : 3 4
[1]PETSC ERROR: - Error Message
--
[1]PETSC ERROR: Arguments are incompatible
[1]PETSC ERROR: Incompatible vector local lengths parameter # 1 local size
3 != parameter # 2 local size 2
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[1]PETSC ERROR: Petsc Release Version 3.10.2, unknown
[1]PETSC ERROR: ./ex19 on a  named xrm by sajid Fri Nov 30 13:28:36 2018
[1]PETSC ERROR: [0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Arguments are incompatible
[0]PETSC ERROR: Incompatible vector local lengths parameter # 1 local size
3 != parameter # 2 local size 4
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.2, unknown
[0]PETSC ERROR: ./ex19 on a  named xrm by sajid Fri Nov 30 13:28:36 2018

Does this mean that PETSC_DECIDE is inconsistent between two calls in the
same file ?

Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Some clarifications about TS ex3.c

2018-11-29 Thread Sajid Ali via petsc-users
Another question I want to ask is about dttol that is used in the Monitor
function as follows :

417:   dttol = .0001;
418:   PetscOptionsGetReal(NULL,NULL,"-dttol",&dttol,NULL);
419:   if (dt < dttol) {
420: dt  *= .999;
421: TSSetTimeStep(ts,dt);
422:   }

Is this decreasing dt if it's less than tolerance ?

When I commented out the norm calculation (lines 404-415), the time is stuck
at 0 (as per the output of -ts_monitor). But if I comment out the above lines
(417-422) as well, everything works fine. What explains this behaviour ?

Thank You!
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] Some clarifications about TS ex3.c

2018-11-27 Thread Sajid Ali via petsc-users
Hi,

I wanted to ask a few questions about the TS example ex3.c when using the
ifunc option

190:   Mat J;

192:   RHSMatrixHeat(ts,0.0,u,A,A,&appctx);
193:   MatDuplicate(A,MAT_DO_NOT_COPY_VALUES,&J);
194:   TSSetIFunction(ts,NULL,IFunctionHeat,&appctx);
195:   TSSetIJacobian(ts,J,J,IJacobianHeat,&appctx);
196:   MatDestroy(&J);

198:   PetscObjectReference((PetscObject)A);
199:   appctx.A = A;
200:   appctx.oshift = PETSC_MIN_REAL;
201:   }


1) When TSSetIJacobian is called on line 195, J is empty. Why is J deleted
after calling the routine but before TSSolve is called ? (Isn't J supposed
to hold the jacobian ? )

2) The shift is not calculated before calling TSSetIJacobian. Does this
mean that the TSSetIJacobian takes care of the shift ?

3) What is happening on line 198 ?


Thank You,
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about DMDAGetElements

2018-11-19 Thread Sajid Ali via petsc-users
Bingo!

So, DMDAGetElements gives the indices of the mesh, right ?


Thank you !


Re: [petsc-users] Question about DMDAGetElements

2018-11-19 Thread Sajid Ali via petsc-users
So, DMDA is used for sparse matrices arising from FD/FE and MatCreateMPIAIJ
can be used for dense matrices (though it is strongly discouraged).

My confusion stemmed from DMDAGetElements giving the element indices for
the 1D mesh/vector of size N(when the DMDACreate1d is used). But these
indices were then used as local indices to set the values for a 2D matrix
(via MatSetValuesLocal ) created using the same DM (now we have NxN
elements).


Re: [petsc-users] Question about DMDAGetElements

2018-11-19 Thread Sajid Ali via petsc-users
I think what confused me was the fact that using DMCreateMatrix(da,)
created a 12x12 matrix with 144 elements but summing up nel*nen from each
rank gives only 2*2+3*2+3*2+3*2=20 elements. So this means that
DMDAGetElements returns the elements for the vector created on the mesh and
this happens to be used for indexing the matrix (created using the same DM
object) via the local indices. Luckily since this code just creates a
tridiagonal matrix, this works here (since if I wanted to create a dense
matrix I'd want to have access to all the indices which would happen if the
addressable indices at each rank (for the submatrix stored locally) add up
to the size of the total matrix). Is my understanding correct?

Thanks for the help!

-- 
Sajid Ali
Applied Physics
Northwestern University


Re: [petsc-users] Question about DMDAGetElements

2018-11-13 Thread Sajid Ali via petsc-users
I'm still confused and have the following questions :

1) Suppose, as in the case above, a DM object (created using DMDACreate1d) is
used to create a matrix A using DMCreateMatrix; how does one convert the
indices obtained from DMDAGetElements to the row and column indices for the
matrix A ? Is there a function telling me exactly which sub-matrix each
rank is storing ?

2) From the ex6.c example above, it looks like e corresponds to row index
and nen is the number of columns and i is the column index (which runs from
0 to nel) when addressing the matrix elements via MatSetValuesLocal. Is
this correct?

3) How would the indices obtained from DMDAGetElements correspond to the
indices for a vector created using DMCreateGlobalVector ?
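
For reference, a sketch of the element-loop pattern behind questions 1 and 2
(1D, so two nodes per element; it assumes A was created with DMCreateMatrix
from the same da, and the 2x2 element matrix Ke is a placeholder): the
indices returned by DMDAGetElements are local vertex indices, which is why
they are passed to MatSetValuesLocal rather than MatSetValues.

#include <petscdmda.h>

static PetscErrorCode AssembleFromElements(DM da, Mat A)
{
  PetscInt        nel, nen, i;
  const PetscInt *e;
  PetscScalar     Ke[4] = {1.0, -1.0, -1.0, 1.0};       /* 2x2 element matrix (nen = 2) */
  PetscErrorCode  ierr;

  PetscFunctionBeginUser;
  ierr = DMDAGetElements(da, &nel, &nen, &e);CHKERRQ(ierr);
  for (i = 0; i < nel; i++) {
    /* e + nen*i holds the local vertex indices of element i */
    ierr = MatSetValuesLocal(A, nen, e + nen*i, nen, e + nen*i, Ke, ADD_VALUES);CHKERRQ(ierr);
  }
  ierr = DMDARestoreElements(da, &nel, &nen, &e);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}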

Thank You,
Sajid Ali
Applied Physics
Northwestern University


[petsc-users] Question about DMDAGetElements

2018-11-12 Thread Sajid Ali via petsc-users
Hi,

I'm trying to understand this example from a tutorial. DM is used as an
alternative to setting up the matrix and vector separately (as was done in
the previous example).


Since DMDACreate1d was used to create the mesh, can someone explain this
logic more clearly :

  /*
Gets an array containing the indices (in local coordinates)
of all the local elements.
nel - number of local elements
nen - number of one element's nodes
e   - the local indices of the elements' vertices
  */
  ierr = DMDAGetElements(da, &nel, &nen, &e);CHKERRQ(ierr);

  /*
 Assemble matrix and RHS
  */
  value[0] = 1.0; value[1] = -1.0; value[2] = -1.0; value[3] = 1.0;
  bvalue[0] = 1.0; bvalue[1] = 1.0;
  for (i=0; i