Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Smith, Barry F.
> On Feb 13, 2018, at 8:56 PM, Mark Adams wrote: > > I agree with Matt, flat 64 will be faster, I would expect, but this code has > global metadata that would have to be replicated in a full scale run. Use MPI 3 shared memory to expose the "global metadata" and forget
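Barry's suggestion refers to the MPI-3 shared-memory window facility: keep one copy of the read-only metadata per node instead of one copy per rank. A minimal sketch of that pattern, using plain MPI calls; the array and its size are placeholders, not taken from this thread:

    #include <mpi.h>

    /* Share one copy of read-only "global metadata" per node instead of
       replicating it on every MPI rank (sketch of the MPI-3 suggestion). */
    int main(int argc, char **argv)
    {
      MPI_Comm  nodecomm;
      MPI_Win   win;
      double   *metadata;               /* placeholder for the shared table */
      MPI_Aint  nbytes;
      int       noderank, disp_unit;

      MPI_Init(&argc, &argv);
      /* Communicator containing only the ranks on this shared-memory node */
      MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                          MPI_INFO_NULL, &nodecomm);
      MPI_Comm_rank(nodecomm, &noderank);

      /* Rank 0 of the node allocates the storage; the others allocate 0 bytes */
      nbytes = (noderank == 0) ? 1000000 * sizeof(double) : 0;
      MPI_Win_allocate_shared(nbytes, sizeof(double), MPI_INFO_NULL,
                              nodecomm, &metadata, &win);

      /* Non-zero ranks obtain a pointer to rank 0's segment */
      if (noderank != 0) {
        MPI_Win_shared_query(win, 0, &nbytes, &disp_unit, &metadata);
      }

      /* ... rank 0 fills metadata, then all ranks read it after a barrier ... */
      MPI_Barrier(nodecomm);

      MPI_Win_free(&win);
      MPI_Comm_free(&nodecomm);
      MPI_Finalize();
      return 0;
    }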

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Mark Adams
I agree with Matt, flat 64 will be faster, I would expect, but this code has global metadata that would have to be replicated in a full-scale run. We are just doing single-socket tests now (I think). We have been tracking down what look like compiler bugs and we have only taken a look at peak performance

Re: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse

2018-02-13 Thread Smith, Barry F.
In general you probably don't want to do this. Most good preconditioners (like AMG) rely on the matrix having the "natural" scaling that arises from the discretization, and doing a scaling like you describe destroys that natural scaling. You can use PCPBJACOBI to use point-block Jacobi
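For reference, point-block Jacobi can be selected either with -pc_type pbjacobi on the command line or in code; a minimal sketch, with an illustrative helper name:

    #include <petscksp.h>

    /* Select point-block Jacobi as the preconditioner, leaving the matrix
       in its natural scaling as suggested above. */
    PetscErrorCode UsePointBlockJacobi(KSP ksp)
    {
      PC             pc;
      PetscErrorCode ierr;

      ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
      ierr = PCSetType(pc,PCPBJACOBI);CHKERRQ(ierr);
      /* equivalently, run with -pc_type pbjacobi */
      return 0;
    }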

[petsc-users] multiply a mpibaij matrix by its block diagonal inverse

2018-02-13 Thread Xiangdong
Hello everyone, I have a block sparse matrix A created from a 3D DMDA. Before passing the matrix to the KSP solver, I want to apply a transformation to this matrix: namely A := invdiag(A)*A. Here invdiag(A) is the inverse of the block diagonal of A. What is the best way to get the transformed
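One possible sketch of the requested transformation: obtain the inverted diagonal blocks with MatInvertBlockDiagonal(), assemble them into a block-diagonal matrix D, and multiply. It assumes the blocks come back in column-major order and does the product in AIJ format (MatMatMult is not generally available for BAIJ operands); the helper name and preallocation are illustrative, not from the thread:

    #include <petscmat.h>

    /* Sketch: form B = invdiag(A)*A for a square BAIJ matrix A. */
    PetscErrorCode BlockDiagScale(Mat A, Mat *B)
    {
      Mat                Aaij, D;
      const PetscScalar *idiag;
      PetscInt          *idx, bs, rstart, rend, nblocks, i, j;
      PetscErrorCode     ierr;

      ierr = MatGetBlockSize(A,&bs);CHKERRQ(ierr);
      ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
      nblocks = (rend - rstart)/bs;
      ierr = MatInvertBlockDiagonal(A,&idiag);CHKERRQ(ierr); /* bs*bs values per block row */

      /* Block-diagonal matrix D holding the inverted blocks, stored as AIJ */
      ierr = MatCreateAIJ(PetscObjectComm((PetscObject)A),rend-rstart,rend-rstart,
                          PETSC_DETERMINE,PETSC_DETERMINE,bs,NULL,0,NULL,&D);CHKERRQ(ierr);
      /* Assumption: the inverted blocks are stored column-major, so switch
         MatSetValues() to column-oriented input. */
      ierr = MatSetOption(D,MAT_ROW_ORIENTED,PETSC_FALSE);CHKERRQ(ierr);
      ierr = PetscMalloc1(bs,&idx);CHKERRQ(ierr);
      for (i = 0; i < nblocks; i++) {
        for (j = 0; j < bs; j++) idx[j] = rstart + i*bs + j;
        ierr = MatSetValues(D,bs,idx,bs,idx,idiag + i*bs*bs,INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = PetscFree(idx);CHKERRQ(ierr);
      ierr = MatAssemblyBegin(D,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(D,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

      ierr = MatConvert(A,MATAIJ,MAT_INITIAL_MATRIX,&Aaij);CHKERRQ(ierr);
      ierr = MatMatMult(D,Aaij,MAT_INITIAL_MATRIX,PETSC_DEFAULT,B);CHKERRQ(ierr);

      ierr = MatDestroy(&Aaij);CHKERRQ(ierr);
      ierr = MatDestroy(&D);CHKERRQ(ierr);
      return 0;
    }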

Re: [petsc-users] Accessing a field values of Staggered grid

2018-02-13 Thread Mohammad Hassan Baghaei
Thanks for your great note, Dave. Yeah! I am now at least able to view the edge values, although I could not specify the data. Previously I searched in the points data; now I view the values with the Surface With Edges option. From: Dave May [mailto:dave.mayhe...@gmail.com] Sent: Wednesday,

Re: [petsc-users] Accessing a field values of Staggered grid

2018-02-13 Thread Dave May
On 13 February 2018 at 21:17, Matthew Knepley wrote: > On Tue, Feb 13, 2018 at 3:21 PM, Mohammad Hassan Baghaei < > mhbagh...@mail.sjtu.edu.cn> wrote: > >> Hi >> >> I am filling the local vector from dm , has a section layout. The thing >> is I want to know how I can see the

Re: [petsc-users] Accessing a field values of Staggered grid

2018-02-13 Thread Matthew Knepley
On Tue, Feb 13, 2018 at 3:21 PM, Mohammad Hassan Baghaei < mhbagh...@mail.sjtu.edu.cn> wrote: > Hi > > I am filling the local vector from dm , has a section layout. The thing is > I want to know how I can see the field variable values defined on edges, > the staggered grid. In fact, Whenever I

[petsc-users] Accessing a field values of Staggered grid

2018-02-13 Thread Mohammad Hassan Baghaei
Hi, I am filling the local vector from a DM that has a section layout. The thing is, I want to know how I can see the field variable values defined on the edges of the staggered grid. In fact, whenever I output to VTK and open it in ParaView, I am able to see the main grid. But the values which are defined on edges,

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Kong, Fande
Curious about the comparison of 16x4 vs. 64. Fande, On Tue, Feb 13, 2018 at 11:44 AM, Bakytzhan Kallemov wrote: > Hi, > > I am not sure about the 64 flat run, > > unfortunately I did not save logs since it's easy to run, but for 16 - > here is the plot I got for different number

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Matthew Knepley
On Tue, Feb 13, 2018 at 11:30 AM, Smith, Barry F. wrote: > > > On Feb 13, 2018, at 10:12 AM, Mark Adams wrote: > > > > FYI, we were able to get hypre with threads working on KNL on Cori by > going down to -O1 optimization. We are getting about 2x speedup with

Re: [petsc-users] Write Non-Zero Values of MPI Matrix on an MPI Vector

2018-02-13 Thread Jed Brown
Ali Berk Kahraman writes: > OK, here is the thing. I have a 2D cartesian regular grid. I am working > on wavelet method collocation method, which creates an irregular > adaptive grid by turning grid points on an off on the previously > mentioned cartesian grid. I

Re: [petsc-users] check status of reading matrix from a file

2018-02-13 Thread Smith, Barry F.
Hmm, it shouldn't hang but should crash if the file does not exist. If you want the code to continue running with or without the file you can use PetscTestFile() to see if the file exists and do something else if it does not exist. Barry > On Feb 13, 2018, at 11:10 AM, Michael
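A minimal sketch of the PetscTestFile() approach described here; the helper name and the fallback behaviour are illustrative:

    #include <petscmat.h>

    /* Load a matrix only if the binary file actually exists; otherwise fall
       back to something else instead of erroring inside MatLoad(). */
    PetscErrorCode LoadMatrixIfPresent(MPI_Comm comm, const char file_name[], Mat *matrix)
    {
      PetscBool      found;
      PetscViewer    viewer;
      PetscErrorCode ierr;

      ierr = PetscTestFile(file_name,'r',&found);CHKERRQ(ierr);
      if (found) {
        ierr = MatCreate(comm,matrix);CHKERRQ(ierr);
        ierr = MatSetType(*matrix,MATDENSE);CHKERRQ(ierr);
        ierr = PetscViewerBinaryOpen(comm,file_name,FILE_MODE_READ,&viewer);CHKERRQ(ierr);
        ierr = MatLoad(*matrix,viewer);CHKERRQ(ierr);
        ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
      } else {
        /* file missing: construct the matrix some other way, or report it */
        ierr = PetscPrintf(comm,"Matrix file %s not found\n",file_name);CHKERRQ(ierr);
        *matrix = NULL;
      }
      return 0;
    }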

[petsc-users] check status of reading matrix from a file

2018-02-13 Thread Michael Povolotskyi
Dear PETSc developers, I'm reading a matrix from a file like this: PetscViewer viewer; PetscErrorCode ierr; MatCreate(comm, &matrix); MatSetType(matrix, MATDENSE); ierr = PetscViewerBinaryOpen(comm, file_name, FILE_MODE_READ, &viewer); ierr = MatLoad(matrix, viewer); Sometimes the file that is needed is not

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Mark Adams
> > The error, flatlined or slightly diverging hypre solves, occurred even in flat MPI runs with openmp=1. > But the answers are wrong as soon as you turn on OpenMP? No, that is the funny thing, the problem occurs with flat MPI, no OMP. Just an openmp=1 build. I am trying to

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Smith, Barry F.
> On Feb 13, 2018, at 10:12 AM, Mark Adams wrote: > > FYI, we were able to get hypre with threads working on KNL on Cori by going > down to -O1 optimization. We are getting about 2x speedup with 4 threads and > 16 MPI processes per socket. Not bad. In other words, using 16

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Mark Adams
FYI, we were able to get hypre with threads working on KNL on Cori by going down to -O1 optimization. We are getting about 2x speedup with 4 threads and 16 MPI processes per socket. Not bad. The error, flatlined or slightly diverging hypre solves, occurred even in flat MPI runs with openmp=1.

Re: [petsc-users] Write Non-Zero Values of MPI Matrix on an MPI Vector

2018-02-13 Thread Matthew Knepley
On Tue, Feb 13, 2018 at 10:12 AM, Ali Berk Kahraman < aliberkkahra...@yahoo.com> wrote: > OK, here is the thing. I have a 2D cartesian regular grid. I am working on > wavelet method collocation method, which creates an irregular adaptive grid > by turning grid points on an off on the previously

Re: [petsc-users] Write Non-Zero Values of MPI Matrix on an MPI Vector

2018-02-13 Thread Ali Berk Kahraman
OK, here is the thing. I have a 2D Cartesian regular grid. I am working on a wavelet collocation method, which creates an irregular adaptive grid by turning grid points on and off on the previously mentioned Cartesian grid. I store the grid and the values as sparse Mat objects, where each

Re: [petsc-users] Transform scipy sparse to partioned, parallel petsc matrix in PETSc4py

2018-02-13 Thread Lisandro Dalcin
On 11 February 2018 at 20:35, Jan Grießer wrote: > Hey, > > I have a precomputed scipy sparse matrix for which I want to solve the > eigenvalue problem for a matrix of size 35000x35000. I don't really get how > to parallelize this problem correctly. > Similar to

Re: [petsc-users] Write Non-Zero Values of MPI Matrix on an MPI Vector

2018-02-13 Thread Jed Brown
Ali Kahraman writes: > > Dear All, > > My problem definition is as follows: > > I have an MPI matrix with a random sparsity pattern, i.e. I do not know how > many nonzeros there are on any row unless I call MatGetRow to learn it. There > are possibly unequal
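For reference, the row-by-row pattern mentioned here looks roughly like the sketch below. It assumes the vector shares the matrix row layout, and what gets stored per row is only illustrative; the helper name is not from the thread:

    #include <petscmat.h>

    /* Walk the locally owned rows of an MPI matrix and copy, e.g., the first
       nonzero of each row into a vector with the same row layout. */
    PetscErrorCode RowNonzerosToVec(Mat A, Vec v)
    {
      PetscInt           rstart, rend, row, ncols;
      const PetscInt    *cols;
      const PetscScalar *vals;
      PetscErrorCode     ierr;

      ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
      for (row = rstart; row < rend; row++) {
        ierr = MatGetRow(A,row,&ncols,&cols,&vals);CHKERRQ(ierr);
        if (ncols > 0) {
          /* here: store the first nonzero value of the row; adapt as needed */
          ierr = VecSetValue(v,row,vals[0],INSERT_VALUES);CHKERRQ(ierr);
        }
        ierr = MatRestoreRow(A,row,&ncols,&cols,&vals);CHKERRQ(ierr);
      }
      ierr = VecAssemblyBegin(v);CHKERRQ(ierr);
      ierr = VecAssemblyEnd(v);CHKERRQ(ierr);
      return 0;
    }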