Re: [petsc-users] Internal product through a matrix norm

2020-01-22 Thread Jed Brown
Jeremy Theler  writes:

> Sorry for the basic question, but here it goes.
> Say I have a vector u and a matrix K and I want to compute the scalar
>
> e = u^T K u
>
> (for example, the strain energy if u are the displacements and K is the
> stiffness matrix).
>
> Is there anything better (both in elegance and efficiency) than doing
> this?
>
> PetscScalar e;
> Vec Kx;
>
> VecDuplicate(x, &Kx);
> MatMult(K, x, Kx);
> VecDot(x, Kx, &e);

Nope; this is standard.
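
For reference, a minimal sketch of that pattern with the work vector cleaned up
afterwards (assuming Vec x and Mat K exist as in the snippet above; error
checking omitted):

PetscScalar e;
Vec         Kx;

VecDuplicate(x, &Kx);   /* work vector with the same layout as x */
MatMult(K, x, Kx);      /* Kx = K*x */
VecDot(x, Kx, &e);      /* e = x^T (K x); in complex arithmetic VecDot
                           conjugates, use VecTDot for the plain u^T K u */
VecDestroy(&Kx);        /* release the work vector */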


[petsc-users] Internal product through a matrix norm

2020-01-22 Thread Jeremy Theler
Sorry for the basic question, but here it goes.
Say I have a vector u and a matrix K and I want to compute the scalar

e = u^T K u

(for example, the strain energy if u are the displacements and K is the
stiffness matrix).

Is there anything better (both in elegance and efficiency) than doing
this?

PetscScalar e;
Vec Kx;

VecDuplicate(x, &Kx);
MatMult(K, x, Kx);
VecDot(x, Kx, &e);


--
jeremy theler
www.seamplex.com




Re: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product

2020-01-22 Thread Jed Brown
Stefano Zampini  writes:

>> On Jan 22, 2020, at 6:11 PM, Felix Huber  
>> wrote:
>> 
>> Hello,
>> 
>> I am currently investigating why our code does not show the expected weak scaling 
>> behaviour in a CG solver. Therefore I wanted to try out different 
>> communication methods for the VecScatter in the matrix-vector product. 
>> However, it seems like PETSc (version 3.7.6) always chooses either 
>> MPI_Alltoallv or MPI_Alltoallw when I pass different options via the 
>> PETSC_OPTIONS environment variable. Does anybody know why this doesn't work 
>> as I expected?
>> 
>> The matrix is an MPIAIJ matrix and is created by a finite element discretization 
>> of a 3D Laplacian. Therefore it only communicates with 'neighboring' MPI 
>> ranks. Not sure if it helps, but the code is run on a Cray XC40.
>> 
>> I tried the `ssend`, `rsend`, `sendfirst`, and `reproduce` options, as well as no options, from 
>> https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Vec/VecScatterCreate.html
>>  which all result in MPI_Alltoallv. When combined with `nopack`, the 
>> communication uses MPI_Alltoallw.
>> 
>> Best regards,
>> Felix
>> 
>
> 3.7.6 is quite an old version. You should consider upgrading.

VecScatter has been greatly refactored (and the default implementation
is entirely new) since 3.7.  Anyway, I'm curious about your
configuration and how you determine that MPI_Alltoallv/MPI_Alltoallw is
being used.  This has never been a default code path, so I suspect
something in your environment or code is making this happen.
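
If it is useful for checking what a scatter is doing, one possibility (a sketch
only; the scatter that MatMult builds internally for an MPIAIJ matrix is not
exposed through a public API) is to create a scatter over the same ghost
indices yourself and view it. Here n_local, n_ghost, and ghost_indices are
placeholders for your data:

Vec        x, xghost;
IS         from, to;
VecScatter sct;

VecCreateMPI(PETSC_COMM_WORLD, n_local, PETSC_DECIDE, &x);
VecCreateSeq(PETSC_COMM_SELF, n_ghost, &xghost);
ISCreateGeneral(PETSC_COMM_SELF, n_ghost, ghost_indices, PETSC_COPY_VALUES, &from);
ISCreateStride(PETSC_COMM_SELF, n_ghost, 0, 1, &to);
VecScatterCreate(x, from, xghost, to, &sct);     /* gather ghost values of x */
VecScatterView(sct, PETSC_VIEWER_STDOUT_WORLD);  /* view how the scatter is set up */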


Re: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product

2020-01-22 Thread Jed Brown
Victor Eijkhout  writes:

> On 2020Jan22, at 09:11, Felix Huber
> <st107...@stud.uni-stuttgart.de> wrote:
>
> weak scaling behaviour in a CG solver
>
> Norms and inner products have Log(P) complexity, so you’ll never get perfect 
> weak scaling.

Allreduce is nearly constant time with hardware collectives on nice
networks.  The increased cost frequently observed is due to load
imbalance causing different processes to enter at different times.

https://www.mcs.anl.gov/~fischer/gop/


Re: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product

2020-01-22 Thread Victor Eijkhout


On 2020Jan22, at 09:11, Felix Huber
<st107...@stud.uni-stuttgart.de> wrote:

weak scaling behaviour in a CG solver

Norms and inner products have Log(P) complexity, so you’ll never get perfect 
weak scaling.
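
As a rough cost model (a sketch, assuming a tree or recursive-doubling
allreduce of m words, per-message latency alpha, per-word cost beta):

  T_allreduce(P) ~ ceil(log2 P) * (alpha + beta*m)

which is where the Log(P) term above comes from; hardware-offloaded collectives
can hide much of it in practice.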

Victor.



Re: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product

2020-01-22 Thread Stefano Zampini



> On Jan 22, 2020, at 6:11 PM, Felix Huber  
> wrote:
> 
> Hello,
> 
> I am currently investigating why our code does not show the expected weak scaling 
> behaviour in a CG solver. Therefore I wanted to try out different 
> communication methods for the VecScatter in the matrix-vector product. 
> However, it seems like PETSc (version 3.7.6) always chooses either 
> MPI_Alltoallv or MPI_Alltoallw when I pass different options via the 
> PETSC_OPTIONS environment variable. Does anybody know why this doesn't work 
> as I expected?
> 
> The matrix is an MPIAIJ matrix and is created by a finite element discretization 
> of a 3D Laplacian. Therefore it only communicates with 'neighboring' MPI 
> ranks. Not sure if it helps, but the code is run on a Cray XC40.
> 
> I tried the `ssend`, `rsend`, `sendfirst`, and `reproduce` options, as well as no options, from 
> https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Vec/VecScatterCreate.html
>  which all result in MPI_Alltoallv. When combined with `nopack`, the 
> communication uses MPI_Alltoallw.
> 
> Best regards,
> Felix
> 

3.7.6 is quite an old version. You should consider upgrading.



Re: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product

2020-01-22 Thread Dave May
On Wed 22. Jan 2020 at 16:12, Felix Huber 
wrote:

> Hello,
>
> I am currently investigating why our code does not show the expected weak
> scaling behaviour in a CG solver.


Can you please send representative log files which characterize the lack of
scaling (include the full log_view)?

Are you using a KSP/PC configuration which should weak scale?

Thanks
Dave
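
(For reference, a minimal sketch of a combination that is generally expected to
weak scale for a 3D Laplacian: CG with algebraic multigrid. Mat A and Vecs b, x
are placeholders for the assembled system; error checking omitted.)

KSP ksp;
PC  pc;

KSPCreate(PETSC_COMM_WORLD, &ksp);
KSPSetOperators(ksp, A, A);
KSPSetType(ksp, KSPCG);
KSPGetPC(ksp, &pc);
PCSetType(pc, PCGAMG);        /* algebraic multigrid preconditioner */
KSPSetFromOptions(ksp);       /* honor run-time options such as -ksp_view, -log_view */
KSPSolve(ksp, b, x);
KSPDestroy(&ksp);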


> Therefore I wanted to try out
> different communication methods for the VecScatter in the matrix-vector
> product. However, it seems like PETSc (version 3.7.6) always chooses
> either MPI_Alltoallv or MPI_Alltoallw when I pass different options via
> the PETSC_OPTIONS environment variable. Does anybody know why this
> doesn't work as I expected?
>
> The matrix is an MPIAIJ matrix and is created by a finite element
> discretization of a 3D Laplacian. Therefore it only communicates with
> 'neighboring' MPI ranks. Not sure if it helps, but the code is run on a
> Cray XC40.
>
> I tried the `ssend`, `rsend`, `sendfirst`, and `reproduce` options, as well as no options,
> from
>
> https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Vec/VecScatterCreate.html
> which all result in MPI_Alltoallv. When combined with `nopack`, the
> communication uses MPI_Alltoallw.
>
> Best regards,
> Felix
>
>


[petsc-users] Choosing VecScatter Method in Matrix-Vector Product

2020-01-22 Thread Felix Huber

Hello,

I am currently investigating why our code does not show the expected weak 
scaling behaviour in a CG solver. Therefore I wanted to try out 
different communication methods for the VecScatter in the matrix-vector 
product. However, it seems like PETSc (version 3.7.6) always chooses 
either MPI_Alltoallv or MPI_Alltoallw when I pass different options via 
the PETSC_OPTIONS environment variable. Does anybody know why this 
doesn't work as I expected?


The matrix is an MPIAIJ matrix and is created by a finite element 
discretization of a 3D Laplacian. Therefore it only communicates with 
'neighboring' MPI ranks. Not sure if it helps, but the code is run on a 
Cray XC40.


I tried the `ssend`, `rsend`, `sendfirst`, and `reproduce` options, as well as no options, 
from 
https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Vec/VecScatterCreate.html 
which all result in MPI_Alltoallv. When combined with `nopack`, the 
communication uses MPI_Alltoallw.


Best regards,
Felix



Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-22 Thread Smith, Barry F. via petsc-users


> On Jan 22, 2020, at 3:49 AM, Дмитрий Мельничук 
>  wrote:
> 
> Thank you for your help!
> 
> I ran ./configure with the flags --with-64-bit-indices --download-fblaslapack.
> The log files are called configure_fblaslapack_64-bit-indices.log and 
> test_fblaslapack_64-bit-indices.log respectively.
> The Fortran test example runs successfully, but the solver does not compile 
> with PETSc correctly:
> 
>  
>  if (j==1) call MatSetValue(Mat_K,j3,j3,f0,Add_Values,ierr_g)


  So Add_Values should be eight bytes, but for some reason it is four. 

   Try ADD_VALUES here; it should not matter, but try it anyway. Also try putting 

#include <petsc/finclude/petscvec.h>
  use petscvec   ! as in ex9f.F90

at the beginning of the routine to make sure ADD_VALUES is defined. 

Make sure you don't have a local variable named Add_Values
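
For reference, the distinct PETSc integer types involved, as they appear on the
C side (a sketch; the sizes are as described in the quoted explanation further
down):

PetscErrorCode ierr;  /* return codes: always 32-bit                               */
PetscInt       row;   /* matrix/vector indices: 64-bit with --with-64-bit-indices  */
PetscMPIInt    rank;  /* values passed to MPI: always 32-bit                        */
PetscBLASInt   n;     /* values passed to BLAS/LAPACK: matches the BLAS integer size */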


>   1
> Error: Type mismatch in argument «i» at (1); passed INTEGER(4) to INTEGER(8)
> 
> Also changing the ierr_g declaration from PetscErrorCode to PetscInt has no 
> influence on the compilation result.
> 
> 
> Manual compilation of OpenBLAS and appropriate changes to ./configure solved 
> my problem.
> So I attached the associated log files named as 
> configure_openblas_64-bit-indices.log and test_openblas_64-bit-indices.log
> 
> 
> All operations were performed with the barry/2020-01-15/support-default-integer-8 
> branch of PETSc.
> 
> 
> Kind regards,
> Dmitry Melnichuk
>  
>  
>  
> 21.01.2020, 16:57, "Smith, Barry F." :
> 
> I would avoid OpenBLAS; it just introduces one new variable that could 
> introduce problems.
> 
> PetscErrorCode is ALWAYS 32 bit, PetscInt becomes 64 bit with 
> --with-64-bit-indices, PetscMPIInt is ALWAYS 32 bit, PetscBLASInt is usually 
> 32 bit unless you build with a special BLAS that supports 64 bit indices.
> 
> In theory ex5f should be fine; we test it all the time with all 
> possible integer sizes. Please redo the ./configure with 
> --with-64-bit-indices --download-fblaslapack and send the configure.log; this 
> provides the most useful information on the decisions configure has made.
> 
> Barry
> 
>  
> 
>  On Jan 21, 2020, at 4:28 AM, Дмитрий Мельничук 
>  wrote:
> 
>  > First you need to figure out what is triggering:
> 
>  > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot 
> open shared object file: No such file or directory
> 
>  > Googling it finds all kinds of suggestions for Linux. But Windows? Maybe 
> the debugger will help.
> 
>  > Second
>  > VecNorm_Seq line 221 
> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c
> 
> 
>  > Debugger is best to find out what is triggering this. Since it is the C 
> side of things it would be odd that the Fortran change affects it.
> 
>  > Barry
> 
> 
>  I am in the process of finding out the causes of these errors.
> 
>  I'm inclined to think that BLAS still has some influence on what is 
> happening, because testing of the 32-bit version of PETSc gives this weird 
> error with mpiexec.exe, while the Fortran example ex5f completes successfully.
> 
>  I need to say that my solver compiled with the 64-bit version of PETSc failed 
> with a Segmentation Violation error (the same as ex5f) when calling 
> KSPSolve(Krylov,Vec_F,Vec_U,ierr).
>  During execution, KSPSolve calls VecNorm_Seq in bvec2.c. VecNorm_Seq 
> uses several integer types: PetscErrorCode, PetscInt, PetscBLASInt.
>  I suspect that PetscBLASInt may conflict with PetscInt.
>  I also noted that execution of KSPSolve() does not even start, so the arguments 
> (Krylov,Vec_F,Vec_U,ierr) cannot be passed to KSPSolve().
>  (I inserted fprint() at the top of KSPSolve and saw no output.)
> 
> 
>  So I tried to configure PETSc with --download-fblaslapack 
> --with-64-bit-blas-indices, but got an error that
> 
>  fblaslapack does not support -with-64-bit-blas-indices
> 
>  Switching to the flags --download-openblas -with-64-bit-blas-indices was 
> unsuccessful too because of the error:
> 
>  Error during download/extract/detection of OPENBLAS:
>  Unable to download openblas
>  Could not execute "['git clone https://github.com/xianyi/OpenBLAS.git 
> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas']":
>  fatal: destination path 
> '/cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas'
>  already exists and is not an empty directory.
>  Unable to download package OPENBLAS from: 
> git://https://github.com/xianyi/OpenBLAS.git
>  * If URL specified manually - perhaps there is a typo?
>  * If your network is disconnected - please reconnect and rerun ./configure
>  * Or perhaps you have a firewall blocking the download
>  * You can run with --with-packages-download-dir=/adirectory and ./configure 
> will instruct you what packages to download manually
>  * or you can download the above URL manually, to /yourselectedlocation
>and use