Re: [petsc-users] questions about vectorization

2017-11-13 Thread Xiangdong
1) How about the vectorization of the BAIJ format? If the block size is 2 or 4, would it be ideal for AVX? Do I need to do anything special (more than the AVX flag) for the compiler to vectorize it? 2) Could you please update the linear solver table to label the preconditioners/solvers compatible with
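As background (not part of the original message): the block size is fixed when the BAIJ matrix is created, so "block size 2 or 4" corresponds to something like the following minimal sketch, with hypothetical values of n and bs chosen only for illustration:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    PetscInt       bs = 4, n = 400;  /* hypothetical values for illustration */
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
    ierr = MatSetType(A, MATBAIJ);CHKERRQ(ierr);   /* block AIJ; also settable via -mat_type baij */
    ierr = MatSetBlockSize(A, bs);CHKERRQ(ierr);   /* the block size asked about above */
    ierr = MatSetUp(A);CHKERRQ(ierr);
    /* ... fill with MatSetValuesBlocked(), assemble, then MatMult() ... */
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }

Whether the block kernels are vectorized beyond that is up to the compiler and the PETSc build flags, which is what the question is asking about.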

Re: [petsc-users] Building library with PETSc makefile

2017-11-13 Thread Stefano Zampini
Here's another example: https://bitbucket.org/dalcinl/petiga/

On 13 Nov 2017 10:33 PM, "Satish Balay" wrote:
> You might want to check how ctetgen uses PETSc makefiles to build the ctetgen library.
>
> You can get this with --download-ctetgen or

Re: [petsc-users] Building library with PETSc makefile

2017-11-13 Thread Satish Balay
You might want to check how ctetgen uses PETSc makefiles to build the ctetgen library. You can get this with --download-ctetgen or https://bitbucket.org/petsc/ctetgen [this uses the 'all-legacy' infrastructure - not the currently used 'all-gnumake']

Satish

On Mon, 13 Nov 2017, Greg

[petsc-users] Building library with PETSc makefile

2017-11-13 Thread Greg Meyer
Hi, I'm extending PETSc for my particular application and looking to make my own library. It would be great to do this using PETSc's makefile structure, since I would like to build it based on how PETSc was configured (static vs. shared, with the appropriate linker flags, etc.). However, I've had a bit

Re: [petsc-users] Lapack with Quadruple Precision in PETSc and SLEPc

2017-11-13 Thread Jose E. Roman
Yes. To complement Barry's answer: The matrix exponential is a particular case, since it is not directly available in LAPACK. First of all, I would suggest upgrading to slepc-3.8, which has a new implementation of Higham's method (Padé up to order 13). This might be more accurate than the basic
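To make the context concrete (my sketch, not code from the thread): one common way to use the matrix exponential in SLEPc is through the MFN solver, which computes y = exp(A) b; A, b, and y below are assumed to be an already assembled Mat and compatible Vecs:

  #include <slepcmfn.h>

  /* Sketch: y = exp(A) b using SLEPc's matrix-function solver (MFN). */
  PetscErrorCode apply_exp(Mat A, Vec b, Vec y)
  {
    MFN            mfn;
    FN             f;
    PetscErrorCode ierr;

    ierr = MFNCreate(PetscObjectComm((PetscObject)A), &mfn);CHKERRQ(ierr);
    ierr = MFNSetOperator(mfn, A);CHKERRQ(ierr);
    ierr = MFNGetFN(mfn, &f);CHKERRQ(ierr);
    ierr = FNSetType(f, FNEXP);CHKERRQ(ierr);   /* the exponential function */
    ierr = MFNSetFromOptions(mfn);CHKERRQ(ierr);
    ierr = MFNSolve(mfn, b, y);CHKERRQ(ierr);   /* y = exp(A) b */
    ierr = MFNDestroy(&mfn);CHKERRQ(ierr);
    return 0;
  }

With a quad-precision build (see Barry's reply below) the same code runs with PetscScalar as a quad-precision type.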

Re: [petsc-users] Lapack with Quadruple Precision in PETSc and SLEPc

2017-11-13 Thread Smith, Barry F.
Tobias, When you use PETSc in quad precision you need to ./configure with --download-f2cblaslapack. This uses a version of BLAS/LAPACK obtained by running f2c on the reference version of BLAS/LAPACK (that is, the Fortran code from netlib) and then massages the source code for quad precision.

[petsc-users] Lapack with Quadruple Precision in PETSc and SLEPc

2017-11-13 Thread Tobias Jawecki
Dear all, I am interested in computations with higher precision. The application is mainly error analysis of high-order Magnus integrators. In some cases the asymptotic behavior of the error can only be observed when the error is already at double precision and round-off errors of the

Re: [petsc-users] questions about vectorization

2017-11-13 Thread Zhang, Hong
Most operations in PETSc would not benefit much from vectorization since they are memory-bound. But this should not discourage you from compiling PETSc with AVX2/AVX512. We have added a new matrix format (currently named ELL, but it will be renamed SELL shortly) that can make MatMult ~2X faster
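For later readers (a sketch under the assumption that the renamed format is available as MATSELL, as it is in subsequent releases): an assembled AIJ matrix can be converted to the sliced-ELL format either with the runtime option -mat_type sell or explicitly:

  #include <petscmat.h>

  /* Sketch: convert an assembled matrix to the sliced-ELL (SELL) format
     and use it for MatMult. Assumes MATSELL is available in this build. */
  PetscErrorCode mult_with_sell(Mat A, Vec x, Vec y)
  {
    Mat            Asell;
    PetscErrorCode ierr;

    ierr = MatConvert(A, MATSELL, MAT_INITIAL_MATRIX, &Asell);CHKERRQ(ierr);
    ierr = MatMult(Asell, x, y);CHKERRQ(ierr);   /* the vectorization-friendly kernel */
    ierr = MatDestroy(&Asell);CHKERRQ(ierr);
    return 0;
  }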

Re: [petsc-users] questions about vectorization

2017-11-13 Thread Jed Brown
Mark Adams writes:

> On Sun, Nov 12, 2017 at 11:35 PM, Xiangdong wrote:
>
>> Hello everyone,
>>
>> Can someone comment on the vectorization of PETSc? For example, for the MatMult function, will it perform better or run faster if it is compiled with

Re: [petsc-users] questions about vectorization

2017-11-13 Thread Mark Adams
On Sun, Nov 12, 2017 at 11:35 PM, Xiangdong wrote:

> Hello everyone,
>
> Can someone comment on the vectorization of PETSc? For example, for the MatMult function, will it perform better or run faster if it is compiled with avx2 or avx512?

There are no AVX instructions

Re: [petsc-users] Coloring of a finite volume unstructured mesh

2017-11-13 Thread Smith, Barry F.
> On Nov 13, 2017, at 2:10 AM, SIERRA-AUSIN Javier wrote:
>
> Hi, thanks for your answer.
>
> I would like to clarify that in my particular case I deal with an unstructured grid with a stencil that includes neighbours up to distance two (center of the
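For reference (my illustration, not part of Barry's reply): a distance-2 coloring of a matrix's nonzero pattern, matching the two-away stencil described above, can be requested through the MatColoring interface; A is assumed to be an assembled Mat:

  #include <petscmat.h>

  /* Sketch: compute a distance-2 coloring of the nonzero pattern of A.
     The greedy type is just one choice; others can be selected at runtime. */
  PetscErrorCode color_distance2(Mat A, ISColoring *iscoloring)
  {
    MatColoring    mc;
    PetscErrorCode ierr;

    ierr = MatColoringCreate(A, &mc);CHKERRQ(ierr);
    ierr = MatColoringSetType(mc, MATCOLORINGGREEDY);CHKERRQ(ierr);
    ierr = MatColoringSetDistance(mc, 2);CHKERRQ(ierr);  /* stencil includes distance-two neighbours */
    ierr = MatColoringSetFromOptions(mc);CHKERRQ(ierr);
    ierr = MatColoringApply(mc, iscoloring);CHKERRQ(ierr);
    ierr = MatColoringDestroy(&mc);CHKERRQ(ierr);
    return 0;
  }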