1) What about vectorization of the BAIJ format? If the block size is 2 or
4, would it be ideal for AVX? Do I need to do anything special (more than
the AVX flag) for the compiler to vectorize it? (See the sketch below.)
2) Could you please update the linear solver table to label the
preconditioners/solvers compatible with
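Regarding 1), here is a minimal sketch of setting up a BAIJ matrix with block
size 4; the sizes are illustrative and error checking is omitted, so treat it
as assumed-typical usage rather than anything from the thread:

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat      A;
      PetscInt bs = 4, nb = 100;  /* 4x4 blocks: four doubles fill one 256-bit AVX register */

      PetscInitialize(&argc, &argv, NULL, NULL);
      MatCreate(PETSC_COMM_WORLD, &A);
      MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, nb*bs, nb*bs);
      MatSetType(A, MATBAIJ);     /* blocked AIJ; kernels are specialized per block size */
      MatSetBlockSize(A, bs);
      MatSetUp(A);
      /* ... insert 4x4 blocks with MatSetValuesBlocked(), then assemble ... */
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
      MatDestroy(&A);
      PetscFinalize();
      return 0;
    }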
Here's another example
https://bitbucket.org/dalcinl/petiga/
On 13 Nov 2017 10:33 PM, "Satish Balay" wrote:
> You might want to look at ctetgen to see how it's using PETSc makefiles to
> build the ctetgen library.
>
> You can get this with --download-ctetgen or https://bitbucket.org/petsc/ctetgen
You might want to look at ctetgen to see how it's using PETSc makefiles to
build the ctetgen library.
You can get this with --download-ctetgen or https://bitbucket.org/petsc/ctetgen
[this uses the 'all-legacy' infrastructure - not the currently used
'all-gnumake']
Satish
On Mon, 13 Nov 2017, Greg wrote:
Hi,
I'm extending PETSc for my particular application and looking to make my
own library. It would be great to do this using PETSc's makefile structure,
since I would like to build it based on how PETSc was configured (static
vs. shared, with appropriate linker flags, etc.). However, I've had a bit
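A rough sketch of the makefile pattern Satish describes above (the include
paths are the ones PETSc 3.x installs; the object list and library name are
placeholders, not taken from ctetgen itself):

    # pull in compilers, flags, and compile rules from the PETSc configuration
    include ${PETSC_DIR}/lib/petsc/conf/variables
    include ${PETSC_DIR}/lib/petsc/conf/rules

    OBJS = foo.o bar.o            # placeholder object list

    # recipe lines below must be indented with a tab
    libmine.a: ${OBJS}            # placeholder library name
    	${AR} ${AR_FLAGS} $@ ${OBJS}
    	${RANLIB} $@

This way the compilers, optimization flags, and static-vs-shared settings all
come from the same configuration PETSc itself was built with.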
Yes. To complement Barry's answer:
The matrix exponential is a particular case, since it is not directly available
in LAPACK. First of all, I would suggest upgrading to slepc-3.8, which has a new
implementation of Higham's method (Padé up to order 13). This might be more
accurate than the basic
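For reference, a minimal sketch of evaluating the exponential of a small dense
matrix with SLEPc's FN object (assumes slepc-3.8; matrix fill and error checks
are omitted):

    #include <slepcfn.h>

    int main(int argc, char **argv)
    {
      FN       fn;
      Mat      A, B;
      PetscInt n = 10;                 /* illustrative size */

      SlepcInitialize(&argc, &argv, NULL, NULL);
      MatCreateSeqDense(PETSC_COMM_SELF, n, n, NULL, &A);
      MatCreateSeqDense(PETSC_COMM_SELF, n, n, NULL, &B);
      /* ... fill A with MatSetValues() and assemble it ... */
      FNCreate(PETSC_COMM_SELF, &fn);
      FNSetType(fn, FNEXP);            /* f(A) = exp(A) */
      FNSetFromOptions(fn);            /* evaluation method selectable at run time */
      FNEvaluateFunctionMat(fn, A, B); /* B = exp(A) */
      FNDestroy(&fn);
      MatDestroy(&A);
      MatDestroy(&B);
      SlepcFinalize();
      return 0;
    }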
Tobias,
When you use PETSc in quad precision you need to ./configure with
--download-f2cblaslapack. This uses a version of BLAS/LAPACK obtained by running
f2c on the reference version of BLAS/LAPACK (that is, Fortran code from netlib)
and then massages the source code for quad precision.
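A typical configure line for such a build might look like this (a sketch; the
__float128 type assumes a GCC-family compiler with libquadmath):

    ./configure --with-precision=__float128 --download-f2cblaslapack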
Dear all,
I am interested in computations with higher precision. The application
is mainly error analysis of high-order Magnus integrators. In some cases
the asymptotic behavior of the error can only be observed when the error
is already at double precision and round-off errors of the
Most operations in PETSc would not benefit much from vectorization since they
are memory-bound. But this should not discourage you from compiling PETSc with
AVX2/AVX512. We have added a new matrix format (currently named ELL, but it will
be changed to SELL shortly) that can make MatMult ~2X faster
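Trying the new format on an existing code can be as simple as converting an
assembled matrix, along these lines (a sketch using the post-rename type name;
at the time of the message the type was still called "ell"):

    #include <petscmat.h>

    /* Convert an assembled AIJ matrix in place to sliced-ELL storage so
       that MatMult uses the vectorized SELL kernels. */
    static PetscErrorCode UseSell(Mat *A)
    {
      PetscErrorCode ierr;
      ierr = MatConvert(*A, MATSELL, MAT_INPLACE_MATRIX, A);CHKERRQ(ierr);
      return 0;
    }

Equivalently, -mat_type sell can be passed on the command line when the matrix
type is set via MatSetFromOptions().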
Mark Adams writes:
> On Sun, Nov 12, 2017 at 11:35 PM, Xiangdong wrote:
>
>> Hello everyone,
>>
>> Can someone comment on the vectorization of PETSc? For example, for the
>> MatMult function, will it perform better or run faster if it is compiled
>> with avx2 or avx512?
On Sun, Nov 12, 2017 at 11:35 PM, Xiangdong wrote:
> Hello everyone,
>
> Can someone comment on the vectorization of PETSc? For example, for the
> MatMult function, will it perform better or run faster if it is compiled
> with avx2 or avx512?
>
There are no AVX instructions
> On Nov 13, 2017, at 2:10 AM, SIERRA-AUSIN Javier
> wrote:
>
> Hi, thanks for your answer,
>
> I would like to clarify that in my particular case I deal with an
> unstructured grid with a stencil that takes two-distance neighbours (center
> of the