The snes options are not relevant since the parts of a PCFIELDSPLIT are
always linear problems.
By default PCFIELDSPLIT uses a KSP type of preonly on each split (that is,
it applies the preconditioner exactly once inside PCApply_FieldSplit()),
hence the -fieldsplit_*_ksp_ options are
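A minimal sketch, not from the original message, of how the per-split solvers can be switched away from preonly in code so that options such as -fieldsplit_0_ksp_type gmres take effect; the function name and the choice of GMRES are only illustrative:

#include <petscksp.h>

/* Call after the operators have been set on the outer KSP; KSPSetUp()
   builds the splits so their inner KSPs can be retrieved and changed
   from the default KSPPREONLY.                                        */
PetscErrorCode TuneFieldSplit(KSP ksp)
{
  PC             pc;
  KSP           *subksp;
  PetscInt       nsplits;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);
  ierr = PCFieldSplitGetSubKSP(pc,&nsplits,&subksp);CHKERRQ(ierr);
  ierr = KSPSetType(subksp[0],KSPGMRES);CHKERRQ(ierr);  /* iterate inside split 0 */
  ierr = PetscFree(subksp);CHKERRQ(ierr);               /* caller frees the array */
  return 0;
}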
Perhaps you need
> make PETSC_DIR=~/asd/petsc-3.19.3 PETSC_ARCH=arch-mswin-c-opt all
> On Jul 24, 2023, at 1:11 PM, Константин via petsc-users
> wrote:
>
> Good evening. After configuring petsc I had to run this command on cygwin64.
> $ make PETSC_DIR=/home/itugr/asd/petsc-3.19.3
On Tue, Jul 11, 2023 at 3:58 PM Константин via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Hello, I'm trying to build petsc on Windows, and when I run make I get
> the following problem
>
>
Did you run configure first?
Thanks,
Matt
> --
> Константин
>
Please keep on list,
On Thu, Feb 17, 2022 at 12:36 PM Bojan Niceno <
bojan.niceno.scient...@gmail.com> wrote:
> Dear Mark,
>
> Sorry for mistakenly calling you Adam before.
>
> I was thinking about the o_nnz as you suggested, but then something else
> occurred to me. So, I determine the d_nnz
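The thread is truncated here; as a rough sketch of where those counts end up, d_nnz/o_nnz are normally passed to MatMPIAIJSetPreallocation(), one entry per locally owned row (the function name and sizes below are placeholders, not the original code):

#include <petscmat.h>

/* d_nnz[i]: nonzeros of local row i in columns owned by this rank,
   o_nnz[i]: nonzeros of local row i in columns owned by other ranks. */
PetscErrorCode PreallocateExample(MPI_Comm comm,PetscInt nlocal,
                                  const PetscInt d_nnz[],const PetscInt o_nnz[],Mat *A)
{
  PetscErrorCode ierr;

  ierr = MatCreate(comm,A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A,nlocal,nlocal,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(*A,MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A,0,d_nnz,0,o_nnz);CHKERRQ(ierr);
  return 0;
}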
On Thu, Feb 17, 2022 at 11:46 AM Bojan Niceno <
bojan.niceno.scient...@gmail.com> wrote:
> Dear all,
>
>
> I am experiencing difficulties when using PETSc in parallel in an
> unstructured CFD code. It uses CRS format to store its matrices. I use
> the following sequence of PETSc calls in the
Thank you, I'll try that.
Best,
Li
On Wed, Dec 4, 2019 at 5:34 AM Smith, Barry F. wrote:
>
> From the code:
>
>   if (snes->lagjacobian == -2) {
>     snes->lagjacobian = -1;
>     ierr = PetscInfo(snes,"Recomputing Jacobian/preconditioner because lag is -2 (means compute Jacobian, but
From the code:
  if (snes->lagjacobian == -2) {
    snes->lagjacobian = -1;
    ierr = PetscInfo(snes,"Recomputing Jacobian/preconditioner because lag is -2 (means compute Jacobian, but then never again) \n");CHKERRQ(ierr);
  } else if (snes->lagjacobian == -1) {
    ierr =
> On Dec 2, 2019, at 2:30 PM, Li Luo wrote:
>
> -snes_mf fails to converge in my case, but -ds_snes_mf_operator works,
> when the original analytic matrix is still used as the preconditioner.
> The timing is several times greater than using the analytic matrix for both
> Jacobian and
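For readers of this truncated exchange: the lagging behaviour in the code above is normally requested with SNESSetLagJacobian() or the matching option; a minimal sketch, with the value -2 taken from the snippet above (build the Jacobian once, then reuse it):

#include <petscsnes.h>

/* Build the Jacobian at the first Newton step only, then reuse it;
   the same effect as the command-line option  -snes_lag_jacobian -2  */
PetscErrorCode LagJacobianOnce(SNES snes)
{
  PetscErrorCode ierr;

  ierr = SNESSetLagJacobian(snes,-2);CHKERRQ(ierr);
  return 0;
}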
Please send a run with optimization turned on (--with-debugging=0 in
./configure) and -log_view; without the actual timing information we are just
guessing where the time is spent.
If your problem has a natural block size then using baij should be a bit
faster than aij, but not
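A minimal sketch, with an assumed block size of 3 and placeholder preallocation numbers, of creating the BAIJ matrix suggested above (the local size must be divisible by the block size):

#include <petscmat.h>

/* BAIJ stores the matrix in bs x bs dense blocks, which is often a bit
   faster than AIJ when the problem has a natural block size.           */
PetscErrorCode CreateBlockMatrix(MPI_Comm comm,PetscInt nlocal,Mat *A)
{
  const PetscInt bs = 3;
  PetscErrorCode ierr;

  ierr = MatCreate(comm,A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A,nlocal,nlocal,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(*A,MATBAIJ);CHKERRQ(ierr);
  ierr = MatSeqBAIJSetPreallocation(*A,bs,10,NULL);CHKERRQ(ierr);        /* placeholder nnz per block row */
  ierr = MatMPIBAIJSetPreallocation(*A,bs,10,NULL,5,NULL);CHKERRQ(ierr); /* the non-matching call is ignored */
  return 0;
}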
Thank you very much! It looks like forming an analytic Jacobian is the only
choice.
Best,
Li
On Mon, Dec 2, 2019 at 3:21 PM Matthew Knepley wrote:
> On Mon, Dec 2, 2019 at 4:04 AM Li Luo wrote:
>
>> Thank you for your reply.
>>
>> The matrix is small with only 67500 rows, but is relatively dense
On Mon, Dec 2, 2019 at 4:04 AM Li Luo wrote:
> Thank you for your reply.
>
> The matrix is small with only 67500 rows, but is relatively dense since a
> second-order discontinuous Galerkin FEM is used, nonzeros=23,036,400.
>
This is very dense, 0.5% fill or 340 nonzeros per row.
> The number
Thank you for your reply.
The matrix is small with only 67500 rows, but is relatively dense since a
second-order discontinuous Galerkin FEM is used, nonzeros=23,036,400.
The number of colors is 539 as shown by using -mat_fd_coloring_view:
MatFDColoring Object: 64 MPI processes
type not yet set
How many colors is it requiring? And how long is the MatGetColoring()
taking? Are you running in parallel? The MatGetColoring() type MATCOLORINGSL uses a
sequential coloring algorithm, so if your matrix is large and parallel the
coloring will take a long time. The parallel colorings are
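Assuming the newer MatColoring interface is available (the thread itself uses the older MatGetColoring()), a minimal sketch of asking for a coloring that runs in parallel (greedy) rather than the sequential SL algorithm; the same choice can usually be made with -mat_coloring_type greedy:

#include <petscmat.h>

/* Produce an ISColoring of the Jacobian J for finite-difference use,
   selecting a coloring algorithm that works in parallel.              */
PetscErrorCode ColorForFD(Mat J,ISColoring *iscoloring)
{
  MatColoring    mc;
  PetscErrorCode ierr;

  ierr = MatColoringCreate(J,&mc);CHKERRQ(ierr);
  ierr = MatColoringSetType(mc,MATCOLORINGGREEDY);CHKERRQ(ierr);
  ierr = MatColoringSetFromOptions(mc);CHKERRQ(ierr);
  ierr = MatColoringApply(mc,iscoloring);CHKERRQ(ierr);
  ierr = MatColoringDestroy(&mc);CHKERRQ(ierr);
  return 0;
}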
Mark Adams writes:
> FD matrices are slow and meant for debugging mostly (I thought,
> although the docs just give this warning if coloring is not available).
>
> I would check the timings from -log_view and verify that the time is spent
> in MatFDColoringApply. Running with -info should print
FD matrices are slow and meant for debugging mostly (I thought,
although the docs just give this warning if coloring is not available).
I would check the timings from -log_view and verify that the time is spent
in MatFDColoringApply. Running with -info should print the number of colors
(C). The
Can you send configure.log from this build?
Satish
On Thu, 28 Feb 2019, DAFNAKIS PANAGIOTIS via petsc-users wrote:
> Hi everybody,
>
> I am trying to install PETSc version 3.10.3 on OSX Sierra 10.13.6 with the
> following configure options
> ./configure --CC=mpicc --CXX=mpicxx --FC=mpif90
-----Original Message-----
> From: Smith, Barry F. [mailto:bsm...@mcs.anl.gov]
> Sent: Thursday, 1 November 2018 9:28 AM
> To: Wenjin Xing
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] (no subject)
>
>
>This option only works with AIJ matrices; you must be using
This option only works with AIJ matrices; you must be using either BAIJ or
SBAIJ matrices? (or a shell matrix)
Barry
> On Oct 31, 2018, at 5:45 AM, Wenjin Xing via petsc-users
> wrote:
>
> My issue is summarized in the picture and posted in the link
>
Thanks. I think I found the right way.
Wayne
On Fri, Sep 16, 2016 at 11:33 AM, Ji Zhang wrote:
> Thanks for your warm help. Could you please show me some necessary
> functions or a simple demo code?
>
>
> Wayne
>
> On Fri, Sep 16, 2016 at 10:32 AM, Barry Smith
Thanks for your warm help. Could you please show me some necessary
functions or a simple demo code?
Wayne
On Fri, Sep 16, 2016 at 10:32 AM, Barry Smith wrote:
>
> You should create your small m_ij matrices as just dense two dimensional
> arrays and then set them into the
You should create your small m_ij matrices as just dense two dimensional
arrays and then set them into the big M matrix. Do not create the small dense
matrices as PETSc matrices.
Barry
> On Sep 15, 2016, at 9:21 PM, Ji Zhang wrote:
>
> I'm so apologize for the
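A minimal sketch, not from the thread and in C rather than petsc4py, of the approach Barry describes: keep each small block as a plain array and insert it into the big matrix with MatSetValues(); the block size, index arrays, and insert mode are placeholders:

#include <petscmat.h>

/* Insert one small dense nb x nb block (stored row-major in mij) into the
   big matrix M at the global rows/cols listed in rows[] and cols[].       */
PetscErrorCode InsertBlock(Mat M,PetscInt nb,const PetscInt rows[],
                           const PetscInt cols[],const PetscScalar mij[])
{
  PetscErrorCode ierr;

  ierr = MatSetValues(M,nb,rows,nb,cols,mij,ADD_VALUES);CHKERRQ(ierr);
  /* after all blocks are inserted, every process calls:
     MatAssemblyBegin(M,MAT_FINAL_ASSEMBLY); MatAssemblyEnd(M,MAT_FINAL_ASSEMBLY); */
  return 0;
}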
I apologize for the ambiguity. Let me clarify.
I'm trying to simulate interactions among different bodies. Now I have
calculated the interaction between two of them and stored it in the sub-matrix
m_ij. What I want to do is to consider the whole interaction and construct
all sub-matrices
On Thu, Sep 15, 2016 at 4:23 AM, Ji Zhang wrote:
> Thanks Matt. It works well for a single core. But is there any solution if I
> need an MPI program?
>
It is unclear what the stuff below would mean in parallel.
If you want to assemble several blocks of a parallel matrix that looks
Thanks Matt. It works well for a single core. But is there any solution if I
need an MPI program?
Thanks.
Wayne
On Tue, Sep 13, 2016 at 9:30 AM, Matthew Knepley wrote:
> On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang wrote:
>
>> Dear all,
>>
>> I'm using petsc4py
On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang wrote:
> Dear all,
>
> I'm using petsc4py and now face some problems.
> I have a number of small petsc dense matrices mij, and I want to assemble
> them into a big matrix M like this:
>
> M = [ m11 m12 m13 ]
>     | m21
Re,
It makes sense to read the documentation; I will try other
preconditioners.
Thanks for the support.
2016-06-02 18:15 GMT+02:00 Matthew Knepley :
> On Thu, Jun 2, 2016 at 11:10 AM, neok m4700 wrote:
>
>> Hi Satish,
>>
>> Thanks for the
On Thu, Jun 2, 2016 at 11:10 AM, neok m4700 wrote:
> Hi Satish,
>
> Thanks for the correction.
>
> The error message is now slightly different, but the result is the same
> (serial runs fine, parallel with mpirun fails with following error):
>
Now the error is correct. You
Hi Satish,
Thanks for the correction.
The error message is now slightly different, but the result is the same
(serial runs fine, parallel with mpirun fails with following error):
[0] KSPSolve() line 599 in <...>/src/ksp/ksp/interface/itfunc.c
[0] KSPSetUp() line 390 in
with petsc-master - you would have to use petsc4py-master.
i.e. try petsc-eab7b92 with petsc4py-6e8e093
Satish
On Thu, 2 Jun 2016, neok m4700 wrote:
> Hi Matthew,
>
> I've rebuilt petsc // petsc4py with the following versions:
>
> 3.7.0 // 3.7.0 => same runtime error
> 00c67f3 // 3.7.1 => fails
Hi Matthew,
I've rebuilt petsc // petsc4py with the following versions:
3.7.0 // 3.7.0 => same runtime error
00c67f3 // 3.7.1 => fails to build petsc4py (error below)
00c67f3 // 6e8e093 => same as above
f1b0812 (latest commit) // 6e8e093 (latest commit) => same as above
In file included from
On Thu, Jun 2, 2016 at 9:12 AM, neok m4700 wrote:
> Hi,
>
> I built petsc 3.7.1 and petsc4py 3.7.0 (with openmpi 1.10.2) and ran the
> examples in the demo directory.
>
I believe this was fixed in 'master':
On Mon, May 4, 2015 at 5:57 PM, Reza Yaghmaie reza.yaghma...@gmail.com
wrote:
Dear Matt,
The initial guess was zero for all cases of SNES solvers. The initial
Jacobian was the identity for all cases. The system is small and is run
sequentially.
I have to add that I use the FDColoring routine for
Dear Matt,
The initial guess was zero for all cases of SNES solvers. The initial
Jacobian was the identity for all cases. The system is small and is run
sequentially.
I have to add that I use the FDColoring routine for the Jacobian as well.
Regards,
Ray
On Monday, May 4, 2015, Matthew Knepley
On Mon, May 4, 2015 at 2:11 PM, Reza Yaghmaie reza.yaghma...@gmail.com
wrote:
Dear PETSC representatives,
I am solving a nonlinear problem with SNESNGMRES and it converges faster,
with fewer iterations, compared to other SNES methods. Any idea why that is
the case?
It is impossible to tell
Dear Matt,
Actually the initial Jacobian was the identity. Regular SNES converges in 48
iterations, GMRES in 19, NCG in 67,...
Do you think SNESQN with the basic line search was the problem for the divergence?
If I use SNESQN with the defaults, should it not converge with the initial identity
Jacobian?
Best regards,
On Mon, May 4, 2015 at 5:41 PM, Reza Yaghmaie reza.yaghma...@gmail.com
wrote:
Dear Matt,
Actually the initial Jacobian was the identity. Regular SNES converges in 48
iterations, GMRES in 19, NCG in 67,...
Do you think SNESQN with the basic line search was the problem for the divergence?
If I use
Yes, just assemble the same sequential matrices on each rank. To do this,
create the matrix using the communicator PETSC_COMM_SELF
Cheers
Dave
On Monday, 23 June 2014, Bogdan Dita bog...@lmn.pub.ro wrote:
Hello,
I wanted to see how well Umfpack performs in PETSc but I encountered a
And use the same sequential communicator when you create the KSP on each
rank
On Monday, 23 June 2014, Dave May dave.mayhe...@gmail.com wrote:
Yes, just assemble the same sequential matrices on each rank. To do this,
create the matrix using the communicator PETSC_COMM_SELF
Cheers
Dave
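A minimal sketch, not from the thread, of the redundant-sequential setup Dave describes: the matrix and the KSP both live on PETSC_COMM_SELF, so every rank factors and solves its own copy (written against the current KSPSetOperators() signature; sizes, vectors, and solver options are placeholders):

#include <petscksp.h>

/* Every rank builds the same sequential matrix and its own KSP, so a
   sequential direct solver such as UMFPACK runs independently per rank;
   b and x must be sequential Vecs on PETSC_COMM_SELF as well.           */
PetscErrorCode SolveRedundantly(PetscInt n,Vec b,Vec x)
{
  Mat            A;
  KSP            ksp;
  PetscErrorCode ierr;

  ierr = MatCreateSeqAIJ(PETSC_COMM_SELF,n,n,10,NULL,&A);CHKERRQ(ierr);  /* placeholder nnz per row */
  /* ... fill and assemble A identically on every rank ... */
  ierr = KSPCreate(PETSC_COMM_SELF,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* e.g. -pc_type lu -pc_factor_mat_solver_type umfpack */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  return 0;
}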
ok,
so considering performance on MIC,
can the library MAGMA be used as an alternative to ViennaCL for PETSc or
FEniCS?
http://www.nics.tennessee.edu/files/pdf/hpcss/04_03_LinearAlgebraPar.pdf
(from slide 37 onwards)
MAGMA seems to have a sparse version which I think is doing all that any
sparse
Hi,
so considering performance on MIC,
can the library MAGMA be used as an alternative to ViennaCL for PETSc or
FEniCS?
No, there is no interface to MAGMA in PETSc yet. Contributions are
always welcome, yet it is not our priority to come up with an interface
of our own. I don't think it will
Hi,
I'm a master's student from the Indian Institute of Technology Delhi. I'm
working on PETSc for performance, which is my area of interest. Can
you please help me in knowing how to run PETSc on MIC? That would
be of great help to me.
my experience is that 'performance' and 'MIC' for
On Fri, Feb 21, 2014 at 12:34 PM, Chung-Kan Huang ckhua...@gmail.com wrote:
Hello,
In my application I would like to use
PetscErrorCode KSPSetOperators(KSP ksp,Mat Amat,Mat Pmat,MatStructure flag)
and have Pmat different from Amat.
If Amat = L + D + U,
then Pmat = Amat - L* - U* + rowsum(L* +
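A minimal sketch of pairing an operator Amat with a different preconditioning matrix Pmat; the KSPSetOperators() signature quoted above (with a MatStructure flag) is from older PETSc releases, current ones take only (ksp, Amat, Pmat), and how Pmat is actually built in the original question is truncated and not reproduced here:

#include <petscksp.h>

/* Amat defines the linear operator; the (cheaper) Pmat is what the
   preconditioner is built from.                                      */
PetscErrorCode SetSplitOperators(KSP ksp,Mat Amat,Mat Pmat)
{
  PetscErrorCode ierr;

  ierr = KSPSetOperators(ksp,Amat,Pmat);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  return 0;
}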