data structure for A and B would make MatAXPY faster,
but may not affect performance much if you only use a single shift,
e.g., ex13.c.
Hong
Error messages: identical for KRYLOV-SCHUR and ARNOLDI
[0]PETSC ERROR: - Error Message
[0
Toon :
Sorry, I forgot to mention: I am looking for the eigenvalues that are the
largest, not in absolute value, but along the real axis.
Then you do not need shift-invert and therefore should not use an LU
matrix factorization.
Hong
On 18 August 2014 21:54, Toon Weyens twey...@fis.uc3m.es
start searching for
efficient solvers among the working ones obtained in step 2.
Hong
On Tue, Aug 19, 2014 at 10:38 AM, Jed Brown j...@jedbrown.org wrote:
Toon Weyens twey...@fis.uc3m.es writes:
Yes, you are probably right: my code is not yet bug-free (by all means!).
However, I have been
You can call
MatGetSubMatrices()
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetSubMatrices.html
Hong
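For example, a minimal C sketch of pulling this process's rows (all columns)
into a sequential submatrix; A is the parallel matrix, N its global column
count, and error checking is omitted:

  Mat      *submats;
  IS       irow, icol;
  PetscInt rstart, rend;
  MatGetOwnershipRange(A, &rstart, &rend);
  ISCreateStride(PETSC_COMM_SELF, rend-rstart, rstart, 1, &irow);
  ISCreateStride(PETSC_COMM_SELF, N, 0, 1, &icol);   /* all columns */
  MatGetSubMatrices(A, 1, &irow, &icol, MAT_INITIAL_MATRIX, &submats);
  /* submats[0] is a sequential matrix holding the requested entries */

To gather the complete matrix on every process, let irow cover all global
rows instead of just the local ones.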
On Thu, Aug 21, 2014 at 6:25 PM, priyank patel priyankpate...@live.com wrote:
hi,
Yes, this will give me only the local entries. But what if I want the complete
matrix
Murat,
Do you need MatGetRowMinAbs() for the dftb eigenvalue problem?
Hong
On Tue, Aug 26, 2014 at 9:49 PM, Barry Smith bsm...@mcs.anl.gov wrote:
On Aug 26, 2014, at 9:35 PM, murat keçeli kec...@gmail.com wrote:
Since sbaij only stores the above-diagonal half of the matrix, getting the
row
Evan,
Please comment out your own mumps parameters and run the code with the
default icntl and ival. Does it still crash? If so, please send us the
entire error message. It is common to get a memory error in the numerical
factorization phase of mumps; I've rarely seen an error occur in the symbolic
phase.
Hong
We can add MatSolveTranspose() to the petsc interface with superlu_dist.
Jed,
Are you working on it? If not, I can work on it.
Hong
On Fri, Aug 29, 2014 at 6:14 PM, Gaetan Kenway gaet...@gmail.com wrote:
Hi Antoine
We are also using PETSc for solving adjoint systems resulting from CFD
I'll check it.
Hong
On Fri, Sep 12, 2014 at 3:40 PM, Dominic Meiser dmei...@txcorp.com wrote:
On 09/12/2014 02:11 PM, Barry Smith wrote:
James (and Hong),
Do you ever see this problem in parallel runs?
You are not doing anything wrong.
Here is what is happening
James :
I'm fixing it in branch
hzhang/matmatmult-bugfix
https://bitbucket.org/petsc/petsc/commits/a7c7454dd425191f4a23aa5860b8c6bac03cfd7b
Once it is further cleaned up, and other routines are checked, I will
patch petsc-release.
Hong
Hi Barry,
Thanks for the response. You're right, it (both
I'll add it. It should not take too long; it's just a matter of priority.
I'll try to get it done in a day or two, then let you know when it works.
Hong
On Mon, Sep 22, 2014 at 12:11 PM, Antoine De Blois
antoine.debl...@aero.bombardier.com wrote:
Dear all,
Sorry for the delay on this topic.
Thank
James,
The fix is pushed to petsc-maint (release)
https://bitbucket.org/petsc/petsc/commits/c974faeda5a26542265b90934a889773ab380866
Thanks for your report!
Hong
On Mon, Sep 15, 2014 at 5:05 PM, Hong hzh...@mcs.anl.gov wrote:
James :
I'm fixing it in branch
hzhang/matmatmult-bugfix
https
Antoine,
I just found out that superlu_dist does not support MatSolveTranspose yet
(see Sherry's email below).
Once superlu_dist provides this support, we can add it to the
petsc/superlu_dist interface.
Thanks for your patience.
Hong
---
Hong,
Sorry
to the mumps developer about it.
Hong
input block size bs=n, which I guess should be 1.
Hong
results in:
[0]PETSC ERROR: Arguments are incompatible
[0]PETSC ERROR: Cannot change block size 1 to 503
What would you advise me to do here? How can I make the KSPSolve faster
when it is a dense matrix?
I could guess
Luc:
Run your code with the option '-help | grep mumps'; then you'll see what
prefix should be used in your case with the mumps option
'-mat_mumps_icntl_14 30'.
You may try an even larger icntl_14.
Hong
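For example, with no solver prefix the option is used as is, while a prefixed
solver needs the prefix prepended (the 'fieldsplit_0_' prefix below is only an
illustration; use whatever '-help | grep mumps' reports for your case):

  ./your_app -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_14 30
  ./your_app ... -fieldsplit_0_mat_mumps_icntl_14 30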
Hi, I am using Petsc to solve a multiphysics problem and I have the
following issue.
I partition
analysis and icntl(29) ordering (None)
-mat_mumps_icntl_29 0: ICNTL(29): parallel ordering 1 = ptscotch 2 =
parmetis (None)
Hong
On Mon, Nov 17, 2014 at 9:12 AM, Mark Adams mfad...@lbl.gov wrote:
I have a code that repartitions the grid using PETSc. I would like to
know the simplest
via mumps.
Hong
Mark
On Mon, Nov 17, 2014 at 10:46 AM, Hong hzh...@mcs.anl.gov wrote:
Mark,
I use ParMetis with the mumps direct solver. You can test their installation
with
petsc/src/ksp/ksp/examples/tutorials
mpiexec -n 2 ./ex2 -pc_type lu -pc_factor_mat_solver_package mumps
Ghosh:
For parallel dense matrix-matrix operations, I suggest using the Elemental
package http://libelemental.org
Hong
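A rough C sketch of what using Elemental through petsc looks like (assumes
petsc was configured with --download-elemental; M, N are your global sizes):

  Mat A, B, C;
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, M, N);
  MatSetType(A, MATELEMENTAL);          /* parallel dense via Elemental */
  MatSetUp(A);
  /* ... create B the same way, fill both with MatSetValues ... */
  MatMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);  /* C = A*B */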
I am trying to calculate the transpose of a dense rectangular matrix
(pSddft-YOrb, size=Npts x Nstates) and then MatMatMult
I am creating the dense matrix first of size
Swarnava:
The matrix product A will be a dense matrix. You may consider using the
Elemental package for such a matrix product.
Hong
Dear all,
I am trying to compute matrices A = transpose(R)*H*R and M =
transpose(R)*R where
H is a sparse (banded) matrix in MATMPIAIJ format (5 million x 5
:-)
Hong
--
*From: *Hong hzh...@mcs.anl.gov
*To: *Swarnava Ghosh sghosh2...@gatech.edu
*Cc: *PETSc users list petsc-users@mcs.anl.gov
*Sent: *Thursday, February 5, 2015 8:22:13 PM
*Subject: *Re: [petsc-users] Large rectangular Dense Transpose
multiplication
take a look at Elemental and check which eigenvalue routine would
work for your problem. We may add it to the interface if it does not take
much effort.
Hong
On Wed, Jan 21, 2015 at 9:55 AM, Luc Berger-Vergiat lb2...@columbia.edu
wrote:
You can also look into SLEPc to compute eigenvalues
Natacha:
I can reproduce the error with your ex1f.F.
The lsqr solver in PETSc was contributed by a user a decade ago. I'll read
the original algorithm and investigate it.
I'll let you know the result.
Hong
Dear PETSc users,
I am trying to solve an overdetermined linear system of equations Ax
factorization, e.g., ILU, instead of full
factorization?
The backward substitutions are steps AFTER the matrix factorization.
Hong
On Mar 8, 2015 6:26 PM, Barry Smith bsm...@mcs.anl.gov wrote:
PETSc provides sparse parallel LU (and Cholesky) factorizations and
solves via the external packages
, allocated nonzeros=334000
total number of mallocs used during MatSetValues calls =0
Elemental run parameters:
allocated entries=334000
grid height=1, grid width=3
linear system matrix = precond matrix:
...
Everything looks correct.
Hong
On Mon, Apr
Wen:
Petsc-Lapack interface functions are listed
in petsc/include/petscblaslapack.h.
I do not see dggev there.
You may add such interface yourself, or call it via SLEPc.
Hong
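If you call it directly, a minimal C sketch (the trailing-underscore symbol
dggev_ is an assumption about your LAPACK's Fortran name mangling):

  extern void dggev_(const char*,const char*,int*,double*,int*,double*,int*,
                     double*,double*,double*,double*,int*,double*,int*,
                     double*,int*,int*);
  int    n = 2, lda = 2, ldb = 2, ldvl = 1, ldvr = 2, lwork = 16, info;
  double a[4] = {2,1,1,3}, b[4] = {1,0,0,1};   /* column-major */
  double alphar[2], alphai[2], beta[2], vl[1], vr[4], work[16];
  dggev_("N", "V", &n, a, &lda, b, &ldb, alphar, alphai, beta,
         vl, &ldvl, vr, &ldvr, work, &lwork, &info);
  /* eigenvalue j is (alphar[j] + i*alphai[j]) / beta[j] when beta[j] != 0 */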
Could anyone tell me how to call a lapack subroutine in PETSc? I would
like to use dggev to calculate a generalized
matrix, this distribution is
well-balanced. Users can set their own distribution by inputting local rows
and setting global rows as PETSC_DECIDE.
Hong
Carol,
Have you built your petsc with hypre?
You can use '--download-hypre' during petsc configuration.
Hong
On Mon, Apr 27, 2015 at 9:28 AM, carol.brick...@awe.co.uk wrote:
Hi,
I am trying to run an executable built with petsc 3.5.3 and hypre 2.9.0b
with the flags "-pc_type hypre -ksp_type cg
supports parallel LU, not Cholesky.
Hong
know what solvers are being used. My guess is the default
gmres/bjacobi/ilu(0). Please run your code with the option '-ts_view' or
'-snes_view' to find out.
Hong
be scalable?
The matrix factors for np=2 and 8 might be very different.
We would like to know what the mumps developers say about it.
Hong
Hi,
I have emailed the mumps-user list.
Actually the cluster has 8 nodes with 16 cores, and other codes scale
well.
I wanted to ask if this job takes much time
.
Hong
Dear petsc-users,
I am a beginner in petsc and I have some questions about the python
interface. I am trying to solve a problem with mumps (or another direct
sparse solver).
I have written the following piece of code
ksp = PETSc.KSP()
ksp.create(PETSc.COMM_WORLD)
ksp.setOperators
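For reference, the equivalent setup in C (a sketch; A, b, x are assumed
already assembled, error checks omitted):

  KSP ksp;
  PC  pc;
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPPREONLY);    /* direct solve: single PC application */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCLU);
  PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS);
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);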
options
mpiexec -n 3 ./ex2 -pc_type lu -pc_factor_mat_solver_package elemental
-mat_type elemental
Norm of error 2.81086e-15 iterations 1
Please use petsc-dev (master branch) for the petsc-elemental interface.
Hong
On Sun, Apr 12, 2015 at 6:57 PM, Preyas Shah shah.pre...@gmail.com wrote:
Hi,
I
what goes wrong?
Hong
On Sun, Apr 5, 2015 at 3:44 PM, Barry Smith bsm...@mcs.anl.gov wrote:
We would need to see the PETSc side of the code to see if there is
anything wrong there.
On Apr 5, 2015, at 3:35 PM, James A Charles charl...@purdue.edu wrote:
Hi Hong,
You can open up
James:
Thanks a lot for looking into this. I'm still working on debugging this on
our side. It might be an issue with us. I will keep you updated.
Take your time.
Hong
- Original Message -
From: Hong hzh...@mcs.anl.gov
To: Barry Smith bsm...@mcs.anl.gov
Cc: James A Charles
information we convert the previous matrix that A is formed
of A2 (A = A1*A2) to dense prior to the multiplication using MatConvert.
It seems both A and B are dense, complex square matrices.
Did you call MatMatMult() in sequential or parallel? What matrix format did
you use?
Hong
, and symmetric+spd
matrices. You may consult the mumps user manual.
- I gather that SuperLU doesn't provide a symmetric factorization.
SuperLU does not support Cholesky factorization.
Hong
^T factorization. See MatGetFactor_xxx_mumps() in
petsc/src/mat/impls/aij/mpi/mumps/mumps.c:
...
B->factortype = MAT_FACTOR_CHOLESKY;
if (A->spd_set && A->spd) mumps->sym = 1;
else mumps->sym = 2;
...
Hong
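So if you know the matrix is SPD, marking it before factorization selects
sym=1:

  MatSetOption(A, MAT_SPD, PETSC_TRUE);  /* mumps then factors with sym=1 */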
See
petsc/src/snes/examples/tutorials/network/pflow
Hong
On Wed, Apr 8, 2015 at 9:39 PM, Dharmendar Reddy dharmaredd...@gmail.com
wrote:
Hello,
Is there a Fortran or C code example illustrating the usage of
DMNetwork ?
Thanks
Reddy
#endif
and replacing the d (double real) with s (single real):
#if defined(PETSC_USE_REAL_SINGLE)
#include <smumps_c.h>
#else
//#include <dmumps_c.h> // old
#include <smumps_c.h> // new
#endif
Hong
On Fri, Jun 5, 2015 at 6:26 PM, Evan Um eva...@gmail.com wrote:
Dear Barry and PETSC users,
I am revisiting
Venkatesh,
You may also test superlu_dist, which may use less memory.
Hong
On Mon, Jun 22, 2015 at 12:43 PM, Barry Smith bsm...@mcs.anl.gov wrote:
There is nothing we can really do to help on the PETSc side. I do note
from the output
REDISTRIB: TOTAL DATA LOCAL/SENT = 328575589
David:
The PETSc library does not have the option '-pc_mg_monitor'.
Hong
On Thu, Jun 11, 2015 at 6:48 AM, David Scott d.sc...@ed.ac.uk wrote:
Hello,
I am using MINRES with GAMG and have supplied various options
#PETSc Option Table entries:
-ksp_max_it 500
-ksp_monitor_true_residual
venkatesh:
On Tue, May 26, 2015 at 9:02 PM, Hong hzh...@mcs.anl.gov wrote:
'A serial job in MATLAB for the same matrices takes 60GB. '
Can you run this case in serial? If so, try petsc, superlu or mumps to
make sure the matrix is non-singular.
'The B matrix is singular but I get my result.'
The eigensolver likely uses B^{-1} (have you read the slepc manual?), which
could be the source of trouble.
Please investigate your model and understand why B is singular; see if there
is a way to dump the null space before submitting a large-size simulation.
Hong
On Sun, May 31, 2015 at 8:36 AM, Dave May dave.mayhe
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCICC.html#PCICC:
H is factored approximately as L*L^T, implemented with forward and backward
solves. Here L is an incomplete Cholesky factor of H.
Hong
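For example, QCG needs a symmetrically applied preconditioner, so a run of
the ksp tutorial ex2 might look like:

  ./ex2 -ksp_type qcg -pc_type icc -ksp_monitor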
On Mon, Jun 29, 2015 at 9:22 AM, carol.brick...@awe.co.uk wrote:
All,
I am trying to use KSPSolve for a QCG
0.00
Solve flops 2.194000e+03 Mflops 5.14
Norm of error 1.18018e-15 iterations 1
Hong
On Tue, May 26, 2015 at 9:03 AM, venkatesh g venkateshg...@gmail.com
wrote:
I posted a while ago in MUMPS forums but no one seems to reply.
I am solving a large generalized Eigenvalue problem.
I am
sets matinput=DISTRIBUTED as the default when
using more than one process.
Did you either use '-mat_superlu_dist_parsymbfact' for the sequential run or
set matinput=GLOBAL for the parallel run?
I'll add an error flag for these use cases.
Hong
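For example (a sketch; the matinput option name is an assumption based on the
petsc/superlu_dist interface of this release):

  # sequential run with the parallel symbolic factorization enabled
  ./ex10 -pc_type lu -pc_factor_mat_solver_package superlu_dist -mat_superlu_dist_parsymbfact
  # parallel run forcing the global (replicated) input mode
  mpiexec -n 2 ./ex10 -pc_type lu -pc_factor_mat_solver_package superlu_dist -mat_superlu_dist_matinput GLOBAL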
On Mon, Aug 3, 2015 at 9:17 AM, Xiaoye S. Li x...@lbl.gov wrote
not have experience with SuiteSparse. Testing MUMPS is worth it as well.
Hong
Hi
Thank you for your answer. I was asking for help because I find the LU
factorization 2-3 times faster than KLU. Given my problem size
(200*200) and type (power system simulation), I should get almost the same
Jed,
Do we support DAE with sundials?
Hong
On Tue, Jun 30, 2015 at 12:11 PM, Jed Brown j...@jedbrown.org wrote:
Hasan, Fahad mhas...@vols.utk.edu writes:
Hello PETSc team,
Do you have any example for solving ODE/DAE using TSSUNDIALS solver?
Run any TS example with -ts_type sundials, e.g.,
petsc/src/ts/examples/tutorials/ex13.c for an ODE using the TSSUNDIALS solver.
We do not have an interface for solving DAEs with TSSUNDIALS.
Hong
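For example (assuming petsc was configured with --download-sundials):

  cd petsc/src/ts/examples/tutorials
  ./ex13 -ts_type sundials -ts_monitor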
On Tue, Jun 30, 2015 at 12:03 PM, Hasan, Fahad mhas...@vols.utk.edu wrote:
Hello PETSc team,
Do you have any example for solving ODE/DAE using TSSUNDIALS
address Satish's request, we'll update the petsc interface to this
version of superlu_dist.
Anthony:
Please download the latest superlu_dist-v4.1,
then configure petsc with
'--download-superlu_dist=superlu_dist_4.1.tar.gz'
Hong
On Tue, Jul 28, 2015 at 11:11 AM, Satish Balay ba...@mcs.anl.gov wrote
superlu_dist v4.1
2. remove existing PETSC_ARCH directory, then configure petsc with
'--download-superlu_dist=superlu_dist_4.1.tar.gz'
3. build petsc
Let us know if the issue remains.
Hong
-- Forwarded message --
From: Xiaoye S. Li x...@lbl.gov
Date: Wed, Jul 29, 2015 at 2:24 PM
=13.738475134194639, LUstruct=0x9203c8, grid=0x9202c8,
stat=0x7fff9cd84880, info=0x7fff9cd848bc) at pzgstrf.c:1308
if (recv_req[0] != MPI_REQUEST_NULL) {
--> MPI_Wait(&recv_req[0], &status);
We will update petsc interface to superlu_dist v4.1.
Hong
On Mon, Jul 27
.
Hong
On Wed, Aug 5, 2015 at 4:42 AM, Cong Li solvercorle...@gmail.com wrote:
Hi
I tried the method you suggested. However, I got the error message.
My code and message are below.
K is the big matrix containing column matrices.
code:
call MatGetArray(K,KArray,KArrayOffset,ierr)
call
?
If not, replace MatCreateDense() with
MatMatMult(A,Km(stepIdx-1),MAT_INITIAL_MATRIX,...).
Is matrix A dense or sparse?
Hong
On Wed, Aug 5, 2015 at 9:43 AM, Cong Li solvercorle...@gmail.com wrote:
Hong,
Thanks for your answer.
However, in my problem, I have a pre-allocated matrix K, and its
SamePattern_SameRowPerm
I do not understand why your code uses matrix input mode = global.
Hong
*From:* Hong [mailto:hzh...@mcs.anl.gov]
*Sent:* den 3 augusti 2015 16:46
*To:* Xiaoye S. Li
*Cc:* Ülker-Kaustell, Mahir; Hong; PETSc users list
*Subject:* Re: [petsc-users] SuperLU MPI-problem
Anthony,
I pushed a fix
https://bitbucket.org/petsc/petsc/commits/ceeba3afeff0c18262ed13ef92e2508ca68b0ecf
Once it passes our nightly tests, I'll merge it to petsc-maint, then
petsc-dev.
Thanks for reporting it!
Hong
On Mon, Aug 10, 2015 at 4:27 PM, Barry Smith bsm...@mcs.anl.gov wrote
,..)
Hong
On Wed, Aug 5, 2015 at 8:56 PM, Cong Li solvercorle...@gmail.com wrote:
The entire source code files are attached.
Also I copy and paste them here in this email
thanks
program test
implicit none
#include <finclude/petscsys.h>
#include <finclude/petscvec.h>
#include finclude
I'll fix this in the release if no one has done it yet.
Hong
On Mon, Aug 10, 2015 at 4:27 PM, Barry Smith bsm...@mcs.anl.gov wrote:
Anthony,
This crash is in PETSc code before it calls the SuperLU_DIST numeric
factorization; likely we have a mistake such as assuming a process has
. This would enable petsc solvers, as well
as other packages.
Again, thanks for bug reporting.
Hong
On Tue, Aug 11, 2015 at 1:33 PM, Satish Balay ba...@mcs.anl.gov wrote:
yes - the patch will be in petsc 3.6.2.
However - you can grab the patch right now - and start using it.
If using a 3.6.1
Barry:
Hong, we want to reuse the space in the Km(stepIdx-1) from which it was
created, which means that MAT_INITIAL_MATRIX cannot be used. Since the
result is always dense it is not the difficult case where
a symbolic computation needs to be done initially, so, at least in theory,
he should
Cong:
Hong,
Sure.
I want to extend the Krylov subspace by step_k dimensions using a monomial
basis, which can be defined as
K = {Km(1), Km(2), ..., Km(step_k)}
  = {Km(1), A*Km(1), A*Km(2), ..., A*Km(step_k-1)}
  = {R, A*R, A^2*R, ..., A^(step_k-1)*R}
A subspace with dense matrices as basis?
How
Zin:
See
petsc/src/mat/examples/tests/ex130.c
petsc/src/ksp/ksp/examples/tutorials/ex52.c
Hong
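Both examples follow the same pattern; a condensed C sketch (the ordering
type and solver package here are illustrative choices, error checks omitted):

  Mat           F;                 /* factored matrix holding L and U */
  IS            rowperm, colperm;
  MatFactorInfo info;
  MatFactorInfoInitialize(&info);
  MatGetOrdering(A, MATORDERINGND, &rowperm, &colperm);
  MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, &F);
  MatLUFactorSymbolic(F, A, rowperm, colperm, &info);
  MatLUFactorNumeric(F, A, &info);
  MatSolve(F, b, x);               /* forward/backward solves with L and U */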
Hi
I would like to know how I can retrieve the lower triangular matrix
(possibly permuted or preferably the inverse of L) and the upper triangular
matrix (preferably the inverse of U) from the LU
per process 1
...
I realize that I use superlu_dist v4.0. Would v4.1 work? I'll give it a
try tomorrow.
Hong
On Mon, Jul 27, 2015 at 1:25 PM, Anthony Paul Haas a...@email.arizona.edu
wrote:
Hi Hong,
No that is not the correct matrix. Note that I forgot to mention that it
is a complex matrix
*/
We do not change anything else.
Hong
On Wed, Jul 22, 2015 at 2:19 PM, Xiaoye S. Li x...@lbl.gov wrote:
I am trying to understand your problem. You said you are solving the Navier
equation (elastodynamics) in the frequency domain, using a finite element
discretization. I wonder why you have about
to experiment your matrix on a target machine to find out.
Hong
Subroutine HowBigLUCanBe(rank)
IMPLICIT NONE
integer(i4b),intent(in) :: rank
integer(i4b):: i,ct
real(dp):: begin,endd
complex(dpc):: sigma
MatTransposeMatMult()
using
petsc/src/mat/examples/tests/ex94.c
Hong
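That is, with A and B standing in for your matrices:

  Mat C;
  MatTransposeMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);  /* C = A^T*B */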
I'm running some code written by myself, using PETSc with MPI. It runs
fine with 12 or fewer cores. However, if I run it with 16
cores, it gives me an error. By looking at the error message, it seems
);CHKERRQ(ierr);
ex33.c:ierr = MatLoad(A,viewer);CHKERRQ(ierr);
ex33.c: ierr = MatLoad(B,viewer);CHKERRQ(ierr);
ex37.c: ierr = MatLoad(A,fd);CHKERRQ(ierr);
ex43.c: ierr = MatLoad(A,fd);CHKERRQ(ierr);
ex6.c: ierr = MatLoad(A,fd);CHKERRQ(ierr);
ex7.c: ierr = MatLoad(A,fd);CHKERRQ(ierr);
Hong
crash in the 1st symbolic factorization?
In your case, the matrix data structure stays the same when omega changes, so
you only need to do the symbolic factorization once and reuse it.
3. Use a machine that has more memory.
Hong
Dear Petsc-Users,
I am trying to use PETSc to solve a set of linear
Mehrzad :
The error occurs at MatCreateNormal(A,N), a function rarely used and not
well tested. We will fix it.
Do you need this function?
Hong
Hello everyone,
I'm really new to Petsc and when I try to run
ksp/ksp/examples/tutorials/ex27
I get this error
[0]PETSC ERROR: Object
Gideon:
-mat_mumps_icntl_4 0: ICNTL(4): level of printing (0 to 4) (None)
This is for algorithmic diagnosis, not for regular runs. Use the default '0'
for it.
Hong
On Tue, Aug 25, 2015 at 9:06 AM, Gideon Simpson gideon.simp...@gmail.com
wrote:
Regarding the MUMPS issue, I’m not sure
is an MPIDENSE matrix and A is an MPIAIJ matrix.
Let us know if you see any bugs or performance issues.
Hong
On Fri, Oct 16, 2015 at 10:25 AM, Jed Brown <j...@jedbrown.org> wrote:
> Hong <hzh...@mcs.anl.gov> writes:
>
> > Jed:
> >>
> >>
> >> > I plan
< 1.e-12
Is this the same matrix as you mentioned?
Hong
>
>
> On Tue, Oct 27, 2015 at 9:10 AM, Matthew Knepley <knep...@gmail.com>
> wrote:
>
> On Tue, Oct 27, 2015 at 9:06 AM, Gary Rebt <gary.r...@gmx.ch[
> gary.r...@gmx.ch]> wrote:
>
> Dear petsc
Matt:
> On Tue, Oct 27, 2015 at 11:13 AM, Hong <hzh...@mcs.anl.gov> wrote:
>
>> Gary :
>> I tested your mat.bin using
>> petsc/src/ksp/ksp/examples/tutorials/ex10.c
>> ./ex10 -f0 $D/mat.bin -rhs 0 -ksp_monitor_true_residual -ksp_view
>> ...
>> M
Object: 1 MPI processes
type: ilu
ILU: out-of-place factorization
...
Hong
On Tue, Oct 27, 2015 at 12:36 PM, Hong <hzh...@mcs.anl.gov> wrote:
> Matt:
>
>> On Tue, Oct 27, 2015 at 11:13 AM, Hong <hzh...@mcs.anl.gov> wrote:
>>
>>> Gary :
>>> I
norm 2.802972716423e+03
2 KSP Residual norm 2.039112137210e+03
...
24 KSP Residual norm 2.666350543810e-02
Number of iterations = 24
Residual norm 0.0179698
Hong
On Tue, Oct 27, 2015 at 1:50 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
>
> > On Oct 27, 2015, at 12:40 PM, Hong <
Denis:
Your code looks fine to me. There are examples under
slepc/src/eps/examples/tutorials
using ST with SHELL, e.g., ex10.c
Hong
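For the usual calling order, a minimal C sketch (a Hermitian standard problem
is assumed; requires slepceps.h):

  EPS eps;
  EPSCreate(PETSC_COMM_WORLD, &eps);
  EPSSetOperators(eps, A, NULL);   /* pass B instead of NULL for generalized */
  EPSSetProblemType(eps, EPS_HEP);
  EPSSetFromOptions(eps);          /* after operators/problem type are set */
  EPSSolve(eps);
  EPSDestroy(&eps);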
Dear developers,
>
> I wonder if there are any restriction (apart from obvious) on the calling
> order of EPS functions?
> Is the following logic correct?
on it, and forgot to check '-ksp_converged_reason'.
However, superlu_dist does not report a zero pivot; it might simply 'exit'.
I'll contact Sherry about it.
Hong
>
> The matrix has a zero pivot with the nd ordering
>
> $ ./ex10 -pc_type lu -ksp_monitor_true_residual -f0 ~/Downloads/mat.
uperlu -mat_superlu_conditionnumber
Recip. condition number = 1.137938e-03
Number of iterations = 1
Residual norm < 1.e-12
As you see, the matrix is well-conditioned. Why is it so sensitive to the
matrix ordering?
Hong
Using the attached petsc4py code, matrix and right-hand side, SuperLU_dist
> returns t
Jared :
Either call KSPSetPCSide() or change
const char name[] = "-ksp_pc_side"
to a non-petsc option name, e.g., "-my_ksp_pc_side".
Hong
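For example, with the petsc-3.6-era signature (later releases add a leading
PetscOptions argument); the renamed option avoids the clash:

  char      side[64];
  PetscBool flg;
  PetscOptionsGetString(NULL, "-my_ksp_pc_side", side, sizeof(side), &flg);
  if (flg) { /* act on the value */ }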
Hello,
> I am trying to use PetscOptionsGetString to retrieve the value of an
> option in the options database, but the
convergence behavior.
Hong
After running in debug mode it seems that the GAMG solver indeed did not
> converge; however, throwing the error leads to SIGABRT (backtrace and frames
> are below).
> It is still very suspicious why solving for the (unchanged) mass matrix
> wouldn't converge
e+00 4.e+00
5.e+00 5.e+00 5.e+00
5.e+00 5.e+00
i.e., elemental and petsc dense matrices have the same ownership.
If there is no data movement in MatConvert(), then it would be easier to
use elemental.
Hong
I plan to implement MatTransposeMatMult_MPIDense_MPIDense via
1. add MatTransposeMatMult_elemental_elemental()
2. C_dense = P_dense^T * B_dense
via MatConvert_dense_elemental() and MatConvert_elemental_dense()
Let me know if you have better suggestions.
Hong
On Thu, Oct 15, 2015 at 1:49 PM
il.
> But is there any way I can use petsc to implement a 3d decomposed FFT?
>
I guess you want a parallel 3D real transform, which is not supported by FFTW.
We are not experts on FFT; you will have to look for external packages that
implement it.
Hong
rs in
> a MATMPIDENSE matrix; or
> 2. Store the vectors in a MATMPIDENSE matrix and perform a MatMatMult
> operation.
>
Option 2 would be more efficient.
Hong
multiplications. If I provide the correct
> operations when constructing my MatShell, can I expect the FEAST algorithm
> to compute each contour point on a different process?
>
A slepc developer might answer this question.
Hong
In this way, mumps would dump out
more information.
>
> Then I tried the same simulation on another machine using the same number
> of processors, and it does not fail.
>
Does this machine have larger memory?
Hong
>
I do not think you need to change this part of the code.
Does your code check convergence at each time step?
Hong
>
>
> On 15-12-02 08:39 AM, Hong wrote:
>
> Danyang :
>>
>> My code fails due to the error in external library. It works fine for the
>> previous 2000
168 - 172, got Recip. condition number
= 1.548816e-12.
You need to check your model to understand why the matrices are so
ill-conditioned.
Hong
Hi Hong,
>
> Sorry to bother you again. The modified code works much better than before
> using both superlu and mumps. However, it still encounters
Danyang :
Further testing a_flow_check_168.bin,
./ex10 -f0 /Users/Hong/Downloads/matrix_and_rhs_bin/a_flow_check_168.bin
-rhs /Users/Hong/Downloads/matrix_and_rhs_bin/x_flow_check_168.bin -pc_type
lu -pc_factor_mat_solver_package superlu -ksp_monitor_true_residual
-mat_superlu_conditionnumber
Danyang:
Using petsc/src/ksp/ksp/examples/tutorials/ex10.c,
I tested a_flow_check_168.bin
mpiexec -n 4 ./ex10 -f0
/Users/Hong/Downloads/matrix_and_rhs_bin/a_flow_check_168.bin -rhs
/Users/Hong/Downloads/matrix_and_rhs_bin/x_flow_check_168.bin -pc_type lu
-pc_factor_mat_solver_package superlu_dist
Danyang:
Add 'call MatSetFromOptions(A,ierr)' to your code.
Attached below is ex52f.F modified from your ex52f.F to be compatible with
petsc-dev.
Hong
Hello Hong,
>
> Thanks for the quick reply and the option "-mat_superlu_dist_fact
> SamePattern" works like a charm, if I u
est, bye
Sherry may tell you why SamePattern_SameRowPerm causes the difference here.
Based on the above experiments, I would set the following as defaults:
'-mat_superlu_diagpivotthresh 0.0' in the petsc/superlu interface.
'-mat_superlu_dist_fact SamePattern' in the petsc/superlu_dist interface.
Hong
/results-check.tar.gz?dl=0>*
>
Can you send us the matrix in petsc binary format?
e.g., call MatView(M, PETSC_VIEWER_BINARY_(PETSC_COMM_WORLD))
or '-ksp_view_mat binary'
Hong
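For example, to write the matrix to a file (the name mat.bin is arbitrary):

  PetscViewer viewer;
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat.bin", FILE_MODE_WRITE, &viewer);
  MatView(M, viewer);
  PetscViewerDestroy(&viewer);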
>
>
> Below is a summary of the norm from the three solvers at timestep 29,
> newton iteration 1 to 5.
>
was written many years ago.
It is for parallel computation. Students contributed an example at
petsc/src/sys/classes/random/examples/tutorials/ex2.c
Very few users have ever used this interface.
If you encounter any problems, please report them to us.
Hong
Barry :
>
> > there is a comment:
> >
> >This is NOT currently using a parallel random number generator. Sprng
> does have
> >an MPI version we should investigate.
>
Shall we remove this comment?
Hong
>
> >> On Dec 11, 2015, at 11:30 AM, Hong
I'll investigate this - I've had a day off since yesterday.
Hong
On Thu, May 26, 2016 at 12:04 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
>
> Hong needs to run with this matrix and add appropriate error checkers in
> the matrix routines to detect "incomplete" matrices an
Satish,
I tested your fix on ex51f.F90 (modified from
build_nullbasis_petsc_mumps.F90) -- it gives clean results with valgrind.
Will you patch it to petsc-maint?
I would also like to add ex51f.F90 (contributed by Constantin)
to petsc/src/ksp/ksp/examples/tests/.
Hong
On Thu, May 26, 2016 at 5:15 PM
of MATMPIAIJ/MATMPIDENSE
MATAIJ wraps MATSEQAIJ and MATMPIAIJ.
2)
MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x,ierr)
->
MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x,ierr)
see
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatConvert.html
Hong
On Thu, May 26, 2016 at 3:05 PM, Satish Ba
16 1.0 6.4163e-01
MatLUFactorSym 1 1.0 2.4772e+00
MatLUFactorNum 1 1.0 8.6419e-01
However, petsc only interfaces with the sequential mkl_pardiso. Did you get
these results in parallel or sequentially?
Hong
>
>
>
>
> --
> *From:* Fara
Fahad:
Run your code with '-ts_view' to see what solvers are being used for the
sequential and parallel runs.
Hong
Hello,
>
>
>
> I have written a code to solve a simple differential equation (x''+x'+6x=0
> with initial values x(0)=2, x'(0)=3). It works well on a single core and
> pro