You should be using PETSc version 3.4, which was recently released and is
what the paper is based on.
Barry
On May 19, 2013, at 10:11 PM, Gaetan Kenway gaet...@gmail.com wrote:
Hi Everyone
I am trying to replicate the type of preconditioner described in
Hierarchical and Nested
/nisse
On 2/27/06, Barry Smith bsmith at mcs.anl.gov wrote:
SNES works by computing p = -approxinv(J)*F(uold) and
then does a line search on unew = uold + lambda*p to get the
new u. First it uses a test value of 1 for lambda so it
tries to compute
PETSC_ERR_ARG_DOMAIN
Even though I had petsc.h included.
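[A minimal sketch of raising PETSC_ERR_ARG_DOMAIN from a residual evaluation when the trial iterate goes negative; the function name is an assumption and the SETERRQ argument list varies across PETSc releases:]

    #include "petscsnes.h"

    PetscErrorCode FormFunction(SNES snes, Vec u, Vec f, void *ctx)
    {
      PetscErrorCode ierr;
      PetscScalar    *ua;
      PetscInt       i, n;

      ierr = VecGetLocalSize(u, &n);CHKERRQ(ierr);
      ierr = VecGetArray(u, &ua);CHKERRQ(ierr);
      for (i = 0; i < n; i++) {
        if (PetscRealPart(ua[i]) < 0.0) {
          /* trial solution left the physical domain */
          SETERRQ(PETSC_ERR_ARG_DOMAIN, "negative value in trial solution");
        }
      }
      /* ... evaluate the residual into f ... */
      ierr = VecRestoreArray(u, &ua);CHKERRQ(ierr);
      return 0;
    }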
Can I do this check when I use TS also? I have the same problem with
the values being negative when using TS.
/ksp/interface/itfunc.c
Quoting Barry Smith bsmith at mcs.anl.gov:
Run the parallel PETSc job with -ksp_type gmres -ksp_gmres_restart 200
-sub_pc_type lu
On Fri, 3 Mar 2006, billy at dem.uminho.pt wrote:
Hi,
I tried to solve the 18x18 matrix with laspack and the results
Add the additional option -pc_type asm
On Fri, 3 Mar 2006, billy at dem.uminho.pt wrote:
It has improved but the residual is still over the tolerance I want, which is
1E-6.
Number of iterations: 500 Residual: +5.450688E-04
Billy.
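[For reference, the options suggested in this thread combine on one run line like this; the executable name and process count are assumptions:]

    mpirun -np 4 ./solver -ksp_type gmres -ksp_gmres_restart 200 \
           -pc_type asm -sub_pc_type lu -ksp_rtol 1e-6 -ksp_monitor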
Quoting Barry Smith bsmith at mcs.anl.gov
is zero.
Quoting Barry Smith bsmith at mcs.anl.gov:
On Sat, 4 Mar 2006, billy at dem.uminho.pt wrote:
Hi,
I need the gradient of 5 scalars at ghost cells and I was thinking of
creating
5x3 vectors with VecCreateGhost(). Right now, I calculate the gradient
during
each iteration and I
,m,x,ierr)
could this be a source of trouble, solving 7 diffs at the same time?
Petsc still supplies get_DRO with strange values; RK crashes directly,
but beuler does about double the number of iterations and still doesn't
complete a timestep.
/nisse
On 3/20/06, Barry Smith bsmith
For sequential runs use -ksp_type preonly -pc_type lu
For parallel runs you need to first config/configure.py PETSc to use
a parallel direct solver. See
http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/linearsolvertable.html
for the choices.
Barry
On Mon, 20 Mar 2006, buket at
the object file, but I'll try again when I have
the code. I will check the makefile for the c-examples.
/nisse
On 3/16/06, Barry Smith bsmith at mcs.anl.gov wrote:
Nisse,
Just list it in your makefile with all your other object
files (that come from Fortran). Send the output
PetscSynchronizedFGets() then all processes use sscanf()
Good luck,
Barry
You won't need any
if (rank == 0) code
On Wed, 13 Sep 2006, Matt Funk wrote:
Hi,
I need to read in an input file. So to open my file I use PetscFOpen, which
works fine. But then my problems begin ... :)
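[A minimal sketch of the pattern Barry describes: one collective open, every process gets a copy of each line and parses it itself. The file name and format are assumptions:]

    FILE           *fp;
    char           line[256];
    int            nvals;
    PetscErrorCode ierr;

    ierr = PetscFOpen(PETSC_COMM_WORLD, "input.dat", "r", &fp);CHKERRQ(ierr);
    ierr = PetscSynchronizedFGets(PETSC_COMM_WORLD, fp, 256, line);CHKERRQ(ierr);
    sscanf(line, "%d", &nvals);   /* all processes parse the same buffer */
    ierr = PetscFClose(PETSC_COMM_WORLD, fp);CHKERRQ(ierr);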
Sorry it should be PetscSynchronizedFPrintf()
^^^
Barry
On Fri, 15 Sep 2006, Barry Smith wrote:
Matt,
if the element lives on my process then use VecGetArray() to access
the value and call
PetscSynchronizedPrintf()
Not completely
that there is a
comment in mg.c in PCSetUp_MG. I do not see whether there is a connection?
Cheers, Jens
-Original Message-
From: Barry Smith [mailto:bsmith at mcs.anl.gov]
Sent: Mon 9/11/2006 1:50 AM
To: petsc-users at mcs.anl.gov; jens.madsen at risoe.dk
Subject: Re: DMMG Time
Randy,
Can you try -ksp_type gmres and see if you still get the
effect?
Barry
On Mon, 25 Sep 2006, Randall Mackie wrote:
Barry Smith wrote:
Randy,
Now, if I want to solve (2) above, do I simply make a call to
KSPSolveTranspose(ksp,b,xsol), where I've set b
Randy,
Why not just call MatTranspose(A,A2,ierr)
call MatTranspose(P,P2,ierr)
KSPSetOperators(ksp,A2,P2... etc?
On Mon, 25 Sep 2006, Randall Mackie wrote:
If one calls KSPSolveTranspose, how does PETSc actually do the solve?
Does it simply use the transpose of the pre-computed
It is created.
On Mon, 25 Sep 2006, Randall Mackie wrote:
Barry,
Do I have to create A2 (or duplicate it or something)
before calling MatTranspose, or is it automatically created?
The documentation is unclear about this.
Randy
Barry Smith wrote:
Randy,
Why not just
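[The two approaches discussed here, side by side; this is a sketch against the 2.3-era calling sequences (MatTranspose and KSPSetOperators changed in later releases):]

    /* (a) apply the transpose of the operators already set on the KSP */
    ierr = KSPSolveTranspose(ksp, b, xsol);CHKERRQ(ierr);

    /* (b) form the transposes explicitly; A2 and P2 are created by the calls */
    ierr = MatTranspose(A, &A2);CHKERRQ(ierr);
    ierr = MatTranspose(P, &P2);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A2, P2, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSolve(ksp, b, xsol);CHKERRQ(ierr);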
On Fri, 29 Sep 2006, Yaron Kretchmer wrote:
If the latter (Petsc not configured to use Hypre), wouldn't Matt get an
error when specifying hypre as preconditioner?
Yes he would, but it could get lost among all the help messages
printed.
Barry
Yaron
On 9/29/06, Barry Smith
XSetFromOptions?
Do I need to set the PCType to hypre first and then
PCHYPRESetType(m_pc,boomeramg) and then it prints it out?
mat
On Friday 29 September 2006 15:35, Barry Smith wrote:
On Fri, 29 Sep 2006, Yaron Kretchmer wrote:
If the latter (Petsc not configured to use Hypre), wouldn't Matt
PetscOptionsSetValue() but recommend putting
them on the command line or in a file (pass the name
of the file into PetscInitialize()). Having to recompile
for every option is too painful.
Barry
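[A sketch of the two alternatives; the option value and file name are just examples:]

    /* hard-coded: requires recompiling to change the option */
    ierr = PetscOptionsSetValue("-pc_type", "hypre");CHKERRQ(ierr);

    /* preferred: read options from a file named in PetscInitialize() */
    ierr = PetscInitialize(&argc, &argv, "petsc.opts", help);CHKERRQ(ierr);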
On Wed, 11 Oct 2006, Julian wrote:
Hong,
The times are for the solution process alone and
Julian.
-Original Message-
From: owner-petsc-users at mcs.anl.gov
[mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Barry Smith
Sent: Wednesday, October 11, 2006 2:58 PM
To: petsc-users at mcs.anl.gov
Subject: RE: General matrix interative solver
Parts of the PETSc API change between releases; this is fundamental
to PETSc development and is clearly stated in the users manual. Thus
when you link against PETSc libraries you have to make sure you
are consistent. In
http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/changes/231.html
Billy,
Just do what it says: call PCSetUp(pc) before PCASMGetSubKSP().
Barry
On Tue, 31 Oct 2006, billy at dem.uminho.pt wrote:
Hi,
When I use the option -sub_pc_type ilu or no option, I can't lower the
residual
of the pressure which is derived from a Laplace equation to the
, but the other functions (PetscMapInitialize() etc) are NOT undeclared,
so they do exist.
Barry
BTW, I am not clear about the block size.
Best,
Yixun
- Original Message -
From: Barry Smith bsmith at mcs.anl.gov
To: petsc-users at mcs.anl.gov
Sent: Friday, November 03, 2006 9
-Original Message-
From: Barry Smith bsmith at mcs.anl.gov
To: petsc-users at mcs.anl.gov
Sent: Friday, November 03, 2006 10:30 AM
Subject: Re: how to use PetscMap?
On Fri, 3 Nov 2006, Yixun Liu wrote:
Hi,
I use PetscMap as below,
PetscMap map;
ierr = PetscMapInitialize(PETSC_COMM_WORLD
(MPI_COMM_WORLD, 59) -
process 0
Best,
Yixun
- Original Message -
From: Barry Smith bsmith at mcs.anl.gov
To: petsc-users at mcs.anl.gov
Sent: Friday, November 03, 2006 10:51 AM
Subject: Re: how to use PetscMap?
Sorry about this; you can just use map->bs = 1
, 1.3382706765032754e-300,
0, -3.7946354577853773e-304, 6.3527319749529548e-289,
-1.0316342902978391e-298, -2.7857994540718757e-292,
1.3926426017545835e-277,
-3.7119818003349503e-287, -9.8649586986765956e-281...}
Billy.
Quoting Barry Smith bsmith at mcs.anl.gov
that it is not
memory corruption making the crazy values.
Barry
On Wed, 15 Nov 2006, billy at dem.uminho.pt wrote:
Hi,
How do you suggest I track the origin of the problem. Which data should I
look at?
Thanks,
Billy.
Quoting Barry Smith bsmith at mcs.anl.gov:
These numbers
On Wed, 22 Nov 2006, Yaron Kretchmer wrote:
Thanks Matt.
And Vec sits on top of BLAS/MPI?
Vec and Mat both talk directly to BLAS and MPI. It is rare that
PC talks directly to MPI/BLAS; usually it goes through Mat. KSP, SNES
and TS hardly ever use BLAS/MPI directly; ideally KSP, SNES and
Randy,
Good point. I've added an entry to the FAQ and to the MatType and MatSetType
manual pages.
Barry
On Wed, 22 Nov 2006, Randall Mackie wrote:
Satish,
This question seems to be asked on average once per week on
this list. Perhaps a section on symmetric matrices could
Unfortunately that routine is not currently wired; getting actual
memory usage is not portable and is a pain.
You can use PetscMallocGetMaximumUsage() to see the maximum amount
of memory PETSc has allocated at any one time (in all the PETSc objects).
Barry
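[A sketch of querying that high-water mark; in some builds malloc logging must be active for the number to be meaningful:]

    PetscLogDouble mx;
    ierr = PetscMallocGetMaximumUsage(&mx);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD,
                       "peak PETSc-allocated memory: %g bytes\n", mx);CHKERRQ(ierr);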
On Mon, 8 May 2006, Sh.M
George,
First, is it a structured grid in 2 or 3 dimensions or is it unstructured
with
finite elements?
If a structured grid you should use an example like
src/ksp/ksp/examples/tutorials/ex29.c
or ex22f.F or ex22.c or ex34.c
If it is finite elements on an unstructured
difference this time is the size of the model I'm working
with, which is substantially larger than typical.
I'm going to try to run this in the debugger and see if I can get
any more information.
Randy
Barry Smith wrote:
Randy,
The only PETSc related reason for this is that
xvec(i), i=1
Evrim,
From the manual page for MatLoad()
http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatLoad.html
Most users should not need to know the details of the binary storage format,
since MatLoad() and MatView() completely hide these details. But
Stephen,
This was an error in our last release and was not properly fixed
in a patch (why?). It will be in the next release.
You can edit include/finclude/petscmat.h and add
PetscEnum MAT_USE_COMPRESSEDROW,MAT_DO_NOT_USE_COMPRESSEDROW
PetscEnum
This is an error message from BLAS.
BLAS has a terrible model for error handling; it just prints
a message to screen that you cannot control and does not
return an error code :-(
Are you
1) using a BAIJ matrix with a block size bigger than 8?
2) using the bcgsl KSP solver, or
3) using dense matrices?
Stephen,
The difficulty with this routine is that it returns an array (the procs
and numprocs argument) and an array of arrays (2d array) as the last argument.
Returning 2d arrays from C to Fortran is not doable.
Thus I have devised a slightly different calling sequence for Fortran;
the problem? Is it just a little
typo in your code, or something more serious?
Regards
Stephen
-Original Message-
From: Barry Smith [mailto:bsmith at mcs.anl.gov]
Sent: 11 June 2006 05:13
To: petsc-users at mcs.anl.gov
Subject: EXTERNAL: Re: IS routines without a Fortran interface
it to.
Regards
Stephen
-Original Message-
From: Barry Smith [mailto:bsmith at mcs.anl.gov]
Sent: 05 June 2006 17:21
To: petsc-users at mcs.anl.gov
Subject: EXTERNAL: RE: EXTERNAL: Re: Problems using external
packages Spooles and Hy pre with PETSc v2. 3.1
Stephen
KSPSolve_BCGSL() in src/ksp/ksp/impls/bcgsl/bcgsl.c uses calls to gemv()
directly.
Could you please try the following. Edit the file and for all calls to
BLAS... and LAPACK... replace the character string by its first character.
For example replace noTr with N. Next before each
Bram,
I apologize for excluding those functions from the last release.
I will get them back into the development copy and send you a patch
as soon as I can.
Barry
On Mon, 19 Jun 2006, Bram Metsch wrote:
Hi,
in PETSc 2.3.1, PetscMap is no longer a public object. However,
The ltog has to exactly match the way the ghosts are provided to
VecCreateGhost(). Are they? Please send the exact example code.
Barry
On Tue, 20 Jun 2006, Thomas Geenen wrote:
Dear Petsc users,
I think that I do not use the ghosted vector concept in the right way.
I create a
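[A minimal sketch of the ghosted-vector pattern in question; the sizes and ghost indices are made up, and the local-to-global mapping must list the ghosts in exactly this order:]

    PetscInt nlocal = 4, nghost = 2;
    PetscInt ghosts[2] = {7, 12};   /* global indices of the ghost entries */
    Vec      v;

    ierr = VecCreateGhost(PETSC_COMM_WORLD, nlocal, PETSC_DECIDE,
                          nghost, ghosts, &v);CHKERRQ(ierr);
    /* fill the owned entries, then refresh the ghost copies */
    ierr = VecGhostUpdateBegin(v, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = VecGhostUpdateEnd(v, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);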
Bram,
Sorry for the delay. I've attached a new src/vec/vec/impls/mpi/pmap.c
just drop it in and do make lib shared in that directory. Also add the
following lines to include/private/vecimpl.h
EXTERN PetscErrorCode PETSCVEC_DLLEXPORT
PetscMapSetLocalSize(PetscMap*,PetscInt);
EXTERN
You may need to update BuildSystem if you have not in a long time.
Barry
On Wed, 28 Jun 2006, Laslo Tibor Diosady wrote:
I get the following error when trying to run configure
Jordi,
If you want to provide the rules to compile your code you
can instead just include bmake/common/variables; this sets the variables
like ${PETSC_KSP_LIB}, but you provide the rules for compiling.
Satish,
Do we have written down somewhere on the website for the two
ways of
Mat,
There is no built-in way to go from a bunch of separately created
small vectors to one large vector; this is because the default VECMPI in
PETSc requires all the vector entries to be stored contiguously.
But if you start with the large vector you can chop up the big array
and put a
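[A sketch of the chop-up-the-big-array idea; n1 and the vector names are assumptions, and the 2.3-era VecCreateSeqWithArray calling sequence is used (later releases add a block-size argument). The small vector shares storage with the big one, so nothing is copied:]

    PetscScalar *a;
    Vec         big, small;

    ierr = VecGetArray(big, &a);CHKERRQ(ierr);
    ierr = VecCreateSeqWithArray(PETSC_COMM_SELF, n1, a, &small);CHKERRQ(ierr);
    /* ... use small, which views the first n1 entries of big ... */
    ierr = VecDestroy(small);CHKERRQ(ierr);
    ierr = VecRestoreArray(big, &a);CHKERRQ(ierr);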
On Fri, 3 Feb 2006, Gang Cheng wrote:
Hi, guys,
I have some questions about the usage of petsc matrix and KSP. The
parallel Fortran code I'm working on simulates a biological process in
which a diffusion-reaction PDE needs to be solved at each time step in a
loop for many steps. Moreover,
On Mon, 6 Feb 2006, Roberto Gori wrote:
Hi, guys,
I'm trying to solve a linear system with M equations and N unknowns (M > N)
using the LSQR method.
I have a C parallel sparse matrix MxN with NZ nonzero elements stored in this
way:
double values[NZ]; // the nonzero values
int
On Mon, 6 Feb 2006, Barry Smith wrote:
On Mon, 6 Feb 2006, Roberto Gori wrote:
Many thanks Barry,
MatMPIAIJSetPreallocationCSR could be a good solution for me.
It seems that to satisfy the MatMPIAIJSetPreallocationCSR requirements I
should convert the nz array from a relative
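[A sketch of handing a CSR triple straight to the preallocation routine; i holds the row starts for the local rows, j the global column indices, v the values, and all names and sizes are assumptions:]

    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, mlocal, nlocal, M, N);CHKERRQ(ierr);
    ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
    ierr = MatMPIAIJSetPreallocationCSR(A, i, j, v);CHKERRQ(ierr);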
Satish,
Can you please check this? We should be able to
call KSPCGSetType() either before or afterwards. It could
be that KSPSetFromOptions_CG() wrongly resets to the default
value. I looked at the code and didn't see any wrong logic.
It is probably easiest to just run in the
linear system matrix = precond matrix:
Matrix Object:
type=aij, rows=9, cols=9
total: nonzeros=33, allocated nonzeros=45
not using I-node routines
Norm of error 0.1215E-05 iterations 4
asterix:/home/balay/spetsc/src/ksp/ksp/examples/tutorials
On Thu, 23 Feb 2006, Barry Smith
us know if this does not resolve the problem,
Barry
This will be fixed in our next 2.3.1 patch.
On Tue, 28 Feb 2006, Harald Pfeiffer wrote:
-start_in_debugger pathdb brings up gdb?! Ditto with noxterm.
Harald
Barry Smith wrote:
Then it should just work, what happens when you run
MatSolve 16000 1.0 1.1285e+02 1.4 1.50e+08 1.4 0.0e+00 0.0e+00
^
balance
Hmmm, I would guess that the matrix entries are not so well balanced?
One process takes 1.4 times as long for the
Mat,
This will not affect load balance or anything like that.
When you pass a communicator like MPI_COMM_WORLD to PETSc
we don't actually use that communicator (because you might
be using it and there may be tag collisions etc). So instead
we store our own communicator inside the
All processes that share the matrix must call MatGetSubMatrices()
the same number of times. If a process doesn't need a matrix it should
pass in zero length IS's. If you are always calling it with all processes
then you can run with -start_in_debugger and when it is hanging hit
control C in
Billy,
Since the three vectors are independent there is currently
no way to do this. We would need to add additional support to the
scatter operations to allow packing several scatters together
(actually not a bad idea).
This problem does not usually come up because we recommend
On Sat, 8 Apr 2006, Matthew Knepley wrote:
On 4/8/06, buket at be.itu.edu.tr buket at be.itu.edu.tr wrote:
Hi,
I have several questions;
Which ordering method does petsc use by default before matrix
factorization?
For ILU it is none (i.e. natural); for LU it is nested dissection.
Letian,
What do you mean by pre-conditioner matrix? It is very rare
that a preconditioner is explicitly represented as a matrix; it is almost
always just some code that applies the operator. In general an explicitly
represented preconditioner would actually be a dense matrix.
If you
to apply the saved matrix as pre-conditioner? Or do I have
to program a user-defined PC routine?
Thanks.
Letian
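[One way to apply a saved matrix M as a preconditioner is a shell PC. A sketch using the 2.3-era callback signature (later releases pass the PC itself to the apply function); Msaved and the function name are assumptions:]

    PetscErrorCode ApplySavedPC(void *ctx, Vec x, Vec y)
    {
      Mat M = (Mat)ctx;
      return MatMult(M, x, y);   /* y = M*x, M approximating inv(A) */
    }

    ierr = PCSetType(pc, PCSHELL);CHKERRQ(ierr);
    ierr = PCShellSetApply(pc, ApplySavedPC, (void*)Msaved);CHKERRQ(ierr);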
-Original Message-
From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at
mcs.anl.gov]
On Behalf Of Barry Smith
Sent: Wednesday, April 12, 2006 7:56 PM
What controls the iteration max and tolerance for the Richardson
solver are -inner_ksp_max_its its and -inner_ksp_rtol rtol
This is true for any preconditioner including hypre boomeramg.
The defaults are 1 and 1.e-5.
The problem is that the options
-inner_pc_hypre_boomeramg_max_iter 4
Randy,
1) I'd first change the scaling of the
with coefficient matrix values set to 1.0),
to match the other diagonal entries in the matrix. For example, if the
diagonal entries are usually 10,000 then multiply the boundary condition
rows by 10,000. (Note the diagonal entries may be
1) The PETSc LU and Cholesky solvers only run sequentially.
2) The parallel LU and Cholesky solvers PETSc interfaces to, SuperLU_dist,
MUMPS, Spooles, DSCPACK do NOT accept an external ordering provided for
them.
Hence we do not have any setup for doing parallel matrix
It is 5; also the length of GlobalColIndices is 5, not 10.
Each row you pass in has the values in the same columns.
Barry
On Mon, 8 Jan 2007, Matt Funk wrote:
Hi,
I was wondering, in MatSetValues, when I need to specify the number of
columns,
whether that is the number of column
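[So a call inserting one row with five nonzero columns looks like this; the indices and values are placeholders:]

    PetscInt    row     = 0;
    PetscInt    cols[5] = {0, 2, 4, 6, 8};     /* global column indices */
    PetscScalar vals[5] = {1.0, 2.0, 3.0, 4.0, 5.0};

    ierr = MatSetValues(A, 1, &row, 5, cols, vals, INSERT_VALUES);CHKERRQ(ierr);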
Isa,
The PETSc solvers are all deadlock-free; this means that if
used with a proper MPI implementation it will never deadlock
inside the solvers. The easiest way (and best and the one you should
use) to determine what is causing the problem is to use the additional
option
Barry Smith wrote:
1) The PETSc LU and Cholesky solvers only run sequentially.
2) The parallel LU and Cholesky solvers PETSc interfaces to, SuperLU_dist,
MUMPS, Spooles, DSCPACK do NOT accept an external ordering provided for
them.
Hence we do not have any setup
On Thu, 11 Jan 2007, Dimitri Lecas wrote:
Barry Smith wrote:
Dimitri,
No, I think this is not the correct way to look at things. Load
balancing the original matrix is not necessarily a good thing for
doing an LU factorization (in fact it is likely just to make the LU
the underlying grid (finite element etc) then generate
the matrix. If one does this then you do not repartition the matrix.
Barry
On Thu, 11 Jan 2007, Dimitri Lecas wrote:
Barry Smith wrote:
On Thu, 11 Jan 2007, Dimitri Lecas wrote:
Barry Smith wrote
Ben,
Sounds like you are using a logically rectangular grid in two dimensions
(or 3)? If so I highly recommend using the DMMG infrastructure in PETSc; it
handles all the decomposition of the domain into subrectangles, preallocates the
matrix, handles ghost point updates and even lets you
Ben,
Please send the configure.log to petsc-maint at mcs.anl.gov
It is supposed to work with the ATLAS libraries, but they
are often temperamental.
Barry
On Sat, 13 Jan 2007, Ben Tay wrote:
Hi,
I've ATLAS on my server. Can I use it for the blas/lapack library?
I tried to specify
On 1/13/07, Barry Smith bsmith at mcs.anl.gov wrote:
Ben,
This is partially our fault. We never run make clean on MPICH after
the
install so there are lots of .o and .a files lying around. I've updated
BuildSystem/config/packages/MPI.py to run make clean after
Ben,
You definitely have to submit a PETSc job just like
any other MPI job. So please try using the script.
Barry
On Tue, 16 Jan 2007, Ben Tay wrote:
Hi Pan,
I also got very big library files if I use PETSc with mpich2.
Btw, I have tried several options but I still don't
other ways of compilation?
Btw, I am using ifc 7.0 and icc 7.0. The codes are written in fortran.
Thank you.
On 1/16/07, Barry Smith bsmith at mcs.anl.gov wrote:
Ben,
You definitely have to submit a PETSc job just like
any other MPI job. So please try using the script
No, SuperLU_DIST requires a real MPI implementation
(but, of course, can be run on just one process if you want). But
for sequential solves you can just use SuperLU.
Barry
On Fri, 19 Jan 2007, #DOMINIC DENVER JOHN CHANDAR# wrote:
Hi
Is it possible to build the serial version
This routine does make sends and receives of length zero;
this is legal in MPI but possibly Open-MPI has bugs dealing with
this.
Barry
On Fri, 19 Jan 2007, Berend van Wachem wrote:
Hi,
valgrind shows something related to open-mpi, see below. I'll try mpich
to see if the problem
On Sat, 20 Jan 2007, Ben Tay wrote:
Hi,
I've encountered some MPI programming problems. Does anyone know of a
mailing list sort of discussion forum similar to this PETSc one which deals
with MPI?
I don't know. Open MPI has a users list, maybe that is appropriate
Ben,
You can use -pc_type redundant -redundant_pc_type lu to solve the
linear system with a sequential LU to test if that improves the situation.
Likely your generation of the matrix on 2 processes is wrong; you can run
with -mat_view on a small problem with both 1 and 2 processes and see
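[A sketch of that comparison; the executable name is an assumption:]

    mpirun -np 1 ./app -mat_view > mat.1proc
    mpirun -np 2 ./app -mat_view > mat.2proc
    diff mat.1proc mat.2proc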
On Wed, 24 Jan 2007, Victor Sofonea wrote:
Hi,
I'm using two-dimensional DAs in my code to create global and local
vectors
DA da;
Vec nloc, lnloc, zero_nloc;
PetscInt imax = 102, jmax=3;
DACreate2d(PETSC_COMM_WORLD, DA_YPERIODIC, DA_STENCIL_BOX,
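[The call is cut off above; a plausible completion for the 102x3 grid looks like this, where everything after the stencil type - the process layout, one degree of freedom, stencil width one - is an assumption:]

    DACreate2d(PETSC_COMM_WORLD, DA_YPERIODIC, DA_STENCIL_BOX,
               imax, jmax, PETSC_DECIDE, PETSC_DECIDE,
               1, 1, PETSC_NULL, PETSC_NULL, &da);
    DACreateGlobalVector(da, &nloc);
    DACreateLocalVector(da, &lnloc);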
Jan 2007, Barry Smith wrote:
Note that the PETSc X model has ALL nodes connect to the X server (not
just where rank == 0). Thus all of them need
1) the correct display value (-display mymachine:0.0 is usually the best
way to
provide it)
2) permission to access the display (X
On Thu, 25 Jan 2007, Fernando Campos wrote:
Dear Nicolas,
It is a good question, but I don't know the answer...
Why don't you try to set the maximum levels to a large number in order to
see how many levels boomerAMG creates? I believe that it refines the grid up
to the maximum
It seems extremely likely that you do not want to do this.
If you want to use SOR or SSOR as a preconditioner for GMRES simply
use -pc_type sor; see the manual page for PCSOR for other options.
Barry
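[For example, an SSOR-preconditioned GMRES run; the executable name and tolerance are placeholders:]

    ./app -ksp_type gmres -pc_type sor -pc_sor_symmetric -ksp_rtol 1e-8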
On Sun, 28 Jan 2007, jiaxun hou wrote:
Hi,
I want to add a smooth method in the
Lisandro,
Good point. One could use (always run with -ksp_view to ensure it is doing
what you think).
-pc_type composite -ksp_type richardson -pc_composite_pcs ksp,sor
-sub_0_ksp_ksp_max_it 5
This would run 5 iterations of the default KSP and PC followed by a sweep of
SSOR followed
PETSc also does extensive additional assert type testing of input arguments
to functions etc in the debug version.
Barry
On Mon, 29 Jan 2007, Matthew Knepley wrote:
On 1/29/07, Ben Tay zonexo at gmail.com wrote:
Hi,
May I know what's the difference between the optimized and
-
Yes, each block is a rectangular portion of the domain. Not so small
though (more like 100 x 100 nodes)
Yaron
---Original Message---
From: Barry Smith
Subject: Re: Non-uniform 2D mesh questions
Sent: 29 Jan '07 19:40
Yaron,
Is each one of these blocks
G.D.
You can compile PETSc to use long double with the additional
configure/configure.py options --with-precision=longdouble
--download-c-blas-lapack
It compiles a special version of Blas/Lapack for long double.
I'm not sure this is really the best way to go.
I would suggest
Please see the message I posted yesterday.
Date: Mon, 4 Dec 2006 19:51:20 -0600 (CST)
From: Barry Smith bsmith at
mcs.anl.gov
Jianing,
There is no "by hand" in PETSc! All you do is access the next
set of grid points via the indices in the usual way. So
in two dimensions, to access the values to the left of x[j][0]
use x[j][-1]; to the right of x[j][nx-1] use x[j][nx]. (Recall the
i,j indices are reversed in the C
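[A sketch of that indexing with the array obtained from the DA; the variable names are assumptions, localX must be a ghosted local vector, and xs, xm come from DAGetCorners():]

    PetscScalar **x, left, right;

    ierr = DAVecGetArray(da, localX, &x);CHKERRQ(ierr);
    /* ghost points just outside the owned patch are directly addressable */
    left  = x[j][xs-1];      /* value to the left of the first owned column */
    right = x[j][xs+xm];     /* value to the right of the last owned column */
    ierr = DAVecRestoreArray(da, localX, &x);CHKERRQ(ierr);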
Ben,
First make sure that the residual has actually gotten
to the tolerance you want. PETSc KSP does NOT stop if the
linear system has not converged; you should call KSPGetConvergedReason()
after each solve to make sure it has converged (as a quick check
you can run with
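[A sketch of that check after each solve:]

    KSPConvergedReason reason;

    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
    if (reason < 0) {
      ierr = PetscPrintf(PETSC_COMM_WORLD,
                         "KSP diverged, reason %d\n", (int)reason);CHKERRQ(ierr);
    }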
The code is not scalable for large matrices; I would argue that
it doesn't make sense to visualize such large matrices in this way.
Each matrix entry is tiny compared to a single pixel on the screen.
For PDE problems and most others, the non-zero structure of the
matrix is well represented
David,
Depending on what you wish to do it may be very easy. The logging
is all done on a per process basis (each process just logs its stuff).
PetscLogPrintSummary() takes as an argument a communicator and summarizes
all the data over THAT communicator. So it may be as simple as calling
Please send all the output from running make to petsc-maint at mcs.anl.gov
Barry
On Thu, 7 Dec 2006, Saikrishna V. Marella wrote:
Barry,
I tried using -with-precision=matsingle but it gives the following error.
Cannot convert PetscScalar* to MatScalar* in assignment. Is
On Mon, 11 Dec 2006, Nicolas Bathfield wrote:
Hi,
I would like to compare PETSc DMMG convergence to some theoretical
convergence rate given by local Fourier analysis for a simple diffusion
problem.
To do so, I need to extract the residuals after each multigrid iteration.
Unfortunately,
On Wed, 13 Dec 2006, Ben Tay wrote:
Hi,
I have some problems with using mpi. My code works with one server but when
I test it on another system, I get the wrong answers. In that system, the
mpi is located at /usr/local/topspin/mpi/mpich/. I tried to specify it with
--with-mpi-dir but it
You can delete src and docs
Barry
On Sat, 3 Mar 2007, Ben Tay wrote:
Hi,
I've used --download-hypre=1 to integrate hypre into PETSc. I've found the
hypre directory in externalpackages. hypre takes up about 136mb. Can I
delete the files other than the library files (such as src)
Ben,
Sounds like you are using your own makefiles with your own list
of link libraries. Instead of removing the mpich.lib from the linking
options you must replace it with the mpiuni library (which is a stub
library for one process) libmpiuni.lib
Barry
On Mon, 5 Mar 2007, Ben Tay wrote:
Aaron,
You can use PCASMSetLocalSubdomains(pc,n,is[]) where is[] is an
array of one or more IS index sets that define the subdomains (with
overlap) for that process. Usually n is 1, which means one subdomain
per processor. The routine PCASMCreateSubdomains2D() may be helpful
for you in
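[A sketch of one overlapping subdomain per process, using the 2.3-era calling sequences (ISCreateGeneral and PCASMSetLocalSubdomains gained extra arguments later); nidx and indices are assumptions:]

    IS is;

    /* indices (with overlap) of the unknowns in this process's subdomain */
    ierr = ISCreateGeneral(PETSC_COMM_SELF, nidx, indices, &is);CHKERRQ(ierr);
    ierr = PCASMSetLocalSubdomains(pc, 1, &is);CHKERRQ(ierr);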
On Sat, 10 Mar 2007, knutert at stud.ntnu.no wrote:
Thank you for your reply.
One boundary cell is defined to have constant pressure, since that makes the
equation system have a unique solution.
This is really not the best way to do it; it can produce very ill-conditioned
matrices.
I
Thomas,
Please send configure.log to petsc-maint at mcs.anl.gov
Thanks
Barry
On Fri, 16 Mar 2007, Thomas Geenen wrote:
Dear PETSc users,
I built PETSc on a POWER5+ machine.
Now I would like to add MUMPS.
If I do that the way it should be done I end up with a version in
PETSc-users,
A representative of Interactive Supercomputing has asked us to
gauge if there is interest among PETSc users in calling PETSc from
higher level systems such as Matlab (and presumably money to pay for
it :-). I've attached a portion of the message below.
If you are interested
MatGetArray() and MatGetRowIJ() together.
On Fri, 30 Mar 2007, yaron at oak-research.com wrote:
Hi Petsc-users
I'd like to get the 3-vector description (NZ values, indices, pointers) from
a local sparse matrix - I need those in order to interface a PETSc matrix
object with another software
Yaron
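[A sketch of pulling the three arrays off a sequential AIJ matrix with the 2.3-era calling sequences (these routines changed or were renamed in later releases):]

    PetscInt    n, *ia, *ja;
    PetscScalar *va;
    PetscTruth  done;

    ierr = MatGetRowIJ(A, 0, PETSC_FALSE, &n, &ia, &ja, &done);CHKERRQ(ierr);
    ierr = MatGetArray(A, &va);CHKERRQ(ierr);
    /* ia = row pointers, ja = column indices, va = nonzero values */
    ierr = MatRestoreArray(A, &va);CHKERRQ(ierr);
    ierr = MatRestoreRowIJ(A, 0, PETSC_FALSE, &n, &ia, &ja, &done);CHKERRQ(ierr);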
---Original Message---
From: Barry Smith
Subject: Re: Non-uniform 2D mesh questions
Sent: 02 Feb '07 13:38
Yaron,
Anything is possible :-) and maybe not terribly difficult to get
started.
You could use DAGetMatrix() to give you the properly pre
, yaron at oak-research.com wrote:
Barry-
So far I only thought of having a single large sparse matrix.
Yaron
---Original Message---
From: Barry Smith
Subject: Re: Non-uniform 2D mesh questions
Sent: 30 Jan '07 10:58
Yaron,
Do you want to end up