ERENCE;
> recompile and run again
>
>Barry
>
>
> > On Jun 19, 2018, at 3:28 PM, Gaetan Kenway wrote:
> >
> > I tried the patch. It initially didn't compile because of a conflict
> with the corresponding ftn-auto file. When I commented out the stuff ther
>
>
> Please let us know if it works,
>
> Thanks
>
> Barry
>
>
> > On Jun 19, 2018, at 12:36 PM, Gaetan Kenway wrote:
> >
> > Seems like the MatMFFDSetBase is now broken in petsc 3.9
> >
> > In 3.7 I used:
> >
"null" since PETSC_NULL_OBJECT is gone?
Thanks,
Gaetan
On Tue, Jun 19, 2018 at 10:36 AM Gaetan Kenway wrote:
> Seems like the MatMFFDSetBase is now broken in petsc 3.9
>
> In 3.7 I used:
>
> call MatMFFDSetBase(dRdW, wVec, PETSC_NULL_OBJECT, ierr)
>
> and the most lo
Seems like the MatMFFDSetBase is now broken in petsc 3.9
In 3.7 I used:
call MatMFFDSetBase(dRdW, wVec, PETSC_NULL_OBJECT, ierr)
and the most logical translation is now
call MatMFFDSetBase(dRdW, wVec, PETSC_NULL_VEC, ierr)
Unfortunately this just segfaults:
#0 VecAXPY (y=y@entry=0x10a3750,
I have compiled CGNS3.3 with hdf from PETSc but not using cmake from CGNS.
Here is what I used:
git clone https://github.com/CGNS/CGNS.git
cd CGNS/src
export CC=mpicc
export FC=mpif90
export FCFLAGS='-fPIC -O3'
export CFLAGS='-fPIC -O3'
export LIBS='-lz -ldl'
./configure
Interesting. The main thing is that it's now sorted out and the solver is back
in production.
Thanks for your help
On Fri, Aug 11, 2017 at 1:05 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
>
> > On Aug 11, 2017, at 2:43 PM, Gaetan Kenway <gaet...@gmail.com> wrote:
> >
<bsm...@mcs.anl.gov> wrote:
>
>Thanks for confirming this. The change was actually in the 3.4 release.
> I have updated the 3.4 changes file to include this change in both the
> maint and master branches.
>
> Barry
>
> > On Aug 11, 2017, at 12:47 PM, Gaetan
this resolves the problem and we'll update the
> changes file.
>
>Barry
>
>
>
> > On Aug 11, 2017, at 12:14 PM, Gaetan Kenway <gaet...@gmail.com> wrote:
> >
> > Hi All
> >
> > I'm in the process of updating a code that uses PETSc for solving
Hi All
I'm in the process of updating a code that uses PETSc for solving linear
systems for an unstructured CFD code. Until recently, it was using an
ancient version (3.3). However, when I updated it to 3.7.6 I ended up
running into issues with one of the KSP solves. The remainder of the code
Maybe try doing
pComm=MPI.COMM_WORLD instead of PETSc.COMM_WORLD. I know it shouldn't
matter, but it's worth a shot. Also then you won't need the tompi4py() I
guess.
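For concreteness, a minimal sketch of the two ways of getting that communicator (assuming PETSc is initialized on MPI_COMM_WORLD):

from mpi4py import MPI
from petsc4py import PETSc

# Option 1: hand mpi4py's world communicator to the rest of the code directly
pComm = MPI.COMM_WORLD

# Option 2: convert PETSc's world communicator to an mpi4py communicator;
# once PETSc is initialized on MPI_COMM_WORLD these refer to the same thing
pCommFromPetsc = PETSc.COMM_WORLD.tompi4py()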
Gaetan
On Wed, Apr 12, 2017 at 10:17 AM, Gaetan Kenway <gaet...@gmail.com> wrote:
> Hi Rodrigo
>
> I just r
Hi Rodrigo
I just ran your example on Nasa's Pleiades system. Here's what I got:
PBS r459i4n11:~> time mpiexec -n 5 python3.5 another_split_ex.py
number of subcomms = 2.5
petsc rank=2, petsc size=5
sub rank 1/3, color:0
petsc rank=4, petsc size=5
sub rank 2/3, color:0
petsc rank=0, petsc size=5
One other quick note:
Sometimes it appears that mpi4py and petsc4py do not always play nicely
together. I think you want to do the petsc4py import first and then the
mpi4py import. Then you can split the MPI.COMM_WORLD all you want and
create any petsc4py objects on them. Or if that doesn't
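A sketch of that ordering (the split and the vector size are arbitrary placeholders):

import sys
import petsc4py
petsc4py.init(sys.argv)        # petsc4py import/init first
from petsc4py import PETSc
from mpi4py import MPI         # mpi4py afterwards

# split the world communicator however the application needs
color = MPI.COMM_WORLD.rank % 2
subcomm = MPI.COMM_WORLD.Split(color=color, key=MPI.COMM_WORLD.rank)

# petsc4py objects can then live on the sub-communicator
v = PETSc.Vec().createMPI(10, comm=subcomm)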
Hi all
I think I remember having this or a similar issue at some point as well.
The issue was we had two (Python-wrapped) codes on MPI_COMM_WORLD but
only one of them used PETSc. We never did figure out how to get PETSc
initialized on just the one sub-comm. The workaround was just to do from
There shouldn't be any additional issue with the petsc4py wrapper. We do
this all the time. In fact, it's generally best to use petsc4py to do
the initialization of PETSc at the very top of your highest-level Python
script. You'll need to do this anyway if you want to use command-line options.
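In practice that top-of-script initialization is just:

# very top of the highest-level driver, before anything else touches PETSc
import sys
import petsc4py
petsc4py.init(sys.argv)        # forwards -ksp_type, -pc_type, ... to PETSc
from petsc4py import PETSc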
source from run_analysis.f90. You
>>>>> still need to compile that yourself and include in the final link. In my
>>>>> example, all the "original" source code was precompiled into a library
>>>>> from
>>>>> a different makefile
llocation_SeqAIJ() line 3598 in
>> /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/impls/aij/seq/aij.c
>> [0]PETSC ERROR: #2 MatSeqAIJSetPreallocation() line 3570 in
>> /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/impls/aij/se
;
> Seems I may have gotten the linking wrong somehow. Will keep searching,
> but the simplified makefile that I used is attached in case anyone thinks
> they might be able to spot the issue in it. That said, I do realize that
> this may be starting to reach beyond the scope of this mailing list.
the origin of this file, please do tell me!
>
> Thank you,
> Austin
>
> On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway <gaet...@gmail.com> wrote:
>
>> Austin
>>
>> Here is the full makefile for a code we use. The variables defined
>> externally in a sep
Austin
Here is the full makefile for a code we use. The variables defined
externally in a separate config file are:
$(FF90)
$(FF90_FLAGS)
$(LIBDIR)
$(PETSC_LINKER_FLAGS)
$(LINKER_FLAGS)
$(CGNS_LINKER_FLAGS)
$(PYTHON)
$(PYTHON-CONFIG)
$(F2PY)
(These are usually just python, python-config and f2py.)
We do this all the time. The trick is that you can't use f2py to actually
do *any* of the compiling/linking for you. You just have to use it to get
the module.c file and .f90 file. Then compile that yourself and
use whatever compile flags, linking etc you would normally use to get a
.so. We've had
<mfad...@lbl.gov> wrote:
> Gaetan, This was simple. If you are set up to easily check this, you can
> test this in my branch mark/gamg-aijcheck -- you should get an error
> message "Require AIJ matrix."
> Thanks again,
>
>
> On Fri, Aug 12, 2016 at 11:08 AM, Ga
> On Fri, Aug 12, 2016 at 7:02 AM, Lawrence Mitchell <
> lawrence.mitch...@imperial.ac.uk> wrote:
>
>> [Added petsc-maint to cc, since I think this is an actual bug]
>>
>> > On 12 Aug 2016, at 01:16, Gaetan Kenway <gaet...@gmail.com> wrote:
>> >
>>
at 11:10 AM, Jed Brown j...@jedbrown.org wrote:
Gaetan Kenway gaet...@gmail.com writes:
The untransposed system converges about 6 orders of magnitude with
GMRES(100), ASM (overlap 1) and ILU(1) with RCM reordering. The test is
run
on 128 processors. There are no convergence difficulties
That is a good idea to try, Francois. Do you know if there is an easy way to
try that in PETSc? However, in my case, I'm not using an upwind scheme, but
rather a 2nd order JST scheme for the preconditioner. Also, we have
observed the same behavior even for Euler systems, although both the
Hi everyone
I have a question relating to preconditioning effectiveness on large
transposed systems. The linear system I'm trying to solve is the Jacobian
matrix of a 3D RANS CFD solver. The block matrix consists of about 3 million
block rows with a block size of 6: 5 for the inviscid part and 1 for the
Hi Xiangdong
We have used the Tapenade AD package to compute sparse matrix Jacobians for
CFD problems. We use the result from the AD to populate a PETSc sparse
matrix and then use PETSc to solve the resulting linear system.
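As a rough illustration only (not the solver code from this thread; the CSR arrays are placeholders standing in for the AD output), the petsc4py side of that looks something like:

from petsc4py import PETSc
import numpy as np

# placeholder CSR data standing in for the AD-computed Jacobian
rows = np.array([0, 2, 4], dtype=PETSc.IntType)     # row pointer
cols = np.array([0, 1, 0, 1], dtype=PETSc.IntType)  # column indices
vals = np.array([4.0, -1.0, -1.0, 4.0])             # nonzero values

A = PETSc.Mat().createAIJ(size=(2, 2), csr=(rows, cols, vals))
A.assemble()

b = A.createVecLeft()
b.set(1.0)
x = A.createVecRight()

ksp = PETSc.KSP().create(A.getComm())
ksp.setOperators(A)
ksp.setFromOptions()    # pick up -ksp_type, -pc_type, ... from the command line
ksp.solve(b, x)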
Hope that helps.
Gaetan
On Wed, Oct 1, 2014 at 4:23 PM, Barry Smith
Hi Antoine
We are also using PETSc for solving adjoint systems resulting from CFD. To
get around the MatSolveTranspose issue we just assemble the transpose
matrix directly and then call KSPSolve(). If this is possible in your
application, I think it is probably the best approach.
Gaetan
On Fri,
Hi Jonathan
I've successfully used petsc shell matrices from Python. There is an
example of how to do it in demo/poisson2d/poisson2d.py. The example is a
little sparse, but it has all the important information. The most important
thing is to define the
def mult(self, mat, x, y):
routine inside of a
, Gaetan Kenway gaet...@gmail.com wrote:
Hi Jonathan
I've successfully used petsc shell matrices from Python. There is an
example of how to do it in demo/poisson2d/poisson2d.py. The example is a
little sparse, but it has all the important information. The most important
is to define the
def
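A minimal petsc4py shell-matrix sketch along those lines (illustrative only; the diagonal-scaling mult is a placeholder, not the demo's operator):

from petsc4py import PETSc

class MyMatCtx(object):
    # the context object the shell matrix dispatches to
    def mult(self, mat, x, y):
        # y <- A*x; placeholder: scale by 2
        x.copy(y)
        y.scale(2.0)

n = 100
A = PETSc.Mat().createPython([n, n], context=MyMatCtx(), comm=PETSc.COMM_WORLD)
A.setUp()

x = A.createVecRight()
x.set(1.0)
y = A.createVecLeft()
A.mult(x, y)   # calls MyMatCtx.mult under the hood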
on processor 0, is it guaranteed that a residual
on processor 1, that is influenced by the 0-colored cell on proc zero is
*only* influenced by the color 0 from proc 0 and not a 0-colored cell on
proc 1?
I hope that is clear
Thank you,
Gaetan Kenway
, 2013 at 11:39 AM, Peter Brune br...@mcs.anl.gov wrote:
On Tue, Nov 5, 2013 at 10:29 AM, Gaetan Kenway gaet...@gmail.com wrote:
Hi
I have a quick question regarding the MatGetColoring() function.
According to the documentation,
For parallel matrices currently converts to sequential matrix
, Gaetan Kenway gaet...@gmail.com wrote:
Hi Peter
Thanks for the info. I actually already have a coloring that is done
through the discretization that is near optimal. The issue is that I have
a funny interprocessor dependence that makes it tricky to do my own sequential
coloring. So I was looking
Hi again
It runs if the mattype is mpiaij instead of mpibaij. I gather this is not
implemented for the blocked matrix types?
Gaetan
On Mon, May 20, 2013 at 9:26 AM, Gaetan Kenway gaet...@gmail.com wrote:
Hi again
I installed petsc3.4.0 and I am still getting the following error when
, 2013 at 9:26 AM, Gaetan Kenway gaet...@gmail.comwrote:
Hi again
I installed petsc3.4.0 and I am still getting the following error when
running with the following options (on 64 procs)
# Matrix Options
-matload_block_size 5 -mat_type mpibaij
# KSP solver options
-ksp_type fgmres -ksp_max_it
, I can use geometric multigrid,
provided I construct the restriction and prolongation operators myself.
I guess geometric multigrid is the best approach here.
Thank you
Gaetan
On Mon, Apr 29, 2013 at 9:40 AM, Jed Brown jedbrown at mcs.anl.gov wrote:
Gaetan Kenway gaetank at gmail.com writes
efficiently.
Gaetan
On Mon, Apr 29, 2013 at 10:16 AM, Jed Brown jedbrown at mcs.anl.gov wrote:
Gaetan Kenway gaetank at gmail.com writes:
The problems I am looking at are steady state or quasi-steady state (time
spectral approach). The example I sent before was steady state. The
nonlinear
at all, but
there's not much else you can do.
Gaetan
On Mon, Apr 29, 2013 at 10:31 AM, Jed Brown jedbrown at mcs.anl.gov wrote:
Gaetan Kenway gaetank at gmail.com writes:
For the forward solve I use ASM+ILU in the same manner as for the adjoint
problem.
The ASM is not a bottleneck per se
, that is certainly something to try.
Thank you for your help,
Gaetan
On Mon, Apr 29, 2013 at 10:51 AM, Jed Brown jedbrown at mcs.anl.gov wrote:
Gaetan Kenway gaetank at gmail.com writes:
It is an SA turbulence model and the discrete adjoint is computed exactly
with
AD. Certainly the grids are highly
?
Thank you,
Gaetan Kenway
On Mon, Apr 29, 2013 at 11:19 AM, Gaetan Kenway gaetank at gmail.com wrote:
It is possible the information provided by the discrete adjoint here is
somewhat less meaningful, but I need to analyze them for
off-design conditions for optimizations. I am using
a centered
That makes sense. Is there a reasonably easy way of doing that in PETSc
currently for reasonably large systems?
Gaetan
On Mon, Apr 29, 2013 at 4:32 PM, Jed Brown jedbrown at mcs.anl.gov wrote:
Gaetan Kenway gaetank at gmail.com writes:
I would be very interested in looking
, Gaetan Kenway gaetank at gmail.com wrote:
That makes sense. Is there a reasonably easy way of doing that in PETSc
currently for reasonably large systems?
Gaetan
On Mon, Apr 29, 2013 at 4:32 PM, Jed Brown jedbrown at mcs.anl.gov wrote:
Gaetan Kenway gaetank at gmail.com writes:
I would
problems; all of the various options result in a preconditioner that is not
significantly better than not using any preconditioner at all.
Any suggestions would be greatly appreciated.
Thank you,
Gaetan Kenway
The complete source code listing is below:
program main
! Test different petsc solution
Hi everyone
I have exactly the same issue actually. When I updated to petsc-3.3,
SuperLU_dist was giving me random answers to KSPSolve(). Maybe half of the
time you would get the same result as 3.2, other times it was a little off
and other times widely different. I am using SuperLU_dist with a
= VecAYPX(r,-1.0,b);CHKERRQ(ierr);
}
Also, it makes the degenerate case of 1 iteration have the same cost as if the
KSPRichardson wasn't there.
Hopefully this is useful.
Gaetan Kenway
Link to source code:
http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/impls/rich/rich.c.html
On Fri, Aug 31, 2012 at 7:27 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Hi
I'm having an issue with the PCMG preconditioner in PETSc. I'm trying to
use PCMG to precondition a linear system of equations resulting from the
discretization of the Euler equations. It is from a multi-block
line 180
/nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c
Gaetan
On Fri, Aug 31, 2012 at 11:03 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Hi again
I distilled the problem down further to a simple subroutine in mgtest.F,
copied directly from ex8f.F
the coarse grid operators manually?
Thank you,
Gaetan
On Sat, Sep 1, 2012 at 12:29 AM, Jed Brown jedbrown at mcs.anl.gov wrote:
On Fri, Aug 31, 2012 at 11:22 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Hi Again
I also tried petsc-3.2 version and I still get the same backtrace
. Did you
set the fine grid operators before calling PCSetUp()? The coarse grid
operators should have been constructed earlier in PCSetUp_MG().
On Sep 1, 2012 10:05 AM, Gaetan Kenway kenway at utias.utoronto.ca wrote:
I believe I partially tracked down my problem. The issue arises at line
644
have a look at?
Thanks,
Gaetan
On Sat, Sep 1, 2012 at 4:54 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
The full calling sequence is:
call KSPCreate(SUMB_COMM_WORLD, ksp, PETScIerr)
call EChk(PETScIerr,__FILE__,__LINE__)
useAD = .False.
useTranspose = .True.
usePC
,__FILE__,__LINE__)
sets mg->galerkin to a value of -1 and the code above is expecting a value
of 1.
Gaetan
On Sat, Sep 1, 2012 at 5:34 PM, Jed Brown jedbrown at mcs.anl.gov wrote:
On Sat, Sep 1, 2012 at 4:20 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Sorry about the last email...I hit
, 2012 at 6:33 PM, Barry Smith bsmith at mcs.anl.gov wrote:
On Sep 1, 2012, at 5:16 PM, Gaetan Kenway kenway at utias.utoronto.ca
wrote:
Hi Jed
After probing why the following code snippet isn't run:
if (mg->galerkin == 1) {
Mat B;
/* currently only handle case where mat
1, 2012 at 7:02 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Hi Jed
That commit fixes the mg->galerkin issue.
I'm still having issues with the projection/restriction matrices. Are the
projection/restriction matrices supposed to be serial matrices operating on
the unknowns on each
Hi
I'm having an issue with the PCMG preconditioner in PETSc. I'm trying to
use PCMG to precondition a linear system of equations resulting from the
discretization of the Euler equations. It is from a multi-block code so I
have the ability to easily generate restriction operators using geometric
Hi
I was wondering if anyone had any experience with using these new matrix
formats in PETSc. I have a block aij matrix (with block size 5) and tried
converting to either of these types and it just trashed memory for 10
minutes and then I killed it. Matrix assembly takes 13 seconds, for reference.
I
bsmith at mcs.anl.gov wrote:
You have no reason to use those formats. They are specialized for the
Cray X1 vector machine which no longer exists. They will not be faster on
another machine. Sorry for the confusion.
Barry
On Aug 30, 2012, at 3:51 PM, Gaetan Kenway kenway
, 2012 at 11:19 PM, Jed Brown jedbrown at mcs.anl.gov wrote:
On Mon, Aug 27, 2012 at 11:13 PM, Gaetan Kenway kenway at utias.utoronto.ca
wrote:
Excellent!
It finally runs without segfaulting. Now... to get it to actually solve my
problem :)
Great. This is not a real fix, but the real
get accidentally overwritten somehow? I also tried creating
the 'psnes' first (with a SNESCreate), and then using SNESSetPC()
instead, but the same error happens.
Gaetan
On Mon, Aug 27, 2012 at 11:21 PM, Jed Brown jedbrown at mcs.anl.gov wrote:
On Mon, Aug 27, 2012 at 9:52 PM, Gaetan Kenway
= SNESSetTolerances(snes->pc, 0.0, 0.0, 0.0, 1,
snes->pc->max_funcs);CHKERRQ(ierr);
ierr = SNESSetNormType(snes->pc, SNES_NORM_FINAL_ONLY);CHKERRQ(ierr);
On Mon, Aug 27, 2012 at 10:44 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Following your suggestion I changed by calling routines
Hi
I am trying to use a PETSc snes shell object (with hopes of using it as the
'preconditioner' in another snes ngmres object). I've upgraded to petsc-3.3
but when I compile my code it complains that
/home/mica/kenway/hg/SUmb/src/NKsolver/setupNKSolver2.F90:47: undefined
reference to
at mcs.anl.gov wrote:
On Mon, Aug 27, 2012 at 11:34 AM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Is the snesshell functionality not implemented in Fortran?
It needs a custom Fortran binding which has not been written yet. Peter,
can you add this to petsc-3.3?
What determines when bfort
problems, let
me know.
On Mon, Aug 27, 2012 at 12:24 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Excellent. Thanks.
The reason for using the snes shell solver is that I would like to try out the
ngmres snes solver. The code I'm working with already has a
non-linear multi-grid method
any examples of using NGMRES?
Thanks,
Gaetan
On Mon, Aug 27, 2012 at 5:06 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Thanks very much for the very quick response.
I will pull the change from the mercurial repository and give it a try in
the next couple of days.
Gaetan
On Mon
backtrace from the debugger might give me a better notion
of exactly what is going wrong. You can get this output conveniently using
the -on_error_attach_debugger.
Thanks,
- Peter
On Mon, Aug 27, 2012 at 6:51 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Hi Again
I've pulled the most
jedbrown at mcs.anl.gov wrote:
On Mon, Aug 27, 2012 at 7:31 PM, Gaetan Kenway kenway at
utias.utoronto.cawrote:
Unfortunately I can't run backtraces since the code is running from Python
and the -on_error_attach_debugger option has no effect when you're running
from Python.
You can use
it would be more robust than using the F90 pointer form?
Also, would this be a suitable place to use the application ordering in
petsc?
Thanks,
Gaetan Kenway
)
if self.AS.isStruct:
y = self.AS.solver.globalNKPreCon(x, y)
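That apply call is essentially a Python-side preconditioner; a comparable petsc4py 'python'-type PC (sketch only, with a placeholder identity apply) looks like:

from petsc4py import PETSc

class MyPC(object):
    # context object for a PC of type 'python'
    def setUp(self, pc):
        pass
    def apply(self, pc, x, y):
        # y <- M^{-1} x; placeholder: identity preconditioner
        x.copy(y)

pc = PETSc.PC().create(PETSc.COMM_WORLD)
pc.setType(PETSc.PC.Type.PYTHON)
pc.setPythonContext(MyPC())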
On Wed, Mar 7, 2012 at 10:14 AM, Jed Brown jedbrown at mcs.anl.gov wrote:
On Wed, Mar 7, 2012 at 08:49, Gaetan Kenway kenway at
utias.utoronto.cawrote:
What is the 'with block' option? I don't see anything related
. Each code takes care
of its own topology, communication etc, so my petsc4py objects just have
the owned rows with no halos.
Thanks for all your help
Gaetan
On Wed, Mar 7, 2012 at 4:08 PM, Matthew Knepley knepley at gmail.com wrote:
On Wed, Mar 7, 2012 at 3:04 PM, Gaetan Kenway kenway
Hello
I'm in the process of using petsc4py to solve a large multidisciplinary,
non-linear system and its adjoint. I have the non-linear solution with
snes() working correctly and I'm now doing the linear solution.
For the non-linear solve, I create the snes and set my user context as
follows:
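The snippet is cut off in the archive; a generic petsc4py version of that setup (illustrative only, with a placeholder residual and a stand-in AppCtx for the user context) would be:

from petsc4py import PETSc

class AppCtx(object):
    """Holds whatever the residual evaluation needs."""
    pass

def form_function(snes, x, f, app):
    # placeholder residual: f(x) = x
    x.copy(f)

app = AppCtx()
n = 10
r = PETSc.Vec().createMPI(n)                 # work vector for the residual
snes = PETSc.SNES().create(PETSc.COMM_WORLD)
snes.setFunction(form_function, r, args=(app,))
snes.setFromOptions()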
silly here?
Thanks,
Gaetan
On Aug 25, 2011, at 1:29 PM, Gaetan Kenway wrote:
Hello
I've managed to get the C function for freeing preconditioner memory
written. The contents of my new 'pcasmfreespace.c' are below:
#include "private/pcimpl.h" /*I "petscpc.h" I*/
typedef struct {
PetscInt n
same_local_solves; /* flag indicating whether all local
solvers are same */
PetscBool sort_indices;/* flag to sort subdomain indices */
} PC_ASM;
before the subroutine.
Barry
On Aug 21, 2011, at 3:24 PM, Gaetan Kenway wrote:
Hello
I am attempting to implement a hack that was posted
Hello
I am trying to set up an additive Schwarz preconditioner using multiple
blocks on each processor. My code is a 3D multiblock CFD code and what I am
trying to do is to put each CFD block into its own subdomain, even if there
is more than one block on each processor. The PETSc help on this
Hello
I'm wondering if it is possible to put an external procedure reference in a
ctx() in Fortran.
I'm in the process of writing a Newton-Krylov solver for an
aero-structural system. My two different codes are wrapped with Python and
so each code is called through Python for residual and
have to be
coded from scratch.
Any suggestions are greatly appreciated
Gaetan Kenway
*** BEGIN CODE
! Dummy assembly begin/end calls for the matrix-free matrix
call MatAssemblyBegin(dRdw,MAT_FINAL_ASSEMBLY,ierr)
call EChk(ierr,__FILE__,__LINE__)
call
Hello
I am wondering how one sets options for the sub-ksp contexts when using
the ASM preconditioner for the Krylov solve in a SNES. After I create
the snes context, set the FormFunction routine, and set the SNESSetJacobian
routine, I run something like this:
call SNESGetKSP(snes,ksp,ierr)
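The Fortran snippet is cut off here; in petsc4py terms the usual sequence is the following sketch (illustrative only; -sub_ksp_type/-sub_pc_type are the standard ASM sub-solver options):

from petsc4py import PETSc

snes = PETSc.SNES().create(PETSc.COMM_WORLD)
# ... setFunction / setJacobian as usual ...

ksp = snes.getKSP()
pc = ksp.getPC()
pc.setType(PETSc.PC.Type.ASM)

# simplest route: drive the sub-solves from the options database
opts = PETSc.Options()
opts['sub_ksp_type'] = 'preonly'
opts['sub_pc_type'] = 'ilu'
ksp.setFromOptions()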
Hello
I use PETSc with Fortran. I was wondering if the CHKERRQ(ierr) command
is supposed to work in Fortran? My compiler is mpif90 with ifort. If I
do something like this:
call VecCreate(WARP_COMM_WORLD,globalSurfForce,ierr)
CHKERRQ(ierr)
ifort complains there is a syntax error.
I also
Hello
I'm still trying to write out my mpiblockaij matrix. I see the
development version now supports writing the baij format, but I was just
trying to do the matconvert fix to write out the matrix. The code I'm
trying to run to convert the matrix is:
call
.
Any guidance would be greatly appreciated.
Gaetan Kenway
Ph.D Candidate
University of Toronto Institute for Aerospace Studies