Haha, there's a reason nobody uses it. ;-)
Lukas van de Wiel writes:
> Well, I would rather not use a logo at all than use that design, to be
> honest...
>
> On Fri, Aug 25, 2017 at 3:05 PM, Satish Balay wrote:
>
>> Well we had this logo created
"Jose E. Roman" writes:
>> El 24 ago 2017, a las 22:51, Greg Meyer escribió:
>>
>> Hi,
>>
>> I have written a shell matrix for non-standard vectors (CUDA to be specific)
>> that works great. MatMult and MatNorm perform as they should. However,
"Klaij, Christiaan" writes:
> Matt,
>
> Thanks, I can understand the lower condition number of P A, but
> what about r? Doesn't that change to P r and if so why can we
> assume that ||r|| and ||P r|| have the same order?
Matt's equation was of course wrong in a literal sense,
Ben Yee writes:
> Hi,
>
> I was wondering what type of matrix I should use if I want to define my own
> interpolation matrix in PCMG.
>
> I don't have any DM objects, and I wanted to provide the matrix entries to
> the interpolation matrix manually. Moreover, I want the coarse
CALL FOR PARTICIPATION
SIAM Conference on Parallel Processing for Scientific Computing (PP18)
March 7-10, 2018, Waseda University, Tokyo, Japan
This usually isn't possible for iterative solvers and saves very little
for direct solvers (you still have to pay the factorization cost, but
can save a little on solve cost which is already vastly cheaper).
While it's easy to make the residual zero for some subset of unknowns,
that says nothing
They are your subdomains, plus some number of levels of overlap (1 by
default). You can call PCASMGetLocalSubdomains which gives you ISs for
the subdomain, then use it to insert ones into a zeroed global vector.
Plotting that will give you the characteristic function of the
subdomain which you
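The procedure described above can be sketched in plain Python (illustrative only, not the PETSc API): take the subdomain's index set, insert ones into a zeroed global vector, and the result is the subdomain's characteristic function.

```python
# Illustrative stand-in for the procedure described above: given the index
# set of one ASM subdomain, set ones into a zeroed "global vector". The
# resulting vector is the characteristic function of the subdomain.
def characteristic_function(global_size, subdomain_indices):
    vec = [0.0] * global_size      # zeroed global vector
    for i in subdomain_indices:    # indices taken from the subdomain's IS
        vec[i] = 1.0
    return vec

chi = characteristic_function(8, [2, 3, 4])
print(chi)  # 1.0 exactly on the subdomain, 0.0 elsewhere
```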
including, but not limited to, fault modeling, tectonics, subduction,
seismology, magma dynamics, mantle convection, the core, as well as
surface processes, hydrology, and cryosphere.
Conveners: Juliane Dannberg, Marc Spiegelman, and Jed Brown
https://agu.confex.com/agu/fm17/preliminaryview.cgi
Scaling by the volume element causes the rediscretized coarse grid
problem to be scaled like a Galerkin coarse operator. This is done
automatically when you use finite element methods.
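As a small dense-matrix illustration of the Galerkin coarse operator this compares against (plain Python, no PETSc; the 1D Laplacian and linear interpolation below are just an example):

```python
# Dense sketch of the Galerkin coarse operator A_c = P^T A P that the scaled
# rediscretized operator is being compared against. Illustrative sizes only.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# 1D Laplacian on 5 fine points; P interpolates linearly from 3 coarse points
A = [[2 if i == j else -1 if abs(i - j) == 1 else 0 for j in range(5)]
     for i in range(5)]
P = [[1.0, 0.0, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
A_coarse = matmul(transpose(P), matmul(A, P))  # Galerkin triple product
```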
Jason Lefley writes:
>> On Jun 26, 2017, at 7:52 PM, Matthew Knepley
Hao Zhang writes:
> Thanks for the input. I was doing a few tests. I will put quadruple
> precision choice on hold for now. Since the condition number is big at the
> beginning, scale of 10^2,
That is not large and you don't need quad precision to solve it to
better
Hao Zhang writes:
> It's 3d incompressible RT simulation. My pressure between serial and
> parallel calculation is off by 10^(-14) in relative error. It eventually
> builds up at later times.
I doubt that is why your solution develops artifacts after many time
steps. If
Hao Zhang writes:
> hi, all:
>
> I'm developing a CFD project, in which all the matrix solvers are PETSc based.
> It works fine with double precision until I need some more, like quadruple
> precision.
What sort of CFD problem do you need quad precision for?
> Ax = b is the
There is no single correct way to do this so you can do whatever makes
the most sense for your application. That ranges from calling ParMETIS
directly and creating a numbering using any scheme you like to using
PETSc functions for everything. Note that assembled linear algebra
(matrices and
Hong writes:
> Jed:
>>
>> >> Is it better this way or as a fallback when !A->ops->rart? MatPtAP
>> >> handles other combinations like MAIJ.
>> >>
>> >
>> > Do you mean
>> > if ( !A->ops->rart) {
>> > Mat Rt;
>> > ierr =
Hong writes:
> Jed :
>
>>
>> Is it better this way or as a fallback when !A->ops->rart? MatPtAP
>> handles other combinations like MAIJ.
>>
>
> Do you mean
> if ( !A->ops->rart) {
> Mat Rt;
> ierr = MatTranspose(R,MAT_INITIAL_MATRIX,&Rt);CHKERRQ(ierr);
> ierr =
Hong writes:
> I add support for MatRARt_MPIAIJ
> https://bitbucket.org/petsc/petsc/commits/a3c14138ce0daf4ee55c7c10f1b4631a8ed2f13e
Is it better this way or as a fallback when !A->ops->rart? MatPtAP
handles other combinations like MAIJ.
> It is in branch hzhang/mpirart.
>
Hong writes:
> Jed :
>
>> It is only implemented for SeqAIJ, but it should arguably have a default
>> implementation that performs an explicit transpose and then calls
>> MatMatMatMult or MatPtAP.
>>
>
> We can implement
> P= R^T (we have this for mpi matrices, expensive
It is only implemented for SeqAIJ, but it should arguably have a default
implementation that performs an explicit transpose and then calls
MatMatMatMult or MatPtAP.
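The fallback being proposed can be sketched with small dense matrices in plain Python (illustrative stand-ins, not PETSc calls): form the explicit transpose, then reuse the PtAP product.

```python
# Sketch of the default implementation discussed above: C = R A R^T computed
# by forming an explicit transpose P = R^T and then the triple product
# P^T A P (what MatPtAP provides). Plain-Python stand-ins only.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[4, 1, 0],
     [1, 3, 1],
     [0, 1, 2]]
R = [[1, 1, 0],
     [0, 1, 1]]

C_direct = matmul(R, matmul(A, transpose(R)))  # R A R^T directly
P = transpose(R)                               # explicit transpose of R
C_ptap = matmul(transpose(P), matmul(A, P))    # then P^T A P
```

Both routes produce the same matrix, which is the point of using one as a fallback for the other.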
Franck Houssen writes:
> I read the doc, and googled this before to write a dummy example. So, matRARt
>
"Kannan, Ramakrishnan" writes:
> I am running NHEP across 16 MPI processors over 16 nodes in a matrix of
> global size of 1,000,000x1,000,000 with approximately global 16,000,000
> non-zeros. Each node has the 1D row distribution of the matrix with exactly
> 62500 rows and
Barry Smith <bsm...@mcs.anl.gov> writes:
>> On Jun 13, 2017, at 10:06 AM, Jed Brown <j...@jedbrown.org> wrote:
>>
>> Adrian Croucher <a.crouc...@auckland.ac.nz> writes:
>>
>>> One way might be to form the whole Jacobian but somehow
Adrian Croucher writes:
> One way might be to form the whole Jacobian but somehow use a modified
> KSP solve which would implement the reduction process, do a KSP solve on
> the reduced system of size n, and finally back-substitute to find the
> unknowns in the
Correction: it is still possible to book lodging today (closes at
midnight Mountain Time).
See you in two short weeks. Thanks!
Jed Brown <j...@jedbrown.org> writes:
> The program is up on the website:
>
> https://www.mcs.anl.gov/petsc/meetings/2017/
>
> If you haven't r
Franck Houssen writes:
> Must I destroy the local matrix I have (created and) set with
> MatISSetLocalMat ?
The implementation references the local matrix so you need to destroy
your copy. This pattern is always used when setting sub-objects like
this.
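A toy Python model of that ownership pattern (names are illustrative, not the PETSc API): the implementation takes its own reference to the sub-object, so the caller must still release the reference it holds.

```python
# Toy reference-counting model of the pattern described above: the container
# references the sub-object it is given, so the caller destroys its own copy
# and the object stays alive inside the container.
class Obj:
    def __init__(self):
        self.refcount = 1          # the caller holds the initial reference

    def reference(self):
        self.refcount += 1

    def destroy(self):
        self.refcount -= 1

class Container:
    def set_local(self, obj):
        obj.reference()            # implementation takes its own reference
        self.local = obj

local = Obj()
c = Container()
c.set_local(local)
local.destroy()                    # caller releases its copy
# the container's reference keeps the object alive (refcount is still 1)
```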
No, but this could be added to the ASCII viewer. Why do you want it?
Franck Houssen writes:
> How to VecView with a formatted precision (%10.8f) ? Not possible ?
>
> Franck
://confreg.colorado.edu/CSM2017
We are looking forward to seeing you in Boulder!
Jed Brown <j...@jedbrown.org> writes:
> We'd like to invite you to join us at the 2017 PETSc User Meeting held
> at the University of Colorado Boulder on June 14-16, 2017.
>
> http://www.mcs.anl.
Michał Dereziński <michal.derezin...@gmail.com> writes:
>> Wiadomość napisana przez Jed Brown <j...@jedbrown.org> w dniu 24.05.2017, o
>> godz. 12:06:
>>
>> Okay, do you have more parameters than observations?
>
> No (not necessarily). The
tend to alternate between loading a portion of the matrix, then doing
> computations, then loading more of the matrix, etc. But, given that I
> observed large loading times for some datasets, parallel loading may make
> sense, if done efficiently.
>
> Thanks,
> Michal.
>
Michał Dereziński writes:
> Great! Then I have a follow-up question:
>
> My goal is to be able to load the full matrix X from disk, while at
> the same time in parallel, performing computations on the submatrices
> that have already been loaded. Essentially, I want
Travel awards for early career researchers are available.
--- Begin Message ---
Please disseminate the information below regarding the availability of
travel awards to the Preconditioning-2017 conference. These awards
are for junior participants who are US citizens/permanent residents.
Tina Patel writes:
> Hello,
> I created a few standalone programs that use a DMDA structure, calculate and
> create matrices. However, now that I am trying to combine them using a main
> and using the files as modules, the header files seem to consistently
> conflict. I am
Hoang Giang Bui writes:
> Hi Barry
>
> The first block is from a standard solid mechanics discretization based on
> balance of momentum equation. There is some material involved but in
> principle it's a well-posed elasticity equation with positive definite
> tangent operator.
Mark Adams writes:
> On Wed, Apr 26, 2017 at 7:30 PM, Barry Smith wrote:
>
>>
>> Yes, you asked for LU so it used LU!
>>
>>Of course for smaller coarse grids and large numbers of processes this
>> is very inefficient.
>>
>>The default behavior for
Barry Smith <bsm...@mcs.anl.gov> writes:
>> On Apr 26, 2017, at 5:44 PM, Jed Brown <j...@jedbrown.org> wrote:
>>
>> PETSc is the only package I know for which "./app -help" does everything
>> that "./app" would do, with extra output. It i
PETSc is the only package I know for which "./app -help" does everything
that "./app" would do, with extra output. It is an unfortunate
consequence of extensibility that we can't exit immediately. What if we
had a function the user could call to exit early and cleanly after all
options have been
Barry Smith writes:
>> On Apr 25, 2017, at 1:36 PM, Zhang, Hong wrote:
>>
>> PetscBool is indeed an int. So there is nothing wrong. PETSc does not use
>> bool in order to support C89.
>
> Yes, but in Python using a bool is more natural. For example in
Tina Patel writes:
> Hello everyone,
> I want to manipulate a global vector that has values only on the subgrid of a
> larger grid obtained by DMDACreate3d(). For example, I'd like to extract a
> 6x6x6 chunk from a 7x7x7 grid.
> At some point, I need to put that 6x6x6 chunk
> number of nodes.
>
> 2017-04-19 13:20 GMT+02:00 Jed Brown <j...@jedbrown.org>:
>
>> Francesco Migliorini <francescomigliorin...@gmail.com> writes:
>>
>> > Hello!
>> >
>> > I have an MPI code in which a linear system is created and s
>
> 2017-04-14 16:50 GMT+02:00 Jed Brown <j...@jedbrown.org>:
>
>> Ingo Gaertner <ingogaertner@gmail.com> writes:
>>
>> > Does PETSc include an efficient implementation for
Barry Smith writes:
>You can do MatGetDiagonal() then do a MatMult() followed by
>subtracting a VecPointwiseMult().
I would use VecPointwiseMult followed by MatMultAdd. One fewer
traversal of a vector.
Ingo Gaertner writes:
> Does PETSc include an efficient implementation for the operation
> y=(A-diag(A))x or y_i=\sum_{j!=i}A_{ij}x_j on a sparse matrix A?
>
> In words, I need a matrix-vector product after the matrix diagonal has been
> set to zero. For efficiency
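The approach from the reply above can be sketched in plain Python (illustrative helpers, not PETSc calls): extract the diagonal, form w = -d .* x with one pointwise multiply, then y = A x + w with one matrix-vector product, giving y = (A - diag(A)) x without modifying A.

```python
# Sketch of computing y_i = sum_{j != i} A_ij x_j as suggested above:
# d = diag(A); w = -d .* x (the pointwise multiply); y = A x + w (the
# matvec-with-add). Plain-Python stand-ins for the PETSc operations.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]
x = [1.0, 2.0, 3.0]

d = [A[i][i] for i in range(len(A))]                 # extract the diagonal
w = [-di * xi for di, xi in zip(d, x)]               # pointwise multiply, negated
y = [axi + wi for axi, wi in zip(matvec(A, x), w)]   # y = A x + w
```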
Rodrigo Felicio writes:
> I think they are linked to the same MPI, but I am not sure how to confirm
> that. Looking into mpi4py/mpi.cfg I see the expected mpicc. On
> petsc4py/lib/petsc.cfg points to the petsc install directory and in there on
> the configure.log
Looks like a question for Lisandro. I believe the code you have (with
appropriate collective semantics) was intended to work, but I'm not in a
position to debug right now. Have you confirmed that mpi4py is linked
to the same MPI as petsc4py/PETSc?
Rodrigo Felicio
Rodrigo Felicio writes:
> Hello all,
>
> Sorry for the newbie question, but is there a way of making petsc4py work
> with an MPI group or subcommunicator? I saw a solution posted back in 2010
> (http://lists.mcs.anl.gov/pipermail/petsc-users/2010-May/006382.html),
No, these are not part of the PETSc library so they would need to be
compiled and called separately (you can do that, but it isn't part of
petsc4py).
"Larson, Jeffrey M." writes:
> Hello,
>
> Is there a way to call a function in a PETSc example file from python?
>
>
Matthew Knepley <knep...@gmail.com> writes:
> On Thu, Apr 6, 2017 at 8:32 AM, Jed Brown <j...@jedbrown.org> wrote:
>
>> Matthew Knepley <knep...@gmail.com> writes:
>> > Okay, that makes sense. If I do not have fluxes matching the sources, I
>> do
>&
Matthew Knepley writes:
> Okay, that makes sense. If I do not have fluxes matching the sources, I do
> not
> preserve monotonicity for an advected field. I might need this to machine
> precision
> because some other equations cannot tolerate a negative number there. I will
>
Lawrence Mitchell writes:
> On 06/04/17 12:25, Matthew Knepley wrote:
>> I'm not sure whether getting the Intel acronyms mixed up (KNL vs MKL)
>> makes the quote above better or worse.
>>
>>
>> Too cryptic. Are you saying that this cannot be what is
Matthew Knepley <knep...@gmail.com> writes:
> On Wed, Apr 5, 2017 at 9:57 PM, Jed Brown <j...@jedbrown.org> wrote:
>
>> Matthew Knepley <knep...@gmail.com> writes:
>>
>> > On Wed, Apr 5, 2017 at 1:13 PM, Jed Brown <j...@jedbrown.org> wrote:
&g
Ingo Gaertner writes:
> By transport equation I mean the advection-diffusion equation. This is
> always parabolic, independent of whether it is advection dominated or
> diffusion dominated.
This is true from an analysis perspective, but nearly meaningless from
the
Matthew Knepley writes:
> On Wed, Apr 5, 2017 at 12:23 PM, Justin Chang wrote:
>
>> I simply ran these KNL simulations in flat mode with the following options:
>>
>> srun -n 64 -c 4 --cpu_bind=cores numactl -p 1 ./ex48
>>
>> Basically I told it that
Matthew Knepley <knep...@gmail.com> writes:
> On Wed, Apr 5, 2017 at 1:13 PM, Jed Brown <j...@jedbrown.org> wrote:
>
>> Matthew Knepley <knep...@gmail.com> writes:
>>
>> > On Wed, Apr 5, 2017 at 12:03 PM, Jed Brown <j...@jedbrown.org> wrote:
&g
Justin Chang writes:
> BTW what are the relevant papers that describe this problem? Is this one
>
> http://epubs.siam.org/doi/abs/10.1137/110834512
Yup.
Matthew Knepley <knep...@gmail.com> writes:
> On Wed, Apr 5, 2017 at 12:03 PM, Jed Brown <j...@jedbrown.org> wrote:
>
>> Matthew Knepley <knep...@gmail.com> writes:
>> > As a side note, I think using FV to solve an elliptic equation should be
>&g
MT+02:00 Matthew Knepley <knep...@gmail.com>:
>
>> On Wed, Apr 5, 2017 at 11:50 AM, Ingo Gaertner <ingogaertner@gmail.com
>> > wrote:
>>
>>> Hi Jed,
>>> thank you for your reply. Two followup questions below:
>>>
>>> 2017-04-04
"Zhang, Hong" writes:
> On Apr 4, 2017, at 10:45 PM, Justin Chang
> > wrote:
>
> So I tried the following options:
>
> -M 40
> -N 40
> -P 5
> -da_refine 1/2/3/4
> -log_view
> -mg_coarse_pc_type gamg
> -mg_levels_0_pc_type gamg
>
Matthew Knepley <knep...@gmail.com> writes:
> On Wed, Apr 5, 2017 at 12:12 AM, Richard Mills <richardtmi...@gmail.com>
> wrote:
>
>> On Tue, Apr 4, 2017 at 9:10 PM, Jed Brown <j...@jedbrown.org> wrote:
>>
>>> Barry Smith <bsm...@mcs.anl.gov&
Barry Smith writes:
>These results seem reasonable to me.
>
>What makes you think that KNL should be doing better than it does in
> comparison to Haswell?
>
>The entire reason for the existence of KNL is that it is a way for
>Intel to be able to "compete"
Justin Chang writes:
> So I tried the following options:
>
> -M 40
> -N 40
> -P 5
> -da_refine 1/2/3/4
> -log_view
> -mg_coarse_pc_type gamg
> -mg_levels_0_pc_type gamg
> -mg_levels_1_sub_pc_type cholesky
> -pc_type mg
> -thi_mat_type baij
>
> Performance improved
Matthew Knepley <knep...@gmail.com> writes:
> On Tue, Apr 4, 2017 at 10:02 PM, Jed Brown <j...@jedbrown.org> wrote:
>
>> Matthew Knepley <knep...@gmail.com> writes:
>>
>> > On Tue, Apr 4, 2017 at 3:40 PM, Filippo Leonardi <filippo.l...@gmail.co
Matthew Knepley writes:
> On Tue, Apr 4, 2017 at 3:40 PM, Filippo Leonardi
> wrote:
>
>> I had weird issues where gcc (that I am using for my tests right now)
>> wasn't vectorising properly (even enabling all flags, from tree-vectorize,
>> to mavx).
Justin Chang writes:
> Attached are the job output files (which include -log_view) for SNES ex48
> run on a single haswell and knl node (32 and 64 cores respectively).
> Started off with a coarse grid of size 40x40x5 and ran three different
> tests with -da_refine 1/2/3 and
Ingo Gaertner writes:
> We have never talked about Riemann solvers in our CFD course, and I don't
> understand what's going on in ex11.
> However, if you could answer a few of my questions, you'll give me a good
> start with PETSc. For the simple poisson problem that
Justin Chang writes:
> Thanks everyone for the helpful advice. So I tried all the suggestions
> including using libsci. The performance did not improve for my particular
> runs, which I think suggests the problem parameters chosen for my tests
> (SNES ex48) are not optimal
Matthew Knepley writes:
>> BLAS. (Here a interesting point opens: I assume an efficient BLAS
>>
>> implementation, but I am not so sure about how the different BLAS do
>> things
>>
>> internally. I work from the assumption that we have a very well tuned BLAS
>>
>>
Barry Smith <bsm...@mcs.anl.gov> writes:
>> On Apr 3, 2017, at 8:51 AM, Jed Brown <j...@jedbrown.org> wrote:
>>
>> Barry Smith <bsm...@mcs.anl.gov> writes:
>>
>>> Jed,
>>>
>>>Here is the problem.
>>>
Barry Smith writes:
>Jed,
>
> Here is the problem.
>
> https://bitbucket.org/petsc/petsc/branch/barry/fix/even-huger-flaw-in-ts
Hmm, when someone uses -snes_mf_operator, we really just need
SNESTSFormJacobian to ignore the Amat. However, the user is allowed to
Matthew Knepley writes:
> I can't think why it would fail there, but DMDA really likes odd numbers of
> vertices, because it wants
> to take every other point, 129 seems good. I will see if I can reproduce
> once I get a chance.
This problem uses periodic boundary conditions
se are the parameters I am trying:
>>> >
>>> > srun -n 1032 -c 2 ./ex48 -M 80 -N 80 -P 9 -da_refine 1 -pc_type mg
>>> -thi_mat_type baij -mg_coarse_pc_type gamg
>>> >
>>> > The above works perfectly fine if I used 96 processes. I also tried to
>
The issue is that we need to create
a*I - Jrhs
and this is currently done by creating a*I first when we have separate
matrices for the left and right hand sides. There is code to just scale
and shift Jrhs when there is no IJacobian, but the creation logic got
messed up at some point (or at
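The scale-and-shift path described above can be sketched with a dense plain-Python stand-in (illustrative helper names, not PETSc calls): scale Jrhs by -1, then shift its diagonal by a.

```python
# Sketch of forming a*I - Jrhs without creating a*I first: scale the matrix
# by -1, then add a to the diagonal. Dense plain-Python stand-in only.
def mat_scale(A, s):
    for row in A:
        for j in range(len(row)):
            row[j] *= s

def mat_shift(A, s):
    for i in range(len(A)):
        A[i][i] += s

Jrhs = [[1.0, 2.0],
        [3.0, 4.0]]
a = 5.0

M = [row[:] for row in Jrhs]   # work on a copy of Jrhs
mat_scale(M, -1.0)             # M = -Jrhs
mat_shift(M, a)                # M = a*I - Jrhs
```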
Justin Chang writes:
> It was sort of arbitrary. I want to conduct a performance spectrum
> (dofs/sec) study where at least 1k processors are used on various HPC
> machines (and hopefully one more case with 10k procs). Assuming all
> available cores on these compute nodes
Fri, Mar 31, 2017 at 12:47 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
>
>>
>> > On Mar 31, 2017, at 10:00 AM, Jed Brown <j...@jedbrown.org> wrote:
>> >
>> > Justin Chang <jychan...@gmail.com> writes:
>> >
>> >> Yeah based o
Justin Chang writes:
> Yeah based on my experiments it seems setting pc_mg_levels to $DAREFINE + 1
> has decent performance.
>
> 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In some of
> the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/
>
Hom Nath Gharti writes:
> Thanks, Jed! I will try. I see that FindPETSc.cmake has following lines:
>
> set(PETSC_VALID_COMPONENTS
> C
> CXX)
>
> Should we add FC or similar?
You could, but then you'd have to also add test code for that language
binding. (All this does is a
Hom Nath Gharti writes:
> Dear all,
>
> Does FindPETSc.cmake (https://github.com/jedbrown/cmake-modules) work
> with Fortran as well?
It should, but you need to be sure to use a compatible Fortran compiler.
"Daralagodu Dattatreya Jois, Sathwik Bharadw"
writes:
> I am using AIJ matrix to solve Laplace problem in finite element
> framework. To apply Neumann boundary conditions I need to obtain
> values of first and last few columns and subtract it with the
>
I can't reproduce this result. It looks like something that could
happen by mixing up the headers used to compile with the library used to
link/execute.
Andreas Mang writes:
> Hey guys:
>
> I was trying to run my code with single precision which resulted in strange
>
Ajit Desai writes:
> Hello Everyone,
> A couple of questions on the *-log_summary* provided by PETSc.
-log_view is the preferred name.
> 1. *Avg-Flops & Avg-Flops/sec* are averaged among the participating cores
> or averaged over the simulation time or both?
Flops on each
"Kong, Fande" writes:
> Hi All,
>
> What is the definition of KSPNormType natural? It is easy to understand
> none, preconditioned, and unpreconditioned, but not natural.
It is the energy norm. It only makes sense for an SPD operator and SPD
preconditioner.
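A small plain-Python illustration of that energy norm (illustrative only; `energy_norm` is not a PETSc function):

```python
# Sketch of the norm mentioned above: for an SPD matrix A, the energy norm
# is ||x||_A = sqrt(x^T A x), which is well defined precisely because
# x^T A x > 0 for every nonzero x.
import math

def energy_norm(A, x):
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    return math.sqrt(sum(xi * axi for xi, axi in zip(x, Ax)))

A = [[2.0, 0.0],
     [0.0, 3.0]]         # SPD
x = [1.0, 2.0]
nrm = energy_norm(A, x)  # x^T A x = 2*1 + 3*4 = 14
```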
We'd like to invite you to join us at the 2017 PETSc User Meeting held
at the University of Colorado Boulder on June 14-16, 2017.
http://www.mcs.anl.gov/petsc/meetings/2017/
The first day consists of tutorials on various aspects and features of
PETSc. The second and third days will be devoted
Looks fine to me. Thanks.
Barry Smith writes:
>A proposed fix
> https://bitbucket.org/petsc/petsc/pull-requests/645/do-not-assume-that-all-ksp-methods-support
>
>Needs Jed's approval.
>
>Barry
>
>
>
>> On Mar 8, 2017, at 10:33 AM, Barry Smith
"Kong, Fande" <fande.k...@inl.gov> writes:
> On Tue, Mar 7, 2017 at 3:16 PM, Jed Brown <j...@jedbrown.org> wrote:
>
>> Hong <hzh...@mcs.anl.gov> writes:
>>
>> > Fande,
>> > Got it. Below are what I get:
>>
>> Is
Hong writes:
> Fande,
> Got it. Below are what I get:
Is Fande using ILU(0) or ILU(k)? (And I think it should be possible to
get a somewhat larger benefit.)
> petsc/src/ksp/ksp/examples/tutorials (master)
> $ ./ex10 -f0 binaryoutput -rhs 0 -mat_view ascii::ascii_info
> Mat
Gideon Simpson writes:
> I just wanted to check, has the default RK solver (when called with -ts_type
> rk) always been 3rd order (RK3)?
Yes, since it was rewritten to be adaptive and extensible in 2013.
Gideon Simpson writes:
> Thanks, as always.
>
> Interesting that -Wall (with clang) didn’t catch that.
gcc warns on this if you turn on optimization. I don't know why clang
doesn't.
Sanjay Govindjee writes:
> I'm not sure it is best to say that "the standard way to handle this" is
> to partition the elements. Minimization of communication calls for
> partitioning the nodes (at the expense of performing extra element
> computations).
For high order
Matt Landreman writes:
> On Jed's comment, the application I have in mind is indeed a
> convection-dominated equation (a steady linear 3D convection-diffusion
> equation with smoothly varying anisotropic coefficients and recirculating
> convection). Gamg and
Gideon Simpson writes:
> I’ve been continuing working on implementing a projection method problem,
> which, loosely, looks like the following. Vec y contains the state vector
> for my system, y’ = f(y) which is solved with a TS, using, for now, rk4.
>
> I have added
gideon.simp...@gmail.com writes:
> Yes, when I was talking about vector operations, i.e., VecAXPY, I was doing
> them on the global vectors. So what I'm understanding from you is that the
> ghost points only appear after I go to the local data structure, is that
> correct?
The concept
Gideon Simpson writes:
> I’ve got a simple problem where I use a DM to handle a representation of a
> vector complex numbers, storing the real and imaginary components at each
> lattice point. I also have ghost points at either end, i.e.:
>
> DMDACreate1d
David Nolte writes:
> At the moment it's not possible for me to rebuild python with the proper
> configuration for valgrind (--without-pymalloc --with-valgrind). I'm not
> sure how useful the output is now. I can't say I get it... you can find
> it in the attachment, if you
Gideon Simpson writes:
> still getting a no rule to make target…
What exactly is the error message? And what is the output of "ls -l" in
that directory?
My syntax suggestion is just avoiding error-prone duplication.
Jennifer Swenson writes:
> As a part of my problem, I need to be able to look up the value of a scalar
> field (let's call it xi) that is defined via a somewhat non-trivial integral
> over a second scalar field (let's call it rho). Mathematically we have
>
>
>
> xi(x,t) =
Matt Landreman writes:
> 3. Is it at all sensible to do this second kind of defect correction with
> _algebraic_ multigrid? Perhaps Amat for each level could be formed from the
> high-order matrix at the fine level by the Galerkin operator R A P, after
> getting all the
Gideon Simpson writes:
> Ok, it should be:
>
> myprog1: prog1.o ${OBJS}
> ${CLINKER} -o myprog1 prog1.o ${OBJS} ${PETSC_LIB}
${CLINKER} -o $@ $^ ${PETSC_LIB}
^^ I would spell it this way
Gideon Simpson writes:
> I’ve been trying to use some code that was originally developed under petsc
> 3.6.x with my new 3.7.5 installation. I’m having an issue in that the way
> the code is written, it’s spread across several .c files. Here are the
> essential
David Nolte writes:
> Dear all,
>
> I am trying to use PETSc to solve a steady Stokes problem, discretized
> with stable FEM (P2P1) in 2D and 3D. I have been playing around with
> different preconditioners to get the hang of things. For my 2D case, for
> example, the
Ed Bueler writes:
> Jed --
>
>> My point was that including sliding potentially adds another
>> stiff/algebraic term so whatever interface we choose better be able to
>> support at least two stiff terms.
>
> Yes, totally agreed w.r.t. the interface design. (The DAE that you
Barry Smith writes:
>For optimal implementation of line relaxation (assuming each
>process is only doing sequential lines) you would write a PCSHELL
>that looped over the "lines" for your geometry and use inside it a
>PC created with the tridiagonal matrix for
Ed Bueler writes:
> Jed --
>
>>> u_t - div (u^k |grad u|^{p-2} grad u) = g(t,u)
>>Are you also interested in the case with sliding?
>
> You must be getting grumpy. There is this 2009 paper about sliding ...
> Yes I am interested in sliding but I was trying to keep things
Matt Landreman writes:
> Hi, Petsc developers,
>
> A few basic questions about geometric and algebraic multigrid in petsc:
>
> Based on the output of -ksp_view, I see that for both -pc_type mg and
> -pc_type gamg, the smoothing on each multigrid level is implemented