Ritwik Saha via petsc-users writes:
> Hi All,
>
> PETSc provides various implementations of functions like VecAXPY() in CUDA.
> I am talking specifically about VecAXPY_SeqCUDA() in
> src/vec/vec/impls/seq/seqcuda/veccuda2.cu . How do I include these
> functions in my C code?
I'm not sure I
Xiangdong via petsc-users writes:
> Hello everyone,
>
> It seems that to use MatXAIJSetPreallocation, one has to pass the array of
> the number of nonzero blocks per row, even if this number is the same across
> all the local rows.
>
> For the other preallocation functions (say,
Its purpose is to preallocate off-diagonal blocks, but unless you're
hard up against your memory capacity, I would skip that clumsy code and
use MatPreallocator.
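For reference, a minimal sketch of the MatPreallocator workflow (sizes and
names are illustrative, not from this thread):

  Mat preall, A;
  /* First pass: record the sparsity pattern in a MATPREALLOCATOR */
  MatCreate(PETSC_COMM_WORLD, &preall);
  MatSetSizes(preall, m, n, PETSC_DETERMINE, PETSC_DETERMINE);
  MatSetType(preall, MATPREALLOCATOR);
  MatSetUp(preall);
  /* ... same MatSetValues() calls you will use in the real assembly ... */
  MatAssemblyBegin(preall, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(preall, MAT_FINAL_ASSEMBLY);

  /* Second pass: preallocate the real matrix from the recorded pattern */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, m, n, PETSC_DETERMINE, PETSC_DETERMINE);
  MatSetType(A, MATAIJ);
  MatPreallocatorPreallocate(preall, PETSC_TRUE, A);
  MatDestroy(&preall);
  /* ... now assemble A normally ... */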
"Ellen M. Price via petsc-users" writes:
> Hello again!
>
> For my multiphysics problem, I think a DMComposite might make the most
>
Smoothed aggregation mostly just cares about the near-null space
(MatSetNearNullSpace), which is a global property. Classical AMG uses
block size directly (number of dofs per C-point), but I'm not aware of
any implementation that supports variable block size. This would be a
research topic.
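For elasticity-type problems, the near-null space can be attached like this
(a sketch; `coords` is assumed to be a Vec of nodal coordinates with block
size equal to the spatial dimension):

  MatNullSpace nearnull;
  MatNullSpaceCreateRigidBody(coords, &nearnull); /* rigid-body modes */
  MatSetNearNullSpace(A, nearnull);
  MatNullSpaceDestroy(&nearnull);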
Manuel Valera writes:
> Yes, all of that sounds correct to me,
>
> No I haven't tried embedding the column integral into the RHS, right now I
> am unable to think how to do this without the solution of the previous
> intermediate stage. Any ideas are welcome,
Do you have some technical notes on
Dave Lee writes:
> Thanks Jed,
>
> I will reconfigure my PETSc with MUMPS or SuperLU and see if that helps.
> (my code is configured to only run in parallel on 6*n^2 processors (n^2
> procs on each face of a cubed sphere, which is a little annoying for
> situations like these where a serial LU
> linesearch off and use Richardson (with a fixed number of iterations) or
>> >> exact solves as Jed suggested. As far as scaling, can you use the same NL
>> >> problem on each slab? This should fix all the problems anyway. Or, on
>> >> the good decoupled solves, if the true residual is of the same scale and *all*
>> of the slabs converge well then you should be OK on scaling. If this works
>> then start adding stuff back in and see what breaks it.
>>
>> On Thu, Oct 10, 2019 at 11:01 AM Jed Brown via petsc-users <
>> petsc
Dave Lee via petsc-users writes:
> Hi PETSc,
>
> I have a nonlinear 3D problem for a set of uncoupled 2D slabs. (Which I
> ultimately want to couple once this problem is solved).
>
> When I solve the inner linear problem for each of these 2D slabs
> individually (using KSPGMRES) the convergence
Manuel Valera writes:
> Sorry, I don't follow this last email, my spatial discretization is fixed,
> the problem is caused by the choice of vertical coordinate, in this case
> sigma, that calls for an integration of the hydrostatic pressure to correct
> for the right velocities.
Ah, fine. To
Is it a problem with the spatial discretization or with the time
discretization that you've been using thus far? (This sort of problem
can occur for either reason.)
Note that an SSP method is merely "preserving" -- the spatial
discretization needs to be strongly stable for an SSP method to
Manuel Valera writes:
> Thanks,
>
> My time integration schemes are all explicit, sorry if this is a very atypical
> setup. This is similar to the barotropic splitting but not exactly, we
> don't have free surface in the model, this is only to correct for sigma
> coordinates deformations in the
Manuel Valera writes:
> Thanks for the answer, I will read the mentioned example, but to clarify
> for Barry I will schematize the process:
>
> At time n, the program needs to do all of these at once:
>
>1. Solve T as a function of u,v,w
>2. Solve S as a function of u,v,w
>3. Solve
Manuel Valera via petsc-users writes:
> Hello,
>
> I have a set of equations which are co-dependent when integrating in time,
> this means the velocities u,v,w need a component from the Temperature and
> Salinity integration at the same intermediate step. Same for Temperature
> and Salinity,
"Povolotskyi, Mykhailo via petsc-users" writes:
> Hi Matthew,
>
> is it possible to do in principle what I would like to do?
SNES isn't meant to solve tiny independent systems. (It's just high
overhead for that purpose.) You can solve many such instances together
by creating a residual
"Smith, Barry F. via petsc-users" writes:
> Currently none of the XXXViewFromOptions() have manual pages or Fortran
> stubs/interfaces. It is probably easier to remove them as inline functions
> and instead write them as full functions which just call
> PetscObjectViewFromOptions() with
Matthew Knepley via petsc-users writes:
> On Fri, Sep 20, 2019 at 7:54 AM Bao Kai via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hi,
>>
>> I understand that PETSc is not designed to be used this way, while I
>> am wondering if someone has done something similar to this.
>>
>> We have
Where do your tridiagonal systems come from? Do you need to solve one
at a time, or batches of tridiagonal problems?
Although it is not in PETSc, we have some work on solving the sort of
tridiagonal systems that arise in compact discretizations, which it
turns out can be solved much faster than
I believe this is intended to work with most any implicit solver,
*provided* the initial conditions are compatible. It was added by Emil,
but I don't see it explicitly tested in PETSc.
"Huck, Moritz via petsc-users" writes:
> Hi,
> TS_EQ_DAE_SEMI_EXPLICIT_INDEX(?) are defined in TSEquationType
Elemental also has distributed-memory eigensolvers that should be at
least as good as ScaLAPACK's. There is support for Elemental in PETSc,
but not yet in SLEPc.
"Povolotskyi, Mykhailo via petsc-users" writes:
> Thank you for suggestion.
>
> Is it interfaced to SLEPC?
>
>
> On 08/29/2019 04:14
Try to reduce the problem. You may also compare with
src/ts/examples/tutorials/ex11.c, which includes a finite-volume
advection solver (with or without slope reconstruction/limiting).
Praveen C via petsc-users writes:
> Dear all
>
> I am trying to write a simple first order upwind FVM to solve
Jian Zhang - 3ME writes:
> Hi Jed,
>
> Thank you very much. I tried to use DMPlexGetCone, but the output is
> the edge ids, not the vertex ids.
This means you have an interpolated mesh (edges represented explicitly
in the data structure).
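On an interpolated mesh, one way to recover just the vertices of a cell is
via the transitive closure (a sketch; `dm` and `cell` are assumed to exist):

  PetscInt npoints, *points = NULL, vStart, vEnd, i;
  DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd); /* range of vertex points */
  DMPlexGetTransitiveClosure(dm, cell, PETSC_TRUE, &npoints, &points);
  for (i = 0; i < npoints; i++) {
    PetscInt p = points[2*i]; /* even entries: points; odd: orientations */
    if (p >= vStart && p < vEnd) {
      /* p is a vertex of this cell */
    }
  }
  DMPlexRestoreTransitiveClosure(dm, cell, PETSC_TRUE, &npoints, &points);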
> For the function DMPlexGetClosureIndices, I can not
Jian Zhang - 3ME via petsc-users writes:
> Hi guys,
>
> I am trying to get the element connectivity from DMPlex. The input is the
> element id, and the output should be the vertex ids. Which function should I
> use to achieve this? Thanks in advance.
See DMPlexGetCone or
Are you saying that the MINRES error is larger than CG error? In which
norm? And which norm are you using for CG? (Output from
-ksp_monitor_true_residual -ksp_view would be useful.)
CG does find a solution that is optimal in a different inner product,
though this is usually pretty benign
Mingchang Ding writes:
>> It appears that you are calling MatZeroRows with rows = {0, info.mx-1},
>> which will only affect the first and last rows. The other entries are
>> how you have assembled them (with the matrix-matrix product).
>
> Yes. I think so. Is there any way I can fill in the rows
Please always use "reply-all" so that your messages go to the list.
This is standard mailing list etiquette. It is important to preserve
threading for people who find this discussion later and so that we do
not waste our time re-answering the same questions that have already
been answered in
Can you give an example of what you are trying? These functions should be
capable of handling any set of rows.
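For instance, a call of this shape zeros any set of rows and places 1.0 on
the diagonal (a sketch; `x` would hold the Dirichlet values and may be NULL
along with `b`):

  PetscInt rows[] = {0, 5, 17};  /* any rows, not just boundary ones */
  MatZeroRows(A, 3, rows, 1.0, x, b);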
Mingchang Ding via petsc-users writes:
> Hi, all
>
> I am trying to apply LDG to discretize the 1D diffusion term u_{xx} with
> Dirichlet boundary condition. The difficulty I have is
You can use reduced-precision preconditioning if you're writing your
own, but there isn't out-of-the-box support. Note that the benefit is
limited when working with sparse matrices because a lot of the cost
comes from memory access (including column indices) and vectorization
for some operations
If you are thinking about attending the American Geophysical Union Fall
Meeting (Dec 9-13 in San Francisco), please consider submitting an
abstract to this interdisciplinary session. Abstracts are due July 31.
T003: Advances in Computational Geosciences
This session highlights advances in the
You can use MatCreateMAIJ(A,2,&maij) and a single MatMult(A,xy) where xy
contains the vectors x and y interlaced [x_0, y_0, x_1, y_1, ...].
There is also MatMatMult(A,X,...,&C) where X is a MATDENSE with two
columns, but I would prefer the MAIJ variant above in most cases.
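A sketch of the MAIJ variant (assuming `xy` and `zw` are interlaced vectors
of length 2n):

  Mat maij;
  MatCreateMAIJ(A, 2, &maij);  /* maij acts like A applied to each component */
  MatMult(maij, xy, zw);       /* computes A*x and A*y in one sweep */
  MatDestroy(&maij);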
Tyler Chen via petsc-users
"Smith, Barry F. via petsc-users" writes:
> We've found that the additive Schwarz methods (PCASM) or even block
> Jacobi PCBJACOBI often work well for network problems, better than
> GAMG. GAMG doesn't know anything about the structure of networks,
> it is for PDEs on meshes, so
Yingjie Wu via petsc-users writes:
> Respected PETSc developers:
> Hi,
> I have some questions about some functions of AMG and the construction time
> of Jacobian matrix in the process of using. Please help me to answer them.
>
> 1. I see some functions about AMG in the list of PETSc functions.
Matthew Knepley via petsc-users writes:
>> I am just wondering which way is better, or do you have any other
>> suggestion?
>>
> If you plan on doing a lot of mesh manipulation by hand and want to control
> everything, the first option might be better.
> On the other hand, if you use Plex, you
This is typical for weak preconditioners. Have you tried -pc_type gamg
or -pc_type mg (algebraic and geometric multigrid, respectively)? On a
structured grid with smooth coefficients, geometric multigrid is
possible with low setup cost and convergence in a few iterations
independent of problem size.
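These can also be selected from code rather than the command line (a sketch;
`A` is the assembled operator):

  KSP ksp;
  PC  pc;
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCGAMG);   /* or PCMG for geometric multigrid */
  KSPSetFromOptions(ksp);  /* -pc_type gamg still overrides at runtime */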
Stefano Zampini via petsc-users writes:
> Jared,
>
> The petsc output shows
>
> package used to perform factorization: petsc
>
> You are not using umfpack, but the PETSc native LU. You can run with
> -options_left to see the options that are not processed from the PETSc
> options database.
Dave, have you considered using GCR instead of FGMRES? It's a flexible
method that is equivalent in many circumstances, but provides the
residual and solution at each iteration without needing to "build" it.
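Switching is a one-liner (sketch; the restart value is illustrative):

  KSPSetType(ksp, KSPGCR);
  KSPGCRSetRestart(ksp, 30); /* optional; analogous to the GMRES restart */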
Dave Lee via petsc-users writes:
> Hi Matt and Barry,
>
> thanks for the good ideas.
>
What is the partition like? Suppose you randomly assigned nodes to
processes; then in the typical case, all neighbors would be on different
processors. Then the "diagonal block" would be nearly diagonal and the
off-diagonal block would be huge, requiring communication with many
other processes.
José Lorenzo via petsc-users writes:
> I'm using PETSC 3.10 with 64 bits indices.
>
> When I run valgrind I get the following message at the end of the report,
> which I don't know how to interpret:
>
> ==399672== Invalid read of size 8
> ==399672==    at 0x5627D05: MatZeroEntries_SeqAIJ
Chih-Chuen Lin via petsc-users writes:
> Dear PETSc users,
>
> I am Ian. I am trying to implement a solver which involves a sparse symmetric
> matrix A multiplied by a dense matrix X. And because of the nature of the
> problem, the bandwidth of the matrix A would be kind of large. For A*X, I am
>
David Knezevic via petsc-users writes:
> I'm doing load stepping with SNES, where I do a SNES solve for each load
> step. Ideally the convergence tolerances for each load step would be set
> based on the norm of the load in the current step. As a result I would like
> to be able to update the
Just MatSetValues and MatAssembly* again. If you use ADD_VALUES, then
you might MatZeroEntries before adding values.
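A sketch of the reuse loop (names like `ksp` and `nsystems` are illustrative;
the sparsity pattern is assumed fixed across systems):

  for (k = 0; k < nsystems; k++) {
    MatZeroEntries(A);  /* needed if the assembly below uses ADD_VALUES */
    /* ... MatSetValues(A, ..., ADD_VALUES) with the k-th coefficients ... */
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
    KSPSetOperators(ksp, A, A); /* preconditioner is rebuilt as needed */
    KSPSolve(ksp, b, x);
  }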
Evan Um via petsc-users writes:
> Hi PETSC users,
>
> I want to solve a number of matrix-vector equations. In each equation, the
> matrix has different values, but the sparsity
For most purposes, it's easiest to read it with SciPy (or Matlab, etc.),
merge, and use PetscBinaryIO to write. Then it'll be fast to access
from PETSc, even in parallel.
Afrah Najib via petsc-users writes:
> Hi,
>
> I have a set of files generated synthetically in matrix market file formats
>
"Smith, Barry F." writes:
> Sorry, my mistake. I assumed that the naming would follow PETSc convention
> and there would be MatGetLocalSubMatrix_something() as there is
> MatGetLocalSubMatrix_IS() and MatGetLocalSubMatrix_Nest(). Instead
> MatGetLocalSubMatrix() is hardwired to call
"Smith, Barry F. via petsc-users" writes:
> This is an interesting idea, but unfortunately not directly compatible
> with libMesh filling up the finite element part of the matrix. Plus it
> appears MatGetLocalSubMatrix() is only implemented for IS and Nest matrices
> :-(
Maybe I'm missing
Lisandro Dalcin writes:
> On Tue, 28 May 2019 at 22:05, Jed Brown wrote:
>
>>
>> Note that all of these compilers (including Sun C, which doesn't define
>> the macro) recognize -fPIC. (Blue Gene xlc requires -qpic.) Do we
>> still need to test the other alternatives?
>>
>>
> Well, worst case,
No love with:
cc: Sun C 5.12 Linux_i386 2011/11/16
Note that all of these compilers (including Sun C, which doesn't define
the macro) recognize -fPIC. (Blue Gene xlc requires -qpic.) Do we
still need to test the other alternatives?
"Smith, Barry F." writes:
> Works for Intel and PGI
Lisandro Dalcin via petsc-users writes:
> On Tue, 28 May 2019 at 17:31, Balay, Satish via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Configure.log shows '--with-pic=1' - hence this error.
>>
>> Remove '--with-pic=1' and retry.
>>
>>
> Nonsense. Why this behavior? Building a static
Dave Lee via petsc-users writes:
> Hi Petsc,
>
> I'm attempting to implement a "hookstep" for the SNES trust region solver.
> Essentially what I'm trying to do is replace the solution of the least
> squares problem at the end of each GMRES solve with a modified solution
> with a norm that is
GIRET Jean-Christophe via petsc-users writes:
> Hello,
>
> Thanks Mark and Jed for your quick answers.
>
> So the idea is to define all the Vecs on the world communicator, and perform
> the communications using traditional scatter objects? The data would still be
> accessible on the two
The standard approach would be to communicate via the parent comm. So
you split comm world into part0 and part1 and use a VecScatter with vecs
on world (which can have zero entries on part1 and part0 respectively)
to exchange your data. You can use VecPlaceArray or VecCreate*WithArray
to avoid copies.
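A rough sketch of the idea (assuming `color` selects the part and the index
sets `isfrom`/`isto` describe the exchange; all names are illustrative):

  Vec world, sub;   /* both created on PETSC_COMM_WORLD */
  VecScatter scatter;
  /* ranks outside this part contribute zero entries */
  PetscInt nlocal = (color == 0) ? n : 0;
  VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, nlocal, PETSC_DECIDE,
                        myarray, &world);
  VecScatterCreate(world, isfrom, sub, isto, &scatter);
  VecScatterBegin(scatter, world, sub, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(scatter, world, sub, INSERT_VALUES, SCATTER_FORWARD);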
Fande Kong via petsc-users writes:
> It looks like mpicxx from openmpi does not handle this correctly. I
> switched to mpich, and it works now.
>
> However there are still some warnings:
>
> *clang-6.0: warning: treating 'c' input as 'c++' when in C++ mode, this
> behavior is deprecated
I'm pretty confident all the tests are sorted. It wouldn't be any great
hardship for us to allow unsorted input. If you submit an unsorted
test, we can make sure it works (it might already, but we should
probably add a call to in-place sort).
Oleksandr Koshkarov via petsc-users writes:
> Dear
"Smith, Barry F." writes:
> This is fine for "hacking" on PETSc but worthless for any other package.
> Here is my concern, when someone
> realizes there is a problem with a package they are using through a package
> manager they think, crud I have to
>
> 1) find the git repository for this
"Smith, Barry F. via petsc-users" writes:
> So it sounds like spack is still mostly a "package manager" where people
> use "static" packages and don't hack the package's code. This is not
> unreasonable, no other package manager supports hacking a package's code
> easily, presumably. The
"Zhang, Hong" writes:
> Jed:
>>> Myriam,
>>> Thanks for the plot. '-mat_freeintermediatedatastructures' should not
>>> affect solution. It releases almost half of memory in C=PtAP if C is not
>>> reused.
>
>> And yet if turning it on causes divergence, that would imply a bug.
>> Hong, are you
"Zhang, Hong via petsc-users" writes:
> Myriam,
> Thanks for the plot. '-mat_freeintermediatedatastructures' should not affect
> solution. It releases almost half of memory in C=PtAP if C is not reused.
And yet if turning it on causes divergence, that would imply a bug.
Hong, are you able to
This sounds an awful lot like a homework question. In any case, it does
not relate directly to PETSc and is thus off-topic for this list.
Zulfi Khan via petsc-users writes:
> Calculation of Cost of all-to-broadcast for a Balanced Binary Tree
>
> Hi,
>
> I have a question is:
>
> Given a
Fande Kong via petsc-users writes:
> Hi All,
>
> *Input Parameters*
>
> *n - number of values*
> *i - array of integers*
> *Ii - second array of data*
> *size - sizeof elements in the data array in bytes*
> *work - workspace of "size" bytes used when sorting*
>
>
> size is the size of one
Memory use will depend on the preconditioner. This will converge very
slowly (i.e., never) without multigrid unless time steps are small.
Depending on how rough the coefficients are, you may be able to use
geometric multigrid, which has pretty low setup costs and memory
requirements.
To estimate
Junchao's PR has been merged to 'master'.
https://bitbucket.org/petsc/petsc/pull-requests/1511/add-signed-char-unsigned-char-and-char
Fande Kong via petsc-users writes:
> Thanks for the reply. It is not necessary for me to use MPI_SUM. I think
> the better choice is MPIU_REPLACE. Doesn’t
Fande Kong via petsc-users writes:
> Hi Jed,
>
> One more question. Is it fine to use the same SF to exchange two groups of
> data at the same time? What is the better way to do this
This should work due to the non-overtaking property defined by MPI.
> Fande Kong,
>
> ierr =
>
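Illustrating the point above, a rough sketch of two exchanges in flight on
the same SF (not the code from the quoted message; buffer names and
datatypes are made up):

  PetscSFReduceBegin(sf, MPIU_SCALAR, leaf1, root1, MPIU_REPLACE);
  PetscSFReduceBegin(sf, MPIU_INT,    leaf2, root2, MPIU_REPLACE);
  PetscSFReduceEnd(sf, MPIU_SCALAR, leaf1, root1, MPIU_REPLACE);
  PetscSFReduceEnd(sf, MPIU_INT,    leaf2, root2, MPIU_REPLACE);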
"Zhang, Junchao via petsc-users" writes:
> MPI standard chapter 5.9.3, says "MPI_CHAR, MPI_WCHAR, and MPI_CHARACTER
> (which represent printable characters) cannot be used in reduction operations"
> So Fande's code and Jed's branch have problems. To fix that, we have to add
> support for
Mark Adams via petsc-users writes:
> On Thu, Apr 4, 2019 at 7:35 AM Dave Lee wrote:
>
>> I already have the Navier Stokes solver. My issue is wrapping it in a JFNK
>> solver to find the periodic solutions. I will keep reading up on SVD
>> approaches, there may be some capability for something
Myriam Peyrounette via petsc-users writes:
> Hi all,
>
> for your information, you'll find attached the comparison of the weak
> memory scalings when using :
>
> - PETSc 3.6.4 (reference)
> - PETSc 3.10.4 without specific options
> - PETSc 3.10.4 with the three scalability options you mentioned
You can try branch 'jed/feature-sf-char'; not tested yet.
Fande Kong writes:
> Thanks, Jed,
>
> Please let me know when the patch is in master.
>
> Fande
>
>
>> On Apr 2, 2019, at 10:47 PM, Jed Brown wrote:
>>
>> Fande Kong writes:
>>
>>> I am working on petsc master. So it should be fine
Fande Kong writes:
> I am working on petsc master. So it should be fine to have it in 3.11
Cool, I'd rather just do it in 'master'.
We can add it easily. Would it be enough to add it to petsc-3.11.*?
(I'd rather not backport to an earlier version, for which we presumably
won't have any more maintenance releases.)
Fande Kong via petsc-users writes:
> Hi All,
>
> There were some error messages when using PetscSFReduceBegin
When you roll your own equivalent real formulation, PETSc has no way of
knowing what conjugate transpose might mean, thus symmetry is lost. I
would suggest just using the AVX2 implementation for now and putting in
a request (or contributing a patch) for AVX-512 complex optimizations.
Sajid Ali
Justin Chang via petsc-users writes:
> Hi all,
>
> I'm writing a petsc4py routine to manually create nested fieldsplits using
> index sets, and it looks like whenever I move onto the next level of splits
> I need to rescale the IS's.
>
> From the PCFieldSplitSetDefault() routine, it looks like
Yuyun Yang via petsc-users writes:
> It's simply for visualization purposes. I wasn't sure if HDF5 would perform
> better than binary, and what specific functions are needed to load the PETSc
> vectors/matrices, so wanted to ask for some advice here. Since Matt mentioned
> it's not likely to
Yuyun Yang via petsc-users writes:
> Currently we are forming the sparse matrices explicitly, but I think the goal
> is to move towards matrix-free methods and use a stencil, which I suppose is
> good to use GPUs for and more efficient. On the other hand, I've also read
> about matrix-free
Mark Lohry writes:
> It seems to me with these semi-implicit methods the CFL limit is still so
> close to the explicit limit (that paper stops at 30), I don't really see
> the purpose unless you're running purely incompressible? That's just my
> ignorance speaking though. I'm currently running
Mark Lohry via petsc-users writes:
> For what it's worth, I'm regularly solving much larger problems (1M-100M
> unknowns, unsteady) with this discretization and AMG setup on 500+ cores
> with impressively great convergence, dramatically better than ILU/ASM. This
> just happens to be the first
Is there any output if you run with -malloc_dump?
Manuel Colera Rico via petsc-users writes:
> Hi, Junchao,
>
> I have installed the newest version of PETSc and it works fine. I just
> get the following memory leak warning:
>
> Direct leak of 28608 byte(s) in 12 object(s) allocated from:
>
For PETSc users/developers interested in software and data
infrastructure for fluid dynamics:
We are excited to invite you to attend the second workshop to aid in the
conceptualization of FDSI, a potential NSF-sponsored Institute dedicated
to Fluid Dynamics Software Infrastructure. The workshop
Manuel Valera writes:
> Ok I'll try that and let you know, for the time being I reverted to 3.9 to
> finish a paper, will update after that :)
3.10 will also work.
Did you just update to 'master'? See VecScatter changes:
https://www.mcs.anl.gov/petsc/documentation/changes/dev.html
Manuel Valera via petsc-users writes:
> Hello,
>
> I just updated petsc from the repo to the latest master branch version, and
> a compilation problem popped up, it seems like
It may not address the memory issue, but can you build 3.10 with the
same options you used for 3.6? It is currently a debugging build.
This is very unusual. MatCreate() does no work, merely dup'ing a
communicator (or referencing an inner communicator if this is not the
first PetscObject on the provided communicator). What size matrices are
you working with? Can you send some performance data and (if feasible)
a reproducer?
Myriam, in your first message, there was a significant (about 50%)
increase in memory consumption already on 4 cores. Before attacking
scaling, it may be useful to trace memory usage for that base case.
Even better if you can reduce to one process. Anyway, I would start by
running both cases
Of course, just as you would run any other MPI application.
GangLu via petsc-users writes:
> Hi all,
>
> When installing petsc, there is a stream test that is quite useful.
>
> Is it possible to run such a test in batch mode, e.g. using a PBS script?
>
> Thanks.
>
> cheers,
>
> Gang
"Zhang, Junchao via petsc-users" writes:
> Perhaps PETSc should have a MatGetRemoteRow (or
> MatGetRowOffDiagonalBlock)(A, r, &ncols, &cols, &vals). MatGetRow()
> internally has to allocate memory and sort indices and values from
> local diagonal block and off-diagonal block. It is totally a waste in
> this
The compiler doesn't know anything special about PetscFinalize.
Destructors are called after all executable statements in their scope.
So you need the extra scoping if the destructor should be called
earlier.
Yuyun Yang writes:
> Oh interesting, so I need to add those extra brackets around my
If you run this with MPICH, it prints
Attempting to use an MPI routine after finalizing MPICH
You need to ensure that the C++ class destructor is called before
PetscFinalize. For example, like this:
diff --git i/test_domain.cpp w/test_domain.cpp
index 0cfe22f..23545f2 100644
---
MatCreateMAIJ does that (implicitly).
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMAIJ.html
If you want a Kronecker product with a non-identity matrix, this PR may
be of interest.
https://bitbucket.org/petsc/petsc/pull-requests/1334/rmills-mat-kaij/diff
Yuyun Yang
It shouldn't have any effect. This will need to be debugged. There's
no chance I'll have time for at least a week; hopefully one of the other
TS contributors can look sooner.
"Ellen M. Price via petsc-users" writes:
> Quick update: I found that changing TS_EXACTFINALTIME_INTERPOLATE to
>
This wasn't explained well in the commit message. The old code used the
Galerkin procedure on the "Pmat" (preconditioning matrix; which may or
may not be the same as the Amat) and set the result as both Amat and
Pmat of the coarse grid. The new code allows you to specify. If your
Amat and Pmat
What kind of problems are you solving? Running for a Krylov method for
tens of thousands of iterations is very rarely recommended.
Regarding storage, it's significantly more expensive to store the Krylov
basis (even when it's a recurrence) than the current approximation.
Some methods require
What kind of solver are you using and how often do you want to write?
Sal Am via petsc-users writes:
> Is there a function/command line option to save the solution as it is
> solving (and read in the file from where it crashed and keep iterating from
> there perhaps)?
> Had a seg fault and all
We should make the (two line) functionality a command-line feature of
PetscBinaryIO.py. Then a user could do
python -m PetscBinaryIO matrix.mm matrix.petsc
Matthew Knepley via petsc-users writes:
> It definitely should not be there under 'datafiles'. We should put it in an
> example, as
Justin Chang via petsc-users writes:
> So I used -mat_view draw -draw_pause -1 on my medium sized matrix and got
> this output:
>
> [image: 1MPI.png]
>
> So it seems there are lots of off-diagonal terms, and that a decomposition
> of the problem via matload would give a terrible unbalanced
You're probably looking for PETSC_MACHINE_EPSILON.
ztdepyahoo via petsc-users writes:
> Dear sir:
> I output the value of "PETSC_SMALL"; it is 1e-10. But I think it
> should be much smaller than this for a double-precision float.
> Regards
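A quick check of the two constants (a sketch; for double precision this
prints roughly 1e-10 and 2.2e-16 respectively):

  #include <petscsys.h>
  int main(int argc, char **argv)
  {
    PetscInitialize(&argc, &argv, NULL, NULL);
    PetscPrintf(PETSC_COMM_WORLD, "PETSC_SMALL           = %g\n",
                (double)PETSC_SMALL);
    PetscPrintf(PETSC_COMM_WORLD, "PETSC_MACHINE_EPSILON = %g\n",
                (double)PETSC_MACHINE_EPSILON);
    PetscFinalize();
    return 0;
  }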
Can you run in a debugger to get a stack trace? I believe Dolfin
swallows the stack trace to obstruct efforts to remove bugs; probably
paid off by the bug lobby.
aditya kumar via petsc-users writes:
> Hello,
>
> I am using PETSc with FEniCS project libraries to solve a nonlinear
> problem. I
Dolfin will need this PR to work with any PETSc 3.10.
https://bitbucket.org/fenics-project/dolfin/pull-requests/508/jed-petsc-310/diff
It's been chillin' there for a couple months; it appears that much of
the Dolfin development effort has moved to DolfinX and Firedrake.
"Huck, Moritz" writes:
Andrew Parker writes:
> Thanks, so you would suggest a flat vector storing u, v, w (or indeed x, y,
> z) or interleaved and then construct eigen types on the fly?
Interleaved if you want to use Eigen types in the same memory, or if
your code (like most applications) benefits more from memory
Fazlul Huq via petsc-users writes:
> Hello PETSc Developers,
>
> maybe this is a trivial question!
>
> I usually run PETSc code from Home/petsc-3.10.2 directory. Last day I tried
> to run the code from Documents/petsc directory but I can't. As far as I can
> recall, I have installed PETSc in
"Huck, Moritz via petsc-users" writes:
> @Shri
> The system is very stiff, but the stiffness is handled well by ARKIMEX.
>
> I'm using PETSc 3.10. (I cannot use 3.10.3 at the moment due to
> compatibility with a third-party library),
What compatibility problem is this? 3.10.3 should be (binary and
My suggestion is to use PETSc like usual and inside your
residual/Jacobian evaluation, for each cell or batch of cells, create
Eigen objects. For 2D or 3D sizes, it won't matter much whether you make
them share memory with the PETSc Vec -- the Eigen types should mostly
exist in registers.
Andrew
Matthew Knepley via petsc-users writes:
>> 2) I tried all the suggestions mentioned before: setting
>> -pc_gamg_agg_nsmooths 0 -pc_gamg_square_graph 10 did not improve my
>> convergence. Neither did explicitly setting -mg_coarse_pc_type lu or more
>> iterations of richardson/sor.
>>
>
> 1) Can
Justin Chang via petsc-users writes:
> Here's IMHO the simplest explanation of the equations I'm trying to solve:
>
> http://home.eng.iastate.edu/~jdm/ee458_2011/PowerFlowEquations.pdf
>
> Right now we're just trying to solve eq(5) (in section 1), inverting the
> linear Y-bus matrix. Eventually