On Fri, 7 Jul 2017 at 11:31, Florian Lindner wrote:
> Hello,
>
> I'm having some trouble understanding preallocation for MPIAIJ
> matrices, especially when a value lands in the off-diagonal
> vs. the diagonal block.
>
> The small example program is at https://pastebin.com/67dXnGm3
Please send your modified version of ex34. It will be faster to examine the
source and experiment with option choices locally rather than sending
emails back and forth.
Thanks,
Dave
On Thu, 22 Jun 2017 at 03:13, Jason Lefley
wrote:
> Hello,
>
> We are attempting to
You can assemble R^t and then use MatPtAP which supports MPIAIJ
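For concreteness, a minimal sketch (untested; A and R are the MPIAIJ matrices from the question). Since MatPtAP(A, P, ...) computes P^T*A*P, using P = R^T gives R*A*R^T:

```c
Mat            Rt, RARt;
PetscErrorCode ierr;

ierr = MatTranspose(R, MAT_INITIAL_MATRIX, &Rt);CHKERRQ(ierr);
ierr = MatPtAP(A, Rt, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &RARt);CHKERRQ(ierr);
ierr = MatDestroy(&Rt);CHKERRQ(ierr); /* RARt now holds R * A * R^T */
```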
On Wed, 21 Jun 2017 at 15:00, Franck Houssen
wrote:
> How to compute RARt with A and R as distributed (MPI) matrices?
>
> This works with sequential matrices.
> The doc says "currently only implemented for
> Furthermore, maybe anyone has a hint where to start tuning multigrid? So
> far hypre worked better than ML, but I have not experimented much with the
> parameters.
>
>
>
> Thanks again for your help!
>
> Best wishes,
> David
>
>
>
>
> On 06/12/2017 04:52
I've been following the discussion and have a couple of comments:
1/ For the preconditioners that you are using (Schur factorisation LDU, or
upper block triangular DU), the convergence properties (e.g. 1 iterate for
LDU and 2 iterates for DU) come from analysis involving exact inverses of
A_00
On 6 June 2017 at 17:45, Franck Houssen wrote:
> How to VecScatter from global to local vector, and then, VecGather back?
>
> This is a very simple use case: I need to split a global vector in local
> (possibly overlapping) pieces, then I need to modify each local piece
On Mon, 29 May 2017 at 08:39, leejearl wrote:
> Hi, all:
> I have created an IS for every cell in dmplex by the following steps:
> 1. Creating an integer array whose size matches the number of cells.
> 2. Using the routine "ISCreateGeneral" to create a corresponding IS.
>
> Is
On Sun, 28 May 2017 at 09:30, leejearl wrote:
> Hi, Dave:
> Thank you for your kind reply. If I want to store a mixture of
> PetscReal and PetscInt, how can I do it?
What operations do you need to perform with your struct?
>
> Thanks,
> leejearl
>
>
On Sun, 28 May 2017 at 08:31, leejearl wrote:
> Hi, PETSc developer:
>
> I need to create a PetscSection with a struct. The struct is
> defined as follow,
>
> typedef struct
> {
> PetscReal x;
> PetscInt  id;
> } testStruct;
>
> When I run the
On 12 May 2017 at 07:50, Matt Baker wrote:
> Hello,
>
>
> I have a few questions on how to improve performance of my program. I'm
> solving Poisson's equation on a (large) 3D FD grid with Dirichlet boundary
> conditions and multiple right hand sides. I set up the matrix and
On Wed, 3 May 2017 at 09:29, Hoang Giang Bui wrote:
> Dear Jed
>
> If I understood you correctly you suggest to avoid penalty by using the
> Lagrange multiplier for the mortar constraint? In this case it leads to the
> use of discrete Lagrange multiplier space. Do you or
conditioning?
>
> On Tue, Apr 11, 2017 at 11:57 AM, Dave May <dave.mayhe...@gmail.com>
> wrote:
>
>
>
> On Tue, 11 Apr 2017 at 07:28, Kaushik Kulkarni <kaushik...@gmail.com>
> wrote:
>
> A strange behavior I am observing is:
> Problem: I have to solv
On Tue, 11 Apr 2017 at 07:28, Kaushik Kulkarni wrote:
> A strange behavior I am observing is:
> Problem: I have to solve A*x=rhs, and currently I am trying to
> solve for a system where I know the exact solution. I have initialized the
> exact solution in the Vec
You should also not call PetscInitialize() from within your user MatMult
function.
On Fri, 7 Apr 2017 at 13:24, Matthew Knepley wrote:
> On Fri, Apr 7, 2017 at 5:11 AM, Francesco Migliorini <
> francescomigliorin...@gmail.com> wrote:
>
> Hello,
>
> I need to solve a linear
On Thu, 16 Mar 2017 at 07:16, Matthew Knepley wrote:
> Hi Valentin,
>
> Have you seen this example:
> https://bitbucket.org/petsc/petsc/src/1830d94e4628b31f970259df1d58bc250c9af32a/src/ksp/ksp/examples/tutorials/ex2f.F?at=master=file-view-default
>
> Would that be enough to
Any time you modify one of the submats, you need to call assembly begin/end
on that sub matrix AND on the outer matnest object.
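For example (a sketch; Anest is the outer MATNEST object and Asub the modified block — names assumed):

```c
/* after calling MatSetValues() on the block Asub ... */
ierr = MatAssemblyBegin(Asub, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(Asub, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
/* ... and assemble the outer nest object as well */
ierr = MatAssemblyBegin(Anest, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(Anest, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
```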
Thanks,
Dave
On Wed, 8 Feb 2017 at 22:51, Manav Bhatia wrote:
> aha.. that might be it.
>
> Does that need to be called for the global
It looks like the Schur solve is requiring a huge number of iterates to
converge (based on the instances of MatMult).
This is killing the performance.
Are you sure that A11 is a good approximation to S? You might consider
trying the selfp option
I suggest you check the code is valgrind clean.
See the petsc FAQ page for details of how to do this.
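Something along these lines (adapted from the FAQ recipe; the binary name is a placeholder and flags may differ on your system):

```shell
# run a small parallel job under valgrind; disable PETSc's own malloc
mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20 \
  --log-file=valgrind.log.%p ./your_app -malloc off
```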
Thanks,
Dave
On Sun, 8 Jan 2017 at 04:57, Mark Adams wrote:
> This error seems to be coming from the computation of the extreme
> eigenvalues of the matrix for smoothing in
On 6 January 2017 at 22:31, Łukasz Kasza wrote:
>
>
> Dear PETSc Users,
>
> Please consider the following 2 snippets, which do exactly the same thing
> (calculate the sum of two vectors):
> 1.
> VecAXPY(amg_level_x[level],1.0,amg_level_residuals[level]);
>
>
> Manuel
>> >
>> > On Wed, Jan 4, 2017 at 3:23 PM, Matthew Knepley <knep...@gmail.com>
>> wrote:
>> > On Wed, Jan 4, 2017 at 5:21 PM, Manuel Valera <mval...@mail.sdsu.edu>
>> wrote:
>> > I did a PetscBarrier just before calling the vica
Do you now see identical residual histories for a job using 1 rank and 4
ranks?
If not, I am inclined to believe that the IS's you are defining for the
splits in the parallel case are incorrect. The operator created to
approximate the Schur complement with selfp should not depend on the
number
>
> I just tried that and it didn't make a difference, any other suggestions ?
>
> Thanks,
> Manuel
>
> On Wed, Jan 4, 2017 at 2:29 PM, Dave May <dave.mayhe...@gmail.com> wrote:
>
> You need to swap the order of your function calls.
> Call VecSetSizes() before VecS
You need to swap the order of your function calls.
Call VecSetSizes() before VecSetType()
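That is (a sketch; N is a placeholder global size):

```c
Vec            x;
PetscErrorCode ierr;

ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
ierr = VecSetSizes(x, PETSC_DECIDE, N);CHKERRQ(ierr); /* sizes first ...  */
ierr = VecSetType(x, VECMPI);CHKERRQ(ierr);           /* ... then the type */
```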
Thanks,
Dave
On Wed, 4 Jan 2017 at 23:21, Manuel Valera wrote:
Hello all, happy new year,
I'm working on parallelizing my code; it worked and provided some results
when I just
The issue is your fieldsplit_1 solve. You are applying mumps to an
approximate Schur complement - not the true Schur complement. Seemingly the
approximation is dependent on the communicator size.
If you want to see iteration counts of 2, independent of mesh size and
communicator size you need to
On 5 December 2016 at 16:49, Massoud Rezavand wrote:
> Dear Petsc team,
>
> In order to create a parallel matrix and solve by KSP, is it possible to
> directly use MatSetValues() in runtime when each matrix entry is just
> created without MatMPIAIJSetPreallocation()?
>
For collocated variables, I recommend you use the function
DMDAGetReducedDMDA()
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMDAGetReducedDMDA.html
That's the simplest option.
In general, if the 2 dmdas have the same number of points in each
direction, and you let petsc
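A sketch of the call (untested; da is your existing DMDA):

```c
DM da_scalar; /* same grid and parallel layout as da, but 1 dof per point */
ierr = DMDAGetReducedDMDA(da, 1, &da_scalar);CHKERRQ(ierr);
/* ... use da_scalar ... */
ierr = DMDestroy(&da_scalar);CHKERRQ(ierr);
```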
Massoud,
On 28 November 2016 at 20:18, Massoud Rezavand
wrote:
> Hi,
>
> Thanks.
>
> As you know, in SPH method, the calculations are done over the neighboring
> particles (j) that fall inside a support domain defined by a circle over
> the particle of interest (i).
to contains links to example
codes (see bottom of the webpage) where you can see how to use these
functions.
Thanks,
Dave
>
> Again, Thanks a lot!
> Rolf
>
>
>
>
> Am 24.11.2016 um 22:30 schrieb Dave May <dave.mayhe...@gmail.com>:
>
> When you create the
When you create the DMDA, set the number of DOFs (degrees of freedom) per
point to be 2 instead of 1.
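In 2D, for example, that is just the dof argument of DMDACreate2d (a sketch; the grid sizes mx, my are placeholders):

```c
DM da;
ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                    DMDA_STENCIL_STAR, mx, my, PETSC_DECIDE, PETSC_DECIDE,
                    2,    /* dof: 2 instead of 1 */
                    1,    /* stencil width */
                    NULL, NULL, &da);CHKERRQ(ierr);
```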
You must be using an ancient version of petsc given the function names you
quoted. Consider upgrading to 3.7.
Thanks,
Dave
On Thu, 24 Nov 2016 at 20:24, Rolf Kuiper wrote:
>
Damn - the last part of my email is wrong. You want to set the PCType to
"mat". KSPType preonly is fine
On Mon, 14 Nov 2016 at 07:04, Dave May <dave.mayhe...@gmail.com> wrote:
> Looks like you want the contents of your mat shell, specifically the op
> Ax
Looks like you want the contents of your mat shell, specifically the op Ax,
to define the action of the preconditioner.
You need to either create a PCShell (rather than a MatShell), and define
the operation called by PCApply(), or keep your current shell but change
"preonly" to "mat".
On 21 October 2016 at 18:55, Eric Chamberland <
eric.chamberl...@giref.ulaval.ca> wrote:
> Hi,
>
> I am on a new issue with a message:
> [1]PETSC ERROR: - Error Message
> --
> [1]PETSC ERROR: Argument out of range
>
On Sunday, 16 October 2016, 丁老师 wrote:
> Dear professor:
>    I met the following error with PETSc 3.7.3.
>    I declare LocalSize as int, but it doesn't work anymore. It works with
> 3.6.3.
>
This error has nothing to do with the version of petsc. Whether it "worked"
is
On 15 October 2016 at 06:17, Dave May <dave.mayhe...@gmail.com> wrote:
>
>
> On Saturday, 15 October 2016, Barry Smith <bsm...@mcs.anl.gov> wrote:
>
>>
>> Unless the particles are more or less equally distributed over the
>> entire domain a
the domains to load balance
> the particles per domain or
>
> 2) parallelize the particles (some how) instead of just the geometry.
>
> Anyways, there is a preliminary DMSWARM class in the development version
> of PETSc for helping to work with particles provided by Dave May. Yo
ta is produced by solving a constant-coefficient Poisson
> equation with different rhs for 100 steps.
> As you can see, the time of VecAssemblyBegin increase dramatically from
> 32K cores to 65K.
> With 65K cores, it took more time to assemble the rhs than solving the
> equation. Is t
>
>
> Also, you use CG/MG when FMG by itself would probably be faster. Your
> smoother is likely not strong enough, and you
> should use something like V(2,2). There is a lot of tuning that is
> possible, but difficult to automate.
>
Matt's completely correct.
If we could
On 5 October 2016 at 18:49, Matthew Knepley wrote:
> On Wed, Oct 5, 2016 at 11:19 AM, E. Tadeu wrote:
>
>> Matt,
>>
>> Do you know if there is any example of solving Navier Stokes using a
>> staggered approach by using a different DM object such as
oarse mesh of the 2nd communicator should
> I use to improve the performance?
>
> I attached the test code and the petsc options file for the 1024^3 cube
> with 32768 cores.
>
> Thank you.
>
> Regards,
> Frank
>
>
>
>
>
>
> On 09/15/2016 03:35
On Thursday, 22 September 2016, Florian Lindner wrote:
> Hello,
>
> I want to write a MATSBAIJ to a file in binary, so that I can load it
> later using MatLoad.
>
> However, I keep getting the error:
>
> [5]PETSC ERROR:
On 19 September 2016 at 21:05, David Knezevic
wrote:
> When I use MUMPS via PETSc, one issue is that it can sometimes fail with
> MUMPS error -9, which means that MUMPS didn't allocate a big enough
> workspace. This can typically be fixed by increasing MUMPS icntl 14,
Barry
>
> > On Sep 15, 2016, at 5:35 AM, Dave May <dave.mayhe...@gmail.com> wrote:
> >
> > HI all,
> >
> > The only unexpected memory usage I can see is associated with the call
> to MatPtAP().
> > Here is so
e command line.
> I add more comments and also fix an error in the attached code. ( The
> error only effects the accuracy of solution but not the memory usage. )
>
> Thank you.
> Frank
>
>
> On 9/14/2016 9:05 PM, Dave May wrote:
>
>
>
> On Thursday, 15 September 2
s file).
For my testing purposes I'll have to tweak your code as I don't want to
always have to change two options when changing the partition size or mesh
size (as I'll certainly get it wrong every second time, leading to a loss
of my time due to queue wait times)
Thanks,
Dave
>
>
>
On Thursday, 15 September 2016, Dave May <dave.mayhe...@gmail.com> wrote:
>
>
> On Thursday, 15 September 2016, frank <hengj...@uci.edu> wrote:
>
>> Hi,
>>
>> I write a simple code to re-pro
Memory ) killer of the supercomputer terminated the
>> job during "KSPSolve".
>> >>>
>> >>> I attached the output of ksp_view( the third test's output is from
>> ksp_view_pre ), memory_view and also the petsc options.
>> >>>
>> >>> In all
job during "KSPSolve".
>> >>>
>> >>> I attached the output of ksp_view( the third test's output is from
>> ksp_view_pre ), memory_view and also the petsc options.
>> >>>
>> >>> In all the tests, each core can access about 2G memory.
On 5 September 2016 at 10:43, Justin Chang wrote:
> Hi all,
>
> So i used the following command-line options to view the non-zero
> structure of my assembled matrix:
>
> -mat_view draw -draw_pause -1
>
> And I got an image filled with cyan squares and dots. However, if I
>
>
> On Fri, Aug 26, 2016 at 1:35 PM, Dave May <dave.mayhe...@gmail.com> wrote:
>
>>
>>
>> On 26 August 2016 at 14:34, Dave May <dave.mayhe...@gmail.com> wrote:
On 26 August 2016 at 14:34, Dave May <dave.mayhe...@gmail.com> wrote:
>
>
> On 26 August 2016 at 14:14, Steven Dargaville <dargaville.ste...@gmail.com
> > wrote:
>
>> Hi all
>>
>> I'm just wondering if there are any plans in the future for
>
On 26 August 2016 at 14:14, Steven Dargaville
wrote:
> Hi all
>
> I'm just wondering if there are any plans in the future for
> MatGetDiagonalBlock to support shell matrices by registering a
> user-implemented MATOP? MatGetDiagonal supports MATOP, but the block
>
On Monday, 8 August 2016, Neiferd, David John wrote:
> Hello all,
>
> I've been searching through the PETSc documentation to try to find how to
> solve a nonlinear system where the right hand side (b) varies as
On 4 August 2016 at 10:10, Patrick Sanan wrote:
> I have a patch that I got from Dave that he got from Jed which seems
> to be related to this. I'll make a PR.
>
Jed wrote this variant of the VTK viewer so please mark him as a reviewer
for my bug fix.
>
>
> On Wed,
8G which I mentioned in previous email. I re-run the job with 8G memory
> per core on average and there is no "Out Of Memory" error. I would do more
> test to see if there is still some memory issue.
>
Ok. I'd still like to know where the memory was being used since my
estimates
, light weight existing PETSc
example, run on your machine at the same scale.
This would hopefully enable us to correctly evaluate the actual memory
usage required by the solver configuration you are using.
Thanks,
Dave
>
>
> Frank
>
>
>
>
> On 07/08/2016 10:38 PM, Dave
On Saturday, 9 July 2016, frank wrote:
> Hi Barry and Dave,
>
> Thank both of you for the advice.
>
> @Barry
> I made a mistake in the file names in last email. I attached the correct
> files this time.
> For all the three
Hi Frank,
On 6 July 2016 at 00:23, frank wrote:
> Hi,
>
> I am using the CG ksp solver and Multigrid preconditioner to solve a
> linear system in parallel.
> I chose to use the 'Telescope' as the preconditioner on the coarse mesh
> for its good performance.
> The petsc
On Thursday, 30 June 2016, Hassan Raiesi
wrote:
> Hello,
>
>
>
> We are using PETSC in our CFD code, and noticed that using
> “MatCreateMPIAIJWithSplitArrays” is almost 60% faster for large problem
> size (i.e DOF > 725M, using GAMG each time-step only takes
On Wednesday, 29 June 2016, ehsan sadrfaridpour wrote:
> I faced the below error during compiling my code for using
> MatGetSubMatrices.
>
> error: cannot convert ‘IS {aka _p_IS*}’ to ‘_p_IS* const*’ for argument
>> ‘3’ to ‘PetscErrorCode MatGetSubMatrices(Mat, PetscInt,
On 3 June 2016 at 11:37, Michael Becker <
michael.bec...@physik.uni-giessen.de> wrote:
> Dear all,
>
> I have a few questions regarding possible performance enhancements for the
> PETSc solver I included in my project.
>
> It's a particle-in-cell plasma simulation written in C++, where Poisson's
he
DMDA. Others may disagree.
I hope I finally helped answered your question.
Cheers,
Dave
>
> Ed
>
>
> On Fri, May 27, 2016 at 2:47 PM, Dave May <dave.mayhe...@gmail.com> wrote:
>
>>
>>
>> On 27 May 2016 at 21:24, Ed Bueler <elbue...@alaska.
On 27 May 2016 at 21:24, Ed Bueler wrote:
> Dave --
>
> Perhaps you should re-read my questions.
>
Actually - maybe we got our wires crossed from the beginning.
I'm going back to the original email as I missed something.
>> """
>> The recommended approach for
On 27 May 2016 at 20:34, Ed Bueler wrote:
> Dear PETSc --
>
> This is an "am I using it correctly" question. Probably the API has the
> current design because of something I am missing.
>
> First, a quote from the PETSc manual which I fully understand; it works
> great and
Matt beat me to the punch... :D
Anyway, here is my more detailed answer.
> Thanks! Somehow I missed DM{Get,Create}LocalVector(). BTW what is the
> difference between the Get and Create versions? It is not obvious from the
> documentation.
>
The DMDA contains a pool of vectors (both local and
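In short (a sketch; dm is your DMDA): Get/Restore borrows a vector from that pool and must return it, while Create hands you a fresh vector that you own and must destroy:

```c
Vec l, l2;

ierr = DMGetLocalVector(dm, &l);CHKERRQ(ierr);     /* borrowed from the pool */
/* ... use l ... */
ierr = DMRestoreLocalVector(dm, &l);CHKERRQ(ierr); /* return it to the pool  */

ierr = DMCreateLocalVector(dm, &l2);CHKERRQ(ierr); /* you own this one */
ierr = VecDestroy(&l2);CHKERRQ(ierr);
```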
n doubt, just email the petsc-users list :D
Thanks,
Dave
>
>
> Regards,
>
> Federico
>
>
>
>
> Federico Miorelli
>
>
>
> Senior R Geophysicist
>
> *Subsurface Imaging - General Geophysics **Italy*
>
>
>
On 12 May 2016 at 11:36, Miorelli, Federico
wrote:
> In one of my subroutines I'm calling DMDAGetAO to get the application
> ordering from a DM structure.
>
> After using it I was calling AODestroy.
>
>
>
> Everything worked fine until I called the subroutine for the
On 12 May 2016 at 10:42, Sean Dettrick wrote:
> Hi,
>
> When discussing DMDAVecGetArrayDOF etc in section 2.4.4, the PETSc 3.7
> manual says "The array is accessed using the usual global indexing on the
> entire grid, but the user may only refer to the local and
On 30 April 2016 at 16:04, Ilyas YILMAZ wrote:
> Hello,
>
> The code segment I wrote based on "src/dm/da/examples/tutorials/ex2.c"
> crashes when destroying things / freeing memory as given below.
> I can't figure out what I'm missing? Any comments are welcome. (Petsc
>
> It is also inconsistent with some of the other DMShellSetXXX() as some
> type check the type (e.g. DMShellSetCreateMatrix), whilst some do, e.g.
> DMShellSetCoarsen().
>
>
Oops - I meant some setters DO check types and some DON'T.
DMShellSetCreateMatrix() --> doesn't check the type
On 27 April 2016 at 22:49, Jed Brown <j...@jedbrown.org> wrote:
> Dave May <dave.mayhe...@gmail.com> writes:
> > This always bugged me.
> > I prefer to access the pointer as at least it's clear what I am doing and
> > when reading the code later, I am not requir
On 26 April 2016 at 23:58, Jed Brown <j...@jedbrown.org> wrote:
> Dave May <dave.mayhe...@gmail.com> writes:
> > You are always free to over-ride the method
> > dm->ops->creatematrix
> > with your own custom code to create
> > and preallocate the mat
On 26 April 2016 at 16:50, Gautam Bisht wrote:
> I want to follow up on this old thread. If a user knows the exact fill
> pattern of the off-diagonal block (i.e. d_nz+o_nz or d_nnz+o_nnz ), can
> one still not preallocate memory for the off-diagonal matrix when using
>
On 12 April 2016 at 15:39, Aulisa, Eugenio wrote:
> Hi,
>
> I am trying to understand better the meaning of
> the output obtained using the option
>
> -ksp_monitor_true_residual
>
> For this particular ksp gmres solver I used
>
> KSPSetTolerances( ksp, 1.0e-4, 1.0e-20,
for the vector.
> It is not clear to me when I can simply use VecCreate (as I've always done
> so far), and when to use MatCreateVecs...
>
You should always use MatCreateVecs(). The implementation of Mat will give
you back a consistent vector.
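That is (sketch):

```c
Vec x, b;
/* right vector (what A is applied to) and left vector (where A*x lands) */
ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
```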
Thanks,
Dave
>
>
>
> On Wed
On 6 April 2016 at 11:08, FRANCAVILLA MATTEO ALESSANDRO
wrote:
> Hi,
>
> I'm trying to set a linear problem with a 2x2 block matrix (say A=[A11
> A12; A21 A22] in matlab notation). Obviously I could generate a single
> matrix, but I would like to keep it as a 2x2 block matrix
On 2 April 2016 at 11:18, Rongliang Chen wrote:
> Hi Shri,
>
> Thanks for your reply.
>
> Do you mean that I need to change the VecGetArrary() in
> /home/rlchen/soft/petsc-3.6.3/src/dm/interface/dm.c to VecGetArrayRead()?
>
No - you should change it in your function
On 15 March 2016 at 04:46, Matthew Knepley wrote:
> On Mon, Mar 14, 2016 at 10:05 PM, Steena Monteiro
> wrote:
>
>> Hello,
>>
>> I am having difficulty getting MatSetSize to work prior to using MatMult.
>>
>> For matrix A with rows=cols=1,139,905 and
> Other suggestions on how to best integrate staggered finite differences
> within the current PETSc framework are ofcourse also highly welcome.
> Our current thinking was to pack it into a DMSHELL (which has the problem
> of not having a restriction interface).
>
>
Using DMShell is the cleanest
On 11 March 2016 at 18:11, anton wrote:
> Hi team,
>
> I'm implementing staggered grid in a PETSc-canonical way, trying to build
> a custom DM object, attach it to SNES, that should later transfered it
> further to KSP and PC.
>
> Yet, the Galerking coarsening for staggered
On 22 February 2016 at 23:16, Timothée Nicolas
wrote:
> Hi all,
>
> It sounds it should be obvious but I can't find it somehow. I would like
> to use weighted jacobi (more precisely point block jacobi) as a smoother
> for multigrid, but I don't find the options to set
On 12 February 2016 at 19:16, Bhalla, Amneet Pal S
wrote:
> Hi Folks,
>
> I want to extract the CSR format from PETSc matrices and ship it to CUDA.
> Is there an easy way of doing this?
>
Yep, see these web pages
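The gist is something like this (untested sketch; A is the assembled MPIAIJ matrix, and this only exposes the CSR of the local diagonal block):

```c
Mat             Ad, Ao;
const PetscInt *colmap, *ia, *ja;
PetscInt        n;
PetscScalar    *vals;
PetscBool       done;

ierr = MatMPIAIJGetSeqAIJ(A, &Ad, &Ao, &colmap);CHKERRQ(ierr);
ierr = MatGetRowIJ(Ad, 0, PETSC_FALSE, PETSC_FALSE, &n, &ia, &ja, &done);CHKERRQ(ierr);
ierr = MatSeqAIJGetArray(Ad, &vals);CHKERRQ(ierr);
/* ... copy ia/ja/vals to the device ... */
ierr = MatSeqAIJRestoreArray(Ad, &vals);CHKERRQ(ierr);
ierr = MatRestoreRowIJ(Ad, 0, PETSC_FALSE, PETSC_FALSE, &n, &ia, &ja, &done);CHKERRQ(ierr);
```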
> [0]PETSC ERROR: MatSetBlockSize() line 6686 in
> /home/sang/petsc/petsc-3.4.5/src/mat/interface/matrix.c
> [0]PETSC ERROR: PetscErrorCode
> sasMatVecPetsc::DMCreateMatrix_DA_3d_MPIAIJ_pvs(DM, sasSmesh*,
> sasVector&, sasVector&, sasVector&a
I think he wants the source location so that he can copy an implementation
and "tweak" it slightly.
The location is here
${PETSC_DIR}/src/dm/impls/da/fdda.c
dmay@nikkan:~/software/petsc-3.6.0/src $ grep -r DMCreateMatrix_DA_3d_MPIAIJ *
On 11 February 2016 at 07:05, Michele Rosso wrote:
> I tried setting -mat_superlu_dist_replacetinypivot true: it does help to
> advance the run past the previous "critical" point but eventually it stops
> later with the same error.
> I forgot to mention my system is singular: I
If you don't specify a preconditioner via -pc_type XXX, the default being
used is BJacobi-ILU.
This preconditioner will yield different results on different numbers of
MPI processes, and will yield different results for a fixed number of
MPI processes but with a different matrix
On 8 February 2016 at 12:31, Jacek Miloszewski
wrote:
> Dear PETSc users,
>
> I use PETSc to assemble a square matrix (in the attached example it is n =
> 4356) which has around 12% of non-zero entries. I timed my code using
> various number of process (data in
On Sunday, 7 February 2016, Kaushik Kulkarni wrote:
> Hello,
> I am a beginner at PETSc, so please excuse for such a trivial doubt. I am
> writing a program to learn about PETSc vectors. So in the process I thought
> to write a small program to learn about initializing and
On Thursday, 28 January 2016, Matthew Knepley wrote:
> On Thu, Jan 28, 2016 at 11:36 AM, Xiangdong wrote:
>
>> What functions/tools can I use for dynamic migration in DMPlex framework?
>>
>
> In this
Try this
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatAXPY.html
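i.e. Y = Y + a*X (a sketch; Y, X and the scalar a are assumed):

```c
/* use SAME_NONZERO_PATTERN instead if the patterns match (it's faster) */
ierr = MatAXPY(Y, a, X, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
```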
On 22 January 2016 at 00:11, wen zhao wrote:
> Hello,
>
> I want to add one matrix to another, but I haven't found a function that
> can do this operation. Does there exist a kind of operation that can do
On 14 January 2016 at 14:24, Matthew Knepley wrote:
> On Wed, Jan 13, 2016 at 11:12 PM, Bhalla, Amneet Pal S <
> amne...@live.unc.edu> wrote:
>
>>
>>
>> On Jan 13, 2016, at 6:22 PM, Matthew Knepley wrote:
>>
>> Can you mail us a -log_summary for a rough
On 12 January 2016 at 14:14, Gideon Simpson
wrote:
> That seems to to allow for me to cook up a convergence test in terms of
> the 2 norm.
>
While you are only provided the 2 norm of F, you are also given access to
the SNES object. Thus inside your user convergence
or many stopping conditions
and are computed by the snes methods. As such, they are readily available,
and for efficiency and convenience they are provided to the user (e.g. to
avoid you having to re-compute norms).
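A sketch of such a custom test (untested; the tolerance is a placeholder):

```c
PetscErrorCode MyConvergenceTest(SNES snes, PetscInt it, PetscReal xnorm,
                                 PetscReal snorm, PetscReal fnorm,
                                 SNESConvergedReason *reason, void *ctx)
{
  Vec            F;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = SNESGetFunction(snes, &F, NULL, NULL);CHKERRQ(ierr);
  /* inspect F (or anything else hanging off the SNES) and decide */
  *reason = (fnorm < 1.0e-8) ? SNES_CONVERGED_FNORM_ABS
                             : SNES_CONVERGED_ITERATING;
  PetscFunctionReturn(0);
}

/* registration:
  ierr = SNESSetConvergenceTest(snes, MyConvergenceTest, NULL, NULL);CHKERRQ(ierr);
*/
```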
Cheers,
Dave
>
> -gideon
>
> On Jan 12, 2016, at 8:24 AM, Dave May <dav
ace the rule for
SNES_CONVERGED_FNORM_RELATIVE
with your custom scaled stopping condition.
>
> On Jan 12, 2016, at 8:37 AM, Dave May <dave.mayhe...@gmail.com> wrote:
>
>
>
> On 12 January 2016 at 14:33, Gideon Simpson <gideon.simp...@gmail.com>
> wrote:
>
The manpage
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetDiagonalBlock.html
indicates the reference counter on the returned matrix (a) isn't
incremented.
This statement would imply that in the absence of calling
PetscObjectReference() yourself, you should not call
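Concretely (a sketch; I'm assuming the truncated sentence above ends with MatDestroy() on the returned matrix):

```c
Mat Ad;
ierr = MatGetDiagonalBlock(A, &Ad);CHKERRQ(ierr);
/* use Ad; only destroy it if you first take your own reference: */
ierr = PetscObjectReference((PetscObject)Ad);CHKERRQ(ierr);
/* ... later ... */
ierr = MatDestroy(&Ad);CHKERRQ(ierr); /* balances the reference taken above */
```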
On 17 December 2015 at 08:06, Jose A. Abell M. wrote:
> Hello dear PETSc users,
>
> This is a problem that pops up often, from what I see, in the mailing
> list. My program takes a long time assembling the matrix.
>
> What I know:
>
>
>- Matrix Size is (MatMPIAIJ)
>>
>>
>> Note here you would need --trace-children=yes for valgrind.
>>
>> Matt
>>
>>
>>> It seems the second is the correct way to proceed right ? This gives
>>> very different behaviour for valgrind.
>>>
>>> Timoth
be inserted
live locally (and don't need to be scattered to another rank), it should
definitely not take hours.
>
> Regards,
> Jose
>
> --
>
> José Abell
> *PhD Candidate*
> Computational Geomechanics Group
> Dept. of Civil and Environmental Engineering
> UC Davis
> ww
One suggestion is that you have some uninitialized variables in your pcshell.
Despite your arch being called "debug", your configure options indicate you
have turned debugging off.
The C standard doesn't prescribe how uninitialized variables should be
treated - the behaviour is labelled as undefined. As a result,
configure with
> --with-debugging=no, which did not change anything.
>
> Thx
>
> Timothee
>
>
> 2015-12-14 17:04 GMT+09:00 Dave May <dave.mayhe...@gmail.com>:
>
>> One suggestion is you have some uninitialized variables in your pcshell.
>> Despite your