Hi, Philip,
Could you add -log_view and see which functions are used in the solve? Since
it is CPU-only, perhaps by comparing the -log_view output of different runs we
can easily see which functions slowed down.
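For example (a sketch; the executable name and rank counts are placeholders),
run "mpiexec -n 8 ./your_app -log_view > run8.log" and the same with 16 ranks,
then compare the time and flop columns of events such as KSPSolve, MatMult and
VecScatterBegin/End between the two logs.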
--Junchao Zhang
From: Fackler, Philip
Sent: Tuesday, January 17,
MPI rank distribution (e.g., 8 ranks per node or 16 ranks per node) is usually
managed by workload managers like Slurm or PBS through your job scripts, which
is out of PETSc's control.
From: Amin Sadeghi
Date: Wednesday, March 25, 2020 at 4:40 PM
To: Junchao Zhang
Cc: Mark Adams , PETSc users
--Junchao Zhang
On Mon, Jan 27, 2020 at 10:09 AM Felix Huber
<st107...@stud.uni-stuttgart.de> wrote:
Thank you all for your reply!
> Are you using a KSP/PC configuration which should weak scale?
Yes, the system is solved with KSPSolve. There is no preconditioner yet,
but I fixed the
Hello, Anthony
I tried petsc-3.8.4 + icc/gcc + Intel MPI 2019 update 5 + optimized/debug
builds, and ran with 1024 ranks, but I could not reproduce the error. Maybe you
can try these:
* Use the latest PETSc + your test example, run with AND without
-vecscatter_type mpi1, to see if they can
I submitted a job and I am waiting for the result.
--Junchao Zhang
On Tue, Jan 21, 2020 at 3:03 AM Dave May
<dave.mayhe...@gmail.com> wrote:
Hi Anthony,
On Tue, 21 Jan 2020 at 08:25, Anthony Jourdon
<jourdon_anth...@hotmail.fr> wrote:
Hello,
I made a test to try to reproduce
It seems the problem is triggered by DMSetUp. You can write a small test
creating the DMDA with the same size as your code, to see if you can reproduce
the problem. If yes, it would be much easier for us to debug it.
--Junchao Zhang
On Thu, Jan 16, 2020 at 7:38 AM Anthony Jourdon
Do you have a test example?
--Junchao Zhang
On Tue, Jan 14, 2020 at 4:44 AM Y. Shidi
<ys...@cam.ac.uk> wrote:
Dear developers,
I have a 2x2 nested matrix and the corresponding nested vector.
When I run the code with field splitting, it gets the following
errors:
[0]PETSC ERROR:
Fixed in https://gitlab.com/petsc/petsc/merge_requests/2262
--Junchao Zhang
On Fri, Nov 1, 2019 at 6:51 PM Sajid Ali
<sajidsyed2...@u.northwestern.edu>
wrote:
Hi Junchao/Barry,
It doesn't really matter what the h5 file contains, so I'm attaching a lightly
edited script of
I know nothing about Vec FFTW, but if you can provide hdf5 files in your test,
I will see if I can reproduce it.
--Junchao Zhang
On Fri, Nov 1, 2019 at 2:08 PM Sajid Ali via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi PETSc-developers,
I'm unable to debug a crash with VecDestroy
Usually due to uninitialized variables. You can try valgrind. Read tutorial
from page 3 of https://www.mcs.anl.gov/petsc/petsc-20/tutorial/PETSc1.pdf
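A typical invocation (a sketch; the program name is a placeholder) is
mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20
--log-file=valgrind.log.%p ./your_app, then look for "uninitialised value"
reports in the per-process log files.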
--Junchao Zhang
On Fri, Oct 18, 2019 at 6:23 AM Shidi Yan via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Dear developers,
I am using
Hello, David,
It took a longer time than I expected to add the CUDA-aware MPI feature to
PETSc. It is now in PETSc-3.12, released last week. I have a little fix after
that, so you'd better use PETSc master. Use the PETSc option -use_gpu_aware_mpi
to enable it. On Summit, you also need jsrun
Does it hang with 2 or 4 processes? Which PETSc version do you use (using the
latest is easier for us to debug)? Did you configure PETSc with
--with-debugging=yes COPTFLAGS="-O0 -g" CXXOPTFLAGS="-O0 -g"?
After attaching gdb to one process, you can use bt to see its stack trace.
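A minimal sketch (the PID is a placeholder): find one stuck process with ps,
attach with gdb -p <pid>, type bt at the (gdb) prompt to print the call stack,
and detach afterwards so the run can continue.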
--Junchao
With VecGhostUpdateBegin(v, INSERT_VALUES, SCATTER_REVERSE), the owner will get
updated by ghost values. So in your case 1, proc0 gets either value1 or value2
from proc1/2; in case 2, proc0 gets either value0 or value2 from proc1/2.
In short, you cannot achieve your goal with INSERT_VALUES.
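A minimal sketch of the call sequence in question (v is assumed to be a
ghosted vector created elsewhere, e.g. with VecCreateGhost):

  /* each process has written into its ghost entries via the local form */
  VecGhostUpdateBegin(v, INSERT_VALUES, SCATTER_REVERSE);
  VecGhostUpdateEnd(v, INSERT_VALUES, SCATTER_REVERSE);
  /* with INSERT_VALUES the owner keeps whichever ghost copy arrives last,
     so the result is nondeterministic when several processes insert */

With ADD_VALUES the contributions would instead be summed, which is
deterministic, but only correct when the updates are additive.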
On Wed, Sep 25, 2019 at 9:11 AM Aulisa, Eugenio via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi,
I have a vector with ghost nodes where each process may or may not change the
value of a specific ghost node (using INSERT_VALUES).
At the end I would like, for each process that sees a
When processes get stuck, you can attach gdb to one process and backtrace its
call stack to see what it is doing, so we can have a better understanding.
--Junchao Zhang
On Fri, Sep 13, 2019 at 11:31 AM José Lorenzo via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hello,
I am solving a
I will definitely do that. Thanks.
--Junchao Zhang
On Thu, Aug 22, 2019 at 11:34 AM David Gutzwiller
<david.gutzwil...@gmail.com> wrote:
Hello Junchao,
Spectacular news!
I have our production code running on Summit (Power9 + Nvidia V100) and on
local x86 workstations, and I can definitely
This feature is under active development. I hope I can make it usable in a
couple of weeks. Thanks.
--Junchao Zhang
On Wed, Aug 21, 2019 at 3:21 PM David Gutzwiller via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hello,
I'm currently using PETSc for the GPU acceleration of simple
Did you use the same number of MPI ranks and the same build options on your PC
and on the cluster? If not, you can try to align the options on your PC with
those on your cluster to see if you can reproduce the error on your PC. You can
also try valgrind to see if there are memory errors like use of
You need to test on your personal computer with multiple MPI processes (e.g.,
mpirun -n 2 ...) before moving to big machines. You may also need to configure
PETSc with --with-debugging=1 --COPTFLAGS="-O0 -g" etc. to ease debugging.
--Junchao Zhang
On Sat, Jul 20, 2019 at 11:03 AM Yuyun Yang via
On Sat, Jul 20, 2019 at 5:47 AM José Lorenzo via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hello,
I am not sure I understand the function VecGhostRestoreLocalForm. If I proceed
as stated in the manual,
I know the time only from my function. Anyway, in
> > every case, the times between std::chronos and the PETSc log match.
> >
> > (The large times are in part "4b- Building offdiagonal part" or "Event
> > Stage 5: Offdiag").
> >
> > On Fri.
> 22:20, Smith, Barry F.
> (<bsm...@mcs.anl.gov>) wrote:
>
> Note that this is a one time cost if the nonzero structure of the matrix
> stays the same. It will not happen in future MatAssemblies.
>
> > On Jun 20, 2019, at 3:16 PM, Zhang, Junchao via petsc-users
On Thu, Jun 27, 2019 at 4:50 PM Adrian Croucher
<a.crouc...@auckland.ac.nz> wrote:
hi
On 28/06/19 3:14 AM, Zhang, Junchao wrote:
> You can dump relevant SFs to make sure their graph is correct.
Yes, I'm doing that, and the graphs don't look correct.
Check how the graph is created and
On Wed, Jun 26, 2019 at 11:12 PM Adrian Croucher
<a.crouc...@auckland.ac.nz> wrote:
hi
On 27/06/19 4:07 PM, Zhang, Junchao wrote:
Adrian, I am working on SF but know nothing about DMPlexDistributeField. Do
you think SF creation or communication is wrong? If yes, I'd like to know the
On Mon, Jun 24, 2019 at 6:23 PM Adrian Croucher via petsc-users
<petsc-users@mcs.anl.gov> wrote:
hi
Thanks Matt for the explanation about this.
I have been trying a test which does the following:
1) read in DMPlex from file
2) distribute it, with overlap = 1, using
Those messages were used to build the MatMult communication pattern for the
matrix. They were not the matrix-entry passing you imagined, but they did
happen in MatAssemblyEnd. If you want to make sure processors do not set
remote entries, you can use
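The sentence above is truncated in the archive; the option it most likely
refers to (my assumption) is MAT_NO_OFF_PROC_ENTRIES, e.g.:

  /* promise that no process sets entries owned by another process,
     so assembly can skip off-process communication (assumed option) */
  MatSetOption(A, MAT_NO_OFF_PROC_ENTRIES, PETSC_TRUE);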
Sanjay,
You have one more reason to use VecScatter, which is heavily used and
well-tested.
--Junchao Zhang
On Wed, Jun 5, 2019 at 5:47 PM Sanjay Govindjee
<s...@berkeley.edu> wrote:
I found the bug (naturally in my own code). When I made the MPI_Wait( )
changes, I missed one
OK, I see. I mistakenly read PetscMemoryGetCurrentUsage as
PetscMallocGetCurrentUsage. You should also do PetscMallocGetCurrentUsage(),
so that we know whether the increased memory is allocated by PETSc.
On Wed, Jun 5, 2019, 9:58 AM Sanjay GOVINDJEE
<s...@berkeley.edu> wrote:
On Mon, Jun 3, 2019 at 5:23 PM Stefano Zampini
<stefano.zamp...@gmail.com> wrote:
On Jun 4, 2019, at 1:17 AM, Zhang, Junchao via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Sanjay & Barry,
Sorry, I made a mistake when I said I could reproduce Sanjay's exper
Sanjay & Barry,
Sorry, I made a mistake when I said I could reproduce Sanjay's experiments.
I found that 1) to correctly use PetscMallocGetCurrentUsage() when PETSc is
configured without debugging, I have to add -malloc to run the program; 2) I
have to instrument the code outside of KSPSolve().
On Sat, Jun 1, 2019 at 3:21 AM Sanjay Govindjee
<s...@berkeley.edu> wrote:
Barry,
If you look at the graphs I generated (on my Mac), you will see that
OpenMPI and MPICH have very different values (along with the fact that
MPICH does not seem to adhere
to the standard (for releasing
On Fri, May 31, 2019 at 3:48 PM Sanjay Govindjee via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Thanks Stefano.
Reading the manual pages a bit more carefully,
I think I can see what I should be doing. Which should be roughly to
1. Set up target Seq vectors on PETSC_COMM_SELF
2. Use
Sanjay,
I tried petsc with MPICH and OpenMPI on my Macbook. I inserted
PetscMemoryGetCurrentUsage/PetscMallocGetCurrentUsage at the beginning and end
of KSPSolve and then computed the delta and summed over processes. Then I
tested with
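A sketch of that instrumentation (the MPI reduction onto rank 0 is my
addition; error checking omitted):

  PetscLogDouble mem0, mem1, delta, total;
  PetscMemoryGetCurrentUsage(&mem0);
  KSPSolve(ksp, b, x);
  PetscMemoryGetCurrentUsage(&mem1);
  delta = mem1 - mem0;
  /* sum the per-process memory growth over all ranks */
  MPI_Reduce(&delta, &total, 1, MPIU_PETSCLOGDOUBLE, MPI_SUM, 0,
             PETSC_COMM_WORLD);

PetscMallocGetCurrentUsage() can be sampled the same way to separate
PETSc-allocated memory from the whole process footprint.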
Hi, Sanjay,
Could you send your modified data exchange code (psetb.F) with MPI_Waitall?
See other inlined comments below. Thanks.
On Thu, May 30, 2019 at 1:49 PM Sanjay Govindjee via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Lawrence,
Thanks for taking a look! This is what I had
Yes, see MatGetRow
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetRow.html
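A minimal usage sketch (A is an assembled matrix; row is a global row index
owned by this process):

  PetscInt          ncols;
  const PetscInt    *cols;
  const PetscScalar *vals;
  MatGetRow(A, row, &ncols, &cols, &vals);
  /* read-only access to the row's column indices and values here */
  MatRestoreRow(A, row, &ncols, &cols, &vals);

Note a process can only query rows it owns, and each MatGetRow must be paired
with MatRestoreRow.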
--Junchao Zhang
On Wed, May 29, 2019 at 2:28 PM Manav Bhatia via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi,
Once an MPI-AIJ matrix has been assembled, is there a method to get the
Also works with PathScale EKOPath Compiler Suite installed on MCS machines.
$ pathcc -c check-pic.c -fPIC
$ pathcc -c check-pic.c
check-pic.c:2:2: error: "no-PIC"
#error "no-PIC"
^
1 error generated.
--Junchao Zhang
On Tue, May 28, 2019 at 1:54 PM Smith, Barry F. via petsc-users
The index sets provide the possible i, j in the scatter "y[j] = x[i]". Each
process provides a portion of the i and j of the whole scatter. The only
requirement of VecScatterCreate is that on each process the local sizes of ix
and iy must be equal (a process can provide empty ix and iy). A process's i
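A minimal sketch (x, y and the index arrays idx_from/idx_to of equal local
length n are assumed to exist):

  IS         ix, iy;
  VecScatter sct;
  ISCreateGeneral(PETSC_COMM_WORLD, n, idx_from, PETSC_COPY_VALUES, &ix);
  ISCreateGeneral(PETSC_COMM_WORLD, n, idx_to,   PETSC_COPY_VALUES, &iy);
  VecScatterCreate(x, ix, y, iy, &sct);  /* y[iy[k]] = x[ix[k]] */
  VecScatterBegin(sct, x, y, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(sct, x, y, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterDestroy(&sct);
  ISDestroy(&ix);
  ISDestroy(&iy);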
Jean-Christophe,
I added a petsc example at
https://bitbucket.org/petsc/petsc/pull-requests/1652/add-an-example-to-show-transfer-vectors/diff#chg-src/vec/vscat/examples/ex9.c
It shows how to transfer vectors from a parent communicator to vectors on a
child communicator. It also shows how to
https://www.mcs.anl.gov/petsc/documentation/changes/37.html has
PetscMemoryShowUsage() and -memory_info changed to PetscMemoryView() and
-memory_view
--Junchao Zhang
On Tue, May 7, 2019 at 6:56 PM Sanjay Govindjee via petsc-users
<petsc-users@mcs.anl.gov> wrote:
I was trying to clean
On Tue, Apr 30, 2019 at 11:42 AM Sajid Ali via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi PETSc Developers,
I see that in the examples for ISCreateGeneral, the index sets are created by
copying values from int arrays (which were created by PetscMalloc1 which is not
collective).
How many MPI ranks do you use? The following line is suspicious. I guess you
do not want a vector of global length 1.
66 VecSetSizes(b,PETSC_DECIDE,1);
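For a parallel right-hand side you would normally pass the global problem
size (N below is an assumed variable) instead of 1, e.g.:

  VecSetSizes(b, PETSC_DECIDE, N);  /* let PETSc split N rows among ranks */

With a global length of 1, all ranks but one own zero entries, which is rarely
what one wants.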
--Junchao Zhang
On Wed, Apr 24, 2019 at 4:14 PM Karl Lin via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi, there
I have been
The error message has
[0]PETSC ERROR: New nonzero at (61,124) caused a malloc
[0]PETSC ERROR: New nonzero at (124,186) caused a malloc
You can check your code to see if you allocated spots for these nonzeros.
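A sketch of the usual fix, preallocating enough room per row before assembly
(d_nz and o_nz are placeholders you must count from your stencil/connectivity):

  /* d_nz/o_nz: max nonzeros per row in the diagonal/off-diagonal block */
  MatSeqAIJSetPreallocation(A, d_nz, NULL);
  MatMPIAIJSetPreallocation(A, d_nz, NULL, o_nz, NULL);

Exact per-row counts can be passed through the array arguments instead of
NULL.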
--Junchao Zhang
On Tue, Apr 23, 2019 at 8:57 PM Maahi Talukder via petsc-users
I updated the branch and made a PR. I tried to do MPI_SUM on MPI_CHAR. We do
not have UnpackAdd on this type (which is correct). But unfortunately, MPICH's
MPI_Reduce_local did not report an error (it should have), so we did not
generate an error either.
--Junchao Zhang
On Thu, Apr 4, 2019 at 10:37 AM
On Wed, Apr 3, 2019, 10:29 PM Fande Kong
<fdkong...@gmail.com> wrote:
Thanks for the reply. It is not necessary for me to use MPI_SUM. I think the
better choice is MPIU_REPLACE. Doesn’t MPIU_REPLACE work for any mpi_datatype?
Yes.
Fande
On Apr 3, 2019, at 9:15 PM, Zhang, Junchao
On Wed, Apr 3, 2019 at 3:41 AM Lisandro Dalcin via petsc-users
<petsc-users@mcs.anl.gov> wrote:
IIRC, MPI_CHAR is for ASCII text data. Also, remember that in C the signedness
of plain `char` is implementation (or platform?) dependent.
I'm not sure MPI_Reduce() is supposed to / should
...often happens thanks to locality, then the memory copy time is counted in
VecScatter. You can analyze your code's communication pattern to see if it is
the case.
--Junchao Zhang
On Wed, Mar 20, 2019 at 4:44 PM Zhang, Junchao via petsc-users
<petsc-users@mcs.anl.gov> wrote:
On Thu, Mar 21, 2019 at 1:57 PM Derek Gaston via petsc-users
<petsc-users@mcs.anl.gov> wrote:
It sounds like you already tracked this down... but for completeness here is
what track-origins gives:
==262923== Conditional jump or move depends on uninitialised value(s)
==262923==at
https://bitbucket.org/petsc/petsc/commits/c3caad8634d376283f7053f3b388606b45b3122c
Maybe this will fix your problem too?
Stefano
On Thu, Mar 21, 2019, 04:21 Zhang, Junchao via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi, Derek,
Try to apply this tiny (but dirty) patch on your version of PETSc to disable
the VecScatterMemcpyPlan optimization to see if it helps.
Thanks.
--Junchao Zhang
On Wed, Mar 20, 2019 at 6:33 PM Junchao Zhang
<jczh...@mcs.anl.gov> wrote:
Did you see the warning with small scale runs? Is it possible to provide a
test code?
You mentioned "changing PETSc now would be pretty painful". Is it because it
will affect your performance (but not your code)? If yes, could you try PETSc
master and run your code with or without
On Wed, Mar 20, 2019 at 4:18 PM Manuel Valera
<mvaler...@sdsu.edu> wrote:
Thanks for your answer, so for example I have a log for 200 cores across 10
nodes that reads:
See the "Mess AvgLen Reduct" number in each log stage. Mess is the total
number of messages sent in an event over all processes. AvgLen is average
message len. Reduct is the number of global reduction.
Each event like VecScatterBegin/End has a maximal execution time over all
processes, and
Manuel,
Could you try to add this line
sbaij->free_imax_ilen = PETSC_TRUE;
after line 2431 in
/opt/PETSc_library/petsc-3.10.4/src/mat/impls/sbaij/seq/sbaij.c
PS: Matt, this bug looks unrelated to my VecRestoreArrayRead_Nest fix.
--Junchao Zhang
On Wed, Mar 13, 2019 at 9:05 AM Matthew
Maybe you should delete your PETSC_ARCH directory and recompile? I tested
my branch. It should not fail that easily :)
--Junchao Zhang
On Tue, Mar 12, 2019 at 8:20 PM Manuel Valera
<mvaler...@sdsu.edu> wrote:
Hi Mr Zhang, thanks for your reply,
I just checked your branch out,
Manuel,
I was working on a branch to revert the VecScatterCreate to
VecScatterCreateWithData change. The change broke the PETSc API and I think we
do not need it. I had planned to do a pull request after another PR of mine is
merged. But since it already affects you, you can try this branch now, which is
Hi, Manuel,
I recently fixed a problem in VecRestoreArrayRead. Basically, I added
VecRestoreArrayRead_Nest. Could you try the master branch of PETSc to see if it
fixes your problem?
Thanks.
--Junchao Zhang
On Mon, Mar 11, 2019 at 6:56 AM Manuel Colera Rico via petsc-users
On Mon, Mar 4, 2019 at 10:39 AM Matthew Knepley via petsc-users
<petsc-users@mcs.anl.gov> wrote:
On Mon, Mar 4, 2019 at 11:28 AM Cyrill Vonplanta via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Dear Petsc Users,
I am trying to implement a variant of the $l^1$-Gauss-Seidel
Eda,
An update to ex72 is merged to PETSc master branch just now. It now can read
matrices either symmetric or non-symmetric in Matrix Market format, and output
a petsc binary matrix in MATSBAIJ format (for symmetric) or MATAIJ format (for
non-symmetric). See help in source code for usage.
Try the following to see if you can catch the bug easily: 1) Get the error
code for each PETSc function and check it with CHKERRQ; 2) Link your code with
a PETSc library with debugging enabled (configured with --with-debugging=1);
3) Run your code with valgrind
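A sketch of item 1), in the PETSc calling convention of that time (the calls
shown are only examples):

  PetscErrorCode ierr;
  ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
  ierr = VecSetSizes(x, PETSC_DECIDE, n);CHKERRQ(ierr);

With CHKERRQ the program prints the full error stack and returns as soon as a
PETSc call fails, instead of crashing later somewhere unrelated.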
--Junchao Zhang
On Wed, Feb 27,
Could you provide a compilable and runnable test so I can try it?
--Junchao Zhang
On Wed, Feb 27, 2019 at 7:34 PM Yuyun Yang
<yyan...@stanford.edu> wrote:
Thanks, I fixed that, but I’m not actually calling the testScatters() function
in my implementation (in the constructor, the only
On Wed, Feb 27, 2019 at 7:03 PM Sajid Ali
<sajidsyed2...@u.northwestern.edu>
wrote:
Hi Junchao,
I’m confused by the syntax. If I submit the following as my job script, I get
an error:
#!/bin/bash
#SBATCH --job-name=petsc_test
#SBATCH -N 1
#SBATCH -C knl,quad,flat
#SBATCH -p
Use srun numactl -m 1 ./app OR srun numactl -p 1 ./app
See bottom of
https://www.nersc.gov/users/computational-systems/cori/configuration/knl-processor-modes/
--Junchao Zhang
On Wed, Feb 27, 2019 at 4:16 PM Sajid Ali via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi,
I ran a TS
On Wed, Feb 27, 2019 at 10:41 AM Yuyun Yang via petsc-users
<petsc-users@mcs.anl.gov> wrote:
I called VecDestroy() in the destructor for this object – is that not the right
way to do it?
In Domain::testScatters(), you have many VecDuplicate() calls. You need to
VecDestroy() before doing new
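A sketch of the pattern being suggested (names are placeholders):

  Vec tmp;
  VecDuplicate(v, &tmp);
  /* ... use tmp ... */
  VecDestroy(&tmp);  /* destroy before duplicating into the same handle again */

Calling VecDuplicate a second time on a handle without destroying the first
vector leaks it.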
Sure.
--Junchao Zhang
On Tue, Feb 12, 2019 at 9:47 AM Matthew Knepley
<knep...@gmail.com> wrote:
Hi Junchao,
Could you fix the MM example in PETSc to have this full support? That way we
will always have it.
Thanks,
Matt
On Tue, Feb 12, 2019 at 10:27 AM Zhang, Junchao via
Eda,
I have a code that can read in Matrix Market and write out PETSc binary
files. Usage: mpirun -n 1 ./mm2petsc -fin <input file> -fout <output file>.
You can have a try.
--Junchao Zhang
On Tue, Feb 12, 2019 at 1:50 AM Eda Oktay via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hello,
I am trying to
On Fri, Jan 25, 2019 at 8:07 PM Mohammad Gohardoust via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi,
I am trying to modify a "pure MPI" code for solving the water movement
equation in soils, which employs KSP iterative solvers. This code gets really
slow on the HPC system where I am testing it, as
On Fri, Jan 11, 2019 at 10:34 AM Zhang, Hong via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Add option '-mattransposematmult_via scalable'
Hong
On Fri, Jan 11, 2019 at 9:52 AM Zhang, Junchao via petsc-users
<petsc-users@mcs.anl.gov> wrote:
I saw the following error message in your first email.
[0]PETSC ERROR: Out of memory. This could be due to allocating
[0]PETSC ERROR: too large an object or bleeding by not properly
[0]PETSC ERROR: destroying unneeded objects.
Probably the matrix is too large. You can try with more compute nodes,
Or, you can have your own array and then create PETSc vectors with
VecCreateGhostWithArray, so that the memory resizing is managed by yourself.
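A minimal sketch (local size nlocal, ghost count nghost and the ghost index
array ghosts are placeholders; the buffer stays owned by the caller):

  PetscScalar *array = (PetscScalar*)malloc((nlocal + nghost)*sizeof(PetscScalar));
  Vec         v;
  VecCreateGhostWithArray(PETSC_COMM_WORLD, nlocal, PETSC_DECIDE,
                          nghost, ghosts, array, &v);

Because PETSc does not own the buffer, you can resize or replace it yourself
when the mesh adapts, destroying and recreating the Vec around it.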
--Junchao Zhang
On Mon, Dec 17, 2018 at 9:15 AM Shidi Yan via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hello,
I am working on adaptive
On Wed, Dec 12, 2018 at 7:14 AM Sal Am via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi, I am getting an error using MUMPS.
How I run it:
bash-4.2$ mpiexec -n 16 ./solveCSys -ksp_type richardson -pc_type lu
-pc_factor_mat_solver_type mumps -ksp_max_it 1 -ksp_monitor_true_residual
On Tue, Nov 27, 2018 at 5:25 AM Edoardo alinovi via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Dear users,
I have installed Intel Parallel Studio on my workstation and thus I would like
to take advantage of the Intel compiler.
Before messing up my installation, have you got some