Hi All,
I am getting a problem with VecGetArray in Fortran, which I guess is due to
the default -C compiler option.
The current compiler options are:
$ make ksp_inhm
/cygdrive/c/cygwin/packages/petsc-3.4.2/bin/win32fe/win32fe ifort -c
-MT -Z7 -fpp -I/cygdrive/c/cygwin/packages/petsc-3.4.2/include
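For reference, a minimal sketch of the F90 interface that avoids the bounds-checking problem (x, n, and ierr are illustrative names):

    PetscScalar, pointer :: xx(:)
    call VecGetArrayF90(x, xx, ierr)      ! xx(1:n) aliases the local part of x
    ! ... work with xx ...
    call VecRestoreArrayF90(x, xx, ierr)  ! release the pointer

Unlike the old VecGetArray() offset idiom, the pointer returned by
VecGetArrayF90() stays within its declared bounds, so it is safe under -C.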
Hi Satish,
Thanks. It works now.
Danyang
On 09/08/2013 2:52 PM, Satish Balay wrote:
On Fri, 9 Aug 2013, Danyang Su wrote:
Hi All,
I am getting a problem with VecGetArray in Fortran, which I guess is due to
the default -C compiler option.
The current compiler options are:
$ make ksp_inhm
/cygdrive/c
Hi All,
I have the following code; it compiles, but it always throws an
error at runtime. I also tried the example ex44f.F90, and it throws a
similar error.
call DMDAGetInfo(da,PETSC_NULL_INTEGER,mx,PETSC_NULL_INTEGER,
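For reference, a complete call might look like the following sketch, which queries only the global grid sizes and passes PETSC_NULL_INTEGER for the remaining outputs (mx, my, mz assumed declared as PetscInt; the argument order is dim, M, N, P, m, n, p, dof, s, then the three boundary types and the stencil type):

    call DMDAGetInfo(da, PETSC_NULL_INTEGER, mx, my, mz,        &
                     PETSC_NULL_INTEGER, PETSC_NULL_INTEGER,    &
                     PETSC_NULL_INTEGER, PETSC_NULL_INTEGER,    &
                     PETSC_NULL_INTEGER, PETSC_NULL_INTEGER,    &
                     PETSC_NULL_INTEGER, PETSC_NULL_INTEGER,    &
                     PETSC_NULL_INTEGER, ierr)

A missing or extra argument in this long list is a common cause of runtime errors in the Fortran interface.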
Hi All,
I have many linear equations with the same matrix structure (same
non-zero entries) that are derived from a flow problem at different time
steps. I am puzzled that the results differ slightly when the solver is
run repeatedly versus one at a time. Say, I have three equations; I can
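For what it's worth, the usual pattern for a sequence of systems sharing one nonzero structure is to keep a single KSP and reset the operators each step; a petsc-3.4-era Fortran sketch (nsteps, A, b, x are illustrative names):

    do istep = 1, nsteps
       ! refill A and b for this step; the nonzero pattern is unchanged
       call KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN, ierr)
       call KSPSolve(ksp, b, x, ierr)
    end do

Note that an iterative solve started from a different initial guess, or with a reused preconditioner, can legitimately differ in the last digits.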
Hi Matthew,
Thanks so much. It works now.
Danyang
On 14/08/2013 11:14 AM, Matthew Knepley wrote:
On Wed, Aug 14, 2013 at 12:28 PM, Danyang Su danyang...@gmail.com wrote:
Hi All,
I have many linear equations with the same matrix structure (same
, Danyang Su danyang...@gmail.com wrote:
Hi All,
I have many linear equations with the same matrix structure (same non-zero
entries) that are derived from a flow problem at different time steps. I am
puzzled that the results differ slightly when the solver is run repeatedly
versus one at a time. Say
Hi All,
I have a parallel Fortran project utilizing OpenMP parallelism. Before
porting to PETSc using MPICH, I want to know if it is possible to
use the PETSc solver with OpenMP. There is some information on petsc-dev
OpenMP benchmarking, as can be found here,
Barry
On Sep 17, 2013, at 12:29 PM, Danyang Su danyang...@gmail.com wrote:
Hi All,
I have a parallel Fortran project utilizing OpenMP parallelism. Before porting
to PETSc using MPICH, I want to know if it is possible to use the PETSc solver
with OpenMP. There is some information on petsc-dev
of days but haven't completed it yet. I'll let you know
if I get it working.
Barry
On Sep 17, 2013, at 2:31 PM, Danyang Su danyang...@gmail.com wrote:
Hi Barry,
Thanks so much.
Which solvers are currently OpenMP parallel? Can I use an external
package (e.g., SuperLU OpenMP
:
There are three thread communicator types in PETSc. The default is
nothread, which is basically a non-threaded version. The other two types
are openmp and pthread. If you want to use OpenMP, use the
option -threadcomm_type openmp.
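For example, a run could look like this (the executable name is illustrative):

    mpiexec -n 1 ./ex2f -threadcomm_type openmp -threadcomm_nthreads 4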
Shri
On Sep 21, 2013, at 3:46 PM, Danyang Su danyang...@gmail.com
the affinities via KMP_AFFINITY and let us know if it works.
Shri
On Sep 21, 2013, at 11:06 PM, Danyang Su wrote:
Hi Shri,
Thanks for your info. It works with the option -threadcomm_type
openmp, but another problem arises, as described below.
The sparse matrix is 53760*53760
version WITHOUT the
-threadcomm_nthreads 1
-threadcomm_type openmp
command line options, is it still slow?
On Sep 23, 2013, at 1:33 PM, Danyang Su danyang...@gmail.com wrote:
Hi Shri,
It seems that the problem does not result from the affinity settings for the
threads. I have tried several
installation of PETSc-dev
OpenMP version; the same problem exists in the PETSc-3.4.2 MPI version if run
with 1 processor, but there is no problem with 2 or more processors.
Thanks,
Danyang
On 23/09/2013 12:01 PM, Danyang Su wrote:
Hi Barry,
Sorry I forgot the message in the previous email. It is still slow
when
Hi Barry,
Yes, it works fine when OpenMP is not used.
I will try GNU compiler.
Thanks,
Danyang
On 23/09/2013 12:15 PM, Barry Smith wrote:
On Sep 23, 2013, at 2:01 PM, Danyang Su danyang...@gmail.com wrote:
Hi Barry,
Sorry I forgot the message in the previous email. It is still slow when
On 23/09/2013 12:16 PM, Barry Smith wrote:
On Sep 23, 2013, at 2:13 PM, Danyang Su danyang...@gmail.com wrote:
Hi Barry,
Another strange problem:
Currently I have the PETSc-3.4.2 MPI version and the PETSc-dev OpenMP version on my
computer, with different PETSC_ARCH environment variables
On 23/09/2013 12:33 PM, Jed Brown wrote:
Barry Smith bsm...@mcs.anl.gov writes:
So when you compile the software to use OpenMP, it is slow
regardless of whether you use OpenMP explicitly or not. When you
compile the software to NOT use OpenMP, it is much faster?
Are you running
anybody help check my log
to see what causes this problem?
Thanks and regards,
Danyang
On 23/09/2013 12:16 PM, Barry Smith wrote:
On Sep 23, 2013, at 2:13 PM, Danyang Su danyang...@gmail.com wrote:
Hi Barry,
Another strange problem:
Currently I have the PETSc-3.4.2 MPI version and the PETSc-dev
Hi All,
I have a question on the speedup of PETSc when using OpenMP. I can get
good speedup when using MPI, but no speedup when using OpenMP.
The example is ex2f with m=100 and n=100. The number of available
processors is 16 (32 threads) and the OS is Windows Server 2012. The log
files for 4
Hi Karli,
This does not make any difference. I have scaled up the matrix but the
performance does not change. If I run with OpenMP, the iteration number
is always the same no matter how many processors are used. This seems
quite strange, as the iteration count usually increases as the number of
dense format.
So the time step cannot increase due to this.
Thanks,
Danyang
Barry
On Jan 22, 2014, at 2:11 PM, Danyang Su danyang...@gmail.com wrote:
Dear All,
I have a reactive transport problem that uses block matrices. Each block can be
dense with a lot of zero entries, or sparse
-pc_type lu and the sparse format right? Or is that too
slow?
Barry
Jed and Emil, do we have any integrators that keep the time-step small due
to slow convergence of Newton in TS?
On Jan 22, 2014, at 5:05 PM, Danyang Su danyang...@gmail.com wrote:
On 22/01/2014 1:42 PM, Barry Smith
, Danyang Su danyang...@gmail.com wrote:
On 22/01/2014 4:32 PM, Barry Smith wrote:
What ODE integrator are you using? Your own? One in PETSc?
I use my own integrator, and PETSc is used as the KSP solver. I may need to test more
Hi All,
I use the KSP solver within a Newton iteration. The convergence parameters
are: rtol 1.0E-8, abstol 1.0E-50, dtol 1.0E5, maxits 1000.
Does the following convergence monitoring make sense? I thought the
solver should stop before iteration 36, but it stops at iteration 70.
0 KSP
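(For reference, the default test, KSPDefaultConverged, declares convergence when rnorm <= max(rtol*rnorm0, abstol), with rnorm0 the initial residual norm, and divergence when rnorm >= dtol*rnorm0. By default rnorm is the preconditioned residual norm, which can differ substantially from the true residual norm.)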
Hi All,
Is it possible to configure PETSc with support for hybrid MPI-OpenMP?
My code has a lot of OpenMP directives and works well. I have
implemented the PETSc solver in the code, and the solver also works and
helps with speedup. Currently I have the following question:
1. If I configure
On 29/01/2014 6:08 PM, Jed Brown wrote:
Danyang Su danyang...@gmail.com writes:
Hi Karli,
--with-threadcomm --with-openmp works when configuring PETSc with
MPI-OpenMP. Sorry for my earlier mistake.
The program compiles, but I got a new error while running my program.
Error
,nprcs,ierrcode)
The values of rank and nprcs are always 0 and 1, respectively, no matter
how many processors are used to run the program.
Danyang
On 29/01/2014 6:08 PM, Jed Brown wrote:
Danyang Su danyang...@gmail.com writes:
Hi Karli,
--with-threadcomm --with-openmp works when
On 30/01/2014 9:30 AM, Jed Brown wrote:
Danyang Su danyang...@gmail.com writes:
I double-checked the initialization of PETSc and found that the
initialization does not take effect. The code is as follows.
call PetscInitialize(Petsc_Null_Character,ierrcode)
call
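For reference, a minimal initialization sketch using the names from the snippet above:

    call PetscInitialize(PETSC_NULL_CHARACTER, ierrcode)
    call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierrcode)
    call MPI_Comm_size(PETSC_COMM_WORLD, nprcs, ierrcode)

If rank is always 0 and nprcs is always 1, the usual suspects are launching the program without mpiexec, or a PETSc build configured without a real MPI (the single-process MPI-uni fallback).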
Hi All,
When configure petsc with msmpi
./configure --with-cc='win32fe cl' --with-fc='win32fe ifort'
--with-cxx='win32fe cl' --download-f-blas-lapack --with-threadcomm
--with-openmp --with-mpi-include=/cygdrive/c/Program Files/Microsoft
HPC Pack 2008 R2/Inc --with-mpi-lib=/cygdrive/c/Prog
Hi All,
MPICH under CYGWIN works on my box if only --with-mpi or --with-openmp
is specified, but when both options are given, I cannot configure PETSc.
As you see, it's best to switch to Linux or, as a first step, to a
Linux virtual machine. This is what I need to do later; maybe I should
Hi All,
I have come across an ill-conditioned problem. The matrix in this
problem is a block matrix; in each block there are some zero entries. The
preconditioned residual norm drops slowly, but the true residual norm
drops quickly in the first few iterations. In order to improve the
Hi All,
Can I control the KSP solver iteration with user-defined criteria? I would
like to check both the preconditioned residual norm and the true residual
norm at every iteration, and force the solver to stop iterating if either
criterion is met.
If using KSPSetConvergenceTest, can I get the true
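For reference, a minimal sketch of a user-defined test modeled on ex2f.F (petsc-3.4-era calling sequence; mytol is an illustrative tolerance):

    external MyKSPConverged
    call KSPSetConvergenceTest(ksp, MyKSPConverged,                     &
                               PETSC_NULL_OBJECT, PETSC_NULL_FUNCTION, ierr)

    subroutine MyKSPConverged(ksp, n, rnorm, flag, dummy, ierr)
      implicit none
#include <finclude/petsc.h>
      KSP                :: ksp
      PetscInt           :: n, dummy
      PetscReal          :: rnorm
      KSPConvergedReason :: flag
      PetscErrorCode     :: ierr
      PetscReal, parameter :: mytol = 1.0e-8
      if (rnorm <= mytol) then
        flag = 1          ! positive: converged
      else
        flag = 0          ! zero: keep iterating
      end if
      ierr = 0
    end subroutine MyKSPConverged

The true (unpreconditioned) residual can be formed inside such a test with KSPBuildResidual().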
On 14/02/2014 11:12 AM, Barry Smith wrote:
On Feb 14, 2014, at 1:07 PM, Danyang Su danyang...@gmail.com wrote:
Hi All,
Can I control the KSP solver iteration with user-defined criteria? I would like to
check both the preconditioned residual norm and the true residual norm at every
iteration, and force
, at 11:40 PM, Danyang Su danyang...@gmail.com wrote:
-ksp_norm_type unpreconditioned -ksp_pc_side right
The -ksp_norm_type unpreconditioned option does not work well for my cases. When I
use the following test in my MyKSPConverged function, the result is different from
KSPDefaultConverged, though
On 16/02/2014 6:00 AM, Barry Smith wrote:
On Feb 16, 2014, at 2:04 AM, Danyang Su danyang...@gmail.com wrote:
Thanks for the quick reply.
I just double-checked it. I use the preconditioned norm type for both runs.
ex2f was taken as the example for implementing MyKSPConverged
Hi All,
I can successfully build hypre into hypre.lib with cmake and VS2010, but I
cannot find the include folder in the release folder. There is only one
folder named include in src\FEI_mv\ml\src\include. Is this the one I
need to configure with PETSc?
Both hypre-2.9.0b and hypre-2.9.1a have
in
LIBCMT.lib(free.obj)
You'll have to build hypre with '/MT' or equivalent option.
All libraries should be built with the same option - otherwise MS
compilers barf at link time.
It can now be configured without any problem. Thanks so much.
Danyang
Satish
On Tue, 25 Feb 2014, Danyang Su
Hi All,
I tried to set the simulation domain using DMDA coordinates, following
the example dm/examples/tutorials/ex3.c. The 1D problem worked fine, but
the 2D and 3D cases failed because of the definitions of coords2d and coords3d.
What should I use to declare the variables coords2d and coords3d?
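For what it's worth, if the goal is just to attach physical coordinates to the DMDA, the simplest route from Fortran side-steps those structures entirely; a minimal sketch (the bounds are illustrative):

    call DMDASetUniformCoordinates(da, 0.d0, 1.d0,    &   ! xmin, xmax
                                   0.d0, 1.d0,        &   ! ymin, ymax
                                   0.d0, 1.d0, ierr)      ! zmin, zmax (ignored in 2D)

For non-uniform coordinates, DMGetCoordinates() returns the coordinate vector, which can be filled through VecGetArrayF90() instead of the C-style DMDACoor2d/DMDACoor3d structures of ex3.c.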
,
respectively. And the returned value is an index (the local node index in x,
y, z), not the coordinate. Correct?
Thanks and regards,
Danyang
On 25/04/2014 10:06 AM, Barry Smith wrote:
On Apr 25, 2014, at 11:41 AM, Danyang Su danyang...@gmail.com wrote:
Hi All,
I tried to set the simulation domain using
DMDAGetLocalBoundingBox(dmda_flow%da,lmin,lmax,ierr)
!this returns coordinate values
Thanks and regards,
Danyang
On 27/04/2014 7:00 AM, Jed Brown wrote:
Danyang Su danyang...@gmail.com writes:
Hi Barry,
Another question is about DMDAGetLocalBoundingBox in Fortran.
PetscErrorCode
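For reference, a minimal Fortran calling sketch: the two array arguments receive the minimum and maximum coordinates of the local portion of the grid.

    PetscReal :: lmin(3), lmax(3)
    call DMDAGetLocalBoundingBox(dmda_flow%da, lmin, lmax, ierr)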
Hi All,
The code runs successfully in release mode, but in debug mode it
produces the following error:
forrtl: severe (408): fort: (11): Subscript #1 of the array XX has value
-665625807 which is less than the lower bound of 1
I can get rid of this error by replacing VecGetArray with
Hi All,
I use DMDA for a flow problem and found that the local vector and global
vector do not match for 2D and 3D problems when dof > 1.
For example, the mesh is as follows:
|proc 1| proc 2 | proc 3 |
|7 8 9|16 17 18|25 26 27|
|4 5 6|13 14 15|22 23 24|
|1 2 3|10 11 12|19 20 21|
/The
Hi Matthew,
How about the MatView output? Is it automatically permuted to the
natural ordering too?
Thanks,
Danyang
On 20/05/2014 12:25 PM, Matthew Knepley wrote:
On Tue, May 20, 2014 at 1:31 PM, Danyang Su danyang...@gmail.com wrote:
Hi All,
I use
Hi All,
I have a 1D transient flow problem (1 dof) coupled with energy balance
(1 dof), so the total dof per node is 2.
The whole domain has 10 nodes in the z direction.
The program runs well with 1 processor but fails with 2 processors. The
matrix is the same for 1 processor and 2 processors, but
On 22/05/2014 12:01 PM, Matthew Knepley wrote:
On Thu, May 22, 2014 at 1:58 PM, Danyang Su danyang...@gmail.com wrote:
Hi All,
I have a 1D transient flow problem (1 dof) coupled with energy
balance (1 dof), so the total dof per node is 2.
The whole
expect? Then run on two processes but
call VecView on
x_vec_loc only on the first process. Is that what you expect?
Also, what is vecpointer1d declared to be?
Barry
On May 22, 2014, at 4:44 PM, Danyang Su danyang...@gmail.com wrote:
On 22/05/2014 12:01 PM, Matthew Knepley wrote:
On Thu, May 22
On 22/05/2014 5:34 PM, Barry Smith wrote:
DMDA does not work that way. Local and global vectors associated with DA’s
are always “interlaced”.
Then, is there any routine to convert the matrix to be interlaced?
Thanks,
Danyang
Barry
On May 22, 2014, at 6:33 PM, Danyang Su danyang
.
Barry
On May 22, 2014, at 7:42 PM, Danyang Su danyang...@gmail.com wrote:
On 22/05/2014 5:34 PM, Barry Smith wrote:
DMDA does not work that way. Local and global vectors associated with DA’s
are always “interlaced”.
Then, is there any routine to convert the matrix to be interlaced?
Thanks
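For what it's worth, rather than converting an existing matrix, the usual approach is to create the matrix through the DMDA so that its ordering matches the interlaced DMDA vectors from the start; a petsc-3.4-era sketch (later versions drop the MatType argument):

    call DMCreateMatrix(da, MATAIJ, A, ierr)

With dof=2 the unknowns then interlace per node as (u1, v1, u2, v2, ...), matching the layout of the DMDA global vector.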
Hi All,
I am testing my code under Windows with PETSc v3.4.4.
When running with the option -pc_type hypre on 1 processor, the program
actually uses 6 cores (my computer has 6 cores and 12 threads), and
the program crashed after many timesteps. The error information is as
follows:
job
possibilities:
Are you sure that hypre was compiled with exactly the same MPI as the
one used to build PETSc?
On May 28, 2014, at 4:57 PM, Danyang Su danyang...@gmail.com wrote:
Hi All,
I am testing my code under Windows with PETSc v3.4.4.
When running with the option -pc_type hypre on 1
Hi All,
I recompiled the hypre library with the same compiler and Intel MKL,
and the error is gone.
Thanks,
Danyang
On 28/05/2014 4:10 PM, Danyang Su wrote:
Hi Barry,
I need to check this further. Running this executable file on another
machine results in a missing mkl_intel_thread.dll
Hi All,
My code was first developed on Windows, where it works fine. Now the
code has been ported to Linux, and there are some compile errors
regarding some variable and function definitions, as shown below.
1. DMDA_BOUNDARY_NONE vs. DM_BOUNDARY_NONE
I should use DMDA_BOUNDARY_NONE on Windows
Hi All,
Does anyone know how to use the preprocessor definition flags for the IBM XL
compiler on an IBM BG/Q system?
I can use FPPFLAGS (below) with the Intel Fortran compiler and gfortran, but the IBM
XL Fortran compiler cannot recognise them. I also tried to use FPPFLAGS_IBMXL
(below), but without success.
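If I remember correctly, XL Fortran does not accept bare -D defines: preprocessor macros have to be passed through the -WF wrapper, comma-separated, e.g. (macro names illustrative):

    FPPFLAGS = -WF,-DUSE_PETSC,-DLINUX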
On Tue, Aug 12, 2014 at 10:46 PM, Satish Balay ba...@mcs.anl.gov wrote:
On Tue, 12 Aug 2014, Danyang Su wrote:
Hi All,
Does anyone know how to use the preprocessor definition flags for the IBM XL
compiler on an IBM BG/Q system?
I can use FPPFLAGS (below) with the Intel Fortran compiler
it with +mpiwrapper-gcc]
Yes, with +mpiwrapper-gcc, the code compiles successfully.
Thanks.
Satish
On Tue, 12 Aug 2014, Danyang Su wrote:
Hi Satish,
The log file is attached.
There are errors like the following:
../../min3p/welcome_pc.F90, line 151.16: 1516-036 (S) Entity iargc has
Hi All,
I have flow and reactive transport equations in one system. They share
the same domain decomposition. Usually solving the flow equations takes much
less time than the reactive transport equations, say 10% vs. 90%. For some
extreme cases, the flow equations do not scale well while the reactive
Hi There,
I have some parallel MPI output code that works fine without PETSc but
crashes when compiled with PETSc. To simplify the problem, I tested the
following example, which has the same problem. This example is modified
from
On 12/09/2014 12:10 PM, Barry Smith wrote:
On Sep 12, 2014, at 1:34 PM, Danyang Su danyang...@gmail.com wrote:
Hi There,
I have some parallel MPI output code that works fine without PETSc but crashes when
compiled with PETSc. To simplify the problem, I tested the following example, which has
Hi All,
How can I avoid an array bounds error when using PETSc? I have to debug
my code, so I compiled it with the option Check Array and String
Bounds, but this results in an array bounds error when I need to use
ltog, as shown below.
call
the idx data type
is wrong here. Would you please let me know what data type should be
used here?
PetscInt, pointer :: idx(:)
call DMGetLocalToGlobalMapping(dmda_flow%da,ltogm,ierr)
call ISLocalToGlobalMappingGetIndicesF90(ltogm,idx,ierr)
Thanks,
Danyang
On Oct 19, 2014, at 6:38 PM, Danyang Su
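For reference, the complete pattern pairs the get with a restore; a minimal sketch (the indices returned in idx are 0-based global numbers):

    PetscInt, pointer :: idx(:)
    ISLocalToGlobalMapping :: ltogm
    call DMGetLocalToGlobalMapping(dmda_flow%da, ltogm, ierr)
    call ISLocalToGlobalMappingGetIndicesF90(ltogm, idx, ierr)
    ! ... use idx ...
    call ISLocalToGlobalMappingRestoreIndicesF90(ltogm, idx, ierr)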
On 20/10/2014 11:00 AM, Barry Smith wrote:
On Oct 20, 2014, at 11:56 AM, Danyang Su danyang...@gmail.com wrote:
On 19/10/2014 5:06 PM, Barry Smith wrote:
Use ISLocalToGlobalMappingGetIndicesF90()
Hi Barry,
Would you please give me a brief introduction or an example of the parameters
Hi All,
I have a PETSc application that needs additional compiler flags to build a
hybrid MPI-OpenMP parallel application on the WestGrid (Canada)
supercomputer system.
The code and makefile work fine on my local machine for both Windows and
Linux, but when compiled on the WestGrid Orcinus system for
On 14-12-01 01:48 PM, Satish Balay wrote:
On Mon, 1 Dec 2014, Danyang Su wrote:
Hi All,
I have a PETSc application that needs additional compiler flags to build a
hybrid MPI-OpenMP parallel application on the WestGrid (Canada) supercomputer
system.
The code and makefile work fine on my local
On Apr 25, 2015, at 2:24 PM, Danyang Su danyang...@gmail.com wrote:
On 15-04-25 11:55 AM, Barry Smith wrote:
On Apr 25, 2015, at 1:51 PM, Danyang Su danyang...@gmail.com
wrote:
On 15-04-25 11:32 AM, Barry Smith wrote:
I told you this yesterday.
It is probably stopping here
On 15-04-25 11:55 AM, Barry Smith wrote:
On Apr 25, 2015, at 1:51 PM, Danyang Su danyang...@gmail.com wrote:
On 15-04-25 11:32 AM, Barry Smith wrote:
I told you this yesterday.
It is probably stopping here on a harmless underflow. You need to edit the
PETSc code to not worry about
at timeloop.F90:1194
#24 0x5ABFD7 in driver_pc at driver_pc.F90:599
On 15-04-24 11:12 AM, Barry Smith wrote:
On Apr 24, 2015, at 1:05 PM, Danyang Su danyang...@gmail.com wrote:
Hi All,
One of my cases
have multiple PETSC_ARCH or multiple PETSc installs to explain why
you reported two different places where the exception occurred.
On Apr 25, 2015, at 8:31 PM, Danyang Su danyang...@gmail.com wrote:
On 15-04-25 06:26 PM, Matthew Knepley wrote:
On Sat, Apr 25, 2015 at 8:23 PM, Danyang Su danyang
Hi All,
One of my cases crashes because of a floating point exception when using 4
processors, as shown below. But if I run this case with 1 processor, it
works fine. I have tested the code with around 100 cases on up to 768
processors; all other cases work fine. I just wonder if this kind of
]PETSC ERROR: likely location of problem given in stack below
Thanks,
Danyang
On 15-04-24 01:54 PM, Danyang Su wrote:
On 15-04-24 01:23 PM, Satish Balay wrote:
c 4 1.0976214263087059E-067
I don't think this number can be stored in a real*4.
Satish
Thanks, Satish. It is caused
On 15-04-24 11:12 AM, Barry Smith wrote:
On Apr 24, 2015, at 1:05 PM, Danyang Su danyang...@gmail.com wrote:
Hi All,
One of my cases crashes because of a floating point exception when using 4
processors, as shown below. But if I run this case with 1 processor, it works
fine. I have tested
];
}
}
}
x[0] /= A[0];
return(err_flag);
}
}
On Apr 28, 2015, at 12:55 PM, Danyang Su danyang...@gmail.com wrote:
Hi Barry,
The development version of PETSc does not solve my problem. It still
crashes with the same error.
As Matthew
on VecGetArrayF90(). All the VecGetArrayF90() calls
are paired with VecRestoreArrayF90(). No VecGetArray() is used in my code.
Danyang
On Apr 29, 2015, at 1:52 PM, Danyang Su danyang...@gmail.com wrote:
On 15-04-29 11:30 AM, Barry Smith wrote:
On Apr 29, 2015, at 12:15 PM, Danyang Su danyang...@gmail.com
. For 3.5.3, I edited the fp.c file and then ran configure and make.
Thanks,
Danyang
On 15-04-25 07:34 PM, Danyang Su wrote:
Hi All,
The floating point underflow is caused by a small value divided by a
very large value. The result is forced to zero, and then it does not
report any underflow problem
Dear All,
I have run my code successfully with up to 100 million total unknowns
using 1000 processors on the WestGrid Jasper cluster, Canada. But when I
scale the unknowns up to 1 billion, the code crashes with the following
error: it is out of memory.
Error message from valgrind output:
on VecLockPush or VecLockPop, type
bt
then type
cont
and send all the output; this will tell us where the vector got locked read-only and
did not get unlocked.
Barry
On Apr 29, 2015, at 2:35 PM, Danyang Su danyang...@gmail.com wrote:
On 15-04-29 12:19 PM, Barry Smith wrote:
Ok, your code
to the
development version of PETSc (that uses the latest version of hypre), here are
the instructions on how to obtain it
http://www.mcs.anl.gov/petsc/developers/index.html
Please let us know if this resolves the problem with hypre failing.
Barry
On Apr 27, 2015, at 11:44 AM, Danyang Su
Hi All,
I recently have some time-dependent cases that have difficulty
converging. They need a lot of linear iterations during a specific time
period, e.g., more than 100 linear iterations for every Newton iteration. In
the PETSc parallel version, this number is doubled or even more. Our
case
On 15-05-11 07:19 PM, Hong wrote:
Danyang:
I recently have some time-dependent cases that have difficulty
converging. They need a lot of linear iterations during a specific
time period, e.g., more than 100 linear iterations for every Newton
iteration. In the PETSc parallel version,
On 15-05-12 11:13 AM, Barry Smith wrote:
On May 11, 2015, at 7:10 PM, Danyang Su danyang...@gmail.com wrote:
Hi All,
I recently have some time-dependent cases that have difficulty converging.
They need a lot of linear iterations during a specific time period, e.g., more than 100
linear
of
PETSc and for packaging systems
On 15-06-14 09:15 PM, Danyang Su wrote:
Hi PETSc users,
I get a problem compiling my code after updating PETSc to 3.6.0.
The code works fine using PETSc 3.5.3 and PETSc-dev.
I have modified the include lines in the makefile from
#PETSc variables for V3.5.3
Hi Hong,
It's not easy to run in debug mode, as the cluster does not have a
debug build of PETSc installed. Restarting the case from the crashing
time does not reproduce the problem. So if I want to catch this error, I need
to start the simulation from the beginning, which takes hours on the cluster.
Hi Hong,
Thanks. I can test it, but it may take some time to install petsc-dev on
the cluster. I will try more cases to see if I can reproduce this error on my
local machine, which is much more convenient for me to test in debug
mode. So far, the error does not occur on my local machine using the
Hi All,
My code fails due to an error in an external library. It works fine for
the first 2000+ timesteps but then crashes.
[4]PETSC ERROR: Error in external library
[4]PETSC ERROR: Error reported by MUMPS in numerical factorization
phase: INFO(1)=-1, INFO(2)=0
The full error message is
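(For reference, the MUMPS manual documents INFO(1)=-1 as an error that occurred on the processor given in INFO(2), so the underlying cause has to be read from that processor's output.)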
Hi Hong,
Sorry to bother you again. The modified code works much better than
before using both SuperLU and MUMPS. However, it still encounters
failures. The case is similar to the previous one: ill-conditioned
matrices.
The code crashed after a long simulation time if I use superlu_dist,
Hi Hong,
Thanks for checking this. A mechanical model was added at the time when
the solver failed, causing some problems. We need to improve this part of
the code.
Thanks again and best wishes,
Danyang
On 15-12-08 08:10 PM, Hong wrote:
Danyang :
Your matrices are ill-conditioned,
Hi Hong,
I did more tests today and finally found that the solution accuracy
depends on the quality of the initial (first) matrix. I modified ex52f.F to
do the test. There are 6 matrices and right-hand-side vectors. All these
matrices and rhs vectors are from my reactive transport simulation. Results
Hi Hong,
I just checked these matrices and rhs vectors using ex10; they all work
fine. I found something wrong in my code when using a direct solver:
the second parameter, mat, in PCFactorGetMatrix(PC pc,Mat *mat) is not
initialized in my code for SuperLU or MUMPS.
I will fix this bug, rerun
Hi Hong,
The matrix, rhs, and solution in binary format can be downloaded via the
link below.
https://www.dropbox.com/s/cl3gfi0s0kjlktf/matrix_and_rhs_bin.tar.gz?dl=0
Thanks,
Danyang
On 15-12-03 10:50 AM, Hong wrote:
Danyang:
To my surprise, the solutions from SuperLU at timestep 29
Hello Hong,
Thanks for the quick reply. The option "-mat_superlu_dist_fact
SamePattern" works like a charm if I use it from the command
line.
How can I make this option the default? I tried using
PetscOptionsInsertString("-mat_superlu_dist_fact SamePattern",ierr) in
my code
Thanks. The inserted option works now. I didn't put
PetscOptionsInsertString in the right place before.
Danyang
On 15-12-07 12:01 PM, Hong wrote:
Danyang:
Add 'call MatSetFromOptions(A,ierr)' to your code.
Attached below is ex52f.F modified from your ex52f.F to be compatible
with petsc-dev.
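For reference, a minimal sketch of the ordering that makes an inserted option take effect: the string must be inserted before the matrix reads the options database, and the matrix must actually consult it.

    call PetscOptionsInsertString('-mat_superlu_dist_fact SamePattern', ierr)
    ! ... create the matrix A ...
    call MatSetFromOptions(A, ierr)   ! picks up the inserted option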
PM, Danyang Su <danyang...@gmail.com> wrote:
Dear All,
I recall that OpenMP support was available in an old development version of PETSc. When
googling "petsc hybrid mpi openmp", some papers about this feature come up. My code was
first parallelized using OpenMP and then r
the factorization
level through hypre?
Thanks,
Danyang
On 17-05-24 04:59 AM, Matthew Knepley wrote:
On Wed, May 24, 2017 at 2:21 AM, Danyang Su <danyang...@gmail.com> wrote:
Dear All,
I use PCFactorSetLevels for ILU and PCFactorSetFi
with pretty
good speedup, and I am not sure if I am missing something for this problem.
Thanks,
Danyang
On 17-05-24 11:12 AM, Matthew Knepley wrote:
On Wed, May 24, 2017 at 12:50 PM, Danyang Su <danyang...@gmail.com> wrote:
Hi Matthew and Barry,
.
Thanks,
Danyang
On 17-05-24 11:12 AM, Matthew Knepley wrote:
On Wed, May 24, 2017 at 12:50 PM, Danyang Su <danyang...@gmail.com> wrote:
Hi Matthew and Barry,
Thanks for the quick response.
I also tried super
Hi All,
I just deleted the .info file, and it works without problems now.
Thanks,
Danyang
On 17-05-24 06:32 PM, Hong wrote:
Remove your option '-vecload_block_size 10'.
Hong
On Wed, May 24, 2017 at 3:06 PM, Danyang Su <danyang...@gmail.com> wrote
Remove your option '-vecload_block_size 10'.
Hong
On Wed, May 24, 2017 at 3:06 PM, Danyang Su <danyang...@gmail.com> wrote:
Dear Hong,
I just tested with different numbers of processors for the same
matrix. It sometimes gave "ERROR: Argumen
Remove your option '-vecload_block_size 10'.
Hong
On Wed, May 24, 2017 at 3:06 PM, Danyang Su <danyang...@gmail.com> wrote:
Dear Hong,
I just tested with different numbers of processors for the
same matrix. It sometimes
Dear All,
I use PCFactorSetLevels for ILU and PCFactorSetFill for other
preconditioners in my code to help solve problems that the default
options struggle with. However, I found that the latter,
PCFactorSetFill, does not take effect for my problem. The matrices and
rhs as well as the
16 or 48
processors. The error information is attached. I tested this on my local
computer with 6 cores and 12 threads. Any suggestions on this?
Thanks,
Danyang
On 17-05-24 12:28 PM, Danyang Su wrote:
Hi Hong,
Awesome. Thanks for testing the case. I will try your options for the
code and g
On 2018-04-27 04:11 AM, Matthew Knepley wrote:
On Fri, Apr 27, 2018 at 2:09 AM, Danyang Su <danyang...@gmail.com> wrote:
Hi Matt,
Sorry if this is a stupid question.
In the previous code for unstructured grids, I create labels to
mark
Hi All,
I use DMPlex and need to get the coordinates back after distribution.
However, I always get a segmentation violation when getting coordinate values in
the following code if multiple processors are used. If only one processor
is used, it works fine.
For each processor, the off value starts from 0
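For what it's worth, a minimal sketch of the access pattern I believe applies here (vp, an illustrative vertex point number, and dim are assumptions):

    Vec                  :: coords
    PetscSection         :: cs
    PetscScalar, pointer :: carray(:)
    PetscInt             :: off

    call DMGetCoordinatesLocal(dm, coords, ierr)
    call DMGetCoordinateSection(dm, cs, ierr)
    call VecGetArrayF90(coords, carray, ierr)
    call PetscSectionGetOffset(cs, vp, off, ierr)
    ! off indexes the *local* vector and is 0-based, so starting at 0 on
    ! every process is expected; the coordinates are carray(off+1:off+dim)
    call VecRestoreArrayF90(coords, carray, ierr)

Using the raw 0-based offset directly as a Fortran index would read off the front of the array, which is one way to get exactly this kind of failure.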