Hi Matthew.
Thanks for the answer. We've checked again and it is actually working
with chaco. But we haven't found any example or way to use it with metis,
which is what we wanted.
I've seen that there is a new Partitioner class, but I could not find
Hi all.
We use petsc4py for development. We are trying to partition a mesh (DMPlex),
but we receive different errors when calling the distribute
method of the DMPlex class.
We use version 3.6 of both PETSc and petsc4py.
We run it in two different
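For reference, a minimal C sketch of selecting the partitioner before distributing (the C calls mirror the petsc4py ones). It assumes the mesh dm already exists, that the PETSc build was configured with ParMETIS/Chaco, and that the PetscPartitioner class is available (it may require the development version at the time of this thread):

#include <petscdmplex.h>

/* Sketch: distribute an existing DMPlex with a chosen partitioner.
   PETSCPARTITIONERPARMETIS could be swapped for PETSCPARTITIONERCHACO. */
static PetscErrorCode DistributeWithPartitioner(DM *dm)
{
  PetscPartitioner part;
  DM               dmDist = NULL;
  PetscErrorCode   ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexGetPartitioner(*dm, &part);CHKERRQ(ierr);
  ierr = PetscPartitionerSetType(part, PETSCPARTITIONERPARMETIS);CHKERRQ(ierr);
  ierr = PetscPartitionerSetFromOptions(part);CHKERRQ(ierr);     /* allow overriding on the command line */
  ierr = DMPlexDistribute(*dm, 0, NULL, &dmDist);CHKERRQ(ierr);  /* overlap 0, no SF requested */
  if (dmDist) { ierr = DMDestroy(dm);CHKERRQ(ierr); *dm = dmDist; }
  PetscFunctionReturn(0);
}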
Sorry for the delay. The problem is that the eigenvalue estimates are bad for
the Chebyshev smoother, and unfortunately this fails catastrophically.
First, I do not understand why we do not get an error message here:
13:57 PICell ~/Codes/petsc/src/ksp/ksp/examples/tutorials$ mpirun -n 8
./ex54 -ne 1023
Did you find out how to change the option to use parallel symbolic
factorization? Perhaps the PETSc team can help.
Sherry
On Tue, Jul 7, 2015 at 3:58 PM, Xiaoye S. Li x...@lbl.gov wrote:
Is there an inquiry function that tells you all the available options?
Sherry
On Tue, Jul 7, 2015 at 3:25 PM,
Hi,
thank you for the answer. I tried changing the KSP to CG to estimate
the eigenvalues as you proposed, but PETSc does not find the option:
mpirun -n 8 ./ex54 -ne 1023 -ksp_rtol 1e-10 -ksp_monitor_true_residual
-options_left -mg_levels_esteig_ksp_type cg
0 KSP preconditioned resid norm
Hello,
First of all, thanks to the PETSc developers and everyone else
contributing supporting material on the web.
I need to solve a system of equations Ax = b in parallel, where A is symmetric,
sparse, and unstructured, as part of a finite volume solver for CFD.
The domain decomposition is already done.
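For what it's worth, a minimal C sketch of the kind of solver setup usually suggested for a symmetric (SPD) system like this; it assumes A, b, and x are already assembled MPI objects, and GAMG is only one possible preconditioner choice:

#include <petscksp.h>

/* Sketch: solve A x = b with CG plus algebraic multigrid. */
static PetscErrorCode SolveSymmetricSystem(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PetscObjectComm((PetscObject)A), &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);   /* CG is a natural choice for SPD matrices */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);    /* or PCHYPRE for BoomerAMG */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   /* honor -ksp_type, -pc_type, -ksp_rtol, ... */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}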
Indeed, the parallel symbolic factorization routine needs a power-of-2 number
of processes; however, you can use however many processes you need.
Internally, we redistribute the matrix to the nearest power of 2 processes, do
the symbolic factorization, then redistribute back to all the processes for the
factorization and triangular solve
Hi,
I have used the switch -mat_superlu_dist_parsymbfact in my PBS script.
However, although my program worked fine with sequential symbolic
factorization, I get one of the following two behaviors when I run with
parallel symbolic factorization (depending on the number of processors
that I use):
What version of PETSc are you using? This has been changing recently.
-help will show you the parameters for each level, like
-mg_levels_1_esteig_ksp_type gmres. PETSc provides syntactic sugar to set
all levels at once by removing the _1.
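For example, with the ex54 run from above (a hypothetical combination of options), the per-level and all-level forms would look like:
mpirun -n 8 ./ex54 -ne 1023 -mg_levels_1_esteig_ksp_type cg   (level 1 only)
mpirun -n 8 ./ex54 -ne 1023 -mg_levels_esteig_ksp_type cg     (all levels)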
On Wed, Jul 8, 2015 at 3:57 PM, Benoit Fabrèges
The runtime option for using parallel symbolic factorization with
petsc/superlu_dist is '-mat_superlu_dist_parsymbfact', e.g.,
petsc/src/ksp/ksp/examples/tutorials (master)
$ mpiexec -n 2 ./ex2 -pc_type lu -pc_factor_mat_solver_package superlu_dist
-mat_superlu_dist_parsymbfact
Hong
On Wed, Jul
I am using the release version 3.6.0. I tried with the development
version and it is working fine now.
Thanks a lot,
Benoit
Please provide a bit more detail about the operator. Is it a pressure solver
for CFD? Cell centered? Does the matrix have a null space of the constant
functions? Is it the same linear system for each time step in the CFD solver,
or different?
How many iterations is hypre BoomerAMG
Hi Barry,
For the sequence of problems, will -memory_info and -malloc_log also
provide useful information about memory when SuperLU_DIST or SLEPc
routines are called?
Thanks
Anthony
On 07/07/2015 01:27 PM, Barry Smith wrote:
I would suggest running a sequence of problems, 101 by 101
-malloc_log is not helpful because it can only record PETSc memory usage;
most of the memory usage with direct solvers will be from SuperLU_Dist
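A small C sketch of the distinction (illustrative names): PetscMallocGetCurrentUsage() only sees memory obtained through PetscMalloc, while PetscMemoryGetCurrentUsage() reports the process resident size, which also covers SuperLU_DIST's own allocations:

#include <petscsys.h>

/* Sketch: print both PETSc-malloc'd memory and total resident memory. */
static PetscErrorCode ReportMemory(MPI_Comm comm, const char *stage)
{
  PetscLogDouble rss, petscmem;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscMemoryGetCurrentUsage(&rss);CHKERRQ(ierr);      /* resident set size of the process */
  ierr = PetscMallocGetCurrentUsage(&petscmem);CHKERRQ(ierr); /* only memory from PetscMalloc */
  ierr = PetscPrintf(comm, "%s: rss %g MB, PetscMalloc %g MB\n",
                     stage, rss/1.e6, petscmem/1.e6);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}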
Dear PETSc,
I was wondering if you had any advice for using the PETSc TS solvers to
solve a vector of ODEs. The PETSc TS solvers solve DAEs of the form
f(t, u, u^dot, a) = g(u, t) = 0 (for my case),
where f is a user-provided function, u is a vector of state variables,
u^dot a vector of the derivatives of
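For a plain ODE system du/dt = g(t,u) (no algebraic constraints), the usual route is the right-hand-side interface; TSSetIFunction is the one to use for the implicit f(t,u,u^dot) = 0 form. A minimal, self-contained C sketch with a stand-in right-hand side g(t,u) = -u:

#include <petscts.h>

/* Sketch: integrate the ODE system du/dt = -u with TS. */
static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec G, void *ctx)
{
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = VecCopy(U, G);CHKERRQ(ierr);
  ierr = VecScale(G, -1.0);CHKERRQ(ierr);   /* g(t,u) = -u */
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  TS             ts;
  Vec            u;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = VecCreate(PETSC_COMM_WORLD, &u);CHKERRQ(ierr);
  ierr = VecSetSizes(u, PETSC_DECIDE, 10);CHKERRQ(ierr);  /* 10 ODEs, for illustration */
  ierr = VecSetFromOptions(u);CHKERRQ(ierr);
  ierr = VecSet(u, 1.0);CHKERRQ(ierr);                    /* initial condition u(0) = 1 */

  ierr = TSCreate(PETSC_COMM_WORLD, &ts);CHKERRQ(ierr);
  ierr = TSSetRHSFunction(ts, NULL, RHSFunction, NULL);CHKERRQ(ierr);
  ierr = TSSetType(ts, TSRK);CHKERRQ(ierr);               /* explicit Runge-Kutta */
  ierr = TSSetTimeStep(ts, 0.01);CHKERRQ(ierr);
  ierr = TSSetFromOptions(ts);CHKERRQ(ierr);              /* -ts_type, -ts_monitor, ... */
  ierr = TSSolve(ts, u);CHKERRQ(ierr);

  ierr = TSDestroy(&ts);CHKERRQ(ierr);
  ierr = VecDestroy(&u);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}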
You should try running under valgrind, see:
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
You can also run in the debugger (yes, this is tricky on a batch system, but
possible) and see exactly what triggers the floating point exception, or, when
it hangs, interrupt the
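For example (hypothetical executable name; the valgrind flags follow the FAQ page above):
mpiexec -n 8 valgrind --tool=memcheck -q --num-callers=20 --log-file=valgrind.log.%p ./myprog -malloc off
and -on_error_attach_debugger (or -start_in_debugger) can be added to a run to drop into the debugger when an error is raised.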