Jed Brown writes:
Nystrom, William David w...@lanl.gov writes:
Well, I would really like to be able to do the experiment with PETSc -
and I tried to do so back in the summer of 2013. But I encountered
problems, which I documented, with the current PETSc threadcomm package
trying a
Now that mpich has been updated, would it be possible to update OpenMPI to
the latest stable release, openmpi-1.8.3?
Thanks,
Dave
Thanks for your detailed reply. Suppose I start working with wdn_mods_master
instead of wdn_mods_next where wdn_mods_master is master + my local changes
to master. Now, suppose I find a bug which I report. With the current petsc
workflow, it seems like that bug would get fixed on next and then
So what exactly do you mean when you say to turn off the support for the
current pthreads support because I have been using it, such as it is, in
the next branch?
Original Message
Subject: Re: [petsc-dev] PETSc 3.5.0 compilation on Windows and OpenMP
with pthread
From: Barry
and that I want without having to wait a day for them to show up in
master.
Dave
[sorry I don't remember the discussion on why you needed next]
Satish
On Thu, 3 Jul 2014, Dave Nystrom wrote:
So what exactly do you mean when you say to turn off the support for the
current pthreads
, 2014, at 11:47 AM, Dave Nystrom dave.nyst...@tachyonlogic.com
wrote:
So what exactly do you mean when you say to turn off the support for the
current pthreads support because I have been using it, such as it is, in
the next branch?
Original Message
Subject
Hi Karli,
Karl Rupp writes:
Hi Dave,
Your list sounds great to me. Glad that you and Paul are working on
this together.
My main interests are in better preconditioner support and better
multi-GPU/MPI scalability.
This is follow-up work then. There are a couple of
Karl Rupp writes:
Hi Dave,
That sounds very reasonable. Regarding polynomial preconditioning, were
you
thinking of least squares polynomial preconditioning or something else?
I haven't thought about anything specific yet, just about the
infrastructure for applying any p(A).
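For anyone reading along: an arbitrary p(A) can already be wired in by hand
through PCSHELL, which is roughly what such infrastructure would automate.
A minimal sketch, assuming a trivial illustrative polynomial p(A) = 2I - A
(the polynomial, the context struct, and all names below are made up for
illustration, not the planned design):

#include <petscksp.h>

typedef struct {
  Mat A;     /* operator from which p(A) is built */
  Vec work;  /* scratch vector with the same layout as A's vectors */
} PolyCtx;

/* Apply y = p(A) x for the illustrative polynomial p(A) = 2I - A. */
static PetscErrorCode PolyApply(PC pc, Vec x, Vec y)
{
  PolyCtx       *ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PCShellGetContext(pc, (void **)&ctx);CHKERRQ(ierr);
  ierr = MatMult(ctx->A, x, ctx->work);CHKERRQ(ierr);   /* work = A x      */
  ierr = VecWAXPY(y, -1.0, ctx->work, x);CHKERRQ(ierr); /* y = x - A x     */
  ierr = VecAXPY(y, 1.0, x);CHKERRQ(ierr);              /* y = 2 x - A x   */
  PetscFunctionReturn(0);
}

/* Hooked up with (error checks omitted):
     KSPGetPC(ksp, &pc);  PCSetType(pc, PCSHELL);
     PCShellSetContext(pc, &ctx);  PCShellSetApply(pc, PolyApply);        */

The interesting infrastructure question is then how to build and store the
polynomial coefficients generically instead of hard-coding them as above.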
Hi Karli,
I'm very interested in trying out your new ViennaCL interface in the near
future. Looking at the documentation of ViennaCL online, it looks like there
are preconditioners available within ViennaCL. Are there plans to provide
petsc interfaces to the ViennaCL preconditioners in the near future?
to the bug
Jed found in CMake 2.8.10.2, I have no other explanation why this hangs.
I'll try to reproduce this as soon as I'm at a CUDA-enabled machine again.
Best regards,
Karli
On 03/23/2013 06:49 PM, Dave Nystrom wrote:
Hi Karli,
Still does not seem to work. Attached
even though the logs will have to wait till
this evening.
Thanks,
Dave
--
Dave Nystrom
LANL HPC-5
Phone: 505-667-7913
Email: wdn at lanl.gov
Smail: Mail Stop B272
Group HPC-5
Los Alamos National Laboratory
Los Alamos, NM 87545
I've been using cholmod from SuiteSparse for over a year now to solve one of
my linear systems that is difficult to solve with an iterative method. Are
there other sparse direct solvers, such as clique, that I should try
sometime? My linear system is SPD and I am solving it on a single node. My
Jed Brown writes:
On Thu, Feb 21, 2013 at 4:47 PM, Dave Nystrom dnystrom1 at comcast.net
wrote:
I've been using cholmod from SuiteSparse for over a year now to solve one
of
my linear systems that is difficult to solve with an iterative method. Are
there other sparse direct
Jed Brown writes:
On Sat, Jun 30, 2012 at 9:17 PM, Dave Nystrom dnystrom1 at comcast.net
wrote:
From the FAQ, I see that src/mat/examples/tests/ex72.c provides an example
of
reading in a matrix system in matrix market format and then exporting it in
petsc sparse format
Jed Brown writes:
On Sat, Jun 30, 2012 at 9:32 PM, Dave Nystrom dnystrom1 at comcast.net
wrote:
Thanks. I don't have access to or experience with Matlab, so that does not
seem like a good option. I'm fairly new to Python and have never done anything with
the
petsc python interface
Jed Brown writes:
On Sun, Jun 10, 2012 at 10:07 AM, Dave Nystrom dnystrom1 at
comcast.net wrote:
Is there an easy way to dump a matrix and rhs in petsc? I have a
particular
system that has been challenging to solve with petsc other than using
direct
solves. Also, if I could
Thanks.
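For anyone who finds this thread later: one way to dump the system is PETSc's
binary viewer. A minimal sketch, with the file name and function name made up
for illustration; the ex10.c tutorial pointed to in the reply below can read a
file written this way back in and re-solve it with whatever KSP/PC options you
give it:

#include <petscksp.h>

/* Dump an assembled matrix and right-hand side to one PETSc binary file
   (sketch; the file name is arbitrary). */
static PetscErrorCode DumpSystem(Mat A, Vec b)
{
  PetscViewer    viewer;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "linear_system.petsc",
                               FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = MatView(A, viewer);CHKERRQ(ierr);   /* matrix first ...             */
  ierr = VecView(b, viewer);CHKERRQ(ierr);   /* ... then the right-hand side */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}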
Jed Brown writes:
src/ksp/ksp/examples/tutorials/ex10.c
On Fri, Jun 15, 2012 at 8:20 AM, Dave Nystrom dnystrom1 at comcast.net
wrote:
Jed Brown writes:
On Sun, Jun 10, 2012 at 10:07 AM, Dave Nystrom dnystrom1 at comcast.net
wrote:
Is there an easy way
for that - I see it
is on the petsc ToDo list. I will try again tonight but would welcome advice
or experiences from anyone else who has tried the new cholmod.
Dave
Jed Brown writes:
Nope, why don't you try it and send us a patch if you get it working.
On Wed, Jun 13, 2012 at 12:49 AM, Dave Nystrom
Is there an easy way to dump a matrix and rhs in petsc? I have a particular
system that has been challenging to solve with petsc other than using direct
solves. Also, if I could dump the linear system, is it acceptable to post it
on this email list so that someone who knows more about petsc and
Is there a tag that reflects the petsc-dev state at release time?
Thanks,
Dave
Satish Balay writes:
I've now created the release repos - so petsc-dev is now open for all
commits.
[If you've accumulated petsc-dev related commits - rebasing them with
current petsc-dev - before pushing
yet.
However we haven't been adding tags for patch updates.
Satish
On Thu, 31 May 2012, Dave Nystrom wrote:
Is there a tag that reflects the petsc-dev state at release time?
Thanks,
Dave
Satish Balay writes:
I've now created the release repos - so petsc
I'm interested in doing more extensive tests of my build of petsc-dev than
what I get from doing a make test after the build. Would running the
petsc-dev regression test suite be the right thing to consider? If so, how
would I run the regression test suite after a build of petsc-dev?
Thanks,
Thanks. I'll try that.
Jed Brown writes:
You can run make alltest if you like.
On May 19, 2012 9:38 PM, Dave Nystrom dnystrom1 at comcast.net wrote:
I'm interested in doing more extensive tests of my build of petsc-dev than
what I get from doing a make test after the build. Would
to the other computationally
expensive sections of your code; otherwise, the overall speed-up of your
application will be modest.
Cheers
Gerard
Dave Nystrom emailed the following on 09/05/12 04:29:
Is the pthreads support further along than the OpenMP support? I have not
tried
I see that petsc-dev now has some OpenMP support. Would a serial, non-mpi
code that uses petsc-dev be able to gain much performance improvement from it
now for the case of doing a sparse linear solve with cg and jacobi
preconditioning?
Thanks,
Dave
Is the pthreads support further along than the OpenMP support? I have not
tried the pthreads support yet. Does either the pthreads support or the
OpenMP support implement the matvec or do they just do vector type
operations?
Jed Brown writes:
On Tue, May 8, 2012 at 9:23 PM, Dave Nystrom
Barry Smith writes:
On May 1, 2012, at 8:34 AM, Dave Nystrom wrote:
I have a 2d resistive mhd code that has an interface to both agmg and
various
multigrid solvers available from PETSc including gamg, hypre and ml. I'm
not
that familiar with multigrid and so
Barry Smith writes:
On May 1, 2012, at 7:22 PM, Dave Nystrom wrote:
2. Is anyone on this list sufficiently familiar with agmg and the other
PETSc mg solvers to know how to configure the PETSc mg solvers to work
more
like agmg? It seems that agmg gives better performance than
I have a 2d resistive mhd code that has an interface to both agmg and various
multigrid solvers available from PETSc including gamg, hypre and ml. I'm not
that familiar with multigrid and so it is difficult for me to know how to
experiment with the various mg packages and tune them to my
At the end of configure.log, there are two possible ways to build petsc-dev
that are specified. Which is the recommended way to build - using make or
using python? I have been using make.
Also, one is labeled as legacy and one is labeled as experimental. That
gives the impression of having a
Thanks. So far, I have been using make.
Barry Smith writes:
The recommended usage is make
If cmake was found by ./configure, it will be used automatically when you run
make; if it was not found, the legacy make will be used
Barry
On Apr 22, 2012, at 10:19 AM, Dave Nystrom wrote:
as the testbed cluster for these results but has 308
nodes. That
should be interesting and fun.
Thanks,
Dave
I was wondering if anyone had ever tried using cuBlas as a substitute for
something like MKL with PETSc. I've been wondering if it would give better
performance than MKL for my direct solves with cholmod even though the block
sizes are small for cholmod i.e. 32x32 is the default I believe. If
On Fri, Feb 24, 2012 at 8:28 PM, Dave Nystrom Dave.Nystrom at
tachyonlogic.com
wrote:
I was wondering if anyone had ever tried using cuBlas as a substitute for
something like MKL with PETSc. I've been wondering if it would give better
performance than MKL for my direct solves
OK. I was thinking that if I used --download-txpetscgpu=yes, then
--with-txpetscgpu=1 would not be needed. That is how I was previously using
the package i.e. just using --download-txpetscgpu=yes.
Paul Mullowney writes:
--with-txpetscgpu=1 --download-txpetscgpu=yes
If the first option is
I have recently added the capability to have a separate preconditioning
matrix in the petsc interface for the code I am working with. I have two
types of preconditioning matrices that I have questions about. One is
tridiagonal and the other has 7 diagonals. In both cases, the diagonals
are
is important.
Thanks,
Dave
Mark F. Adams writes:
On Dec 20, 2011, at 10:33 AM, Dave Nystrom wrote:
Hi Mark,
I would like to try GAMG on some of my linear solves. Could you suggest
how
to get started? Is it more complicated than something like:
-ksp_type cg -pc_type
Jed Brown writes:
On Thu, Dec 29, 2011 at 11:01, Dave Nystrom dnystrom1 at comcast.net wrote:
I have recently added the capability to have a separate preconditioning
matrix in the petsc interface for the code I am working with. I have two
types of preconditioning matrices that I
Jed Brown writes:
On Thu, Dec 29, 2011 at 11:15, Dave Nystrom dnystrom1 at comcast.net wrote:
I have tried out -ksp_type cg -pc_type gamg -pc_gamg_type sa on my problem
and am encouraged enough with the results that I would like to try taking
the
next step with using gamg. Could
Mark F. Adams writes:
On Dec 29, 2011, at 12:51 PM, Dave Nystrom wrote:
Jed Brown writes:
On Thu, Dec 29, 2011 at 11:15, Dave Nystrom dnystrom1 at comcast.net
wrote:
I have tried out -ksp_type cg -pc_type gamg -pc_gamg_type sa on my
problem
and am encouraged enough
Mark F. Adams writes:
On Dec 29, 2011, at 12:45 PM, Dave Nystrom wrote:
Jed Brown writes:
On Thu, Dec 29, 2011 at 11:01, Dave Nystrom dnystrom1 at comcast.net
wrote:
I have recently added the capability to have a separate preconditioning
matrix in the petsc interface
Jed Brown writes:
On Thu, Dec 29, 2011 at 19:30, Dave Nystrom
Dave.Nystrom at tachyonlogic.com wrote:
Generally I use CG and LU from petsc. Cholesky runs slower than LU and I
was
told in a previous email that it was because of the extra data movement for
Cholesky versus LU. I
Jed Brown writes:
On Thu, Dec 29, 2011 at 19:30, Dave Nystrom
Dave.Nystrom at tachyonlogic.com wrote:
Generally I use CG and LU from petsc. Cholesky runs slower than LU and I
was
told in a previous email that it was because of the extra data movement for
Cholesky versus LU. I
Jed Brown writes:
On Thu, Dec 29, 2011 at 21:42, Dave Nystrom Dave.Nystrom at
tachyonlogic.com wrote:
I just browsed through the output of configure -help and don't see
suitesparse. I do see stuff for cholmod but not a download option for
cholmod.
You have to install
I am experimenting now with solving my various linear systems with petsc
using a separate preconditioning matrix. These linear systems are all banded
systems arising from discretization of pdes on a 2d structured grid. My
preconditioning matrix is the inner band of diagonals about the main
Is it possible to use the threaded version of Intel MKL with PETSc? If so,
what are the constraints? I'm getting a build failure but am building PETSc
with mpich and pthreads enabled.
Thanks,
Dave
I was wondering if anyone has experimented with using Goto BLAS with Petsc
for algorithms that make heavy use of blas and if there is any idea how it
compares to Intel MKL.
I'm finding that using Intel MKL with umfpack is speeding up my sparse direct
solves by about 2x. I'm just using default
for Microsoft and
is no longer maintaining the library. It remains to be seen whether or not
anyone will be filling his shoes for new architectures. If MKL is
available, I would go ahead and use it.
Jack
On Mon, Dec 26, 2011 at 11:10 PM, Dave Nystrom dnystrom1 at
comcast.net wrote:
BTW, the Intel MKL results below are for the sequential version of MKL. I
have not yet tried the threaded version of MKL. Not sure if threading helps
that much for blocks of that size i.e. default is 32x32.
Dave
Dave Nystrom writes:
I was wondering if anyone has experimented with using Goto
Mark F. Adams writes:
It sounds like you have symmetric positive definite systems like du/dt -
div(alpha(x) grad)u. The du/dt term makes the systems easier to solve.
I'm guessing your hard system does not have this mass term and so is
purely elliptic. Multigrid is well suited for this
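A rough way to see why the mass term helps (generic notation, not taken from
Dave's code): an implicit step, say backward Euler with step dt, actually
solves

    (M/dt + K) u_new = (M/dt) u_old + f

and the M/dt shift bounds the smallest eigenvalue of the system away from
zero, so CG-type iterations converge much faster than they do on the purely
elliptic operator K alone.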
to find the blocks cannot be easily
offset.
Sherry
On Tue, Dec 20, 2011 at 7:28 AM, Dave Nystrom dnystrom1 at comcast.net
wrote:
I have been comparing sequential SuperLU on one of my linear solves versus
PETSc LU. I am finding SuperLU to be a little over 2x slower than PETSc LU.
2011/12/20 Dave Nystrom dnystrom1 at comcast.net
I have been comparing sequential SuperLU on one of my linear solves versus
PETSc LU. I am finding SuperLU to be a little over 2x slower than PETSc
LU.
I was wondering if this is due to SuperLU not being tuned to my problem
I'm trying to build PETSc using the PGI Workstation compilers i.e. pgcc, pgCC
and pgfortran. So I modified my PETSc build script by adding the following
two configure options:
--with-gnu-compilers=0 --with-vendor-compilers=pgi
I'm starting out with a somewhat minimal set of external packages
Jed Brown writes:
On Thu, Dec 22, 2011 at 22:18, Dave Nystrom dnystrom1 at comcast.net wrote:
--with-gnu-compilers=0 --with-vendor-compilers=pgi
--with-vendor-compilers=portland
just instructs BuildSystem to look for pgcc. You might be better off
specifying the path
Jed Brown writes:
On Thu, Dec 22, 2011 at 23:52, Dave Nystrom Dave.Nystrom at
tachyonlogic.com wrote:
So, should I specify each of the compiler environment variables this way?
That is,
CC=/path/to/pgcc
CXX=/path/to/pgCC
FC=/path/to/pgfortran
Yes
OK.
Should I
I have been comparing sequential SuperLU on one of my linear solves versus
PETSc LU. I am finding SuperLU to be a little over 2x slower than PETSc LU.
I was wondering if this is due to SuperLU not being tuned to my problem or if
the PETSc LU algorithm performance is expected to be superior to
. Are the other GAMG option
defaults good to start with or should I be trying to configure them as well?
If so, I'm not familiar enough with multigrid to know off hand how to do
that.
Thanks,
Dave
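For reference, the CG + GAMG baseline under discussion can also be set up in
code, leaving every GAMG-specific option at its default. A minimal sketch
(the function name is made up, and ksp is assumed to already have its
operators attached):

#include <petscksp.h>

/* Minimal CG + GAMG baseline, leaving all GAMG-specific options at their
   defaults (sketch only). */
static PetscErrorCode SetupCGWithGAMG(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);   /* same as -ksp_type cg  */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);    /* same as -pc_type gamg */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   /* still honors -pc_gamg_* tuning */
  PetscFunctionReturn(0);
}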
Mark F. Adams writes:
On Dec 2, 2011, at 6:06 PM, Dave Nystrom wrote:
Mark F. Adams
Several of the external packages that have been interfaced with petsc require
mpi. Is there a way to build a serial version of petsc with these packages
and use stub routines for mpi? I know there are fortran stub routines,
i.e. mpiuni, that I am using for my application which is currently
Barry Smith writes:
On Dec 16, 2011, at 10:26 PM, Dave Nystrom wrote:
Barry Smith writes:
Dave,
Band solvers (like in LAPACK) handle all the matrix entries from the band
to the diagonal as nonzero (even though in your case the vast majority of
those values are zero
I'm trying to figure out whether I can do a couple of things with petsc.
1. It looks like the preconditioning matrix can actually be different from
the full problem matrix. So I'm wondering if I could provide a different
preconditioning matrix for my problem and then do an LU solve of the
Barry Smith writes:
On Dec 16, 2011, at 9:52 AM, Matthew Knepley wrote:
On Fri, Dec 16, 2011 at 9:37 AM, Dave Nystrom dnystrom1 at comcast.net
wrote:
I'm trying to figure out whether I can do a couple of things with petsc.
1. It looks like the preconditioning matrix can
Matthew Knepley writes:
On Fri, Dec 16, 2011 at 9:37 AM, Dave Nystrom dnystrom1 at comcast.net
wrote:
I'm trying to figure out whether I can do a couple of things with petsc.
1. It looks like the preconditioning matrix can actually be different from
the full problem matrix
to seeing what I can do with a
separate preconditioner matrix.
Thanks again for your reply.
Cheers,
Dave
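For the archive: the separate-preconditioner-matrix setup discussed above
comes down to passing a different Pmat to KSPSetOperators. A minimal sketch
(names made up, matrices assumed already assembled; petsc-dev of this era
takes a trailing MatStructure flag that newer releases drop):

#include <petscksp.h>

/* Solve A x = b but build the preconditioner from a different matrix P. */
static PetscErrorCode SolveWithSeparatePmat(Mat A, Mat P, Vec b, Vec x)
{
  KSP            ksp;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  /* Amat defines the Krylov operator, Pmat is what the PC factors/uses. */
  ierr = KSPSetOperators(ksp, A, P, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* e.g. -ksp_type cg -pc_type lu */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With -pc_type lu (or cholesky), the direct factorization is then performed on
P rather than on the full system matrix A.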
Barry
On Dec 16, 2011, at 6:12 PM, Dave Nystrom wrote:
Matthew Knepley writes:
On Fri, Dec 16, 2011 at 9:37 AM, Dave Nystrom dnystrom1 at comcast.net
wrote:
I'm
Barry Smith writes:
http://en.wikipedia.org/wiki/SPIKE_algorithm
Looks great to me. I have a bunch of banded systems in my code to solve.
I've been trolling through the Petsc documentation trying to figure out how
to set the maximum number of iterations for a krylov solver without using the
command line option -ksp_max_its and without calling KSPSetFromOptions. Is
this possible? If so, how?
Thanks,
Dave
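One way to do this purely in code is KSPSetTolerances, leaving the other
tolerances alone; a minimal sketch (the wrapper function is made up for
illustration):

#include <petscksp.h>

/* Cap the iteration count in code, without -ksp_max_its and without
   calling KSPSetFromOptions. */
static PetscErrorCode CapIterations(KSP ksp, PetscInt maxits)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* rtol, abstol, dtol keep their current/default values */
  ierr = KSPSetTolerances(ksp, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT,
                          maxits);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

It can be called any time after KSPCreate on the solver in question.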
Jed Brown writes:
On Sun, Dec 4, 2011 at 23:37, Dave Nystrom Dave.Nystrom at
tachyonlogic.com wrote:
I've been trolling through the Petsc documentation trying to figure out how
to set the maximum number of iterations for a krylov solver without using
the
command line option
makes
sense?
Thanks,
Dave Nystrom writes:
Jed Brown writes:
On Tue, Nov 29, 2011 at 23:53, Dave Nystrom dnystrom1 at comcast.net
wrote:
I have a resistive mhd code that I have recently interfaced to petsc
which
has 7 linear solves that are all symmetric. I recently tried
Matthew Knepley writes:
On Wed, Nov 30, 2011 at 12:41 AM, Dave Nystrom dnystrom1 at
comcast.net wrote:
I have a linear system in a code that I have interfaced to petsc that is
taking about 80 percent of the run time per timestep. This linear system
is a symmetric block banded
the petsc
pthreads work. I'd be happy to give it a try when you think it is more
mature development wise and has the potential to offer some meaningful
performance gain.
Thanks,
Dave
Shri
On Nov 29, 2011, at 11:47 PM, Dave Nystrom dnystrom1 at comcast.net wrote:
I have a 2d
I never received any reply to this question but would very much appreciate
one. Not sure if it fell through the cracks.
Thanks,
Dave
Dave Nystrom writes:
I have a 2d resistive mhd code interfaced to petsc. The code has seven
different linear solves per timestep and these linear solves
Never mind. I was looking in the wrong place.
Dave Nystrom writes:
I was looking for GAMG on the development section of the online web site and
can't seem to find any reference to it. Is there documentation available on
GAMG?
Thanks,
Dave
It seems that whenever I am running my code with petsc and run top on linux
it reports that my code is using something like 24-25 GB of memory, even if
I am running a smallish problem. Is this normal? I just had a problem
terminate because it ran out of memory and I had no idea that I was
in the near term.
Thanks,
Dave
--
Dave Nystrom
phone: 505-661-9943 (home office)
505-662-6893 (home)
skype: dave.nystrom76
email: dnystrom1 at comcast.net
smail: 219 Loma del Escolar
Los Alamos, NM 87544
the petsc
pthreads capability that is in petsc-dev. Is this work at a stage yet where
I might be able to benefit from trying it with my code when running on a
multi-core cpu?
Thanks,
Dave
preonly. I was wondering if that was reasonable behavior. I would not have
thought that using a cholesky direct solve would take longer than an LU
direct solve in petsc for the serial case and was hoping it would be faster.
Does this behavior seem reasonable?
Thanks,
Dave
for
this forum as well if anyone wants a copy.
Thanks,
Dave
Matthew Knepley writes:
On Sun, Oct 2, 2011 at 10:50 PM, Dave Nystrom Dave.Nystrom at
tachyonlogic.com wrote:
Hi Barry,
Barry Smith writes:
Dave,
I cannot explain why it does not use the MatMult_SeqAIJCusp() - it does
for me.
Do you get good performance
Dave Nystrom writes:
Matthew Knepley writes:
On Sat, Oct 1, 2011 at 11:26 PM, Dave Nystrom Dave.Nystrom at
tachyonlogic.com wrote:
Barry Smith writes:
On Oct 1, 2011, at 9:22 PM, Dave
for your help,
Dave
Barry
On Oct 2, 2011, at 9:08 PM, Dave Nystrom wrote:
Barry Smith writes:
On Oct 2, 2011, at 6:39 PM, Dave Nystrom wrote:
Thanks for the update. I don't believe I have gotten a run with good
performance yet, either from C or Fortran. I wish