... for compiling multiple sources
On Fri, Mar 18, 2016 at 9:49 AM, Matthew Overholt <overh...@capesim.com> wrote:
Hi,
I'm just getting started with PETSc and have been able to configure and run
the examples, but as I'm starting to put together a more substantial code
with multiple source files I haven't been able to find or create a makefile
which works and follows PETSc guidelines.
I've configured ...
Petsc-users,
I want to use PARDISO for a KSPPREONLY solution in a parallel context. I
understand that my FEA stiffness matrix for KSP (PARDISO) needs to be of
type MATSEQAIJ (according to MATSOLVERMKL_PARDISO), but I would like to
assemble this matrix in parallel (MATSBAIJ) and then collect ...
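One way to do that collection is MatCreateRedundantMatrix(); a minimal sketch,
assuming it accepts the assembled matrix type (check its man page; A and Aseq
are placeholder names):

  Mat         A, Aseq;
  PetscMPIInt size;

  /* ... assemble A in parallel as usual ... */
  MPI_Comm_size(PETSC_COMM_WORLD, &size);
  /* nsubcomm == size requests one redundant copy per rank, i.e. a sequential matrix */
  ierr = MatCreateRedundantMatrix(A, size, MPI_COMM_NULL, MAT_INITIAL_MATRIX, &Aseq);CHKERRQ(ierr);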
PETSc Users,
I am doing a KSPPREONLY solution (of the heat transfer equation using FEA)
and comparing several packages like PARDISO and MUMPS, and I am encountering
a MatSolve() failure that I am having trouble diagnosing. The matrix
inversion fails and I get "nan". The failure only happens ...
Hi Petsc-Users,
I am trying to understand an issue where PetscCommDuplicate() calls are
taking an increasing percentage of time as I run a fixed-sized problem on
more processes.
I am using the FEM to solve the steady-state heat transfer equation (K.x =
q) using a PC direct solver, like ...
To: overh...@capesim.com
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] large PetscCommDuplicate overhead
> On Oct 5, 2016, at 2:30 PM, Matthew Overholt <overh...@capesim.com> wrote:
>
> Hi Petsc-Users,
>
> I am trying to understand an issue where PetscCommDuplicate() ...
>> On Aug 22, 2016, at 10:49 AM, Matthew Overholt <overh...@capesim.com> wrote:
>>
>> I am using the Intel MKL CPardiso library as a PC direct solver, and I am
>> trying to figure out how to properly set options (the Pardiso and CPardiso
>> “iparm” ...
>
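(For reference, both PETSc interfaces expose these controls as runtime options
named after the iparm index; a hedged sketch with placeholder values, per the
MATSOLVERMKL_PARDISO and MATSOLVERMKL_CPARDISO man pages:
  -mat_mkl_pardiso_65 4     suggested number of threads
  -mat_mkl_pardiso_68 1     message-level information
  -mat_mkl_cpardiso_68 1    the same control via the CPardiso interface)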
What exact machine are you running on? Please run 'module list' so we can
see exactly what modules you are using. Please tell us exactly what options
you are passing to pat_build?
Barry
> On Oct 6, 2016, at 10:45 AM, Matthew Overholt <overh...@capesim.com> wrote:
>
> Matthew
Subject: Re: [petsc-users] large PetscCommDuplicate overhead
Matthew Overholt <overh...@capesim.com> writes:
> Barry,
>
> Subsequent tests with the same code and a problem (input) having a
> much smaller vertex (equation) count (i.e. a much smaller matrix to
> invert for the ...
...() are the same regardless
of geometry, and the time of the call shouldn't depend on the geometry.
Would you be able to do another set of tests where you track the time in
MPI_Get_attr() and MPI_Barrier() instead of PetscCommDuplicate()? It could
be Cray did something "funny" in their implementation.
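One way to do that tracking (a minimal sketch; the event names are arbitrary,
and the attribute call is spelled MPI_Comm_get_attr in the MPI-2 standard):

  PetscLogEvent attr_event;
  void          *val;
  PetscMPIInt   flag;

  ierr = PetscLogEventRegister("AttrGet", 0, &attr_event);CHKERRQ(ierr);
  ierr = PetscLogEventBegin(attr_event, 0, 0, 0, 0);CHKERRQ(ierr);
  ierr = MPI_Comm_get_attr(PETSC_COMM_WORLD, MPI_TAG_UB, &val, &flag);CHKERRQ(ierr);
  ierr = PetscLogEventEnd(attr_event, 0, 0, 0, 0);CHKERRQ(ierr);
  /* repeat likewise around an MPI_Barrier() call; run with -log_view to see the times */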
Hello,
I am using KSP in KSPPREONLY mode to do a direct solve on an A*x = b
system, with solver algorithms MUMPS, CPardiso and Pardiso. For Pardiso,
is it possible to control the solver execution step (denoted "phase" in
Intel's docs)? I would like to be able to control when it refactors as one ...
... Analysis on output time steps when unsteady, and just for the
first fixed point iteration when steady.
Matt...
On Tue, Jun 5, 2018 at 5:03 AM, Smith, Barry F. wrote:
>
>
> > On Jun 4, 2018, at 10:32 PM, Matthew Overholt wrote:
> >
> > Hello,
> >
> > I am
On Tue, Jun 5, 2018 at 10:40 AM, Matthew Knepley wrote:
> On Tue, Jun 5, 2018 at 10:04 AM, Matthew Overholt wrote:
>
>> Barry,
>>
>> What I should have said was that I wanted to control when it does the
>> "Analysis" ...
On Tue, Jun 5, 2018 at 1:48 PM, Smith, Barry F. wrote:
>
>
> > On Jun 5, 2018, at 4:08 PM, Matthew Overholt wrote:
> >
> > Yes to Matthew - not repeating Phase 1: Fill-reduction analysis and
> > symbolic factorization. Numerical factoring is required because ...
On Wed, Jun 6, 2018 at 6:30 AM, Smith, Barry F. wrote:
>
>
>
> > On Jun 5, 2018, at 7:36 PM, Matthew Overholt wrote:
> >
> >
> > On Tue, Jun 5, 2018 at 1:48 PM, Smith, Barry F. wrote:
> >
> >
> > > On Jun 5, 2018, at 4:08 PM, Matthew Overholt wrote:
I am working on handling very small pivot values for a very small
percentage of my matrix (linear Ax = b solution), and I am getting an error
that I don't understand when I run the KSPCG solver in parallel.
KSPCreate(comm, &ksp);
KSPSetTolerances(ksp, rtol, ...);
KSPSetType(ksp, KSPCG);
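(The snippet presumably continues with the usual sequence; a sketch, not from
the original message, with A, b, x as placeholder names:

  KSPSetOperators(ksp, A, A);   /* system matrix, also used to build the PC */
  KSPSetFromOptions(ksp);       /* picks up runtime options such as -sub_pc_* */
  KSPSolve(ksp, b, x);
)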
Great idea, thanks again!
Matt...
On Mon, Jul 9, 2018 at 11:08 AM, Matthew Knepley wrote:
> On Mon, Jul 9, 2018 at 11:05 AM Matthew Overholt wrote:
>
>> The -sub_pc_factor_zeropivot option works, but since this is for
>> a commercial code I don't want users to have to ...
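(One way to spare users from passing the option themselves is to hard-wire it
in the code before KSPSetFromOptions(); a sketch, with a placeholder tolerance:

  ierr = PetscOptionsSetValue(NULL, "-sub_pc_factor_zeropivot", "1.0e-20");CHKERRQ(ierr);
)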
Thanks, Matt, that option works perfectly, and explains the -ksp_view
output.
Much appreciated!
Matt...
On Fri, Jul 6, 2018 at 6:25 PM, Matthew Knepley wrote:
> Default Case (zeropivot is 2.22045E-14):
>> mpiexec -n 1 ...
>> ==> ksp fails due to pcReason = PC_FACTOR_NUMERIC_ZEROPIVOT
to call
PCBJacobiGetSubKSP()
and then loop over every block, calling
KSPGetPC()
PCFactorSetZeroPivot()
on each block? That seems a little tedious, but if that is the correct
approach I'll do it.
Thanks,
Matt Overholt
On Mon, Jul 9, 2018 at 10:17 AM, Matthew Overholt wrote:
> Thanks, Matt, ...
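A minimal sketch of that loop (assuming ksp/pc are the outer solver and its
block-Jacobi preconditioner, and 1.0e-20 is a placeholder tolerance):

  KSP      *subksp;
  PC        subpc;
  PetscInt  i, nlocal, first;

  ierr = KSPSetUp(ksp);CHKERRQ(ierr);  /* the sub-KSPs exist only after setup */
  ierr = PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp);CHKERRQ(ierr);
  for (i = 0; i < nlocal; i++) {
    ierr = KSPGetPC(subksp[i], &subpc);CHKERRQ(ierr);
    ierr = PCFactorSetZeroPivot(subpc, 1.0e-20);CHKERRQ(ierr);
  }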
Here is a sample makefile, like what I use with the Intel MPI compilers
(used during PETSc configuration) and MKL library.
Matt Overholt
#
# makefile for Linux using the Intel C++ Compiler, MKL & MPI libraries +
OpenMP
# usage: make
# or: make
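A minimal sketch of such a makefile in the PETSc style (assuming PETSC_DIR and
PETSC_ARCH are set in the environment; the OBJFILES list matches the correction
that follows, and recipe lines must start with a tab):

  OBJFILES = class1.o class2.o myprogram.o

  include ${PETSC_DIR}/lib/petsc/conf/variables
  include ${PETSC_DIR}/lib/petsc/conf/rules

  myprogram: ${OBJFILES}
  	-${CLINKER} -o myprogram ${OBJFILES} ${PETSC_LIB}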
Correction:
OBJFILES = \
class1.o \
class2.o \
myprogram.o
Matt Overholt
On Tue, Oct 23, 2018 at 2:29 PM Matthew Overholt wrote:
> Here is a sample makefile, like what I use with the Intel MPI compilers
> (used during PETSc configuration) and MKL library.
>
> Matt Overholt
>
>
Hi Edoardo,
I also have the Intel Parallel Studio XE compilers and MPI installed, and I
use them to build PETSc as follows.
# Either add these to your .bashrc or run them on the command line before
beginning the PETSc installation
source /opt/intel/parallel_studio_xe_2018/bin/psxevars.sh intel64
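A plausible configure invocation after sourcing that script (a sketch; option
spellings vary by PETSc version, and MKLROOT is exported by psxevars.sh):

  ./configure --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort \
    --with-blaslapack-dir=$MKLROOT --with-debugging=0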
Hello,
How does one configure the PETSc installation to download METIS and have it
use REALTYPEWIDTH 64 (as defined in metis.h)? I am using:
--with-64-bit-indices --download-metis=yes
to get IDXTYPEWIDTH 64. If I were installing METIS independently I would
set the following near the top of metis.h ...
The attached patch provides a --download-metis-use-doubleprecision option
>
> Satish
>
> On Thu, 10 Jan 2019, Matthew Overholt via petsc-users wrote:
>
> > Hello,
> >
> > How does one configure the PETSc installation to download METIS and have
> > it use REALTYPEWIDTH 64 ...
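With Satish's patched configure applied, the install line would presumably
become something like:

  ./configure --with-64-bit-indices --download-metis \
    --download-metis-use-doubleprecision=1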
Hi Mohammad,
We tried the same thing for our finite element heat transfer code, and
experimented with both MUMPS and MKL's Cluster PARDISO for about a year,
and were very disappointed with how they scaled.
Give the full PETSc PCG solver with the ILU(0) preconditioner a try (pure
MPI, no hybrid ...)
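At runtime that setup corresponds to options along these lines (a sketch,
assuming the common block-Jacobi/ILU(0) arrangement for parallel runs):

  -ksp_type cg -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 0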