Thank you everybody, number of iterations seems now to be under control, I'll
run some scaling test and hope for the best.
On Monday 29 September 2014 16:02:24 Jed Brown wrote:
Matthew Knepley knep...@gmail.com writes:
By matrix-free do you mean AMG (like -pc_mg_galerkin)? Does it reach the same
http://lmgtfy.com/?q=HPGMG
On Sep 30, 2014, at 1:59 AM, Filippo Leonardi
filippo.leona...@sam.math.ethz.ch wrote:
Now I am intrigued, is there, by any chance, any
Hi,
Intel released MKL 11.2 with support for Pardiso cluster version.
https://software.intel.com/en-us/articles/intel-math-kernel-library-parallel-direct-sparse-solver-for-clusters
Is it possible to use Pardiso with PETSc using MPI?
I'm on Windows if that matters.
Currently if I try to run, I
You should include make.log and configure.log (to answer, among other questions,
whether you configured with --download-mkl_pardiso).
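For reference, a hedged sketch of what such a configure invocation might look like; option names vary across PETSc versions, and `$MKLROOT`, the application name, and the runtime options here are assumptions (check `./configure --help` for the authoritative list):

```shell
# Hypothetical configure line for PETSc with MKL Pardiso support.
./configure --with-cc=mpicc --with-fc=mpif90 \
    --with-blaslapack-dir=$MKLROOT \
    --with-mkl_pardiso-dir=$MKLROOT

# Select the solver at runtime. Note this is the sequential Pardiso
# interface; whether the MPI cluster version (CPardiso) is wired in
# depends on the PETSc version.
# mpiexec -n 1 ./app -ksp_type preonly -pc_type lu \
#     -pc_factor_mat_solver_package mkl_pardiso
```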
On 9/30/14 12:03 PM, Baros Vladimir wrote:
Hi,
Intel released MKL 11.2 with support for Pardiso cluster version.
Hi PETSc team,
I'm attempting to analyze and optimize a structural analysis program called
SESKA using mpe. I have configured PETSc with:
./configure
--with-mpi=1 --with-cc=gcc
--with-cxx=g++
--with-fc=gfortran
Send configure.log and make.log
On Sep 30, 2014, at 6:49 AM, Matthew Hills hillsma...@outlook.com wrote:
Hi PETSc team,
I'm attempting to analyze and optimize a structural analysis program called
SESKA using mpe. I have configured PETSc with:
./configure --with-mpi=1 --with-cc=gcc
This hasn't been tested in a while - but I think you have
to configure with:
--download-mpe
And then run the code with:
-log_mpe
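Put together, the suggested workflow might look like the following sketch; the application binary name and process count are placeholders:

```shell
# Reconfigure PETSc with MPE support (untested in a while, per Satish):
./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpe
make all

# Run the instrumented application with MPE logging enabled;
# the resulting log can be viewed with a tool such as Jumpshot:
mpiexec -n 4 ./seska_app -log_mpe
```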
Satish
On Tue, 30 Sep 2014, Matthew Hills wrote:
Hi PETSc team,
I'm attempting to analyze and optimize a structural analysis program called
SESKA using mpe. I
On Mon, Sep 29, 2014 at 4:55 PM, Miguel Angel Salazar de Troya
salazardetr...@gmail.com wrote:
Hi all
I'm bumping this post because I have more questions related to the same
problem.
I am looping over the edges of my DMNetwork, then I obtain the vertices
that make up each edge with
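The pattern being described (looping over DMNetwork edges and querying each edge's endpoint vertices) might be sketched as below. The function names follow the DMNetwork API (older releases called the second routine DMNetworkGetConnectedNodes); the surrounding setup is assumed:

```c
#include <petscdmnetwork.h>

/* Sketch: visit every edge of a DMNetwork and query its two endpoint
 * vertices, using the usual PETSc ierr/CHKERRQ error-handling idiom. */
PetscErrorCode VisitEdges(DM networkdm)
{
  PetscErrorCode ierr;
  PetscInt       eStart, eEnd, e;
  const PetscInt *cone;

  ierr = DMNetworkGetEdgeRange(networkdm, &eStart, &eEnd);CHKERRQ(ierr);
  for (e = eStart; e < eEnd; e++) {
    /* cone[0] and cone[1] are the vertices covered by edge e */
    ierr = DMNetworkGetConnectedVertices(networkdm, e, &cone);CHKERRQ(ierr);
    /* ... work with cone[0] and cone[1] ... */
  }
  return 0;
}
```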
From: Miguel Angel Salazar de Troya salazardetr...@gmail.com
Date: Mon, 29 Sep 2014 16:55:14 -0500
To: Shri abhy...@mcs.anl.gov
Cc: petsc-users@mcs.anl.gov petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] DMPlex with spring elements
Hi all
I'm bumping this post because I have more
On Tue, Sep 30, 2014 at 11:22 AM, Abhyankar, Shrirang G.
abhy...@mcs.anl.gov wrote:
From: Miguel Angel Salazar de Troya salazardetr...@gmail.com
Date: Mon, 29 Sep 2014 16:55:14 -0500
To: Shri abhy...@mcs.anl.gov
Cc: petsc-users@mcs.anl.gov petsc-users@mcs.anl.gov
Subject: Re:
No worries. Thanks for your responses. I'm assuming you suggested using
DMNetworkIsGhostVertex() and not modifying the value at a ghost vertex for the
case in which I am using the global vectors and not the local vectors, where it
is possible, as Matt suggested.
Miguel
On Tue, Sep 30, 2014 at 11:22 AM, Abhyankar,
My reply was based on the assumption that the values set at a vertex do
not have contributions from other vertices or edges. In which case, you
would not want to set any value for the ghost vertex.
For example, consider the following network
v1 ----e1---- v2 ----e2---- v3
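A minimal sketch of that advice, assuming the values at a vertex receive no contributions from other vertices or edges; the variable names and surrounding assembly loop are hypothetical:

```c
/* Skip ghost vertices during assembly: their owning rank sets the value. */
PetscErrorCode ierr;
PetscInt       vStart, vEnd, v;
PetscBool      isghost;

ierr = DMNetworkGetVertexRange(networkdm, &vStart, &vEnd);CHKERRQ(ierr);
for (v = vStart; v < vEnd; v++) {
  ierr = DMNetworkIsGhostVertex(networkdm, v, &isghost);CHKERRQ(ierr);
  if (isghost) continue;  /* do not set any value for a ghost vertex */
  /* ... set this rank's values for vertex v ... */
}
```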
Thanks a mil, it worked perfectly.
Matt
Sent from Windows Mail
From: Satish Balay
Sent: Tuesday, 30 September 2014 15:06
To: Matthew Hills
Cc: petsc-users@mcs.anl.gov, Sebastian Skatulla
This hasn't been tested in a while - but I think you have
to configure with:
Hi,
The comment on line 419 of SNES ex28.c
http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex28.c.html
says
that the approach used in this example is not the best way to allocate
off-diagonal blocks. Is there an example that shows a better way of
allocating memory for
Gautam Bisht gbi...@lbl.gov writes:
Hi,
The comment on line 419 of SNES ex28.c
http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex28.c.html
says
that the approach used in this example is not the best way to allocate
off-diagonal blocks. Is there an example that shows
Do not use any of these options: --download-mpicc=1 --download-f2cblaslapack=1
--download-f-blas-lapack=1
Send the resulting configure.log again if it fails.
Also we recommend installing the latest version of PETSc instead of this old
one.
Barry
On Sep 30, 2014, at 6:20 PM, Nan
Hi all,
I have four differential equations to be solved with a sparse linear system
of the form A x = b. For each node, I
have dof = 4. I have found few tutorials or examples on KSPSolve with dof > 1,
and I do know it's possible to solve this problem with PETSc. Are
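One common way to handle dof = 4 per node is a blocked matrix with block size 4, sketched below under assumed sizes; `nnodes` and the elided assembly loop are hypothetical, and the three-argument KSPSetOperators signature is the one current as of PETSc 3.5:

```c
#include <petscksp.h>

/* Sketch: set up and solve A x = b with 4 coupled unknowns per node. */
Mat      A;
Vec      x, b;
KSP      ksp;
PetscInt nnodes = 100, n = 4 * nnodes;

MatCreate(PETSC_COMM_WORLD, &A);
MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
MatSetBlockSize(A, 4);            /* 4 dof per node */
MatSetFromOptions(A);
MatSetUp(A);
/* ... insert one 4x4 block per node pair with MatSetValuesBlocked(),
 * then MatAssemblyBegin/End ... */

KSPCreate(PETSC_COMM_WORLD, &ksp);
KSPSetOperators(ksp, A, A);
KSPSetFromOptions(ksp);           /* pick solver via -ksp_type / -pc_type */
KSPSolve(ksp, b, x);
```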
On Tue, Sep 30, 2014 at 9:13 PM, Sharp Stone thron...@gmail.com wrote:
Hi all,
I have four differential equations to be solved with a sparse linear
system of the form A x = b. For each
node, I have dof = 4. I have found few tutorials or examples on KSPSolve