On Mon, 12 Sep 2016, Dr.-Ing. Dominik Brands wrote:
> Dear all,
>
> due to some compatibility constraints I must use an older version of PETSc, 3.2-p7.
> In my program I want to use the now-outdated solver prometheus.
> Therefore I need version 1.8.9, but my last stored version is 1.8.8.
>
> Has anyone
On Tue, 13 Sep 2016, Scott Dossa wrote:
> Also I found I had to search manually for where the shared-libraries
> resided and explicitly tell PETSc where they live. For example, when I
> configure PETSc, I had to include options like
>
> > ./configure (. . .)
> > --with-mpi-dir=/panfs/roc/itascaso
On Tue, 13 Sep 2016, Matthew Knepley wrote:
> I believe your problem is that this is old PETSc. In the latest release,
> BLACS is part of SCALAPACK.
BLACS has been part of SCALAPACK for a few releases - so that's not the issue.
stderr:
/cm/shared/modulefiles/moose-compilers/petsc/petsc
On Tue, 13 Sep 2016, Matthew Knepley wrote:
> On Tue, Sep 13, 2016 at 12:16 PM, Timothée Nicolas <
> timothee.nico...@gmail.com> wrote:
>
> > Hi all,
> >
> > I can't seem to figure out how to specify my compilation options with
> > PETSc. For my makefiles, I've always been using Petsc examples in
On Tue, 13 Sep 2016, Kong (Non-US), Fande wrote:
> > However - as Matt referred to - it's best to use the latest petsc-3.7
> > release. Does MOOSE require 3.6?
> >
> >
> I think MOOSE works fine with petsc-3.7 as long as you do not use
> superlu_dist. superlu_dist has bugs in the latest petsc-3.7.
On Tue, 13 Sep 2016, Kong (Non-US), Fande wrote:
> On Tue, Sep 13, 2016 at 12:05 PM, Satish Balay wrote:
>
> > On Tue, 13 Sep 2016, Kong (Non-US), Fande wrote:
> >
> > > > However - as Matt referred to - it's best to use the latest petsc-3.7
> >
On Tue, 13 Sep 2016, Satish Balay wrote:
> scalapack is getting compiled with the flag '-Df77IsF2C'. This mode
> was primarily used by 'g77' previously - which we hardly ever use
> anymore - so this mode is not really tested?
Looks like no one ever tested s
Possible ERROR while running linker: exit code 256
stderr:
/usr/bin/ld: cannot find -lfblas
collect2: error: ld returned 1 exit status
Do not use the '--prefix=' option with an empty value - I don't know what
configure does with it. Either do an 'inplace' build [default - when you do
not specify a prefix] - or
Do you have MacPorts installed? With BLAS? Perhaps that's the source of
conflict..
Satish
On Wed, 14 Sep 2016, Barry Smith wrote:
>
> Send configure.log to petsc-ma...@mcs.anl.gov Looks like a troubled
> BLAS/LAPACK install.
>
>Barry
>
> > On Sep 14, 2016, at 12:52 PM, Gideon Simpson
On Fri, 7 Oct 2016, Anton Popov wrote:
> Hi guys,
>
> is there any news about fixing the buggy behavior of SuperLU_DIST, exactly what
> is described here:
>
> http://lists.mcs.anl.gov/pipermail/petsc-users/2015-August/026802.html ?
>
> I'm using 3.7.4 and still get SEGV in pdgssvx routine. Everyth
On Fri, 7 Oct 2016, Kong, Fande wrote:
> On Fri, Oct 7, 2016 at 9:04 AM, Satish Balay wrote:
>
> > On Fri, 7 Oct 2016, Anton Popov wrote:
> >
> > > Hi guys,
> > >
> > > is there any news about fixing the buggy behavior of SuperLU_DIST, exactly
. But that line is in a comment
> block, not the program.
>
> Sherry
>
>
> On Mon, Oct 10, 2016 at 7:27 AM, Anton Popov wrote:
>
> >
> >
> > On 10/07/2016 05:23 PM, Satish Balay wrote:
> >
> >> On Fri, 7 Oct 2016, Kong, Fande wrote:
> Which version of superlu_dist does this capture? I looked at the
> > original error log, it pointed to pdgssvx: line 161. But that line is in a
> > comment block, not the program.
> >
> > Sherry
> >
> >
> > On Mon, Oct 10, 2016 at 7:27 AM, Anton Popov
1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably
> memory access out of range
>
>
> On 10/11/2016 03:26 PM, Anton Popov wrote:
> >
> > On 10/10/2016 07:11 PM, Satish Balay wrote:
> > > That's from petsc-3.5
> > >
> > >
ownload-superlu_dist-commit=origin/maint
Satish
>
> Maybe we're facing a problem that is already solved.
>
> Thanks,
> Anton
> >
> > > On Oct 11, 2016, at 12:19 PM, Satish Balay wrote:
> > >
> > > This log looks truncated. Are there
On Sun, 16 Oct 2016, 丁老师 wrote:
> Dear professor:
> I met the following error for PETSc 3.7.3.
> I declare LocalSize as int, but it doesn't work anymore. It works for
> 3.6.3.
>
>error: cannot convert ‘int*’ to ‘PetscInt* {aka long int*}’ for argument
> ‘2’ to ‘PetscErrorCode VecGetL
The issue with this test code is - using MatLoad() twice [with the
same object - without destroying it]. Not sure if that's supposed to
work..
Satish
On Fri, 21 Oct 2016, Hong wrote:
> I can reproduce the error on a linux machine with petsc-maint. It crashes
> at 2nd solve, on both processors:
>
On Fri, 21 Oct 2016, Barry Smith wrote:
>
> > On Oct 21, 2016, at 5:16 PM, Satish Balay wrote:
> >
> > The issue with this test code is - using MatLoad() twice [with the
> > same object - without destroying it]. Not sure if that's supposed to
> > work..
>
On Fri, 21 Oct 2016, Barry Smith wrote:
>
> valgrind first
balay@asterix /home/balay/download-pine/x/superlu_dist_test
$ mpiexec -n 2 $VG ./ex16 -f ~/datafiles/matrices/small
First MatLoad!
Mat Object: 2 MPI processes
type: mpiaij
row 0: (0, 4.) (1, -1.) (6, -1.)
row 1: (0, -1.) (1, 4.)
de,
>
> This will also make MatMPIAIJSetPreallocation() work properly with
> multiple calls (you will not need a MatReset()).
>
>Barry
>
>
> > On Oct 21, 2016, at 6:48 PM, Satish Balay wrote:
> >
> > On Fri, 21 Oct 2016, Barry Smith wrote:
the maintenance release 3.7.5 together
> > with the latest SuperLU_DIST? Or next release is a more realistic option?
> >
> > Anton
> >
> > On 10/24/2016 01:58 AM, Satish Balay wrote:
> >> The original testcode from Anton also works [i.e
A,A);
> PCSetUp(pc);
>
> MatCreate(PETSC_COMM_WORLD,&A);
> MatLoad(A,fd);
> PCSetOperators(pc,A,A);
> PCSetUp(pc); //crash here with np=2, superlu_dist, not with mumps/superlu or
> superlu_dist np=1
>
> Hong
>
> > On Oct 24, 2016, at 9:00 AM, Satish Balay wrote:
On Mon, 24 Oct 2016, Barry Smith wrote:
>
> > [Or perhaps Hong is using a different test code and is observing bugs
> > with superlu_dist interface..]
>
>She states that her test does a NEW MatCreate() for each matrix load (I
> cut and pasted it in the email I just sent). The bug I fixed wa
Always look in configure.log to see the exact error.
No - configure does not do checksums - but it expects the package to
be in a certain format [and this can change between petsc versions].
So if you are using a url from petsc-3.5 - with 3.7 -- it might not
work..
So for any version of petsc -
can tell the configure log (attached) gives the same error.
>
> If the current version/format is not found, why not just say so
> in the error message? Saying "unable to download" suggests
> something's wrong with the internet connection, or file path.
>
> Chris
As you can see - the dir names don't match.
petsc-3.7 uses: https://github.com/xiaoyeli/superlu_dist/archive/0b5369f.tar.gz
If you wish to try version 5.1.2 [which is not the default for this version of
PETSc] - you can try:
https://github.com/xiaoyeli/superlu_dist/archive/v5.1.2.tar.gz
Altern
u/archive/7e10c8a.tar.gz']
Then run the script again
<<<
It tells you exactly the URLs that you should download - for the
packages that you are installing..
Satish
On Wed, 26 Oct 2016, Satish Balay wrote:
> As you can see - the dir names don't match.
>
> petsc-3.7 uses:
On Mon, 31 Oct 2016, Klaij, Christiaan wrote:
> Satish,
>
> I've noticed that SuperLU depends on metis and parmetis and that
> PETSc downloads the versions 5.1.0-p3 and 4.0.3-p3. These are
> different from the Karypis latest stable versions (without the
> -p3). Do I really need these -p3 versions
you can check src/ksp/ksp/examples/tutorials/ex44f.F90 for example usage..
Satish
On Fri, 4 Nov 2016, Barry Smith wrote:
>
> For some of the includes, once you include them in a module, you cannot
> include them in routines that use the module. This is generally true for
> includes that do no
The stack says - the crash is OpenMPI's orterun [aka mpiexec].
Perhaps it's broken?
you can run PETSc examples without mpiexec as:
cd src/ksp/ksp/examples/tutorials
make ex2
./ex2
I don't understand the tweak. Usually 'compat' packages are used by
precompiled binaries - that were compiled with o
I just tried a build of petsc-3.4 on centos-7.2 - and that went through fine.
./configure --with-clanguage=cxx --with-mpi-dir=/usr/lib64/openmpi/bin
--with-c2html=0
Satish
On Fri, 18 Nov 2016, Satish Balay wrote:
> The stack says - the crash is OpenMPI's orterun [aka mpiexec].
>
Most likely you have upgraded xcode [and/or OSX] - but not brew [which
provides gfortran].
If you need fortran - suggest deleting and reinstalling brew
[gfortran]..
Satish
On Wed, 23 Nov 2016, Matthew Knepley wrote:
> On Wed, Nov 23, 2016 at 2:57 PM, Sharp Stone wrote:
>
> > Hi folks,
> >
> >
On windows - you need to setup PATH to the location of dll. i.e
PATH=$PATH:/home/Elaine/petsc-3.7.4/arch-mswin-c-debug/lib make
PETSC_DIR=/home/Elaine/petsc-3.7.4 PETSC_ARCH=arch-mswin-c-debug test
However, Fortran examples will fail with a dll build. In this case -
it's best to rebuild using conf
Hong,
petsc master is updated to download/install mumps-5.0.2
Satish
On Mon, 12 Dec 2016, Hong wrote:
> Alfredo:
> Sure, I got the tarball of mumps-5.0.2, and will test it and update
> petsc-mumps interface. I'll let you know if problem remains.
>
> Hong
>
> Dear all,
> > sorry for the late r
self.gitcommit = '026d6fa' # maint/3.7 from may-21-2016
self.download =
['git://https://bitbucket.org/petsc/petsc4py','https://bitbucket.org/petsc/petsc4py/get/'+self.gitcommit+'.tar.gz']
self.downloaddirname = 'petsc-petsc4py'
Configure is set up to use the tarball that is obtained
How about using --download-fblaslapack instead of MKL?
Satish
On Sun, 18 Dec 2016, Aurelien Ponte wrote:
> Allright I got the following to complete:
>
> ### install pip:
> module load python/2.7.10_gnu-4.9.2
> wget https://bootstrap.pypa.io/get-pip.py
> python get-pip.py --user
> # edit .cshrc:
atus)
> RuntimeError: 512
>
>
> Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import
> setuptools,
> tokenize;__file__='/tmp/pip-build-t7pV1u/petsc/setup.py';f=getattr(tokenize,
>
PETSc code doesn't use classes - so we don't see this issue.
One way to fix this is:
#undef __FUNCT__
#if defined(__INTEL_COMPILER)
#define __FUNCT__ "ClassName::FunctionName"
#else
#define __FUNCT__ "FunctionName"
#endif
An alternative is to not do this check for compilers that define __func__
[l
On Thu, 22 Dec 2016, Sharp Stone wrote:
> Dear folks,
>
> I'm now using Chombo with Petsc solver. When compiling the Chombo examples
> of Petsc, I always got errors below. I wonder if anyone has such experience
> to get rid of these. Thanks very much in advance!
>
>
> In file included from Pets
From https://commons.lbl.gov/display/chombo/Chombo+Download+Page
>>>
The 3.2 version of this software was released March 25, 2014
Chombo 3.2 Release Notes:
We have implemented interfaces to the PETSc solver library
<<<
So it might be compatible with petsc-3.4
http://www.mcs.anl.gov/petsc/docume
Looks like this issue was previously discussed
http://lists.mcs.anl.gov/pipermail/petsc-users/2015-October/027365.html
Satish
On Tue, 27 Dec 2016, Zhang, Hong wrote:
> PetscInt IIA(1)
> IIA(1) = II
> call
> MatSetValues(A_mat_uv,1,IIA,1,int_impl(k,3),impl_mat_A,INSERT_VALUES,ierr)
>
> Hong
It's good to always verify whether a petsc example with a petsc makefile works
[or reproduces the same error].
If the petsc example with the petsc makefile works - then you would look
at the differences between this compile - and yours..
Satish
On Tue, 27 Dec 2016, TAY wee-beng wrote:
> Hi,
>
> Sorry I just rea
On Sat, 31 Dec 2016, Eric Chamberland wrote:
> Hi,
>
> I am just starting to debug a bug encountered with and only with SuperLU_Dist
> combined with MKL on a 2 processes validation test.
>
> (the same test works fine with MUMPS on 2 processes).
>
> I just noticed that the SuperLU_Dist version i
On Sat, 31 Dec 2016, Satish Balay wrote:
> From what I know - 5.1.3 should work with petsc-3.7 [it fixes a couple of
> bugs].
ok - updated maint to use 5.1.3
Satish
> (--download-superlu_dit-commit=v5.1.3).
>
> But from what you and Matthew said, I should have 5.1.3 with petsc-master, but
> the last night log shows me library file name 5.1.0:
>
> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2016.12.31.02h00m01s_configure
On Sat, 31 Dec 2016, Matthew Knepley wrote:
>
> We do not automatically upgrade the version of dependent packages.
If git is installed - then configure prefers the git repo - and that will
get upgraded [or downgraded] automatically - based on the gitcommit in
configure [or on the command line].
> You have to d
uperlu_dist.so.5
>
> It is saying 5.1.0, but in fact you are right: it is 5.1.3 that is
> downloaded!!! :)
>
> And FWIW, the nightly automatic compilation of PETSc starts within a brand new
> and empty directory each night...
>
> Thanks t
On Sat, 31 Dec 2016, Eric Chamberland wrote:
> ok I will test with 5.1.3 with the option you gave me
> (--download-superlu_dit-commit=v5.1.3).
BTW: I have a typo here - it should be: --download-superlu_dist-commit=v5.1.3
Satish
FWIW, the nightly automatic compilation of PETSc starts within a brand
> > new and empty directory each night...
> >
> > Thanks to both of you again! :)
> >
> > Eric
> >
> >
> > Le 2016-12-31 à 13:17, Satish Balay a écrit :
> > > =
Do you have similar issues with gnu compilers?
It must be some incompatibility with intel compilers with this glibc change.
>
compilers: Check that C libraries can be used from Fortran
Pushing language FC
Popping language FC
On Tue, 3 Jan 2017, Matthew Knepley wrote:
> We previously fixed a bug with this error reporting:
>
> https://bitbucket.org/petsc/petsc/commits/32cc76960ddbb48660f8e7c667e293c0ccd0e7d7
>
> in August. Is it possible that your PETSc is older than this? Could
> you apply that patch, or run the conf
The patch that Matt mentioned is in the 'master' branch - so it's not in
3.7.4. I've now added it to the 'maint' branch - so it should be in the next
petsc patch tarball [i.e. 3.7.6] - whenever it's released.
You can apply the patch to your current sources - that should tell us
the exact error you get with the i
On Tue, 3 Jan 2017, Matthew Knepley wrote:
> Or get the new tarball when it spins tonight, since Satish has just
> added the fix to maint.
We don't spin 'maint/patch-release' tarballs every night. It's every 1-3
months - [partly depending upon the number of outstanding patches - or
their severity]
On Wed, 4 Jan 2017, Matthew Knepley wrote:
> On Wed, Jan 4, 2017 at 9:19 AM, Klaij, Christiaan wrote:
>
> > Attached is the log for
> >
> >
> > LIBS="-L/cm/shared/apps/intel/compilers_and_libraries_2016.
> > 3.210/linux/compiler/lib/intel64_lin -lifcore"
> >
> Something is strange with the quote
BTW - You have:
Machine platform:
('Linux', 'marclus4login3', '3.10.0-327.36.3.el7.x86_64', '#1 SMP Mon Oct 24
09:16:18 CDT 2016', 'x86_64', 'x86_64')
So glibc is updated - but not the kernel - so it's a partial update? [or the
machine was not rebooted after a major upgrade (7.2 -> 7.3)?]
I wonder if that
On Wed, 4 Jan 2017, Satish Balay wrote:
> BTW - You have:
>
> Machine platform:
> ('Linux', 'marclus4login3', '3.10.0-327.36.3.el7.x86_64', '#1 SMP Mon Oct 24
> 09:16:18 CDT 2016', 'x86_64', 'x86_64')
>
>
On Wed, 4 Jan 2017, Satish Balay wrote:
> So I guess your best bet is static libraries..
Or upgrade to intel-17 compilers.
Satish
---
[balay@el7 benchmarks]$ icc --version
icc (ICC) 17.0.1 20161005
Copyright (C) 1985-2016 Intel Corporation. All rights reserved.
[balay@el7 benchma
On Thu, 5 Jan 2017, Matthew Knepley wrote:
> On Thu, Jan 5, 2017 at 2:37 AM, Klaij, Christiaan wrote:
> > So problem solved for now, thanks to you and Matt for all your
> > help! On the long run I will go for Intel-17 on SL7.3.
> >
> > What worries me though is that a simple update (which happen
On Thu, 5 Jan 2017, Satish Balay wrote:
> Well it's more of RHEL - than SL. And it's just Intel .so files [as far
> as we know] that's triggering this issue.
>
> RHEL generally doesn't make changes that break old binaries. But any
> code change [which bug fixes are] - can introd
On Fri, 6 Jan 2017, Klaij, Christiaan wrote:
> Satish,
>
> Our sysadmin is not keen on downgrading glibc.
sure
> I'll stick with "--with-shared-libraries=0" for now
thats fine.
> and wait for SL7.3 with intel 17.
Well, they are not related - so if you can, you should upgrade to
intel-17 [irre
We can add more entries to the lookup. The stack below looks
incomplete. Which routine is calling PetscTableCreateHashSize() with
this big size?
Satish
---
$ git diff
diff --git a/src/sys/utils/ctable.c b/src/sys/utils/ctable.c
index cd64284..761a2c6 100644
--- a/src/sys/utils/ctable.c
+++ b/
On Mon, 9 Jan 2017, Jed Brown wrote:
> Satish Balay writes:
>
> > We can add more entries to the lookup. The stack below looks
> > incomplete. Which routine is calling PetscTableCreateHashSize() with
> > this big size?
> >
> > Satish
> >
> > -
On Mon, 9 Jan 2017, Jed Brown wrote:
> Satish Balay writes:
> > Sure - I'm using a crappy algorithm [look-up table] to get
> > "prime_number_close_to(1.4*sz)" - as I don't know how to generate
> > these numbers automatically.
>
> FWIW, it only need
we may need
> to add more entries to the lookup again in the future.
>
> Fande,
>
> On Mon, Jan 9, 2017 at 2:14 PM, Satish Balay wrote:
>
> > On Mon, 9 Jan 2017, Jed Brown wrote:
> >
> > > Satish Balay writes:
> > > > Sure - I'm using a
On Mon, 9 Jan 2017, Jed Brown wrote:
> Satish Balay writes:
>
> > On Mon, 9 Jan 2017, Jed Brown wrote:
> >
> >> Satish Balay writes:
> >> > Sure - I'm using a crappy algorithm [look-up table] to get
> >> > "prime_number_close_to(1.
On Mon, 9 Jan 2017, Jed Brown wrote:
> Satish Balay writes:
>
> >> Why is it not sufficient to be coprime?
> >
> > Well whatever was implemented previously with PETSC_HASH_FACT [a
> > prime number] didn't work well. [there were a couple of reports on it
ion.
>
> Fande,
>
> On Tue, Jan 10, 2017 at 12:33 AM, Jed Brown wrote:
>
> > Satish Balay writes:
> > > I tried looking at it - but it was easier for me to fixup current ctable
> > code.
> >
> > I mentioned it more as a long-term thing. I don'
Looks like --download-petsc4py [and --download-mpi4py] uses python
from PATH - and not the one used by configure.
'mpi4py-1.3.1-py3.6.egg-info' suggests that default 'python' in PATH is
python-3.6. So try:
python RBF_Load.py
Should we use the same python for configure and all external python pa
Can you do a clean build - and see if this problem persists?
rm -rf arch-linux2-c-debug
[and rebuild]
If the problem persists - please run the following - and send us the generated
verboseinfo.i file
make -f gmakefile V=1 CFLAGS=-save-temps
arch-linux2-c-debug/obj/src/sys/info/verboseinfo.o
Satish
This is a memory leak in OpenMPI - you can ignore it.
For a valgrind clean MPI - you can build PETSc with --download-mpich
Satish
On Mon, 30 Jan 2017, Praveen C wrote:
> -malloc_test does not report anything.
>
> Freeing all petsc vectors got rid of those errors.
>
> Now I see only MPI related e
This is petsc-3.7?
Looks like mpif.h is getting included in some module that your code
is currently using.
Is this module your code - or petsc code?
If using MPI via a module [perhaps mpi.mod is preferable over mpif.h] -
then you can do something like:
#define PETSC_AVOID_MPIF_H
#include
Satish
PETSC_USE_REAL_DOUBLE should be defined in petscconf.h
You did mention earlier that you found it in configure.log. Please
verify if this exists in petscconf.h for your install.
And how are you compiling your code? Not using petsc makefiles?
Perhaps your makefiles are picking up the wrong petsccon
check config/examples/arch-cray-xt6-pkgs-opt.py
Also check if your machine already has petsc [from cray] installed
module avail petsc
Satish
On Fri, 10 Feb 2017, Patrick Sanan wrote:
> You're missing the 'g' in 'configure.py'.
>
> On Fri, Feb 10, 2017 at 6:51 PM, Sharp Stone wrote:
> > Hi al
e/lus0/space/fs1036/local/PETSc/petsc-3.7.4/config/BuildSystem/config/base.py",
> > line 126, in executeTest
> > ret = test(*args,**kargs)
> > File
> > "/mnt/lustre/lus0/space/fs1036/local/PETSc/petsc-3.7.4/config/BuildSystem/config/compilers.py",
> > line 313, in checkCLibraries
v-cray/4.2.15
> 21) pbs/11.3.1.122650
> 22) cray-mpich2/5.6.4
> 23) cray-hdf5-parallel/1.8.9
>
> Thanks in advance!
>
> On Fri, Feb 10, 2017 at 2:01 PM, Satish Balay wrote:
>
> > On Fri, 10 Feb 2017, Patrick Sanan wrote:
> >
> > > On Fri, Feb 10, 2017
On Wed, 22 Feb 2017, Gideon Simpson wrote:
> I’ve been trying to use some code that was originally developed under petsc
> 3.6.x with my new 3.7.5 installation. I’m having an issue in that the way
> the code is written, it’s spread across several .c files. Here are the
> essential features of
Perhaps there is some memory corruption - you can try running the code with
valgrind.
Satish
On Sat, 18 Mar 2017, Praveen C wrote:
> Dear all
>
> I get a segmentation fault when I call TSDestroy. Without TSDestroy the
> code runs fine. I have included portion of my code below.
>
> subroutine r
Did you run 'make test' after installing PETSc? Were there errors with this
test?
[you can send us corresponding test.log]
Satish
On Tue, 21 Mar 2017, Bikash Kanungo wrote:
> Hi,
>
> I have recently installed petsc-3.7.5. However, while linking petsc with my
> code, I get the following compi
=
> C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI
> process
> C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI
> processes
> Fortran example src/snes/examples/tutorials/ex5f run successfully with 1
> MPI process
> Completed test exa
Should have said:
make getccompiler getincludedirs getlinklibs
Satish
On Tue, 21 Mar 2017, Satish Balay wrote:
> Ok - Then the issue might be with your makefile. Perhaps it's not a
> petsc formatted makefile.
>
> In this case - run 'make getincludedirs getlibs' in PET
t switch only PETSC_DIR/PETSC_ARCH
values]
Satish
> On Tue, Mar 21, 2017 at 7:11 PM, Satish Balay wrote:
>
> > make getccompiler getincludedirs getlinklibs
On Wed, 22 Mar 2017, Jose E. Roman wrote:
>
> > El 22 mar 2017, a las 19:23, Barry Smith escribió:
> >
> >
> >> On Mar 22, 2017, at 1:08 PM, Austin Herrema wrote:
> >>
> >> Thank you for the suggestion! Seems like a reasonable way to go. Not
> >> working for me, however, I suspect because I
On Thu, 23 Mar 2017, Tim Steinhoff wrote:
> Hi all,
>
> there is new version of MUMPS with some nice improvements:
> http://mumps.enseeiht.fr/index.php?page=dwnld#cl
> Is it possible to update the package in the petsc repository?
I have the changes in git branch 'balay/update-mumps-5.1.1'.
This
It would be good to have the complete configure.log to see where this
path is coming from.
Satish
On Tue, 28 Mar 2017, Matthew Knepley wrote:
> On Tue, Mar 28, 2017 at 3:12 PM, Denis Davydov wrote:
>
> > Dear all,
> >
> > Yesterday I updated to the latest XCode and now have problems configurin
I would do a minimal petsc build - without any packages from
/usr/local - and see if the problem persists..
Satish
On Fri, 7 Apr 2017, Barry Smith wrote:
>
> > On Apr 7, 2017, at 3:34 PM, Manav Bhatia wrote:
> >
> > Yes, I printed the data in both cases and they look the same.
> >
> > I als
We never had a cmake build infrastructure.. It was always configure && make.
However - earlier on - we did have a mode of using cmake to generate
gnumakefiles [currently deprecated - it might still work] - but we've
moved on to using native gnumakefiles - so we don't need cmake to
generate them anymo
BTW: we don't use Autotools either. [our configure tool is homegrown]
Satish
On Tue, 11 Apr 2017, Satish Balay wrote:
> We never had a cmake build infrastructure.. It was always configure && make.
>
> However - earlier on - we did have a mode of using cmake to g
It supports 2 modes:
1. inplace [default]:
./configure && make
2. prefix
./configure --prefix=/prefix/location && make && make install
So to use 'make install' - you need to run configure with the correct prefix
option.
Satish
On Tue, 11 Apr 2017, Joachim Wuttke wrote:
> Does PETSc suppor
If you use a prefix install - after the install process - you would
not use PETSC_ARCH. i.e.
PETSC_ARCH=''
Satish
On Tue, 11 Apr 2017, Joachim Wuttke wrote:
> The PETSc FAQ recommends to use the FindPETSc.cmake module
> from the repository https://github.com/jedbrown/cmake-modules.
>
> This doe
you need to give me details on how you installed petsc. [say make.log]
Satish
On Tue, 11 Apr 2017, Joachim Wuttke wrote:
> Satish, thank you very much for your prompt answers.
> Your last one, however, does not solve my problem.
>
> PETSC_ARCH='' is not the solution.
>
> Line 115 of FindPETSc.
On Tue, 11 Apr 2017, Joachim Wuttke wrote:
> > you need to give me details on how you installed petsc. [say make.log]
>
> tar zxvf petsc-3.7.5.tar.gz
>
> cd petsc-3.7.5
>
> ./configure --with-shared-libraries PETSC_DIR=/usr/local/src/petsc-3.7.5
> PETSC_ARCH=linux-amd64 --with-mpi-dir=/usr/lib/
On Tue, 11 Apr 2017, Satish Balay wrote:
> On Tue, 11 Apr 2017, Joachim Wuttke wrote:
>
> > > you need to give me details on how you installed petsc. [say make.log]
> >
> > tar zxvf petsc-3.7.5.tar.gz
> >
> > cd petsc-3.7.5
> >
> > ./configu
On Tue, 11 Apr 2017, Joachim Wuttke wrote:
> 'make test' was successful.
>
> And indeed,
> ${PETSC_DIR}/${PETSC_ARCH}/lib/petsc/conf/petscvariables
> exists.
> There was a mistake in my PETSC_DIR.
> Now FindPETSc.cmake does find petscvariables ...
>
> ... and I am at the next error:
> CMake Erro
Presumably your cluster already has a recommended MPI to use [which is
already installed]. So you should use that - instead of
--download-mpich=1
Satish
On Wed, 19 Apr 2017, Pham Pham wrote:
> Hi,
>
> I just installed petsc-3.7.5 into my university cluster. When evaluating
> the computer system,
nfigure.log to
> petsc-ma...@mcs.anl.gov
>
>
> Please explain what is happening?
>
> Thank you very much.
>
>
>
>
> On Wed, Apr 19, 2017 at 11:43 PM, Satish Balay wrote:
>
> > Presumably your
You can try the attached [untested] patch. It replicates the
MPICH_NUMVERSION code and replaces it with MVAPICH2_NUMVERSION
Satish
On Tue, 25 Apr 2017, Kong, Fande wrote:
> On Tue, Apr 25, 2017 at 3:42 PM, Barry Smith wrote:
>
> >
> >The error message is generated based on the macro MPICH_
Added this patch to balay/add-mvapich-version-check
Satish
On Tue, 25 Apr 2017, Satish Balay wrote:
> You can try the attached [untested] patch. It replicates the
> MPICH_NUMVERSION code and replaces it with MVAPICH2_NUMVERSION
>
> Satish
>
> On Tue, 25 Apr 2017, Kong, Fand
t; we have error messages on this?
>
> Fande,
>
> On Tue, Apr 25, 2017 at 4:03 PM, Satish Balay wrote:
>
> > Added this patch to balay/add-mvapich-version-check
> >
> > Satish
> >
> > On Tue, 25 Apr 2017, Satish Balay wrote:
> >
> > > You
now unique macros used in particular mpi.h but with many cases
> > the code will become messy unless there is a pattern we can organize around.
> >
> >
> >
> >
> > >
> > > Fande,
> > >
> > > On Tue, Apr 25, 2017 at 4:03 PM, S
Perhaps mpiexec is hanging.
What MPI are you using? Are you able to manually run jobs with
mpiexec?
Satish
On Wed, 26 Apr 2017, Hom Nath Gharti wrote:
> Dear all,
>
> With version > 3.7.4, I notice that configure takes very long -
> about 24 hours!
>
> Configure process hangs at the line:
>