Thanks, that does sound useful!
On Fri, Jan 31, 2020, 6:23 PM Smith, Barry F. wrote:
>
> You might find this option useful.
>
> --with-packages-download-dir=
> Skip network download of package tarballs and locate them in
> specified dir. If not found in dir, print package URL - so
> it can be obtained manually.
You might find this option useful.
--with-packages-download-dir=
Skip network download of package tarballs and locate them in specified
dir. If not found in dir, print package URL - so it can be obtained manually.
This generates a list of URLs to download so you don't need
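The lookup behavior this option describes can be sketched as a small shell check. Everything below is a hypothetical stand-in (directory, tarball name, and URL are illustrative, not what configure actually downloads):

```shell
# Sketch of the lookup --with-packages-download-dir implies: look for
# the tarball in a local directory; if it is missing, print its URL so
# it can be fetched manually on a machine with network access.
PKG_DIR="./petsc-pkgs"                  # hypothetical local directory
TARBALL="hypre-2.18.2.tar.gz"           # hypothetical tarball name
URL="https://example.com/$TARBALL"      # hypothetical download URL
if [ -f "$PKG_DIR/$TARBALL" ]; then
  MSG="found $TARBALL in $PKG_DIR"
else
  MSG="not found in $PKG_DIR; fetch manually: $URL"
fi
echo "$MSG"
```

With the real option you would pre-populate the directory with the printed URLs and then re-run the same ./configure command.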
https://gitlab.com/petsc/petsc/-/merge_requests/2494
Will only turn off the hypre batch build if it is a KNL system. Will be
added to the maint branch.
Barry
> On Jan 31, 2020, at 11:58 AM, Tomas Mondragon
> wrote:
>
> Hypre problem resolved. PETSc commit 05f86fb made in August 05,
Did you try "--with-batch=1"? This suggestion was proposed by Satish
earlier (CCing him here).
Fande,
On Wed, Dec 18, 2019 at 12:36 PM Tomas Mondragon <
tom.alex.mondra...@gmail.com> wrote:
> Yes, but now that I have tried this a couple of different ways with
> different --with-mpiexec options, I am
The recommendation is to use --with-batch=1 only if there is no working mpiexec.
Note: the option is --with-mpiexec="mpirun", where "mpirun" is whatever
launcher works as "mpirun -n 1 ./binary".
Satish
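Satish's recommendation amounts to a quick check before configuring. In this runnable sketch, "env" is a stand-in for the real launcher (mpirun, mpiexec_mpt, srun, ...) and "true" stands in for ./yourbinary:

```shell
# Stand-in launcher so the sketch runs anywhere; on the real system
# this would be mpirun, mpiexec_mpt, srun, etc.
launcher="env"
# Can the launcher actually run a one-process job?
if $launcher true; then
  echo "launcher works; configure with --with-mpiexec=\"$launcher\""
else
  echo "no working launcher; fall back to --with-batch=1"
fi
```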
On Tue, 17 Dec 2019, Fande Kong wrote:
> Are you able to run your MPI code using " mpiexec_mpt -n 1 ./yourbinary"?
Are you able to run your MPI code using " mpiexec_mpt -n 1 ./yourbinary"?
You need to use --with-mpiexec to specify exactly what command line you
can run, e.g., --with-mpiexec="mpirun -n 1".
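As a minimal illustrative fragment of that advice (assuming "mpirun" is the launcher that works on the machine; other configure options omitted):

```shell
# Command fragment only, not a complete configure invocation.
./configure --with-mpiexec="mpirun -n 1"
```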
I am also CCing this email to the PETSc developers, who may know the
answers to these questions.
Thanks,
Fande,
On