The problem with this code from package.py is that it rejects a package for a
large variety of reasons but cannot print which reason caused the rejection!
Thus we waste hours and tons of emails debugging something that doesn't need to
be debugged.
Barry
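The missing-reason complaint above can be sketched in plain Python. All names here are invented for illustration (they are not from the real package.py): the point is that a checker which appends every rejection reason to a list lets the configure log print *why*, instead of a bare yes/no.

```python
# Hypothetical sketch, not the actual package.py API: collect every
# reason a candidate package was rejected so the caller can print them.

def check_package(version, needs_64bit_indices, reasons):
    """Return True if the package is acceptable; append each rejection
    reason to the supplied list so callers can report them."""
    ok = True
    if version < (2, 11):
        reasons.append("version %s.%s is older than the required 2.11" % version[:2])
        ok = False
    if needs_64bit_indices:
        reasons.append("no 64-bit-integer build of this package is available")
        ok = False
    return ok

reasons = []
if not check_package((2, 10), True, reasons):
    for r in reasons:
        print("rejected:", r)
```

With this shape the error message can enumerate all failed checks at once rather than forcing an email round-trip per guess.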
# if user did not request option, th
thread, and dropping PETSc.
>
> Great, thanks Barry for figuring this out.
>
> Treb and Baky need 64 bit indices, but in the meantime I can build a 32 bit
> version to let them test.
>
> I am all set up to test a 64 bit version. If you can give me a branch I can
> test.
Mark,
Just remove the error checking
ierr = PetscObjectTypeCompare((PetscObject)A,MATMPIAIJ,&flg);CHKERRQ(ierr);
if (!flg) SETERRQ(PetscObjectComm((PetscObject)A),PETSC_ERR_SUP,"This
function requires a MATMPIAIJ matrix as input");
from
MatMPIAIJGetSeqAIJ()
this routine should be
Runs fine in the master branch; do you really need it to work in maint?
Barry
> On Jul 3, 2018, at 6:39 AM, Mark Adams wrote:
>
> SNES ex56 seems to have regressed:
>
> 04:35 nid02516 master *= ~/petsc_install/petsc/src/snes/examples/tutorials$
> make
> PETSC_DIR=/global/homes/m/mada
> On Jul 4, 2018, at 9:40 AM, Mark Adams wrote:
>
> "-mat_seqaij_type seqaijmkl" just worked.
Mark,
Please clarify. Does this mean you can use -mat_seqaij_type seqaijmkl to
satisfy all your needs now without changing any code?
Barry
> On Wed, Jul 4, 2018 at 9:44 AM Mark Ada
Jed,
You could use your same argument to argue PETSc should do "something" to
help people who have (rightly or wrongly) chosen to code their application in
High Performance Fortran or any other similar inane parallel programming model.
Barry
> On Jul 4, 2018, at 11:51 PM, Jed Bro
> On Jul 5, 2018, at 8:28 AM, Mark Adams wrote:
>
>
> Please share the results of your experiments that prove OpenMP does not
> improve performance for Mark’s users.
>
> This obviously does not "prove" anything but my users use OpenMP primarily
> because they do not distribute their mesh m
I complained about this a couple of weeks ago and no one (including you)
responded. It seems nuts to me; do we really need that new a version of CMake?
Barry
> On Jul 5, 2018, at 5:07 PM, Matthew Knepley wrote:
>
> Now we require a fully C++11 compliant compiler, or the CMake build die
> On Jul 5, 2018, at 5:36 PM, Jed Brown wrote:
>
> When can we delete the legacy test system? Are we currently using it
> anywhere?
Make test currently requires the test include file
Barry
This is not the cause of your problem but you have the wrong version of hypre
installed for the version of PETSc.
CC windows-intel-debug/obj/mat/impls/hypre/mhypre.o
mhypre.c
C:\sources\petsc\src\mat\impls\hypre\mhypre.c(1453): warning C4002: too many
actual parameters for macro 'h
Try running
make test
if that works then the build is ok; it could be that the warnings during the
compile triggered the message at the end about the failed build.
Barry
> On Jul 5, 2018, at 7:21 PM, Hector E Barrios Molano
> wrote:
>
> Hi PETSc Experts,
>
> I am compiling PETSc from gi
I know Mark Adams has tried recently, with limited success.
As always the big problem is facilities removing accounts, such as Satish's,
so testing gets difficult.
But yes, we want to support Titan so have users send configure.log/make.log
to petsc-ma...@mcs.anl.gov
Barry
>
Ideas from anyone on this list about how to use an adequate number of
> MPI ranks for PETSc while using only a subset of these ranks for the
> MPI+OpenMP application code will be appreciated, though I don't know if there
> are any good solutions.
>
> --Richard
>
> On W
uck. But this and a v3.7
> version are recent and work (on Titan).
>
> Mark
>
>
> On Fri, Jul 6, 2018 at 6:47 PM Smith, Barry F. wrote:
>
>I know Mark Adams has tried recently, with limited success.
>
>As always the big problem is facilities remov
I'm fine with stripping out as much of the old test stuff as reasonably
possible.
Barry
> On Jul 5, 2018, at 8:36 PM, Jed Brown wrote:
>
> "Smith, Barry F." writes:
>
>>> On Jul 5, 2018, at 5:36 PM, Jed Brown wrote:
>>>
>>&
usage" in PETSc so long as it is reasonably
well thought out and not intrusive (and hopefully has examples that document
the benefit).
Barry
> On Jul 5, 2018, at 10:31 PM, Jed Brown wrote:
>
> "Smith, Barry F." writes:
>
>> You could use your
Mark,
Is there any chance the Chombo-Crunch team could cook up a portable,
good-sized example (but not too large), preferably with a run-time parameter
that determines the problem size, that anyone could use as a benchmark for
"adding OpenMP" to PETSc. This way we could have real
> On Jul 7, 2018, at 4:40 PM, Hector E Barrios Molano
> wrote:
>
> Thanks Barry and Satish for your answers.
>
> I installed the correct version of hypre. Also, I changed the paths to short
> dos paths as Satish suggested. Now PETSc compiles without problems and the
> tests are ok.
>
> Re
> On Jul 9, 2018, at 8:33 AM, Jeff Hammond wrote:
>
>
>
> On Fri, Jul 6, 2018 at 4:28 PM, Smith, Barry F. wrote:
>
> Richard,
>
> The problem is that OpenMP is too large and has too many different
> programming models embedded in it (and it will
> On Jul 9, 2018, at 12:04 PM, Jed Brown wrote:
>
> Jeff Hammond writes:
>
>> This is the textbook Wrong Way to write OpenMP and the reason that the
>> thread-scalability of DOE applications using MPI+OpenMP sucks. It leads to
>> codes that do fork-join far too often and suffer from death b
3.10 unless something important happens very soon
> On Jul 12, 2018, at 5:12 PM, Kong, Fande wrote:
>
> I want to know this info, and I can put some safeguard in the code.
>
> Thanks,
>
> Fande,
How come we don't have version checks as a matter of course for
config/packages/xxx.py? We constantly get email from people using the wrong
version of external packages with obscure error messages, wasting their time
and ours. Better that hypre.py (and others) compare the version number and
Hmm, I think the current code makes the most sense.
In your case, what if the two "from" values destined for the same "to"
location are different values? Then your algorithm is ill-defined: which of the
two values "wins"?
To get the effect you want I think you need to ensure that t
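The ambiguity can be seen in a toy Python model (plain lists, not the real VecScatter API): when two "from" entries target the same "to" slot, the result depends purely on traversal order.

```python
# Toy model of the ambiguity: two "from" entries aimed at the same
# "to" slot. This is NOT the VecScatter API, just its index semantics.

def scatter(src, from_idx, to_idx, dest):
    """Copy src[from_idx[k]] into dest[to_idx[k]] for each k.
    If two k's share a to-index, whichever is processed last 'wins',
    so the answer depends on traversal order."""
    for f, t in zip(from_idx, to_idx):
        dest[t] = src[f]
    return dest

src = [10.0, 20.0]
# both sources target dest[0]; the result is order-dependent
print(scatter(src, [0, 1], [0, 0], [0.0]))   # last write wins: [20.0]
print(scatter(src, [1, 0], [0, 0], [0.0]))   # reversed order:  [10.0]
```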
Pierre,
Thanks for reporting this. It is our mistake -options_table was changed to
-options_view (to match the view pattern) in 2014 but the documentation was not
updated at the same time to reflect the change. I have fixed it in the branch
barry/fix-options-table/maint and it will soo
I am ok with "changing" the meaning of blocksize for IS since it appears to
have been already changed :-)
I have approved the pull request.
Barry
I am not sure that turning the IS into an "integer" Vec was necessary or
good but I guess we are stuck with it now.
> On Aug
DMSNESCheckFromOptions()
DMTSCheckFromOptions()
Please send configure.log and make.log
> On Aug 25, 2018, at 8:29 AM, Pierre Jolivet
> wrote:
>
> Hello,
> I tried to adapt
> https://bitbucket.org/petsc/petsc/src/master/config/examples/arch-ms-msvc2012-intelmpi-cudano-nomumps-cpardiso-indexes64-mklilp64-debug.py
> on Linux with the latest
From the manual page
Notes:
the arrays myapp and mypetsc need NOT contain all the integers 0 to
napp-1, that is there CAN be "holes" in the indices.
Use AOCreateBasic() or AOCreateBasicIS() if they do not have holes for
better performance.
so they are two different thing
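The distinction from the manual page can be modeled with a plain Python dict; this is illustrative only, not the AO implementation, and the helper names are invented:

```python
# Plain-dict model of the two orderings: a "basic" AO is a permutation
# of 0..n-1 (no holes), a "mapping" AO may have holes. Names invented.

def make_ao(app, petsc):
    """Pair up application indices with PETSc indices."""
    return dict(zip(app, petsc))

def is_permutation(indices):
    """True when indices are exactly 0..n-1, i.e. the case where the
    basic (faster) variant would suffice."""
    return sorted(indices) == list(range(len(indices)))

app = [2, 9, 40]               # holes: 0, 1, 3, ... are missing
ao = make_ao(app, [0, 1, 2])
print(is_permutation(app))     # False -> needs the mapping variant
print(ao[9])                   # application index 9 maps to PETSc index 1
```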
Ah, sorry for the confusion. We do support --download-openblas where PETSc
will download and install OpenBLAS for you, but if you have it already
installed then you need to treat it just as a BLASLAPACK library, so use
> --with-blaslapack-include=%{_includedir} \
> --with-blaslapack-lib=
PETSc developers,
There are a variety of "interpolation" modules in PETSc but the
documentation is scattered (mostly missing). Could everyone who knows anything
about the various modules provide a little information about which modules exist
as interfaces and which have actual suppor
> On Aug 31, 2018, at 2:52 PM, Matthew Knepley wrote:
>
> On Fri, Aug 31, 2018 at 2:20 PM Smith, Barry F. wrote:
>
> PETSc developers,
>
>There are a variety of "interpolation" modules in PETSc but the
> documentation is scattered (mostly
> On Sep 4, 2018, at 5:35 AM, Lisandro Dalcin wrote:
>
> I vote for Lawrence's suggestion: DMPlexCompleteTopology()
Sounds good to me.
>
> On Sun, 2 Sep 2018 at 00:37, Hapla Vaclav wrote:
> > 31. 8. 2018 v 22:23, Lawrence Mitchell :
> >
> >
> >
> >> On 31 Aug 2018, at 20:52, Matthew
I think this does the trick:
https://bitbucket.org/petsc/petsc/pull-requests/1106/change-fortran-null-pointer-to-match-c/diff
Barry
> On Sep 5, 2018, at 5:55 PM, Jed Brown wrote:
>
> It is safe in Fortran to write
>
> call DMDestroy(da, ierr)
> call DMDestroy(da, ierr)
>
> but not
>
Tamara,
The VecScatter routines are in a big state of flux now as we try to move
from a monolithic implementation (where many cases were handled with cumbersome
if checks in the code) to simpler independent standalone implementations that
easily allow new implementations orthogona
$ make alltests
rm: ./arch-simple/tests/counts: Directory not empty
make[2]: *** [pre-clean] Error 1
te more.
Barry
>
> Scott
>
>
> On 9/8/18 4:06 PM, Smith, Barry F. wrote:
>> $ make alltests
>> rm: ./arch-simple/tests/counts: Directory not empty
>> make[2]: *** [pre-clean] Error 1
>
> --
> Tech-X Corporation kru...@txcor
> On Sep 9, 2018, at 4:54 AM, Stefano Zampini wrote:
>
> I just noticed a strange behaviour in master. Take mat/examples/ex1.c
There is no such example and the examples in the tutorials and tests
subdirectories don't work with the given arguments so where is this example?
>
> ./ex1 -h
Pierre,
It is not possible to do this. When communicating parallel to parallel, both
vectors must share the same communicator.
Barry
> On Sep 9, 2018, at 5:40 AM, Pierre Jolivet wrote:
>
> Hello,
> Could someone please help me figure out how to fix this embarra
Note that since OpenCV has a dependency on OpenCL you will need to add that
dependency to the opencv.py that you create. Read the docs in package.py or
look at other packages/*.py for how to put in dependencies on other packages.
Barry
> On Sep 19, 2018, at 5:36 AM, Matthew Knepley
Look at the code in KSPSolve_Chebyshev().
Problem 1) VERY MAJOR
Once you start running the eigenestimates it always runs them; this is
because the routine begins with
if (cheb->kspest) {
but once cheb->kspest is set it is never unset. This means, for example,
that every time P
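The bug pattern is easy to reproduce in a toy Python model; this is illustrative only, not the actual KSPSolve_Chebyshev code: once the flag is set it is never cleared, so the estimate branch fires on every subsequent solve.

```python
# Toy model of the set-but-never-unset flag bug described above.
# Not PETSc code; the class and field names only echo cheb->kspest.

class Chebyshev:
    def __init__(self):
        self.kspest = None          # stands in for cheb->kspest
        self.estimates_run = 0

    def solve(self, reestimate=False):
        if reestimate:
            self.kspest = object()  # flag is set here...
        if self.kspest:             # ...and never unset afterwards
            self.estimates_run += 1

cheb = Chebyshev()
cheb.solve(reestimate=True)
cheb.solve()                        # caller asked for no re-estimate,
cheb.solve()                        # but the branch still fires
print(cheb.estimates_run)           # 3, not 1
```

The fix, in this toy model, would be to clear `kspest` (or a companion flag) once the estimates have been consumed.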
ix.
Sorry for the panic
Barry
> On Sep 20, 2018, at 5:00 AM, Mark Adams wrote:
>
>
>
> On Wed, Sep 19, 2018 at 7:44 PM Smith, Barry F. wrote:
>
> Look at the code in KSPSolve_Chebyshev().
>
> Problem 1) VERY MAJOR
>
> Once you start runn
Not a priority for us but if you implement clean code to do this we're likely
to accept it.
Barry
> On Sep 20, 2018, at 12:27 PM, Fande Kong wrote:
>
> Hi Developers,
>
> MATBAIJ actually assumes that the point-block is dense. It is fine if the
> block size is small for example less
Brian,
I have finished making the (relatively few) changes needed to get PETSc's
GAMG to run on a combination of the CPU and GPU. Any of the AMG kernels that
has a CUDA backend is run automatically on the GPU, while the kernels without a
CUDA backend are run on the CPU. In particular t
Two of the Jenkins builds are failing with out-of-disk-space issues.
Barry
Reminder to PETSc developers to update this page.
https://www.mcs.anl.gov/petsc/documentation/tutorials/index.html
Sajid,
There are a variety of tutorials or slides on line at the above
address.
Barry
> On Oct 4, 2018, at 4:11 PM, Sajid Ali
> wrote:
>
> Hi Barry,
Why have
PETSC_EXTERN PetscErrorCode DMHasNamedGlobalVector(DM,const char*,PetscBool*);
PETSC_EXTERN PetscErrorCode DMGetNamedGlobalVector(DM,const char*,Vec*);
PETSC_EXTERN PetscErrorCode DMRestoreNamedGlobalVector(DM,const char*,Vec*);
PETSC_EXTERN PetscErrorCode DMHasNamedLocalVector(DM,co
msnes.c | 74 +------
> 7 files changed, 154 insertions(+), 82 deletions(-)
>
> "Smith, Barry F." writes:
>
>> Why have
>>
>> PETSC_EXTERN PetscErrorCode DMHasNamedGlobalVector(DM,const
>> char*,PetscBool*);
>>
I looked at the code and it is handled in the PETSc way. The user should not
expect KSP to error just because it was unable to solve a linear system; they
should be calling KSPGetConvergedReason() after KSPSolve() to check that the
solution was computed successfully.
Barry
> On Oct 10,
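The calling pattern described above can be sketched generically in Python; this only mimics the shape of the PETSc convention (solve does not raise on non-convergence, the caller checks the reason afterwards) and is not petsc4py:

```python
# Generic sketch of the convention: the solve itself reports failure
# through a queryable reason rather than by raising. Not the PETSc API.

class ToySolver:
    def solve(self, b):
        # a solver that quietly fails to converge
        self.reason = -3            # negative = diverged, by analogy
                                    # with KSPGetConvergedReason()
    def get_converged_reason(self):
        return self.reason

ksp = ToySolver()
ksp.solve([1.0, 2.0])
reason = ksp.get_converged_reason()
if reason < 0:
    print("solve failed, reason =", reason)
```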
This is a harmless (and rather silly) warning.
Barry
> On Oct 17, 2018, at 2:16 PM, Hector E Barrios Molano
> wrote:
>
> Hi PETSc Experts!
>
> I am compiling PETSc from git repository. Everything goes smooth. However,
> when I test the installation I get the following output:
>
>
Sorry about this problem. I think the change was only introduced in master
and should not affect 3.10.x. Please confirm that master is where the failed
compile is.
Please send us the calling sequence of your routine that won't compile (cut
and paste).
Barry
> On Oct 17, 2018, at
M, Adrian Croucher
> wrote:
>
> hi Barry,
>
> On 18/10/18 11:34 AM, Smith, Barry F. wrote:
>>Sorry about this problem. I think the change was only introduced in
>> master and should not affect 3.10.x Please confirm that master is where the
>> failed compile i
either PETSc
> 3.10.0 or the PETSc master without the Fortran interface for
> SNESsetConvergenceTest
>
> It still gives the same error as for 2)
>
>
> Do you have access to any other compilers to check whether they can
> compile your application/test_example?
>
> best regards
> On Oct 20, 2018, at 12:43 PM, Martin Diehl wrote:
>
> From: "Smith, Barry F."
> To: Martin Diehl
> Cc: Adrian Croucher , For users of the development
> version of PETSc
> Sent: 10/20/2018 6:22 PM
> Subject: Re: [petsc-dev] Fortran interface problem i
Needs some more work
./bin/spack install petsc+hypre
==> mpich@3.2 : externally installed in /usr/local
==> mpich@3.2 : generating module file
==> mpich@3.2 : registering into DB
==> zlib@1.2.8 : externally installed in /usr
==> zlib@1.2.8 : generating module file
==> zlib@1.2.8 : registerin
at 11:51 AM, Smith, Barry F. wrote:
>
>
> Needs some more work
>
>
> ./bin/spack install petsc+hypre
> ==> mpich@3.2 : externally installed in /usr/local
> ==> mpich@3.2 : generating module file
> ==> mpich@3.2 : registering into DB
> ==> zlib@1.2.8 :
nd add them to the spack compiler list but nothing about
> how to tell spack to use a particular compiler?
>
>
>
>> On Oct 21, 2018, at 11:51 AM, Smith, Barry F. wrote:
>>
>>
>> Needs some more work
>>
>>
>> ./bin/spack install petsc+hypr
e packages in xsdk need cblas interfaces - so I end up using
> openblas for most of my installs]
>
> Satish
>
> On Sun, 21 Oct 2018, Smith, Barry F. wrote:
>
>>
>> Still prefers OpenBLAS over system BLAS.
>>
>>
>>
>>> On Oct 21, 2018,
error message correctly, isn't this a problem with
> the PETSc module for Spack (which you, Satish, and myself have
> contributed to), not with anything to do with Spack itself?
>
> "Smith, Barry F." writes:
>
>> Needs some more work
>>
>>
&g
I have no problem with the installation page mentioning using Spack to
install PETSc, but we shouldn't pretend that it is easy and will always work.
I fear if we just say
spack install petsc
we'll get emails that expect us to debug the person's set up of spack,
which we abs
Jed,
Why are the modules always built last?
.
FC arch-basic/obj/sys/objects/f2003-src/fsrc/optionenum.o
FC arch-basic/obj/sys/classes/bag/f2003-src/fsrc/bagenum.o
FC arch-basic/obj/mat/f90-mod/petscmatmod.o
FC arch-basic/obj/dm/f90-mod/pe
> On Oct 22, 2018, at 12:17 PM, Jed Brown wrote:
>
> "Smith, Barry F." writes:
>
>> Jed,
>>
>> Why are the modules always built last?
>>
>> .
>> FC arch-basic/obj/sys/objects/f2003-src/fsrc/optionenum.o
&g
Moved a question not needed in the public discussions to petsc-dev to ask
Mark.
Mark,
PCGAMGSetCoarseEqLim - Set maximum number of equations on coarsest grid
Is there a way to set the minimum number of equations on the coarse grid
also? This particular case goes down to 6, 54 a
of 1000 it would end up with 642 unknowns
on the coarse level which is likely better than 6 or 54.
Barry
> On Oct 29, 2018, at 8:27 AM, Mark Adams wrote:
>
>
>
> On Sun, Oct 28, 2018 at 4:54 PM Smith, Barry F. wrote:
>
>Moved a question not needed in the publi
The first error is
nvcc error : 'cicc' died due to signal 9 (Kill signal)
nvcc error : 'cicc' died due to signal 9 (Kill signal)
later
/autofs/nccs-svm1_home1/adams/petsc/arch-summit-opt64-gnu-cuda/externalpackages/git.amgx/base/src/amgx_c_common.cu(77):
catastrophic error: error while
found.\n", AMGX_ERR_BAD_MODE);
> +// FatalError("Mode not found.\n", AMGX_ERR_BAD_MODE);
> }
>
> AMGX_Mode mode = static_cast(itFound->second);
> @@ -1125,4 +1125,4 @@ inline bool remove_managed_matrix(AMGX_matrix_handle
> envl)
> } //namespa
&envl)
>> {
>> //throws...
>> //
>> -FatalError("Mode not found.\n", AMGX_ERR_BAD_MODE);
>> +// FatalError("Mode not found.\n", AMGX_ERR_BAD_MODE);
>> }
>>
>> AMGX_Mo
l on this
> thread] - and this flag is not passed in from petsc configure to amgx
> cmake - so it must be somehow set internally in this package.
>
> Satish
>
> On Wed, 4 Dec 2019, Smith, Barry F. wrote:
>
>>
>>> Also - its best to avoid -Werror in externalpac
Can you point to the pipeline test output where it fails for the issue that
mentions it? Satish had issues posted for each valgrind problem (I still have
the biharmonic one, which I will try to fix today) and I can't find that one.
Thanks
> On Dec 7, 2019, at 2:45 PM, Matthew Knepley wrote:
>
> I a
> On Dec 7, 2019, at 6:00 PM, Matthew Knepley wrote:
>
> Nope, you are right.
>
> Thanks,
>
> Matt
>
> On Sat, Dec 7, 2019 at 6:19 PM Balay, Satish wrote:
> The fix for this is in 363424266cb675e6465b4c7dcb06a6ff8acf57d2
>
> Do you have this commit in your branch - and still seeing
Maybe we should make it a GitLab issue; we always finish GitLab issues
promptly
> On Dec 10, 2019, at 11:22 AM, Jed Brown wrote:
>
> We made some first steps, but we/I dropped the ball on finishing the
> process. I'll pick it up over break.
>
> "Mills, Richard Tran" writes:
>
>> Fello
https://gitlab.com/petsc/petsc/merge_requests/2409
> On Dec 16, 2019, at 3:02 PM, Lisandro Dalcin wrote:
>
> While rebuilding a configuration with C++ on macOS, I got this weird output:
>
> -
> Using system modules:
> error: invalid argument '-std=c+
Please send configure.log and /usr/include/openmpi-x86_64/petsc/petscconf.h
In theory our configure checks if deprecated can be used for Enums but
perhaps our test is not complete.
Barry
> On Dec 22, 2019, at 12:01 PM, Antonio Trande wrote:
>
> Hi all.
>
> I don't know of these are
What is this doing in an email?
Yes, my mistake. I foolishly used the GitLab GUI to change an error in the
documentation. In reality it wasn't documentation. No more GUI changes for me;
the weird thing is it doesn't even offer you an MR, it just pushes to master.
Barry
> On Jan 1, 2020,
A long time ago Oana suggested a tool that allowed switching between PETSc
configurations after pulls etc that didn't require waiting to recompile code or
rerun configure. Based on her ideas I finally got something that has been
behaving reasonably satisfactorily for me. Note that it is only
> On Jan 3, 2020, at 3:07 PM, Matthew Knepley wrote:
>
> On Fri, Jan 3, 2020 at 3:52 PM Smith, Barry F. wrote:
>
>A long time ago Oana suggested a tool that allowed switching between PETSc
> configurations after pulls etc that didn't require waiting to rec
> On Jan 3, 2020 15:04, "Smith, Barry F." wrote:
>
>
> > On Jan 3, 2020, at 3:07 PM, Matthew Knepley wrote:
> >
> > On Fri, Jan 3, 2020 at 3:52 PM Smith, Barry F. wrote:
> >
> >A long time ago Oana suggested a tool that allowed switching
le, and hurts everyone all the time.
>
>Matt
>
> On Fri, Jan 3, 2020 at 7:03 PM wrote:
> The time to rebuild Fortran modules, which is pretty much an entire lifetime.
> I disable Fortran in most arches that I rebuild frequently.
>
> On Jan 3, 2020 16:58, "Sm
Can you overload MatCreateSubMatrices() to use your function instead of the
default, using MatSetOperation()?
Barry
> On Jan 4, 2020, at 5:30 AM, Pierre Jolivet wrote:
>
> Hello,
> I’d like to bypass the call to MatCreateSubMatrices during PCSetUp_PCASM
> because I’m using a custo
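The suggested override can be modeled with a toy operation table in Python; MatSetOperation itself installs a C function pointer into the Mat's ops table, and the names below are only for illustration:

```python
# Toy operation-table model of the override suggested above.
# Not PETSc code: MatSetOperation installs a C function pointer.

class ToyMat:
    def __init__(self):
        self.ops = {'createsubmatrices': self._default_createsubmatrices}

    def _default_createsubmatrices(self):
        return 'default'

    def set_operation(self, name, fn):
        self.ops[name] = fn          # analogous to MatSetOperation()

    def create_submatrices(self):
        return self.ops['createsubmatrices']()

A = ToyMat()
A.set_operation('createsubmatrices', lambda: 'custom')
print(A.create_submatrices())        # custom
```

The point is that the public entry point always dispatches through the table, so replacing the table entry swaps the behavior without touching the caller (here, PCSetUp_PCASM).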
Yes, since differences in floating point are not considered a change, the
REPLACE, which only updates files with changes, won't update them.
I don't understand the output below; it looks identical to me, so why is it
a diff?
Scott, perhaps when DIFF_NUMBERS=1 is given the REPLACE should replace i
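The comparison rule being discussed can be sketched in Python; this is illustrative, not the actual test-harness code: two lines "match" when they are textually identical once embedded numbers are compared with a tolerance instead of character by character.

```python
# Sketch of a numbers-tolerant line comparison (illustrative only,
# not the PETSc test harness): floating-point drift is not a "change".
import re

_num = re.compile(r'[-+]?\d+\.?\d*(?:[eE][-+]?\d+)?')

def lines_match(a, b, rtol=1e-6):
    """True if a and b are identical once embedded numbers are
    compared with a relative tolerance instead of textually."""
    na, nb = _num.findall(a), _num.findall(b)
    if _num.sub('#', a) != _num.sub('#', b) or len(na) != len(nb):
        return False
    return all(abs(float(x) - float(y)) <= rtol * max(1.0, abs(float(y)))
               for x, y in zip(na, nb))

print(lines_match("norm 1.23456789e-07", "norm 1.23456790e-07"))  # True
print(lines_match("norm 1.2e-07", "res 1.2e-07"))                 # False
```

Under such a rule the two outputs quoted below compare equal, which is why a change-driven REPLACE has nothing to write.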
0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> on my laptop and four subdomains.
>
> I guess I can expect nice gains for our Helmholtz and Maxwell solvers at
> scale and/or with higher order discretizations!
> Pierre
>
>> On 4 Jan 2020, at 9:17 PM, S
> On Jan 6, 2020, at 11:36 PM, Jed Brown wrote:
>
> "Smith, Barry F." writes:
>
>> I think they just have the wrong algorithms in their compilers for
>> modules, I don't think there is anything fundamental to the language
>> etc that req
Scott,
It looks like you started to put in support for the test harness to compare
output against multiple files.
$ git grep altfile
config/gmakegentest.py:if len(altlist)>1: subst['altfiles']=altlist
config/gmakegentest.py: if 'altfiles' not in subst:
config/gmakegentest.py:
Jed,
Unfortunately, multiple Fortran compilers we use do not support type(*), so
we either configure-check this stuff (annoying) or stop supporting lots of
Fortran compilers.
Satish,
I guess you need to check all the failed Fortran compilers and see if
they have versions that
Thanks
> On Oct 25, 2017, at 11:22 AM, Hong wrote:
>
> This should be cleaned by
> https://bitbucket.org/petsc/petsc/commits/a0d1c92d1d6734b184005d635c14bf9895961849
>
> Will merge it to master once it passes nightly tests.
>
> Hong
>
> On Tue, Oct 24, 2017 at 10:41 PM, Barry Smith wrot
10:04 AM, Smith, Barry F. wrote:
>>Scott,
>> It looks like you started to put in support for the test harness to
>> compare output against multiple files.
>> $ git grep altfile
>> config/gmakegentest.py:if len(altlist)>1: subst['altfiles']=altlist
See the failures and compiler specifics. There are six red
lines.
Barry
>
> "Smith, Barry F." writes:
>
>> Jed,
>>
>>Unfortunately multiple fortran compilers we use do not support type(*) so
>> we either configure check this stuff (annoying) or
> On Oct 29, 2017, at 2:17 PM, Jed Brown wrote:
>
> "Smith, Barry F." writes:
>
>>> On Oct 26, 2017, at 7:55 AM, Jed Brown wrote:
>>>
>>> Which compilers don't work?
>>
>> Your time is no more valuable than anyone else's.
Ok, in theory I have added this logic to the branch.
Barry
> On Oct 29, 2017, at 7:17 PM, Jed Brown wrote:
>
> "Smith, Barry F." writes:
>
>> From below it looks like OpenMPI handles many compilers that
>> don't handle type
Adrian,
I fixed some bugs but apparently broke something at the same time. I'm at a
meeting now; maybe you could use -start_in_debugger and get the traceback where
it crashes for you?
Barry
> On Oct 30, 2017, at 5:21 PM, Adrian Croucher
> wrote:
>
> hi,
>
> I just pulled the latest n
Please send the full traceback. Cut and paste
> On Oct 31, 2017, at 3:37 PM, Adrian Croucher
> wrote:
>
>
> On 01/11/17 03:02, Smith, Barry F. wrote:
>> Adrian,
>>
>> I fixed some bugs but apparently broke something at the same time. At a
&g
> On Oct 31, 2017, at 6:00 PM, Neelam Patel wrote:
>
> Hello PETSc users,
>
> Working in Fortran, I created 2 disjoint communicators with MPI_Group
> operations using PETSC_COMM_WORLD as the "base" comm. I created parallel
> vectors on each communicator, and set values in them equal to their
> On Nov 1, 2017, at 1:13 PM, Mark Adams wrote:
>
> Yea, I don't understand the linear solve error:
>
> -ts_monitor -ts_type beuler -pc_type lu -pc_factor_mat_solver_package mumps
> -ksp_type preonly -snes_monitor -snes_rtol 1.e-10 -snes_stol 1.e-10
> -snes_converged_reason -snes_atol 1.e-18
> On Nov 5, 2017, at 8:01 AM, Jed Brown wrote:
>
>>
>> Sure - if you hunt arround the PETSc source tree - you will find bunch
>> of stuff.. [but that would be ignoring the primary doc].
>>
>> Also There was some reason Jed didn't want to strip out the cmake
>> stuff. Perhaps FindPETSc.cmake u
Vaclav,
Actually you should not just do this! PETSc already has a full class for
managing partitioning (that Matt ignored for no good reason); see
MatPartitioningCreate(). Please look at all the functionality before doing
anything.
Any refactorization you do needs to combine, si
Any refactorization you do needs to combine, si
Vaclav,
Please don't do this as proposed. Please learn about all the partitioner
interfaces in PETSc before attempting a refactorization.
Barry
> On Nov 6, 2017, at 7:25 AM, Matthew Knepley wrote:
>
> On Mon, Nov 6, 2017 at 8:09 AM, Vaclav Hapla
> wrote:
> Hello
>
> The whole Pe
> On Nov 6, 2017, at 7:27 AM, Matthew Knepley wrote:
>
> On Mon, Nov 6, 2017 at 8:24 AM, Smith, Barry F. wrote:
>
>Vaclav,
>
> Actually you should not just do this! PETSc already has a full class
> for managing partitioning (that Matt ignored for no good r
> On Nov 6, 2017, at 7:30 AM, Vaclav Hapla wrote:
>
>
>> 6. 11. 2017 v 14:27, Matthew Knepley :
>>
>> On Mon, Nov 6, 2017 at 8:24 AM, Smith, Barry F. wrote:
>>
>>Vaclav,
>>
>> Actually you should not just do this! PETSc alr
Hmm, I think this is perhaps an issue of documentation.
It seems the various PetscOptionsGetXXX() DO NOT set the value unless the
options database indicates it should be set (and when the options database does
indicate it has been set the set flag is set).
But this is only documented f
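The semantics described above can be modeled in a few lines of Python; this is illustrative only, not the C API: the caller's value is left untouched unless the option is present, and a flag reports whether it was set.

```python
# Sketch of the documented semantics (plain Python, not the C API):
# the "get" returns the caller's current value unchanged unless the
# options database holds an entry, mirroring PetscOptionsGetInt().

def options_get_int(db, name, current):
    """Return (value, was_set)."""
    if name in db:
        return int(db[name]), True
    return current, False

db = {'-ksp_max_it': '500'}
print(options_get_int(db, '-ksp_max_it', 10000))   # (500, True)
print(options_get_int(db, '-ksp_monitor_freq', 2)) # (2, False): untouched
```

Documenting exactly this contract on every PetscOptionsGetXXX() page would remove the ambiguity.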
> On Nov 7, 2017, at 1:33 AM, Lisandro Dalcin wrote:
>
> On 6 November 2017 at 16:37, Matthew Knepley wrote:
>> On Mon, Nov 6, 2017 at 8:34 AM, Smith, Barry F. wrote:
>>>
>>> MatPartitioning is NOT about partitioning MATRICES it is about
>>> parti