which will also force another configure even though none is needed
#!/usr/local/opt/python@2/bin/python2.7
if __name__ == '__main__':
  import sys
  import os
  sys.path.insert(0, os.path.abspath('config'))
  import configure
  configure_options = [
    '--force',
    'PETSC_ARCH=arch-debug',
  ]
  configure.petsc_configure(configure_options)
Pierre,
Thanks for your generous offer. Maybe you could point us to a repository
with the branch with your additions so we could take a look at it and see how
it could be adopted into PETSc?
Barry
> On Jun 7, 2019, at 11:07 AM, Pierre Gosselet via petsc-dev
> wrote:
>
> Dear Petsc
This was one of my many dreams. The sections in the users manual would have
latex names and each man page would link to appropriate ones. Given the
hopelessness of linking inside PDF documents on the web (in theory it is
possible but no browsers support it) I gave up on it. You can remove th
===
Configuring PETSc to compile on your system
===
===
UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for
details):
> On Jun 8, 2019, at 8:46 PM, Smith, Barry F. via petsc-dev
>
I've noticed this issue with Jenkins tests recently but don't see it in next
nightly builds
https://petsc-dev.org/jenkins/blue/organizations/jenkins/pj02%2Farch-jenkins-linux-gcc-pkgs-opt/detail/PR-1773/1/tests
perhaps related to the last MUMPS release? One number seems oddly large
ICNTL(38) (esti
604a6f22e 5dc3e8b4b9
> Author: Satish Balay
> Date: Sun Jun 9 09:22:52 2019 -0500
>
>Merge branch 'pr1723/tappel/extend-mumps-parameters/master' [PR #1723]
>
>
> Satish
>
>
> On Mon, 10 Jun 2019, Smith, Barry F. via petsc-dev wrote:
>
sion ready. I could use this on
my machine also.
Barry
--spack-mpich etc ?
> On Jun 10, 2019, at 2:15 PM, Jed Brown wrote:
>
> "Smith, Barry F. via petsc-dev" writes:
>
>> Yes. If the testing system were smart enough we could have the tests
>> actua
That seems ok. We could also overload --download-mpich=spack for anal people
like me :-)
Somehow we also need to let configure know where the spack configuration is.
> On Jun 10, 2019, at 4:21 PM, Jed Brown wrote:
>
> "Smith, Barry F." writes:
>
>> Yes, sp
e prebuilts both with the spack approach
and without to satisfy the spack haters and may the approach with the least
failures win.
Barry
>
> Satish
>
> On Mon, 10 Jun 2019, Smith, Barry F. via petsc-dev wrote:
>
>>
>> Yes. If the testing system were smart enough
petsc-spack
>
> For now - I'll try out a simpler model [i.e manually rebuild as needed]
Not for long you don't, we have better things to do with your time.
>
> Satish
>
> On Mon, 10 Jun 2019, Smith, Barry F. via petsc-dev wrote:
>
>> That seems ok. We co
Sure, but we could have our own petsc-centric tests triggered by gitlab CI
also
> On Jun 10, 2019, at 4:51 PM, Balay, Satish wrote:
>
> On Mon, 10 Jun 2019, Smith, Barry F. via petsc-dev wrote:
>
>>
>>
>>> On Jun 10, 2019, at 4:33 PM, Balay, Satish wr
can be found here:
> https://bitbucket.org/pierre_gosselet/petscfork/src/master/
>
> Best regards
> pierre
>
> On Friday, 07 June 2019 at 17:58 +, Smith, Barry F. wrote:
>> Pierre,
>>
>> Thanks for your generous offer. Maybe you could point us to
In order to get better testing on the accelerators I think we need to
abandon the -vec_type cuda approach scattered through a handful of examples and
instead test ALL examples that are feasible automatically with the various
accelerator options. I think essentially any examples that use AI
mpiexec -n 2 ./myprogram <other options> -log_trace > afile
grep "\[0\]" afile > process0
grep "\[1\]" afile > process1
paste process0 process1 | more
For the two processes pick ones that take the different paths in the code.
Almost for sure something is defined on a sub communicato
to the current HTML generation approach for the man pages and
> other docs on the website.
>
> On Sat., 8 June 2019 at 09:33, Smith, Barry F.
> wrote:
>
> This was one of my many dreams. The sections in the users manual would have
> latex names and each man page would
s C
>> (though support is, I believe, claimed).
>>
>> From: petsc-dev on behalf of Patrick Sanan
>> via petsc-dev
>> Reply-To: Patrick Sanan
>> Date: Wednesday, 12 June 2019 at 10:10
>> To: "Smith, Barry F."
>> Cc: petsc-dev
>> S
it may break something, we'll see. It will miss all
the code that uses MatCreateAIJ() directly but then maybe we should change
that code :-)
Barry
>
> "Smith, Barry F. via petsc-dev" writes:
>
>> In order to get better testing on the accelerators I think
You'll never benefit from having coarser levels on the CPU but I guess you
need a general mechanism to try and see that for yourself.
I think it should be a property of the DM and never let the PCMG see it. So
something like DMSetUseGPUWhen(dm, gpuvectype, gpumattype,localsize) the
comman
> On Jun 12, 2019, at 2:55 PM, Matthew Knepley wrote:
>
> On Wed, Jun 12, 2019 at 3:41 PM Smith, Barry F. via petsc-dev
> wrote:
>
> You'll never benefit from having coarser levels on the CPU but I guess you
> need a general mechanism to try and see that for y
Folks,
We don't currently have a good handle on this. It would be good to be able
to produce a strong response. Many industrial users like to keep a relatively
low profile on their use of open source software.
In this case we can't be vague and say oh XYZ uses it (or WYZ used it 10
Patrick (and Bill)
Good timing. Bill is actually updating Sowing now and could perhaps fix
this glitch. Currently we use a . for a single entry in the list and I'd hate
to have to change them all to -. Likely it is possible to fix the formatting
for the . case to have the same indent
ly that initially we didn't use the + and - on
the lists and there was some issue so they were introduced.
Barry
>
> On Sat., 15 June 2019 at 17:54, Smith, Barry F.
> wrote:
>
> Patrick (and Bill)
>
> Good timing. Bill is actually updating Sowing now and
Given the terrible performance of BitBucket recently and the far superior
ability to do flexible CI on GitLab Satish and Jed are experimenting with using
GitLab CI. In a couple of weeks if all goes well we are likely to move
everything to GitLab.
If you have major concerns about such a
Patrick,
I understand the problem with DMDA and periodicity with only a single
skinny processor in one direction, local to global won't work without lots of
additional code. I don't care because PETSc is for big problems and that
problem is not big.
Could you please explain to me
Sanan wrote:
>
>
>
> On Tue., 18 June 2019 at 20:30, Smith, Barry F.
> wrote:
>
> Patrick,
>
> I understand the problem with DMDA and periodicity with only a single
> skinny processor in one direction, local to global won't work without lots of
>
Ah
> On Jun 18, 2019, at 2:24 PM, Patrick Sanan wrote:
>
> It's for domains of any width on one processor + periodic boundary conditions
> in a given direction.
>
> On Tue., 18 June 2019 at 21:19, Smith, Barry F.
> wrote:
>
> Is it only for super
It HAS A hid_t argument! Making it public means making the HDF5 includes
public, which means all PETSc applications have the HDF5 includes open in
them. Likely it should just get _Private
Barry
On Jun 20, 2019, at 9:01 AM, Hapla Vaclav via petsc-dev
wrote:
>
> On 20 Jun 2019, at 15:56, Vaclav H
Fixed and pushed.
> On Jun 22, 2019, at 3:28 PM, PETSc checkBuilds
> wrote:
>
>
>
> Dear PETSc developer,
>
> This email contains listings of contributions attributed to you by
> `git blame` that caused compiler errors or warnings in PETSc automated
> testing. Follow the links to see th
Hmm, I fixed this yesterday and pushed my branch. Just confirmed that the
branch does not use PetscFree1() and that tmp is initialized in the
declaration.
Perhaps the most recent version of the branch did not get merged to
next-tmp? Anyways please try to get the latest in next or next-t
> On Jun 26, 2019, at 9:56 AM, Balay, Satish via petsc-dev
> wrote:
>
> On Wed, 26 Jun 2019, Jakub Kruzik via petsc-dev wrote:
>
>> Hello,
>>
>> as I mentioned in PR #1819, I would like to use SLEPc in PETSc.
>>
>> Currently when PETSc is configured with --download-slepc, it defines
>> PET
> On Jun 26, 2019, at 10:55 AM, Jed Brown wrote:
>
> "Smith, Barry F. via petsc-dev" writes:
>
>>> On Jun 26, 2019, at 9:56 AM, Balay, Satish via petsc-dev
>>> wrote:
>>>
>>> On Wed, 26 Jun 2019, Jakub Kruzik via petsc-dev wrote:
> On Jun 26, 2019, at 12:39 PM, Fande Kong via petsc-dev
> wrote:
>
> It would be great if SLEPc can be merged into PETSc. Just like what we did
> for TAO. Then we do not have all these issues at all.
Next week libMesh and the following week Trilinos
Barry
>
> Any particular reason w
It is still a PC, it may as part of its computation solve an eigenvalue
problem but its use is as a PC, hence does not belong in SLEPc.
Barry
> On Jun 26, 2019, at 1:22 PM, Jed Brown wrote:
>
> "Smith, Barry F." writes:
>
>>> You can implement and regi
-ins.
Barry
> On Jun 26, 2019, at 1:15 PM, Patrick Sanan wrote:
>
> How about a plug-in PC implementation, compiled as its own dynamic library,
> depending on both SLEPc and PETSc?
>
> Smith, Barry F. via petsc-dev wrote on Wed., 26 June
> 2019 at 21:07:
>
>
> On Jun 26, 2019, at 1:53 PM, Jed Brown wrote:
>
> "Smith, Barry F." writes:
>
>> It can be a plug-in whose source sits in the PETSc source tree, even in the
>> PC directory. It gets built by the PETSc build system after the
>> build system inst
> On Jun 26, 2019, at 1:53 PM, Jed Brown wrote:
>
> "Smith, Barry F." writes:
>
>> It is still a PC, it may as part of its computation solve an eigenvalue
>> problem but its use is as a PC, hence does not belong in SLEPc.
>
> Fine; it does not bel
If we stash the --download-xxx=yyy yyy value and the state of the xxx.py
then we can know that the package may need to be re-downloaded,
re-configured, rebuilt, reinstalled. Essentially get the dependencies of
package xxx on itself right. There is also the dependency of package xxx on
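The bookkeeping described above can be sketched as follows. This is only a sketch of the idea, not the actual configure implementation; the helper names and the JSON stash-file format are made up for illustration:

```python
import hashlib
import json
import os

def package_state(option_value, config_script_text):
    """Fingerprint of a --download-xxx option value and the xxx.py that drives it."""
    digest = hashlib.sha256(config_script_text.encode()).hexdigest()
    return {"download": option_value, "script_sha256": digest}

def needs_rebuild(stash_file, option_value, config_script_text):
    """True if no state was stashed yet, or the stashed state differs from the current one."""
    current = package_state(option_value, config_script_text)
    if not os.path.exists(stash_file):
        return True
    with open(stash_file) as f:
        return json.load(f) != current

def stash(stash_file, option_value, config_script_text):
    """Record the current state so the next configure can skip the rebuild."""
    with open(stash_file, "w") as f:
        json.dump(package_state(option_value, config_script_text), f)
```

With this, changing either the `--download-xxx=yyy` value or the contents of `xxx.py` flips `needs_rebuild` back to true, which captures the "dependency of package xxx on itself."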
You are right, these do not belong in petscconf.h
Barry
> On Jun 28, 2019, at 12:37 PM, Jed Brown via petsc-dev
> wrote:
>
> We have a lot of lines like this
>
> $ grep -c HAVE_LIB $PETSC_ARCH/include/petscconf.h
> 96
>
> but only four of these are ever checked in src/. Delete them?
>
Pushed fix
> On Jun 28, 2019, at 3:28 PM, PETSc checkBuilds
> wrote:
>
>
>
> Dear PETSc developer,
>
> This email contains listings of contributions attributed to you by
> `git blame` that caused compiler errors or warnings in PETSc automated
> testing. Follow the links to see the full
ETCDF for example,
that are useful in the source so I don't think knowing about the specific
libraries in the source is needed.
Barry
> On Jun 28, 2019, at 4:13 PM, Matthew Knepley wrote:
>
> On Fri, Jun 28, 2019 at 2:04 PM Smith, Barry F. via petsc-dev
> wrote:
>
>
Does it make sense to recommend/suggest git bash for Windows as an
alternative/in addition to Cygwin?
Barry
https://bitbucket.org/petsc/petsc/pull-requests/1834/remove-testing-and-inserting-into/diff
removed about 1/3 of the entries for a build with about 10 external packages
perhaps there are a small number of entries that may still be removed but I
think I got rid of most of the unneeded ones.
Bar
; basically just runs my television/media center) and I'll give it a try on
> there.
>
> --Richard
>
> On 6/29/19 8:11 PM, Jed Brown via petsc-dev wrote:
>> "Smith, Barry F. via petsc-dev"
>> writes:
>>
>>
>>> Does it make sense
me bash config issue. Even if we manage to
> port build tools to wsl2 or alternative system, such sub-tool issues can
> still come up in the new system.
>
>
> Satish
>
> From: Smith, Barry F. via petsc-dev
> Sent: Monday, July 1, 2019 2:17 PM
> To: Mills, Richar
tis -Dparmetis -I.
> -I../include -c mumps_print_defined.F -o mumps_print_defined.o
> /cygdrive/c/petsc/lib/petsc/bin/win32fe/win32fe cl -MT -wd4996 -O2 -QxW
> -I/cygdrive/c/Program\ Files/MPICH2/include -I../include -DUPPPER
> -I/cygdrive/c/parmetis-4.0.3/include -I/cygdrive/c/metis-5.
PETSc on
>>> WSL. This is basically what happens in the firedrake installer, which works
>>> on WSL. Instructions are here:
>>> https://github.com/firedrakeproject/firedrake/wiki/Installing-on-Windows-Subsystem-for-Linux
>>>
>>> On 01/07/2019, 23:26, &q
Lisandro,
Both plans look good to me. Remove DMCreateAggregates completely and
refactor DMHasCreateInjection.
Barry
> On Jul 4, 2019, at 9:32 AM, Lisandro Dalcin wrote:
>
> Dear Barry,
>
> 1) Do we still need this? It is totally untested, from the docs it seems it
> may be redun
ley
mailto:knep...@buffalo.edu>>, Jed Brown
mailto:j...@jedbrown.org>>, Karl Rupp
mailto:m...@karlrupp.net>>, Richard Tran Mills
mailto:rmi...@mcs.anl.gov>>, "Smith, Barry F."
mailto:bsm...@mcs.anl.gov>>, "McInnes, Lois Curfman"
mailto:curf...@an
4 AM, Matthew Knepley via petsc-dev
>> wrote:
>>
>> On Wed, Jun 26, 2019 at 4:11 PM Jed Brown wrote:
>> Matthew Knepley writes:
>>
>> > On Wed, Jun 26, 2019 at 3:42 PM Jed Brown via petsc-dev <
>> > petsc-dev@mcs.anl.gov> wrote:
>>
libraries?
Yes, this is my understanding. Good luck.
Barry
>
> I could try to do that for the computation of eigenvector-based deflation
> space for PCDeflation next week.
>
> Jakub
>
> On 7/8/19 5:49 PM, Smith, Barry F. via petsc-dev wrote:
>>Sorry for th
> On Jul 8, 2019, at 10:37 PM, Jed Brown wrote:
>
> "Smith, Barry F. via petsc-dev" writes:
>
>>> On Jul 8, 2019, at 9:53 PM, Jakub Kruzik via petsc-dev
>>> wrote:
>>>
>>> Just to clarify, the suggested solution is a plug-in sit
Mark,
Don't worry about this. I am fixing.
> On Jul 9, 2019, at 7:28 AM, PETSc checkBuilds via petsc-checkbuilds
> wrote:
>
>
>
> Dear PETSc developer,
>
> This email contains listings of contributions attributed to you by
> `git blame` that caused compiler errors or warnings in PET
  ierr = VecGetLocalSize(xx,&nt);CHKERRQ(ierr);
  if (nt != A->rmap->n) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_ARG_SIZ,"Incompatible partition of A (%D) and xx (%D)",A->rmap->n,nt);
  ierr = VecScatterInitializeForGPU(a->Mvctx,xx);CHKERRQ(ierr);
  ierr = (*a->B->ops->multtranspose)(a->B,xx,a->lvec);CHKERRQ(ierr);
you want to check cusparsestruct->workVector->size() against A->cmap->n.
>
> Stefano
>
> On Wed, 10 Jul 2019 at 15:54, Mark Adams via petsc-dev
> wrote:
>
>
> On Wed, Jul 10, 2019 at 1:13 AM Smith, Barry F. wrote:
>
> ierr = VecGet
ranspose operation
> then, you can reuse the same complicated code I have wrote, just by selecting
> the proper cusparse object (matstructT or matstruct)
>
>
> On Wed, 10 Jul 2019 at 18:16, Smith, Barry F.
> wrote:
>
>In the long run I would like to
CPU to GPU? Especially matrices?
> On Jul 11, 2019, at 9:05 AM, Jed Brown via petsc-dev
> wrote:
>
> Zstd is a remarkably good compressor. I've experimented with it for
> compressing column indices for sparse matrices on structured grids and
> (after a simple transform: subtracting the row
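A rough illustration of why that transform (presumably subtracting each entry's row index from its column indices) helps. This uses `zlib` from the Python standard library as a stand-in for Zstd, and a made-up 5-point-stencil grid; it is a demo of the effect, not the experiment described above:

```python
import struct
import zlib

def stencil_cols(nx, ny):
    """(row, col) pairs of a 5-point stencil on an nx*ny structured grid (CSR order)."""
    pairs = []
    for i in range(nx * ny):
        for off in (-nx, -1, 0, 1, nx):
            j = i + off
            if 0 <= j < nx * ny:
                pairs.append((i, j))
    return pairs

def pack(values):
    """Serialize a list of ints as little-endian 32-bit integers."""
    return b"".join(struct.pack("<i", v) for v in values)

pairs = stencil_cols(64, 64)
raw   = [j for i, j in pairs]        # plain column indices
delta = [j - i for i, j in pairs]    # column index minus row index

raw_size   = len(zlib.compress(pack(raw), 9))
delta_size = len(zlib.compress(pack(delta), 9))
print(raw_size, delta_size)          # the delta-transformed stream is far smaller
```

After the transform the stream contains only a handful of distinct offsets (here -64, -1, 0, 1, 64), which any entropy coder compresses dramatically better than monotonically growing indices.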
similar. But we'd need to demo that
> use specifically.
>
> "Smith, Barry F." writes:
>
>> CPU to GPU? Especially matrices?
>>
>>> On Jul 11, 2019, at 9:05 AM, Jed Brown via petsc-dev
>>> wrote:
>>>
>>> Zstd is a remark
I would like a mode for the PETSc matrix classes where the values are simply
shipped to and used on the GPU in single precision. In theory it is trivial to
implement.
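A minimal sketch of what that mode costs in accuracy, in plain Python rather than the PETSc matrix classes (the matrix here is an arbitrary stand-in, and rounding through `struct` mimics shipping the values in single precision):

```python
import struct

def to_f32(v):
    """Round a Python float (IEEE double) to the nearest single-precision value."""
    return struct.unpack("<f", struct.pack("<f", v))[0]

# A small dense matrix-vector product, once with double-precision values and
# once with the values "shipped" in single precision.
A = [[1.0 / (i + j + 1) for j in range(8)] for i in range(8)]  # Hilbert-like demo matrix
x = [1.0] * 8

y64 = [sum(a * b for a, b in zip(row, x)) for row in A]
A32 = [[to_f32(v) for v in row] for row in A]
y32 = [sum(a * b for a, b in zip(row, x)) for row in A32]

rel_err = max(abs(p - q) / abs(p) for p, q in zip(y64, y32))
print(rel_err)  # on the order of single-precision rounding error
```

The point of the mode is exactly this trade: roughly seven significant digits in the matrix values, in exchange for half the transfer volume and storage on the GPU.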
> On Jul 11, 2019, at 3:31 PM, Jed Brown via petsc-dev
> wrote:
>
> "Zhang, Junchao" writes:
>
>> A side question: Do l
> On Jul 11, 2019, at 6:02 PM, Matthew Knepley via petsc-dev
> wrote:
>
> Barry,
>
> Do you want to handle the revert? Satish, do you?
The problem is the old code doesn't properly handle the dependencies and
always produces the error about deleting non-empty directory. I'd like to
und
Run pre-clean then report_tests, but that possibly produces two problems. How about

  test: report_tests
  report_tests: pre-clean

Haven't had a chance to test it properly but it seems to work.
Barry
> On Jul 11, 2019, at 9:44 PM, Matthew Knepley wrote:
>
> On Thu, Jul 11, 2019 at 8:46 PM
Hopefully resolves the problems
https://bitbucket.org/petsc/petsc/pull-requests/1869/fix-dependencies-in-gmakefiletest/diff
> On Jul 12, 2019, at 7:25 AM, Smith, Barry F. via petsc-dev
> wrote:
>
>
> The problem is I understand why the old code shouldn't work but I
Satish,
I am confused. I checked out the commit just before this commit and do
$ touch src/mat/interface/matrix.c
$ make -j 12 -f gmakefile.test test globsearch="snes*tests*ex1*"
Use "/usr/bin/make V=1" to see verbose compile lines, "/usr/bin/make V=0" to
suppress.
CC arch-bas
sers expect.
> Cons: Have to store another scatter. A little bit more logic to maintain.
>
> On Tue., 18 June 2019 at 21:44, Smith, Barry F.
> wrote:
>
> Ah
>
>
> > On Jun 18, 2019, at 2:24 PM, Patrick Sanan wrote:
> >
> > It's for domains
> On Jul 15, 2019, at 8:02 PM, Xinghua Hao via petsc-dev
> wrote:
>
> We used to install it within WSL(Windows Subsystem for Linux) and I just
> transferred to a Docker container environment. It works pretty well on both
> of this two.
Presumably you are using the GNU compiler chain to
Lisandro,
Thanks for letting us know. Could you please send configure.log for your
failed case. The code to detect and use the variable is still in the PETSc
source so I must have introduced something that makes it no longer function
correctly. As soon as I can after getting your config
pc.upv.es/buildbot/builders/athor-linux-icc-c-complex-int64-mkl/builds/534/steps/Configure%20PETSc/logs/configure.log
>
> Jose
>
>
>
> > On 18 Jul 2019, at 15:07, Smith, Barry F. via petsc-dev
> > wrote:
> >
> >
> > Lisandro,
> >
> On Jul 21, 2019, at 8:55 AM, Mark Adams via petsc-dev
> wrote:
>
> I am running ex56 with -ex56_dm_vec_type cuda -ex56_dm_mat_type aijcusparse
> and I see no GPU communication in MatSolve (the serial LU coarse grid solver).
Do you mean to say, you DO see communication?
What does -k
ld set the coarse grid solver in a more robust way in GAMG, like use
> the matrix somehow? I currently use PCSetType(pc, PCLU).
>
> I can't get an interactive shell now to run DDT, but I can try stepping
> through from MatGetFactor to see what its doing.
>
> Thanks,
>
Bug report to MPICH.
> On Jul 22, 2019, at 1:22 PM, Balay, Satish via petsc-dev
> wrote:
>
> Hm - I don't think we were monitoring the leaks via valgrind that closely.
>
> Looking at my old mpich install - I don't see a problem - so likely
> its an issue with newer versions of mpich.
>
>
Pierre,
I see four patches here.
1) for examples
2) for shared library support (but only for Linux) (two files for MUMPS and
PORD). We can't use this unless it also supports Mac etc.
3) MUMPS-Makefile.par.inc What is this for?
How does any of them resolve the problem with ifort failin
> On Jul 23, 2019, at 9:12 AM, Mark Adams via petsc-dev
> wrote:
>
> I've tried to add pining the matrix and prolongator to the CPU on coarse
> grids in GAMG with this:
>
> /* pin reduced coase grid - could do something smarter */
> ierr = MatPinToCPU(*a_Amat_crs,PETSC_TRUE);CHKERRQ
Yes, it needs to be able to switch back and forth between the CPU and GPU
methods so you need to move into it the setting of the methods that is
currently directly in the create method. See how
MatConvert_SeqAIJ_SeqAIJViennaCL() calls ierr =
MatPinToCPU_SeqAIJViennaCL(A,PETSC_FALSE);CHKERRQ
Indeed. Far too many possibilities, with Matlab alone there are
* binary view
* matlab binary viewer
* ascii viewer
* matlab engine
* socket viewer to matlab
> On Jul 25, 2019, at 1:32 AM, Patrick Sanan via petsc-dev
> wrote:
>
> This came up in the beginner's working g
The PETSc paradigm for this is

  DMView(dm, viewer);   /* save mesh info */
  VecView(vec, viewer);

which has the basic variants

  DMView(dm, viewer);   /* save mesh info */
  VecView(vec, viewer);
  VecView(vec, viewer);
  ...

and

  VecView(vec, viewer);
  ...
Use
");CHKERRQ(ierr);
> if (is_viennacltype) {
> ierr = VecViennaCLAllocateCheckHost(x);CHKERRQ(ierr);
> } else
> #endif
> {
> #if defined(PETSC_HAVE_CUDA)
> ierr = VecCUDAAllocateCheckHost(x);CHKERRQ(ierr);
> #endif
> }
>
iennacltype,VECSEQVIENNACL,VECMPIVIENNACL,VECVIENNACL,"");CHKERRQ(ierr);
> > if (is_viennacltype) {
> > ierr = VecViennaCLAllocateCheckHost(x);CHKERRQ(ierr);
> > } else
> > #endif
> > {
> > #if defined(PETSC_HAVE_CUDA)
> >
> On Jul 27, 2019, at 11:53 AM, Mark Adams wrote:
>
>
> On Sat, Jul 27, 2019 at 11:39 AM Smith, Barry F. wrote:
>
> Good catch. Thanks. Maybe the SeqCUDA has the same problem?
>
> THis is done (I may have done it).
>
> Now it seems to me that when you ca
Jed and Matt,
I have two problems with the MPI shared library check goes back to at least
3.5
1) Executing: /Users/barrysmith/soft/gnu-gfortran/bin/mpiexec
/var/folders/y5/5_h50n196d3_hpl0jbpv51phgn/T/petsc-5Abny2/config.libraries/conftest
sh: /Users/barrysmith/soft/gnu-gfortran/bin/
Agreed, I also don't know why it sets the argument to NULL in there.
> On Jul 29, 2019, at 10:28 AM, Hapla Vaclav via petsc-dev
> wrote:
>
> I don't see why DMPlexInterpolate needs a custom Fortran stub
> https://bitbucket.org/petsc/petsc/src/master/src/dm/impls/plex/ftn-custom/zplexinterpo
GAMG and CUDA. It might be nice
> > to test this in a next.
> >
> > GAMG now puts all reduced processorg grids on the CPU. This could be
> > looked at in the future.
> >
> >
> > On Sat, Jul 27, 2019 at 1:00 PM Smith, Barry F. > <
at is killing the performance in the GPU case for the
KSP solve. Anyway, can you just have a stage at the end with several KSP solves
and nothing else?
Barry
> On Jul 29, 2019, at 5:26 PM, Mark Adams wrote:
>
>
>
> On Mon, Jul 29, 2019 at 5:31 PM Smith, Barry F. wrote:
>
ars (maybe ever). Just checking MPI and concluding it was not.
Barry
> On Jul 29, 2019, at 11:05 PM, Jed Brown wrote:
>
> Does this mean we've been incorrectly identifying shared libraries all this
> time?
>
> "Smith, Barry F. via petsc-dev" writes:
>
Sorry, I meant 24 CPU only
> On Jul 30, 2019, at 9:19 AM, Mark Adams wrote:
>
>
>
> On Mon, Jul 29, 2019 at 11:27 PM Smith, Barry F. wrote:
>
> Thanks. Could you please send the 24 processors with the GPU?
>
> That is in out_cuda_24
>
>
Satish,
Can you please add to MPI.py a check for this and simply reject it, telling
the user there are bugs in that version of OpenMPI/Ubuntu?
It is not debuggable, and hence not fixable, and wastes everyone's time and
could even lead to wrong results (which is worse than crashing). We've
Note in init.c that, by default, PETSc does not use PetscTrMallocDefault()
when valgrind is running, because it doesn't necessarily make sense to put one
memory checker on top of another memory checker. So, at a glance, I'm puzzled
how it can be in the routine PetscTrMallocDefault(). Do you p
Make an issue
> On Jul 30, 2019, at 7:00 PM, Jed Brown wrote:
>
> "Smith, Barry F. via petsc-users" writes:
>
>> The reason this worked for 4 processes is that the largest count in that
>> case was roughly 6,653,750,976/4 which does fit into an int
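The arithmetic behind that statement can be checked directly (assuming the default 32-bit PetscInt):

```python
# Check whether per-rank counts fit in a 32-bit signed int (default PetscInt).
INT_MAX = 2**31 - 1           # 2,147,483,647

total = 6_653_750_976         # largest count reported
per_rank_4 = total // 4       # share per process when run on 4 ranks

print(total > INT_MAX)        # True: the full count overflows a 32-bit int
print(per_rank_4 <= INT_MAX)  # True: split over 4 processes it fits
```

So the run succeeds on 4 processes only because each rank's local count (about 1.66 billion) squeaks under the 32-bit limit; any rank holding the full count would overflow.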
It is generated automatically and put in
arch-linux2-c-debug/include/petscpkg_version.h. This include file is included
at the top of the "bad" source file that crashes, so in theory everything is
in order. Check that arch-linux2-c-debug/include/petscpkg_version.h contains
PETSC_PKG_CUDA_VERSION_GE an
Yes it is a bug, working on it now.
> On Aug 1, 2019, at 9:13 AM, Pierre Jolivet via petsc-dev
> wrote:
>
> Hello,
> The attached example is a little confusing for me.
> How come I don’t get the same matrix out-of-the-box?
> For me, the “correct” matrix is the SeqSBAIJ, how can I get
> Ma
Aug 1, 2019 at 12:08 PM Mark Adams wrote:
>
>
> On Thu, Aug 1, 2019 at 10:30 AM Smith, Barry F. wrote:
>
> Send
>
> ls arch-linux2-c-debug/include/
>
> That is not my arch name. It is something like arch-summit-dbg64-pgi-cuda
>
> arch-linux2-c-debug/include/pet
PM Mark Adams wrote:
>
>
> On Thu, Aug 1, 2019 at 10:30 AM Smith, Barry F. wrote:
>
> Send
>
> ls arch-linux2-c-debug/include/
>
> That is not my arch name. It is something like arch-summit-dbg64-pgi-cuda
>
> arch-linux2-c-debug/include/petscpkg_version.h
aster.
Thanks
Barry
> On Aug 1, 2019, at 11:08 AM, Mark Adams wrote:
>
>
>
> On Thu, Aug 1, 2019 at 10:30 AM Smith, Barry F. wrote:
>
> Send
>
> ls arch-linux2-c-debug/include/
>
> That is not my arch name. It is something like arch-summit-dbg64-pgi-c
Barry
> On Aug 2, 2019, at 8:39 AM, Mark Adams wrote:
>
> closer,
>
> On Fri, Aug 2, 2019 at 9:13 AM Smith, Barry F. wrote:
>
> Mark,
>
> Thanks, that was not expected to work, I was just verifying the exact
> cause of the problem and it was what I
Pierre,
I have fixed this bug in
https://bitbucket.org/petsc/petsc/pull-requests/1941/fix-bug-in-matmpisbaijsetpreallocationcsr/diff
Thanks for reporting and especially providing the test case
Barry
> On Aug 1, 2019, at 6:49 PM, Smith, Barry F. via petsc-dev
>
Pierre,
Your code did expose another error, I have pushed a fix in the same branch*.
Stefano,
I think you are partially right about the sizes of a SBAIJ matrix. I think
it is ok for the number of columns to be greater than the number of rows (no
information is lost or missing) but
There could be a bug. Perhaps check the entries for that "extra" connection,
are they all actual meaningful connections. The code that fills up these data
structures is somewhat involved.
Barry
> On Aug 8, 2019, at 9:39 AM, Pierre Jolivet via petsc-dev
> wrote:
>
> Hello,
> When I use
; Thanks,
> Pierre
>
>> On 9 Aug 2019, at 3:45 AM, Smith, Barry F. wrote:
>>
>>
>> There could be a bug. Perhaps check the entries for that "extra" connection,
>> are they all actual meaningful connections. The code that fills up these
>> data
Matt and Satish,
Years ago we sometimes had to use the C preprocessor to preprocess
Fortran code. Isn't that no longer needed?
In FC.py there is
class Preprocessor(config.compile.C.Preprocessor):
'''The Fortran preprocessor, which now is just the C preprocessor'''
def __init__(self,