Could you send the full -log_view output?
--Junchao Zhang
On Thu, Jul 26, 2018 at 8:39 AM, Pierre Jolivet
wrote:
> Hello,
> I’m using GAMG on a shifted Laplacian with these options:
> -st_fieldsplit_pressure_ksp_type preonly
> -st_fieldsplit_pressure_pc_composite_t
rking), then we can get better scalability.
--Junchao Zhang
On Thu, Jul 26, 2018 at 10:02 AM, Pierre Jolivet wrote:
>
>
> > On 26 Jul 2018, at 4:24 PM, Karl Rupp wrote:
> >
> > Hi Pierre,
> >
> >> I’m using GAMG on a shifted Laplacian with these options:
On Thu, Jul 26, 2018 at 11:15 AM, Fande Kong wrote:
>
>
> On Thu, Jul 26, 2018 at 9:51 AM, Junchao Zhang
> wrote:
>
>> Hi, Pierre,
>> From your log_view files, I see you did strong scaling. You used 4X
>> more cores, but the execution time only dropped
a
exposure.
--Junchao Zhang
It seems we do not have naming conventions for private members.
--Junchao Zhang
On Fri, Aug 10, 2018 at 9:43 PM Matthew Knepley wrote:
> On Fri, Aug 10, 2018 at 5:43 PM Junchao Zhang wrote:
>
>> I met several bugs that remind me to raise this question. In PETSc,
>> ob
PetscMalloc4(B->cmap->N*from->starts[from->n],&contents->rvalues,
             B->cmap->N*to->starts[to->n],&contents->svalues,
             from->n,&contents->rwaits,
             to->n,&contents->swaits);
--Junchao Zhang
On Fri, Aug 10, 2018 at 10:33 P
command line
option -vecscatter_type mpi3
--Junchao Zhang
On Fri, Sep 7, 2018 at 12:26 PM Tamara Dancheva <
tamaradanceva19...@gmail.com> wrote:
> Hi,
>
> I am developing an asynchronous method for a FEM solver, and need a custom
> implementation of the VecScatterBegin and VecS
Even better, with examples showing a small matrix.
--Junchao Zhang
On Thu, Mar 26, 2020 at 2:26 PM Jacob Faibussowitsch
wrote:
> Hello,
>
> In keeping with PETSc design it would be nice to have *more* detail for
> all MAT implementations explaining what the letters stand for
On Wed, Apr 1, 2020 at 12:00 PM Jacob Faibussowitsch
wrote:
> but it does report the peak usage.
>
>
> On default -log_view? Or is it another option? I have always been using
> the below from -log_view, I always thought it was total memory usage.
>
Try -log_view -log_view_memory, created by Barry.
I could not reproduce it locally. Even in the CI, it is random.
--Junchao Zhang
On Wed, Apr 1, 2020 at 7:47 PM Matthew Knepley wrote:
> I saw Satish talking about this on the CI Tracker MR.
>
>Matt
>
> On Wed, Apr 1, 2020 at 8:36 PM Lisandro Dalcin wrote:
>
>> W
Seems caused by MR 2655
<https://gitlab.com/petsc/petsc/-/merge_requests/2655>. I reverted it and
tested in CI several times and the error did not appear. Let's assume the
MR has a bug. I am looking into it.
--Junchao Zhang
On Thu, Apr 2, 2020 at 10:58 AM Satish Balay wrote:
Satish,
I have an MR, !2688, to fix it. How do I revert your revert to get the MR
actually tested?
--Junchao Zhang
On Fri, Apr 3, 2020 at 9:11 PM Satish Balay wrote:
> https://gitlab.com/petsc/petsc/pipelines/132414153/builds
>
> this pipeline had 1 rerun for linux-cuda-double and 5 r
Ali,
Congratulations on your paper, thanks to Barry's PCVPBJACOBI.
--Junchao Zhang
On Thu, Apr 9, 2020 at 6:12 PM Ali Reza Khaz'ali
wrote:
> Dear PETSc team,
>
>
>
> I just want to thank you for implementing the PCVPBJACOBI into the PETSc
> library, which I use
Yes. It looks like we need to include petsclog.h. I don't know why OpenMP
triggered the error.
--Junchao Zhang
On Mon, Apr 13, 2020 at 9:59 AM Mark Adams wrote:
> Should I do an MR to fix this?
>
ts on the idea of letting users keep logging with openmp?
>>
>> On Mon, Apr 13, 2020 at 11:40 AM Junchao Zhang
>> wrote:
>>
>>> Yes. It looks like we need to include petsclog.h. I don't know why OpenMP
>>> triggered the error.
>>> --Junchao Zhang
>>>
>>>
>>> On Mon, Apr 13, 2020 at 9:59 AM Mark Adams wrote:
>>>
>>>> Should I do an MR to fix this?
>>>>
>>>
e_requests/2714 that should
fix your original compilation errors.
--Junchao Zhang
On Mon, Apr 13, 2020 at 2:07 PM Mark Adams wrote:
> https://www.mcs.anl.gov/petsc/miscellaneous/petscthreads.html
>
> and I see this on my Mac:
>
> 14:23 1 mark/feature-xgc-interface-rebase *= ~/Code
Probably matrix assembly on the GPU is more important. Do you have an example
for me to play with, to see what GPU interface we should have?
--Junchao Zhang
On Mon, Apr 13, 2020 at 5:44 PM Mark Adams wrote:
> I was looking into assembling matrices with threads. I have a coloring to
> avoid con
I don't see problems calling _exit in PetscSignalHandlerDefault. Let me
try it first.
--Junchao Zhang
On Tue, Apr 21, 2020 at 3:17 PM John Peterson wrote:
> Hi,
>
> I started a thread on disc...@mpich.org regarding some hanging canceled
> jobs that we were seeing:
>
>
it can solve the problem you reported (actually
happened).
--Junchao Zhang
On Wed, May 6, 2020 at 10:22 AM John Peterson wrote:
> Hi Junchao,
>
> I was just wondering if there was any update on this? I saw your question
> on the discuss@mpich thread, but I gather you have no
I guess Jacob already used MPICH, since MPIDI_CH3_EagerContigShortSend() is
from MPICH.
--Junchao Zhang
On Tue, Jun 2, 2020 at 9:38 AM Satish Balay via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:
> use --download-mpich for valgrind.
>
> https://www.mcs.anl.gov/petsc/documen
In mumps.py, change
self.version = '5.3.1'
to
self.minversion = '5.2.1'
If we support older MUMPS, we can lower the minversion even further.
--Junchao Zhang
On Sat, Jun 6, 2020 at 3:16 PM Jacob Faibussowitsch
wrote:
> Hello All,
>
> As the title suggest configure downloa
On Mon, Jun 15, 2020 at 8:33 PM Jacob Faibussowitsch
wrote:
> And if one needs windows native/libraries - then dealing with windows and
> its quirks is unavoidable.
>
> WSL2 allows you to run windows binaries natively inside WSL I believe
> https://docs.microsoft.com/en-us/windows/wsl/interop#run
It should be renamed NCL (NVIDIA Communications Library), as it adds
point-to-point in addition to collectives. I am not sure whether to
implement it in PETSc, as no exascale machine uses NVIDIA GPUs.
--Junchao Zhang
On Tue, Jun 16, 2020 at 6:44 PM Matthew Knepley wrote:
> It would seem
A dedicated mailing list has all these functionalities and makes it easier to
follow discussion threads.
--Junchao Zhang
On Thu, Jun 18, 2020 at 9:27 PM Barry Smith wrote:
>
>I'd like to start a discussion of PETSc 4.0 aka the Grand
> Refactorization but to have that discussion we n
No. That is the plan. PETSc's script gcov.py works correctly, and we need to
move it to codecov.io.
--Junchao Zhang
On Thu, Jun 25, 2020 at 9:34 AM Aagaard, Brad T via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:
> Are you opposed to using codecov.io to compile the results and ge
On Thu, Jul 23, 2020 at 11:35 PM Satish Balay via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:
> On Thu, 23 Jul 2020, Jeff Hammond wrote:
>
> > Open-MPI refuses to let users over subscribe without an extra flag to
> > mpirun.
>
> Yes - and when using this flag - it lets the run through - but there is
can benefit from derived data types is
in DMDA. The ghost points can be described with MPI_Type_vector(). We can
save the packing/unpacking and associated buffers.
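A minimal standalone sketch of that idea (sizes and the neighbor pattern are
made up for illustration, not taken from DMDA): one ghost column of a
row-major local array is described with MPI_Type_vector() and sent directly
from the array, with no packing buffer.

#include <mpi.h>

int main(int argc, char **argv)
{
  enum {nx = 8, ny = 8};           /* hypothetical local grid size */
  double       a[ny*nx];
  MPI_Datatype column;
  int          rank, size, i;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  for (i = 0; i < ny*nx; i++) a[i] = rank;

  /* ny blocks of 1 double with stride nx: the first column of the local array */
  MPI_Type_vector(ny, 1, nx, MPI_DOUBLE, &column);
  MPI_Type_commit(&column);

  /* send the column to the left neighbor straight from a[], no pack/unpack */
  if (rank > 0) MPI_Send(&a[0], 1, column, rank-1, 0, MPI_COMM_WORLD);
  if (rank < size-1) {
    double ghost[ny];              /* receive into a contiguous ghost buffer */
    MPI_Recv(ghost, ny, MPI_DOUBLE, rank+1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

  MPI_Type_free(&column);
  MPI_Finalize();
  return 0;
}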
--Junchao Zhang
On Wed, Sep 23, 2020 at 12:30 PM Victor Eijkhout
wrote:
> The Ohio mvapich people are working on getting bet
It would be better if the tool could also print out line and column numbers
and the reason why it is wrong.
--Junchao Zhang
On Thu, Sep 24, 2020 at 11:16 AM Satish Balay via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:
> The relevant part:
>
> No space after i
On Fri, Oct 2, 2020 at 2:59 PM Mark Adams wrote:
>
>
> On Fri, Oct 2, 2020 at 3:15 PM Barry Smith wrote:
>
>>
>> Mark,
>>
>> Looks like you are building Kokkos without CUDA.
>
>
> Yes. This is a CPU build of Kokkos.
>
>
>> You don't have --with-cuda on configure line that is used by Kokkos t
On Fri, Oct 2, 2020 at 3:02 PM Junchao Zhang
wrote:
>
>
> On Fri, Oct 2, 2020 at 2:59 PM Mark Adams wrote:
>
>>
>>
>> On Fri, Oct 2, 2020 at 3:15 PM Barry Smith wrote:
>>
>>>
>>> Mark,
>>>
>>> Looks like you are b
Let me have a look. cupminit.inc is a template for CUDA and HIP. It is OK
if you see some symbols twice.
--Junchao Zhang
On Fri, Oct 16, 2020 at 8:22 AM Mark Adams wrote:
> Junchao, I see this in cupminit.inc (twice)
>
> #if defined(PETSC_HAVE_KOKKOS)
Prof. Ed Bueler,
Congratulations on your book. I am eager to read it.
I was wondering if it is feasible to add your example programs to PETSc
tests so that readers will always be able to run your code.
--Junchao Zhang
On Thu, Oct 29, 2020 at 8:29 PM Ed Bueler wrote:
> All --
>
Ed,
I agree with everything you said. My thought is that we don't need to add each
of your examples into the corresponding src/XX/tutorials/. Your repo can be a
standalone directory, and we just need the PETSc CI to be able to run them.
--Junchao Zhang
On Fri, Oct 30, 2020 at 9:00 PM Ed Bueler
website later when they are finding jobs.
--Junchao Zhang
On Fri, Nov 20, 2020 at 1:04 PM Matthew Knepley wrote:
> That is a good idea. Anyone against this?
>
> Thanks,
>
> Matt
>
> On Fri, Nov 20, 2020 at 1:26 PM Barry Smith wrote:
>
>>
>> Maybe so
I think we can just send to both petsc-announce and petsc-users. First,
there are not many such emails. Second, if there were, users should be
happy to see them.
I receive 10+ ad emails daily and don't mind receiving an extra 5 emails
monthly :)
--Junchao Zhang
On Fri, Nov 20, 2020 at 7:
Could be GPU resource competition. Note this test uses nsize=8.
--Junchao Zhang
On Wed, Dec 9, 2020 at 7:15 PM Mark Adams wrote:
> And this is a Cuda 11 complex build:
> https://gitlab.com/petsc/petsc/-/jobs/901108135
>
> On Wed, Dec 9, 2020 at 8:11 PM Mark Adams wrote:
ng thrust
complex (see MR 2822)
self.setCompilers.CUDAFLAGS += ' -std=' + self.compilers.cxxdialect.lower()
In your configure.log, there are
#define PETSC_HAVE_CXX_DIALECT_CXX11 1
#define PETSC_HAVE_CXX_DIALECT_CXX14 1
I guess without -ccbin, nvcc uses gcc by default and your gcc does not
suppo
When is the deadline of your SC paper?
--Junchao Zhang
On Thu, Dec 24, 2020 at 6:44 PM Mark Adams wrote:
> It does not look like aijkokkos is equipped with solves the way
> aijcusparse is.
>
> I would like to get a GPU direct solver for an SC paper on the Landau
> stuff with
VecScatter (i.e., SF, the two are the same thing) setup (building various
index lists, rank lists) is done on the CPU. is1, is2 must be host data.
When the SF is used to communicate device data, indices are copied to the
device.
--Junchao Zhang
On Thu, Feb 18, 2021 at 11:50 AM Patrick Sanan
On Thu, Feb 18, 2021 at 4:04 PM Fande Kong wrote:
>
>
> On Thu, Feb 18, 2021 at 1:55 PM Junchao Zhang
> wrote:
>
>> VecScatter (i.e., SF, the two are the same thing) setup (building various
>> index lists, rank lists) is done on the CPU. is1, is2 must be host data.
So, copying the indices from device to host and building a VecScatter there
seems the easiest approach.
The Kokkos-related functions are experimental; we need to decide whether
they are good or not.
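A minimal sketch of that approach (function and variable names are
hypothetical; idx_from[]/idx_to[] are assumed to be host copies of the
device indices):

#include <petscvec.h>

PetscErrorCode BuildScatterFromHostIndices(Vec x, Vec y, PetscInt n,
                                           const PetscInt idx_from[],
                                           const PetscInt idx_to[],
                                           VecScatter *sct)
{
  PetscErrorCode ierr;
  IS             is_from, is_to;

  PetscFunctionBeginUser;
  /* ISCreateGeneral() copies the host index arrays */
  ierr = ISCreateGeneral(PetscObjectComm((PetscObject)x), n, idx_from, PETSC_COPY_VALUES, &is_from);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PetscObjectComm((PetscObject)y), n, idx_to, PETSC_COPY_VALUES, &is_to);CHKERRQ(ierr);
  /* setup runs on the CPU; when the scatter later moves device data,
     the indices are copied to the device internally */
  ierr = VecScatterCreate(x, is_from, y, is_to, sct);CHKERRQ(ierr);
  ierr = ISDestroy(&is_from);CHKERRQ(ierr);
  ierr = ISDestroy(&is_to);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}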
--Junchao Zhang
On Fri, Feb 19, 2021 at 4:32 AM Patrick Sanan
wrote:
> Thanks! That helps a
y message aborts the commit.
4) Edit the commit message as you want, save and exit, done!
--Junchao Zhang
On Tue, Mar 2, 2021 at 6:19 PM Blaise A Bourdin wrote:
> Hi,
>
> This is not technically a petsc question.
> It would be great to have a short section in the PETSc integration
Oh, graph is an alias in my .gitconfig
[alias]
graph = log --graph --decorate --abbrev-commit --pretty=oneline
--Junchao Zhang
On Wed, Mar 3, 2021 at 1:51 PM Mark Adams wrote:
>
>
> On Tue, Mar 2, 2021 at 10:02 PM Junchao Zhang
> wrote:
>
>> I am a nai
I would rather directly change the project to use CXXFLAGS instead of
CXXPPFLAGS.
--Junchao Zhang
On Tue, Mar 23, 2021 at 10:01 AM Satish Balay via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:
> On Tue, 23 Mar 2021, Stefano Zampini wrote:
>
> > Just tried out of main, and and th
Can we combine CXXPPFLAGS and CXXFLAGS into one CXXFLAGS?
--Junchao Zhang
On Tue, Mar 23, 2021 at 11:38 AM Patrick Sanan
wrote:
> I had a related (I think) issue trying to build with Kokkos. Those headers
> throw an #error if they're expecting OpenMP and the compiler doesn't h
Matt,
I can reproduce the error. Let me see what is wrong.
Thanks.
--Junchao Zhang
On Mon, Mar 29, 2021 at 2:16 PM Matthew Knepley wrote:
> Junchao,
>
> I have an SF problem, which I think is a caching bug, but it is hard to
> see what is happening in the internals. I have
get different SFs. The
crashed one did 1) first and then 2). The 'good' one did 2) and then 1).
But even the good one is wrong, since it gives an empty SF (thus not
crashing the code).
--Junchao Zhang
On Tue, Mar 30, 2021 at 5:44 AM Matthew Knepley wrote:
> On Mon, Mar 29, 2021
On Mon, Apr 5, 2021 at 7:33 PM Jeff Hammond wrote:
> NVCC has supported multi-versioned "fat" binaries since I worked for
> Argonne. Libraries should figure out what the oldest hardware they care
> about is and then compile for everything from that point forward. Kepler
> (3.5) is oldest version
On Wed, Apr 21, 2021 at 5:23 AM Stefano Zampini
wrote:
> Incidentally, I found PETSc does not compile when configured using
>
> --with-scalar-type=complex --with-kokkos-dir=
> --with-kokkos_kernels-dir=...
>
With CUDA or not?
>
> Some fixes are trivial, others require some more thought.
>
>
Satish, how do I access this machine? I want to know why complex is screwed
up.
--Junchao Zhang
On Thu, May 13, 2021 at 7:08 PM Matthew Knepley wrote:
> Nope. I will use your fix.
>
> Thanks,
>
> Matt
>
> On Thu, May 13, 2021 at 7:55 PM Matthew Knepley wrote:
>
es of
*constexpr.*
Workarounds include:
- define PETSC_SKIP_CXX_COMPLEX_FIX in the offending *.cxx file.
- add CXXOPTFLAGS=-std=c++11
- update clang-6.0 or gcc-4.8.5 (from 2015) on that machine.
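For the first workaround, a minimal sketch of what the top of the offending
.cxx file would look like (the macro must come before any PETSc header):

#define PETSC_SKIP_CXX_COMPLEX_FIX 1
#include <petscsys.h>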
--Junchao Zhang
On Fri, May 14, 2021 at 10:42 AM Satish Balay wrote:
> You can login to:
I don't have one. I think the main problem with PETSc is that one usually
needs to debug with multiple MPI ranks.
For light debugging, I use gdb or tmpi <https://github.com/Azrael3000/tmpi>;
for heavy debugging, I use DDT on servers (a license is needed).
--Junchao Zhang
On Fri, May 28, 2021 at 3:14 PM Aa
try gcc/6.4.0
--Junchao Zhang
On Sat, May 29, 2021 at 9:50 PM Mark Adams wrote:
> And I grief using gcc-8.1.1 and get this error:
>
> /autofs/nccs-svm1_sw/summit/gcc/8.1.1/include/c++/8.1.1/type_traits(347):
> error: identifier "__ieee128" is undefined
>
> Any ide
-o
ex5k
--Junchao Zhang
On Thu, Jun 3, 2021 at 8:32 AM Mark Adams wrote:
> I am getting this error:
>
> 09:22 adams/landau-mass-opt=
> /gpfs/alpine/csc314/scratch/adams/petsc/src/mat/tutorials$ make
> PETSC_DIR=/gpfs/alpine/csc314/scratch/adams/petsc
> PETSC_ARCH=arch-sum
$rm ex3k
$make ex3k
and run again?
--Junchao Zhang
On Sat, Jun 5, 2021 at 10:25 AM Mark Adams wrote:
> This is posted in Barry's MR, but I get this error with Kokkos-cuda on
> Summit. Failing to open a shared lib.
> Thoughts?
> Mark
>
> 11:15 barry/2020-11-11/cle
This problem was fixed in
https://gitlab.com/petsc/petsc/-/merge_requests/4056, and is waiting for
!3411 <https://gitlab.com/petsc/petsc/-/merge_requests/3411> :)
--Junchao Zhang
On Sat, Jun 5, 2021 at 9:42 PM Barry Smith wrote:
>
> Looks like the MPI libraries are not being p
Use VecGetArrayRead/Write() to get up-to-date host pointers to the vector
array.
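A minimal sketch (function name is hypothetical; x and y are assumed to have
the same local size): read x through VecGetArrayRead() and overwrite y
through VecGetArrayWrite(); both return up-to-date host pointers even when
the vectors live on the GPU.

#include <petscvec.h>

PetscErrorCode ScaleOnHost(Vec x, Vec y)
{
  PetscErrorCode     ierr;
  PetscInt           i, n;
  const PetscScalar *xa;  /* read-only host view of x */
  PetscScalar       *ya;  /* write-only host view of y */

  PetscFunctionBeginUser;
  ierr = VecGetLocalSize(x, &n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(x, &xa);CHKERRQ(ierr);
  ierr = VecGetArrayWrite(y, &ya);CHKERRQ(ierr);
  for (i = 0; i < n; i++) ya[i] = 2.0*xa[i];
  ierr = VecRestoreArrayRead(x, &xa);CHKERRQ(ierr);
  ierr = VecRestoreArrayWrite(y, &ya);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}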
--Junchao Zhang
On Wed, Jun 23, 2021 at 9:15 AM Mark Adams wrote:
> First, there seem to be two pages for VecGetArrayAndMemType (one has a
> pointer to the other).
>
> So I need to get a CPU ar
Mark,
I am not sure what your problem is. If it is a regression, can you
bisect it?
--Junchao Zhang
On Wed, Jun 23, 2021 at 4:04 PM Mark Adams wrote:
> I also tried commenting out the second VecView, so there is just one step
> in the file, and the .h5 file is only 8 bytes smaller a
Do you use complex? Post your configure.log.
--Junchao Zhang
On Fri, Jul 16, 2021 at 9:47 AM Mark Adams wrote:
> The simple Kokkos example is failing for me on Spock.
> Any ideas?
> Thanks,
>
> 10:44 main *= /gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tutorials$
> ma
is a makefile problem (which I am fixing). If you do not directly
build an executable from *.kokkos.cxx, you can avoid this problem.
For example, snes/tests/ex13 works with Kokkos options on Spock.
--Junchao Zhang
On Fri, Jul 16, 2021 at 2:53 PM Mark Adams wrote:
> Not complex. THis has some
zhang/petsc and
PETSC_ARCH=arch-spock-cray-kokkos-dbg
C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process
C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes
C/C++ example src/snes/tutorials/ex3k run successfully with kokkos-kernels
Fortran examp
Mark, I can reproduce this error with PrgEnv-cray, i.e., using the Cray
compiler (clang-11). Previously I used PrgEnv-gnu, which did not have this
error.
It is probably a problem with Spock, but I am not sure.
--Junchao Zhang
On Sat, Jul 17, 2021 at 10:17 AM Mark Adams wrote:
> And I can
Yes, the two files are just PETSc headers.
--Junchao Zhang
On Fri, Aug 13, 2021 at 5:17 PM Mark Adams wrote:
> I seem to be getting Kokkos includes in my install but there is no kokkos
> in the configure and I started with a clean PETSc arch directory and
> install directory. Does
On Thu, Aug 12, 2021 at 11:22 AM Barry Smith wrote:
>
> User visible communicators generally do not have a keyval attached.
> Rather the keyval is attached to the inner communicator; because we don't
> want both PETSc and the user doing MPI operations on the same communicator
> (to prevent mixin
Barry,
Thanks for the PetscRegisterFinalize() suggestion. I made an MR at
https://gitlab.com/petsc/petsc/-/merge_requests/4238
In rare cases, if I do need to duplicate communicators, I now free them
through PetscRegisterFinalize().
--Junchao Zhang
On Sun, Aug 15, 2021 at 12:50 PM Barry
Petsc::CUPMInterface
@Jacob Faibussowitsch
--Junchao Zhang
On Mon, Aug 30, 2021 at 9:35 AM Mark Adams wrote:
> I was running fine this AM and am bouncing between modules to help two
> apps (ECP milestone season) at the same time and something broke. I did
> update main and I get
Can you use less fancy 'static const int'?
--Junchao Zhang
On Mon, Aug 30, 2021 at 1:02 PM Jacob Faibussowitsch
wrote:
> No luck with C++14
>
>
> TL;DR: you need to have host and device compiler either both using c++17
> or neither using c++17.
>
> Long vers
o
you think?
--Junchao Zhang
On Sun, Sep 12, 2021 at 2:10 PM Pierre Jolivet wrote:
>
> On 12 Sep 2021, at 8:56 PM, Matthew Knepley wrote:
>
> On Sun, Sep 12, 2021 at 2:49 PM Antonio T. sagitter <
> sagit...@fedoraproject.org> wrote:
>
>> Those attached are configu
An old issue with SF_Window is at
https://gitlab.com/petsc/petsc/-/issues/555, though that is a different
error.
--Junchao Zhang
On Sun, Sep 12, 2021 at 2:20 PM Junchao Zhang
wrote:
> We met SF + Windows errors before. Stefano wrote the code, which I don't
> think was worth doi
Hi, Stefano,
I am pinging you again to see if you want to resolve this problem before
petsc-3.16.
On Sun, Sep 12, 2021 at 3:06 PM Antonio T. sagitter <
sagit...@fedoraproject.org> wrote:
> Unfortunately, it's not possible. I must use the OpenMPI provided by
> Fe
Without a standalone, valid MPI example that reproduces the error, we cannot
be sure it is an OpenMPI bug.
--Junchao Zhang
On Tue, Sep 14, 2021 at 6:17 AM Matthew Knepley wrote:
> Okay, we have to send this to OpenMPI. Volunteers?
>
> Maybe we should note this i
MPI one-sided is tricky and needs careful synchronization (like OpenMP).
Incorrect code could work with one MPI implementation but fail with another.
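A minimal standalone sketch (plain MPI, not PETSc's SF_Window code) of what
that synchronization looks like with active-target fences; dropping either
MPI_Win_fence() is the kind of mistake that may pass with one implementation
and fail with another:

#include <mpi.h>

int main(int argc, char **argv)
{
  int     rank, size;
  double  local = 0.0, put_val;
  MPI_Win win;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  put_val = (double)rank;

  /* expose one double on every rank */
  MPI_Win_create(&local, sizeof(double), sizeof(double), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

  MPI_Win_fence(0, win);  /* open the epoch */
  if (rank > 0) MPI_Put(&put_val, 1, MPI_DOUBLE, rank-1, 0, 1, MPI_DOUBLE, win);
  MPI_Win_fence(0, win);  /* close the epoch; only now is 'local' defined on the target */

  MPI_Win_free(&win);
  MPI_Finalize();
  return 0;
}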
--Junchao Zhang
On Tue, Sep 14, 2021 at 10:01 AM Barry Smith wrote:
>
>It sounds reproducible and related to using a particular versions of
>
Yes, we can turn it off. Code without real use is just a maintenance burden.
--Junchao Zhang
On Tue, Sep 14, 2021 at 10:45 AM Barry Smith wrote:
>
> Ok, so it could be a bug in PETSc, but if it appears with particular MPI
> implementations shouldn't we turn off the su
On Sat, Sep 25, 2021 at 4:45 PM Mark Adams wrote:
> I am testing my Landau code, which is MPI serial, but with many
> independent MPI processes driving each GPU, in an MPI parallel harness code
> (Landau ex2).
>
> Vector operations with Kokkos Kernels and cuSparse are about the same (KK
> is fast
Mark,
without cuda-memcheck, did the test run?
--Junchao Zhang
On Sun, Sep 26, 2021 at 12:38 PM Mark Adams wrote:
> FYI, I am getting this with cuda-memcheck on Summit with CUDA 11.0.3:
>
> jsrun -n 48 -a 6 -c 6 -g 1 -r 6 --smpiargs -gpu cuda-memcheck ../ex13-cu
> -dm_plex_box_
The coverage page at http://ftp.mcs.anl.gov/pub/petsc/nightlylogs/index.html
is missing. When I read certain PETSc code, I feel some if conditions will
never be met (i.e., there are dead statements). To confirm that, I want to
know which tests run into that condition.
--Junchao Zhang
For each tested line, it would be helpful to also show a test (one is
enough) that exercises that line.
--Junchao Zhang
On Fri, Mar 16, 2018 at 10:40 AM, Balay, Satish wrote:
> On Fri, 16 Mar 2018, Junchao Zhang wrote:
>
> > The coverage page at http://ftp.mcs.anl.gov/pub/
> pe
On Thu, Apr 12, 2018 at 9:48 AM, Smith, Barry F. wrote:
>
>
> > On Apr 12, 2018, at 3:59 AM, Patrick Sanan
> wrote:
> >
> > I also happened to stumble across this yesterday. Is the length
> restriction for the default printer (I assume from the array of 8*1024
> chars in PetscVFPrintfDefault() )
Min,
I suggest MPICH add tests that exercise the maximal MPI tag (through the
attribute MPI_TAG_UB).
PETSc uses tags starting from the maximum and counting downwards. I guess
MPICH tests use small tags; that is why the bug only showed up with PETSc.
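A minimal sketch of such a test (made up here, not an existing MPICH test):
query MPI_TAG_UB and then communicate with the maximal tag.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int  rank, size, flag, buf = 42, tag_ub = 32767; /* MPI guarantees at least 32767 */
  int *attr;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &attr, &flag);
  if (flag) tag_ub = *attr;
  if (!rank) printf("maximal tag = %d\n", tag_ub);

  /* exercise the maximal tag, as PETSc does when it counts downwards from it */
  if (size > 1) {
    if (rank == 0) MPI_Send(&buf, 1, MPI_INT, 1, tag_ub, MPI_COMM_WORLD);
    if (rank == 1) MPI_Recv(&buf, 1, MPI_INT, 0, tag_ub, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

  MPI_Finalize();
  return 0;
}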
--Junchao Zhang
On Tue, Apr 17, 2018 at 3:58 PM, Min Si wrote
imply
copy & paste.
3) It seems ex10.c has gone through large changes and can no longer produce a
log summary similar to the figures (e.g., no stages like "Event
Stage 4: KSPSetUp 1").
--Junchao Zhang
saying the test is
for the manual so that one will not accidentally overwrite it.
--Junchao Zhang
On Thu, Apr 19, 2018 at 1:01 AM, Patrick Sanan
wrote:
> Sorry I didn't catch this when cleaning up the manual recently. If you
> don't have time to update it yourself, let me know and
PetscScalar *value, InsertMode mode) to set cnt values starting at index i?
--Junchao Zhang
On Fri, Apr 20, 2018 at 3:18 PM, Matthew Knepley wrote:
> On Fri, Apr 20, 2018 at 4:10 PM, Junchao Zhang
> wrote:
>
>> To pad a vector, i.e., copy a vector to a new one, I have to call
>> VecSetValue(newb,1,&idx,...) for each element. But to be efficient, what I
I agree the extra overhead can be small, but users are forced to write a
loop where a single line would be best.
--Junchao Zhang
On Fri, Apr 20, 2018 at 3:36 PM, Smith, Barry F. wrote:
>
>When setting values into matrices and vectors we consider the "extra"
> overhead
VecScatter is too heavy (in both coding and runtime) for this simple task.
I just want to pad a vector loaded from a PetscViewer to match an MPIBAIJ
matrix. Thus the majority is memcpy, with a few neighborhood off-processor
puts.
--Junchao Zhang
On Fri, Apr 20, 2018 at 3:57 PM, Jed Brown wrote
I don't care about the little extra overhead. I just feel the avoidable loop
in the user code is a bit ugly.
--Junchao Zhang
On Fri, Apr 20, 2018 at 4:11 PM, Smith, Barry F. wrote:
>
> I would just use VecSetValues() since almost all values are local it
> will be scalable and a little
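A minimal sketch of that suggestion (the helper name and the identity index
mapping are made up; a real padding for an MPIBAIJ layout would compute
shifted target indices): set all locally owned entries of the old vector into
the larger new vector with one VecSetValues() call per process, then assemble.

#include <petscvec.h>

PetscErrorCode PadVector(Vec xold, Vec xnew)
{
  PetscErrorCode     ierr;
  PetscInt           i, rstart, rend, n;
  PetscInt          *idx;
  const PetscScalar *vals;

  PetscFunctionBeginUser;
  ierr = VecGetOwnershipRange(xold, &rstart, &rend);CHKERRQ(ierr);
  n    = rend - rstart;
  ierr = PetscMalloc1(n, &idx);CHKERRQ(ierr);
  for (i = 0; i < n; i++) idx[i] = rstart + i;  /* identity mapping (assumption) */
  ierr = VecGetArrayRead(xold, &vals);CHKERRQ(ierr);
  ierr = VecSetValues(xnew, n, idx, vals, INSERT_VALUES);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(xold, &vals);CHKERRQ(ierr);
  ierr = PetscFree(idx);CHKERRQ(ierr);
  ierr = VecAssemblyBegin(xnew);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(xnew);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}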
On Sat, Apr 21, 2018 at 6:21 AM, Matthew Knepley wrote:
> On Fri, Apr 20, 2018 at 5:02 PM, Junchao Zhang
> wrote:
>
>> VecScatter is too heavy (in both coding and runtime) for this simple
>> task. I just want to pad a vector loaded from a PetscViewer to match an
On Sat, Apr 21, 2018 at 7:51 AM, Matthew Knepley wrote:
> On Sat, Apr 21, 2018 at 8:47 AM, Junchao Zhang
> wrote:
>
>> On Sat, Apr 21, 2018 at 6:21 AM, Matthew Knepley
>> wrote:
>>
>>> On Fri, Apr 20, 2018 at 5:02 PM, Junchao Zhang
>>> wrote:
>
mpack']
--Junchao Zhang
Great, thanks.
--Junchao Zhang
On Tue, Apr 24, 2018 at 11:54 AM, Balay, Satish wrote:
> Ok - for https://github.com/pghysels/STRUMPACK/archive/v2.2.0.tar.gz to
> work - you need:
>
>
> diff --git a/config/BuildSystem/config/packages/strumpack.py
> b/config/BuildSyst
I searched but could not find this option: -mat_view::load_balance
--Junchao Zhang
On Thu, Jun 7, 2018 at 10:46 AM, Smith, Barry F. wrote:
> So the only surprise in the results is the SOR. It is embarrassingly
> parallel and normally one would not see a jump.
>
> The load balance
..._SWEEP, fshift, lits, 1, xx);
}
--Junchao Zhang
On Thu, Jun 7, 2018 at 3:11 PM, Smith, Barry F. wrote:
>
>
> > On Jun 7, 2018, at 12:27 PM, Zhang, Junchao wrote:
> >
> >
Mark,
I think your idea is good. I submitted the jobs, but they have been in the
queue for a whole day.
--Junchao Zhang
On Mon, Jun 11, 2018 at 8:09 AM, Mark Adams wrote:
>
>
> On Mon, Jun 11, 2018 at 12:46 AM, Junchao Zhang
> wrote:
>
>> I used an LCRC machine named B
_view ::load_balance. It gives fewer KSP iterations, but PETSc still reports
load imbalance at coarse levels.
--Junchao Zhang
On Tue, Jun 12, 2018 at 3:17 PM, Mark Adams wrote:
> This all looks reasonable to me. The VecScatters times are a little high
> but these are fast little solves
coarse levels.
>
>Is the overall scaling better, worse, or the same for periodic boundary
> conditions?
>
OK, the queue job finally finished. I would say it is worse with periodic
boundary conditions. But they have different KSP iterations and MatSOR call
counts. So the answer is not solid
each) took 4 minutes. With hypre, 1 KSPSolve + 6 KSP iterations each, takes
6 minutes.
I will test and profile the code on a single node, and apply some
vecscatter optimizations I recently did to see what happens.
--Junchao Zhang
On Thu, Jun 14, 2018 at 11:03 AM, Mark Adams wrote:
> And
I am not sure if VecDuplicate is problematic, but we should avoid
collective calls in general. They do more harm than VecScatters, which are
often neighborhood operations and do not synchronize all processes.
--Junchao Zhang
On Tue, Jun 19, 2018 at 9:16 PM, Smith, Barry F. wrote:
>
>Jed,
>
>
BTW, it does not solve the MatSOR imbalance problem that I am debugging.
--Junchao Zhang
On Tue, Jun 19, 2018 at 10:58 PM, Smith, Barry F.
wrote:
>
> Ahh yes. Since the layout can get large we wanted to avoid each vector
> having its own copy.
>
>But it seems sloppy
One of the 6 MatSORs generates a VecDuplicate_MPI_DA. There are more
VecDuplicates in MatSOR, but some of them go to VecDuplicate_MPI for a reason
I don't know.
--Junchao Zhang
On Wed, Jun 20, 2018 at 12:52 PM, Smith, Barry F.
wrote:
>
>
> > On Jun 20, 2018, at 11:35 AM, Z
erations 5". But it took a
very long time (10 minutes). Without --with-openmp=1, it took less than 1
second.
--Junchao Zhang
On Fri, Jun 22, 2018 at 3:33 PM, Mark Adams wrote:
> We are using KNL (Cori) and hypre is not working when configured
> with '--with-openmp=1', even when