I pushed an update to this branch, which adopts MPI_Type_create_resized.
--Junchao Zhang
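(For readers unfamiliar with the routine, here is a minimal sketch of how MPI_Type_create_resized is typically used to shrink a derived type's extent so it can be tiled in a buffer; this is generic MPI usage, not necessarily what the branch does.)

#include <mpi.h>

/* Illustrative only: build a strided vector type and resize its extent so
   consecutive elements of the type start one double apart. */
int main(int argc, char **argv)
{
  MPI_Datatype vec, resized;
  MPI_Init(&argc, &argv);

  /* 3 blocks of 1 double, with a stride of 4 doubles */
  MPI_Type_vector(3, 1, 4, MPI_DOUBLE, &vec);
  /* reset lower bound to 0 and the extent to one double so the type tiles */
  MPI_Type_create_resized(vec, 0, sizeof(double), &resized);
  MPI_Type_commit(&resized);

  /* ... use 'resized' in sends/receives or collectives ... */

  MPI_Type_free(&resized);
  MPI_Type_free(&vec);
  MPI_Finalize();
  return 0;
}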
On Tue, Mar 19, 2019 at 11:56 AM Balay, Satish via petsc-dev
<petsc-dev@mcs.anl.gov> wrote:
For now - I'm merging this branch to next. If a better fix comes up later we can
merge it then.
thanks,
>
>
> Could you explain this more by adding some small examples?
>
>
Since you are considering implementing all-at-once (four nested loops,
right?), I'll give you my old code.
This code is hardwired for two AMG and for a geometric-AMG, where the
blocks of the R (and hence P) matrices are scaled
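(To make the "four nested loops" concrete, here is a rough dense sketch of an all-at-once C = P^T A P; the function name and row-major storage are made up for illustration, and this is not the old code referred to above.)

/* Hypothetical dense illustration of an all-at-once C = P^T * A * P.
   n is the fine size, m the coarse size; A is n x n, P is n x m, C is m x m. */
void ptap_all_at_once(int n, int m, const double *A, const double *P, double *C)
{
  for (int i = 0; i < m; i++)
    for (int j = 0; j < m; j++)
      C[i*m + j] = 0.0;

  for (int k = 0; k < n; k++)        /* row of A (and of P, for the P^T factor) */
    for (int l = 0; l < n; l++)      /* column of A (and row of P, for the P factor) */
      for (int i = 0; i < m; i++)    /* coarse row */
        for (int j = 0; j < m; j++)  /* coarse column */
          C[i*m + j] += P[k*m + i] * A[k*n + l] * P[l*m + j];
}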
Hi Mark,
Thanks for your email.
On Thu, Mar 21, 2019 at 6:39 AM Mark Adams via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:
> I'm probably screwing up some sort of history by jumping into dev, but
> this is a dev comment ...
>
> (1) -matptap_via hypre: This calls the hypre package to do the PtAP
On Tue, 5 Mar 2019, Balay, Satish via petsc-maint wrote:
> perhaps starting March 18 - freeze access to next - and keep
> recreating next & next-tmp dynamically as needed
A note: I've restricted access to 'next' so that the above workflow
can be used for the release [if needed].
Satish
A reminder!
Also please check and update src/docs/website/documentation/changes/dev.html
thanks,
Satish
On Tue, 5 Mar 2019, Balay, Satish via petsc-maint wrote:
> Sure - I would add caveats such as:
>
> - it's best to submit PRs early - if they are critical [i.e. if the
> branch should be in
I'm probably screwing up some sort of history by jumping into dev, but this
is a dev comment ...
(1) -matptap_via hypre: This calls the hypre package to do the PtAP through
an all-at-once triple product. In our experience, it is the most memory
efficient, but could be slow.
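(For anyone trying this: -matptap_via is a runtime option, so it can be given on the command line of any run that ends up in MatPtAP, or set programmatically. A minimal sketch, assuming already-assembled Mat objects A and P and the usual PETSc error-handling convention; whether the option is picked up this way in your PETSc version should be checked.)

#include <petscmat.h>

/* Sketch: form C = P^T A P with the hypre-based algorithm selected through
   the options database. A and P are assumed to be assembled Mat objects. */
static PetscErrorCode FormCoarseOperator(Mat A, Mat P, Mat *C)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscOptionsSetValue(NULL, "-matptap_via", "hypre");CHKERRQ(ierr);
  ierr = MatPtAP(A, P, MAT_INITIAL_MATRIX, PETSC_DEFAULT, C);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}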
FYI,
I visited