Re: [petsc-users] question about MatCreateRedundantMatrix

2019-09-18 Thread hong--- via petsc-users
Michael, we have support for MatCreateRedundantMatrix with dense matrices. For example, petsc/src/mat/examples/tests/ex9.c: mpiexec -n 4 ./ex9 -mat_type dense -view_mat -nsubcomms 2. Hong. On Wed, Sep 18, 2019 at 5:40 PM Povolotskyi, Mykhailo via petsc-users <petsc-users@mcs.anl.gov> wrote: > Dear
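For readers following along, the call that ex9.c exercises looks roughly like this (a minimal sketch in the PETSc C API of the time; the matrix A and the variable names are illustrative, not taken from the test):

  Mat            A, Ared;   /* A: an assembled parallel MATDENSE matrix */
  PetscErrorCode ierr;
  /* nsubcomm = 2 mirrors -nsubcomms 2; with MPI_COMM_NULL PETSc splits
     the subcommunicators itself, and each one gets a full copy of A */
  ierr = MatCreateRedundantMatrix(A, 2, MPI_COMM_NULL, MAT_INITIAL_MATRIX, &Ared);CHKERRQ(ierr);
  ierr = MatDestroy(&Ared);CHKERRQ(ierr);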

Re: [petsc-users] MKL_PARDISO question

2019-09-18 Thread Smith, Barry F. via petsc-users
This is easy to diagnose thanks to the additional debugging I added recently. Your install of MKL does not have CPardiso support. When you install MKL you have to make sure you select the "extra" cluster option, otherwise it does not install some parts of the library. I only learned this myself recently
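For reference, once an MKL build with the cluster components is in place, the MPI-parallel CPardiso solver can be selected at runtime in the usual way (a sketch; ./app stands in for your executable):

  mpiexec -n 4 ./app -pc_type lu -pc_factor_mat_solver_type mkl_cpardiso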

[petsc-users] question about MatCreateRedundantMatrix

2019-09-18 Thread Povolotskyi, Mykhailo via petsc-users
Dear PETSc developers, I found that MatCreateRedundantMatrix does not support dense matrices. This causes the following problem: I cannot use the CISS eigensolver from SLEPc with dense matrices with parallelization over the quadrature points. Is it possible for you to add this support? Thank you,
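For context, the quadrature-point parallelism in question is CISS's communicator partitioning, set through EPSCISSSetSizes (a sketch in the SLEPc C API; the operator A and the region parameters are placeholders):

  #include <slepceps.h>

  EPS            eps;
  RG             rg;
  PetscErrorCode ierr;
  ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps, A, NULL);CHKERRQ(ierr);  /* A: dense MPI matrix */
  ierr = EPSSetType(eps, EPSCISS);CHKERRQ(ierr);
  ierr = EPSGetRG(eps, &rg);CHKERRQ(ierr);
  ierr = RGSetType(rg, RGELLIPSE);CHKERRQ(ierr);
  ierr = RGEllipseSetParameters(rg, 0.0, 1.0, 1.0);CHKERRQ(ierr); /* center, radius, vscale */
  /* npart = 2 splits the communicator over the integration (quadrature)
     points; each partition needs its own copy of the operator, which is
     where MatCreateRedundantMatrix comes in for dense matrices */
  ierr = EPSCISSSetSizes(eps, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT, 2, PETSC_DEFAULT, PETSC_FALSE);CHKERRQ(ierr);
  ierr = EPSSolve(eps);CHKERRQ(ierr);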

Re: [petsc-users] MKL_PARDISO question

2019-09-18 Thread Smith, Barry F. via petsc-users
> On Sep 18, 2019, at 9:15 AM, Xiangdong via petsc-users wrote: > Hello everyone, From here, https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSOLVERMKL_PARDISO.html it seems that MKL_PARDISO only works for seqaij. I am curious whether one can use
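For completeness, the sequential PARDISO factorization is picked either on the command line (-pc_type lu -pc_factor_mat_solver_type mkl_pardiso) or in code (a sketch; pc is an existing PC pulled from your KSP):

  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverType(pc, MATSOLVERMKL_PARDISO);CHKERRQ(ierr);

The MPI-parallel counterpart for mpiaij matrices is the separate MATSOLVERMKL_CPARDISO solver discussed in the thread above.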

Re: [petsc-users] Strange Partition in PETSc 3.11 version on some computers

2019-09-18 Thread Smith, Barry F. via petsc-users
> On Sep 18, 2019, at 12:25 PM, Mark Lohry via petsc-users wrote: > Mark, Good point. This has been a big headache forever. Note that this has been "fixed" in the master version of PETSc and will be in its next release. If you use --download-parmetis in the
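For reference, picking up the patched ParMetis during configuration is the usual download option (standard configure usage):

  ./configure --download-metis --download-parmetis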

Re: [petsc-users] Strange Partition in PETSc 3.11 version on some computers

2019-09-18 Thread Mark Lohry via petsc-users
Mark, > The machine, compiler and MPI version should not matter. I might have missed something earlier in the thread, but parmetis has a dependency on the machine's glibc srand, and it can (and does) create different partitions with different srand versions. The same mesh on the same code on

Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mohammad Hassan via petsc-users
Thanks for your suggestion, Matthew. I will certainly look into DMForest for refining my base DMPlex dm.
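For anyone following the thread, layering a forest on an existing Plex looks roughly like this (a sketch assuming PETSc was configured with --download-p4est; base is the conformal DMPlex):

  DM base, forest;   /* base: the existing conformal DMPlex */
  ierr = DMCreate(PETSC_COMM_WORLD, &forest);CHKERRQ(ierr);
  ierr = DMSetType(forest, DMP4EST);CHKERRQ(ierr);   /* DMP8EST in 3D */
  ierr = DMForestSetBaseDM(forest, base);CHKERRQ(ierr);
  ierr = DMSetUp(forest);CHKERRQ(ierr);
  /* after adaptive refinement, DMConvert(forest, DMPLEX, &plex) gives a
     Plex view of the now non-conformal forest mesh */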

Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mohammad Hassan via petsc-users
I want to implement block-based AMR, which turns my base conformal mesh into a non-conformal one. My question is how DMPlex can handle such a mesh, given that it does not support non-conformal meshes. If DMPlex does not work, I will try to use DMForest.

Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mohammad Hassan via petsc-users
If DMPlex does not support this, I may need to use PARAMESH or CHOMBO. Is there any way we can construct a non-conformal layout for a DM in PETSc?

Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mark Adams via petsc-users
I'm puzzled. It sounds like you are doing non-conforming AMR (structured block AMR), but Plex does not support that. On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote: > Mark is right. The functionality of AMR does not relate to parallelization

Re: [petsc-users] TS scheme with different DAs

2019-09-18 Thread Matthew Knepley via petsc-users
On Tue, Sep 17, 2019 at 8:27 PM Smith, Barry F. wrote: > Don't be too quick to dismiss switching to DMStag; you may find that it actually takes little time to convert, and then you have a much less cumbersome process to manage the staggered grid. Take a look at
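For reference, a single DMStag carries all the staggered locations that would otherwise each need their own DMDA (a sketch; the grid size and dof counts are illustrative):

  DM dmstag;
  /* 2D staggered grid: 1 dof per vertex (dof0), 1 per edge (dof1),
     1 per element (dof2), replacing separate DMDAs per location */
  ierr = DMStagCreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                        64, 64, PETSC_DECIDE, PETSC_DECIDE, 1, 1, 1,
                        DMSTAG_STENCIL_BOX, 1, NULL, NULL, &dmstag);CHKERRQ(ierr);
  ierr = DMSetUp(dmstag);CHKERRQ(ierr);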