Re: [petsc-users] TS generating inconsistent data

2020-03-17 Thread Zhang, Hong via petsc-users
Zane,

Stefano’s suggestion should have fixed your code. I just want to let you know
that calling TSResetTrajectory() is not a requirement, and that there are other
ways to fix your problem.

Based on my experience, you might have called TSSaveTrajectory() in an 
unnecessary place, for example,
  TSSaveTrajectory()
  TSSolve() /* generate a reference solution or for some other purpose */
  { /* optimization loop */
    TSSolve() /* forward run for sensitivity calculation */
    TSAdjointSolve() /* backward run for sensitivity calculation */
  }
Here, the first call to TSSolve() actually does not need the trajectory data. 
You can easily fix the ‘inconsistent data’ error by doing
  TSSolve() /* generate a reference solution or for some other purpose */
  TSSaveTrajectory()
  { /* optimization loop */
    TSSolve() /* forward run for sensitivity calculation */
    TSAdjointSolve() /* backward run for sensitivity calculation */
  }

Alternatively, you can use the command line option -ts_trajectory_use_history 0 
(you need to call TSSetFromOptions() after TSSaveTrajectory() to receive the 
TSTrajectory options).
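
For concreteness, a minimal C sketch of the second ordering (assuming ts and X
are set up as in your code, taking TSSaveTrajectory() above as shorthand for
the API call TSSetSaveTrajectory(), and with max_it as a hypothetical iteration
count standing in for your optimization loop):

  PetscErrorCode ierr;
  PetscInt       it, max_it = 10; /* hypothetical iteration count */

  ierr = TSSolve(ts, X);CHKERRQ(ierr);            /* reference run: no trajectory needed */
  ierr = TSSetSaveTrajectory(ts);CHKERRQ(ierr);   /* save the trajectory from here on */
  ierr = TSSetFromOptions(ts);CHKERRQ(ierr);      /* pick up the -ts_trajectory_* options */
  for (it = 0; it < max_it; ++it) {               /* optimization loop */
    ierr = TSSetTime(ts, 0.0);CHKERRQ(ierr);      /* reset time and step before each forward run */
    ierr = TSSetStepNumber(ts, 0);CHKERRQ(ierr);
    ierr = TSSolve(ts, X);CHKERRQ(ierr);          /* forward run for sensitivity calculation */
    ierr = TSAdjointSolve(ts);CHKERRQ(ierr);      /* backward run for sensitivity calculation */
  }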

Hong (Mr.)

On Mar 14, 2020, at 10:54 AM, Zane Charles Jakobs <zane.jak...@colorado.edu> wrote:

Hi PETSc devs,

I have some code that implements (essentially) 4D-VAR with PETSc, and the 
results of both my forward and adjoint integrations look correct to me (i.e. 
calling TSSolve() and then TSAdjointSolve() works correctly as far as I can 
tell). However, when I try to use a TaoSolve() to optimize my initial 
condition, I get this error message:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Petsc has generated inconsistent data
[0]PETSC ERROR: History id should be unique
[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.12.4-783-g88ddbcab12  GIT 
Date: 2020-02-21 16:53:25 -0600
[0]PETSC ERROR: ./var_ic_test on a arch-linux2-c-debug named DiffeoInvariant by 
diffeoinvariant Sat Mar 14 09:39:05 2020
[0]PETSC ERROR: Configure options CFLAGS="-O3 -march=native -mtune=native 
-fPIE" --with-shared-libraries=1 --with-openmp=1 --with-threads=1 
--with-fortran=0 --with-avx2=1 CXXOPTFLAGS="-O3 -march=native -mtune=native 
-fPIE" --with-cc=clang --with-cxx=clang++ --download-mpich
[0]PETSC ERROR: #1 TSHistoryUpdate() line 82 in 
/usr/local/petsc/src/ts/interface/tshistory.c
[0]PETSC ERROR: #2 TSTrajectorySet() line 73 in 
/usr/local/petsc/src/ts/trajectory/interface/traj.c
[0]PETSC ERROR: #3 TSSolve() line 4005 in /usr/local/petsc/src/ts/interface/ts.c
[0]PETSC ERROR: #4 MixedModelFormVARICFunctionGradient() line 301 in mixed.c
[0]PETSC ERROR: #5 TaoComputeObjectiveAndGradient() line 261 in 
/usr/local/petsc/src/tao/interface/taosolver_fg.c
[0]PETSC ERROR: #6 TaoSolve_LMVM() line 23 in 
/usr/local/petsc/src/tao/unconstrained/impls/lmvm/lmvm.c
[0]PETSC ERROR: #7 TaoSolve() line 219 in 
/usr/local/petsc/src/tao/interface/taosolver.c
[0]PETSC ERROR: #8 MixedModelOptimize() line 639 in mixed.c
[0]PETSC ERROR: #9 MixedModelOptimizeInitialCondition() line 648 in mixed.c
[0]PETSC ERROR: #10 main() line 76 in var_ic_test.c
[0]PETSC ERROR: No PETSc Option Table entries
[0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-ma...@mcs.anl.gov----------

In the function MixedModelFormVARICFunctionGradient(), I do

ierr = TSSetTime(model->ts, 0.0);CHKERRQ(ierr);
ierr = TSSetStepNumber(model->ts, 0);CHKERRQ(ierr);
ierr = TSSetFromOptions(model->ts);CHKERRQ(ierr);
ierr = TSSetMaxTime(model->ts, model->obs->t);CHKERRQ(ierr);
ierr = TSSolve(model->ts, model->X);CHKERRQ(ierr);
... [allocating and setting cost gradient vec]
ierr = TSSetCostGradients(model->ts, 1, model->lambda, NULL);CHKERRQ(ierr);
ierr = TSAdjointSolve(model->ts);CHKERRQ(ierr);
ierr = VecCopy(model->lambda[0], G);CHKERRQ(ierr);

What might be causing the above error? Am I using a deprecated version of the 
Tao interface? (I'm using TaoSetObjectiveAndGradientRoutine, as done in 
ex20_opt_ic.c)

Thanks!

-Zane Jakobs




Re: [petsc-users] --download-fblaslapack libraries cannot be used

2020-03-17 Thread Satish Balay via petsc-users
Thanks for the update.

Hopefully Matt can check on the issue with missing stuff in configure.log.

The MR is at https://gitlab.com/petsc/petsc/-/merge_requests/2606

Satish


On Tue, 17 Mar 2020, Fande Kong wrote:

> On Tue, Mar 17, 2020 at 9:24 AM Satish Balay  wrote:
> 
> > So what was the initial problem? Did conda install gcc without glibc? Or
> > was it using the wrong glibc?
> >
> 
> Looks like GCC installed by conda uses an old version of glibc (2.12).
> 
> 
> > Because the compiler appeared partly functional [well the build worked
> > with just LIBS="-lmpifort -lgfortran"]
> >
> > And after the correct glibc was installed - did current maint still fail
> > to build?
> >
> 
> It still failed, because PETSc claimed that no Fortran libraries were needed
> when using mpicc as the linker. But in fact we need those Fortran libraries
> when linking blaslapack and mumps.
> 
> 
> >
> > Can you send configure.log for this?
> >
> > And it's not clear to me why balay/fix-checkFortranLibraries/maint broke
> > before this fix. [for one, configure.log was incomplete]
> >
> 
> I am not 100% sure, but I think the compiled and linked executable could not
> run because of "glibc_2.14' not found". The version of glibc was too old.
> 
> 
> So the current solution for me is: your branch + a newer version of glibc
> (2.18).
> 
> Thanks,
> 
> Fande,
> 
> 
> 
> >
> > Satish
> >
> > On Tue, 17 Mar 2020, Fande Kong wrote:
> >
> > > Hi Satish,
> > >
> > > Could you merge your branch, balay/fix-checkFortranLibraries/maint, into
> > > maint?
> > >
> > > I added glibc to my conda environment (conda install -c dan_blanchard
> > > glibc), and your branch ran well.
> > >
> > > If you are interested, I attached the successful log file here.
> > >
> > > Thanks,
> > >
> > > Fande
> > >
> > > On Sat, Mar 14, 2020 at 5:01 PM Fande Kong  wrote:
> > >
> > > > Without touching the configuration file, the
> > > > option: --download-hypre-configure-arguments='LIBS="-lmpifort
> > -lgfortran"',
> > > > also works.
> > > >
> > > >
> > > > Thanks, Satish,
> > > >
> > > >
> > > > Fande,
> > > >
> > > > On Sat, Mar 14, 2020 at 4:37 PM Fande Kong 
> > wrote:
> > > >
> > > >> OK. I finally got PETSc compiled.
> > > >>
> > > >> "-lgfortran" was required by fblaslapack
> > > >> "-lmpifort" was required by mumps.
> > > >>
> > > >> However, I had to manually add the same thing for hypre as well:
> > > >>
> > > >> git diff
> > > >> diff --git a/config/BuildSystem/config/packages/hypre.py
> > > >> b/config/BuildSystem/config/packages/hypre.py
> > > >> index 4d915c312f..f4300230a6 100644
> > > >> --- a/config/BuildSystem/config/packages/hypre.py
> > > >> +++ b/config/BuildSystem/config/packages/hypre.py
> > > >> @@ -66,6 +66,7 @@ class Configure(config.package.GNUPackage):
> > > >>  args.append('--with-lapack-lib=" "')
> > > >>  args.append('--with-blas=no')
> > > >>  args.append('--with-lapack=no')
> > > >> +args.append('LIBS="-lmpifort -lgfortran"')
> > > >>  if self.openmp.found:
> > > >>args.append('--with-openmp')
> > > >>self.usesopenmp = 'yes'
> > > >>
> > > >>
> > > >> Why couldn't hypre pick up the LIBS option automatically?
> > > >>
> > > >>
> > > >> Thanks,
> > > >>
> > > >> Fande,
> > > >>
> > > >>
> > > >>
> > > >>
> > > >> On Sat, Mar 14, 2020 at 2:49 PM Satish Balay via petsc-users <
> > > >> petsc-users@mcs.anl.gov> wrote:
> > > >>
> > > >>> Configure Options: --configModules=PETSc.Configure
> > > >>> --optionsModule=config.compilerOptions --download-hypre=1
> > > >>> --with-debugging=no --with-shared-libraries=1
> > --download-fblaslapack=1
> > > >>> --download-metis=1 --download-ptscotch=1 --download-parmetis=1
> > > >>> --download-superlu_dist=1 --download-mumps=1 --download-scalapack=1
> > > >>> --download-slepc=git://https://gitlab.com/slepc/slepc.git
> > > >>> --download-slepc-commit= 59ff81b --with-mpi=1
> > --with-cxx-dialect=C++11
> > > >>> --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona
> > > >>> -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong
> > -fno-plt -O2
> > > >>> -ffunction-sections -pipe -isystem
> > > >>> /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= LDFLAGS=-Wl,-O2
> > > >>> -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now
> > > >>> -Wl,--with-new-dtags=0 -Wl,--gc-sections
> > > >>> -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib
> > > >>> -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib
> > > >>> -L/home/kongf/workhome/rod/miniconda3/lib
> > > >>>
> > AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar
> > > >>> --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran
> > -lmpifort
> > > >>>
> > > >>> You are missing quotes around the LIBS option - and the libraries are
> > > >>> likely in the wrong order.
> > > >>>
> > > >>> Suggest using:
> > > >>>
> > > >>> LIBS="-lmpifort -lgfortran"
> > > >>> or
> > > >>> 'LIBS=-lmpifort -lgfortran'
> > > >>>
> > > >>> Assuming you are invoking configure from shell.
> > > >>>

Re: [petsc-users] GAMG parameters for ideal coarsening ratio

2020-03-17 Thread Mark Adams
On Tue, Mar 17, 2020 at 1:42 PM Sajid Ali 
wrote:

> Hi Mark/Jed,
>
> The problem I'm solving is scalar Helmholtz in 2D (u_t = A*u_xx + A*u_yy
> + F_t*u), with the familiar 5-point central difference as the derivative
> approximation,
>

I assume this is definite Helmholtz. The time integrator will also add a
mass term. I'm assuming F_t looks like a mass matrix.


> I'm also attaching the result of -info | grep GAMG in case it helps. My goal
> is to get weak and strong scaling results for the FD solver (leading me to
> double-check all my parameters). I ran the sweep again as Mark suggested
> and it looks like my base params were close to optimal (negative threshold
> and 10 levels of squaring
>

For low-order discretizations, squaring every level, as you are doing,
sounds right. And the mass matrix confuses GAMG's filtering heuristics, so no
filtering sounds reasonable.

Note, hypre would do better than GAMG on this problem.


> with gmres/jacobi smoothers (chebyshev/sor is slower)).
>

You don't want to use GMRES as a smoother (unless you have
indefinite Helmholtz). SOR will be more expensive but often converges a lot
faster. chebyshev/jacobi would probably be better for you.

And you want CG (-ksp_type cg) if this system is symmetric positive
definite.
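
Putting these suggestions together, the options for one solve might look like
this (a sketch; the squaring depth and negative threshold mirror the settings
you reported, with the smoother switched to chebyshev/jacobi):

  -ksp_type cg -pc_type gamg -pc_gamg_square_graph 10 -pc_gamg_threshold -1 \
  -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi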


>
> [image: image.png]
>
> While I think that the base parameters should work well for strong
> scaling, do I have to modify any of my parameters for a weak scaling run ?
> Does GAMG automatically increase the number of mg-levels as grid size
> increases or is it upon the user to do that ?
>
> @Mark : Is there a GAMG implementation paper I should cite ? I've already
> added a citation for the Comput. Mech. (2007) 39: 497–507 as a reference
> for the general idea of applying agglomeration type multigrid
> preconditioning to helmholtz operators.
>
>
> Thank You,
> Sajid Ali | PhD Candidate
> Applied Physics
> Northwestern University
> s-sajid-ali.github.io
>
>


Re: [petsc-users] GAMG parameters for ideal coarsening ratio

2020-03-17 Thread Sajid Ali
 Hi Mark/Jed,

The problem I'm solving is scalar Helmholtz in 2D (u_t = A*u_xx + A*u_yy +
F_t*u) with the familiar 5-point central difference as the derivative
approximation; I'm also attaching the result of -info | grep GAMG in case it
helps. My goal is to get weak and strong scaling results for the FD solver
(leading me to double-check all my parameters). I ran the sweep again as
Mark suggested and it looks like my base params were close to optimal
(negative threshold and 10 levels of squaring with gmres/jacobi smoothers;
chebyshev/sor is slower).

[image: image.png]

While I think that the base parameters should work well for strong scaling,
do I have to modify any of my parameters for a weak scaling run ? Does GAMG
automatically increase the number of mg-levels as grid size increases or is
it upon the user to do that ?

@Mark : Is there a GAMG implementation paper I should cite ? I've already
added a citation for the Comput. Mech. (2007) 39: 497–507 as a reference
for the general idea of applying agglomeration type multigrid
preconditioning to helmholtz operators.


Thank You,
Sajid Ali | PhD Candidate
Applied Physics
Northwestern University
s-sajid-ali.github.io


Re: [petsc-users] --download-fblaslapack libraries cannot be used

2020-03-17 Thread Satish Balay via petsc-users
So what was the initial problem? Did conda install gcc without glibc? Or was it 
using the wrong glibc?

Because the compiler appeared partly functional [well the build worked with 
just LIBS="-lmpifort -lgfortran"]

And after the correct glibc was installed - did current maint still fail to 
build?

Can you send configure.log for this?

And it's not clear to me why balay/fix-checkFortranLibraries/maint broke before
this fix. [for one, configure.log was incomplete]

Satish

On Tue, 17 Mar 2020, Fande Kong wrote:

> Hi Satish,
> 
> Could you merge your branch, balay/fix-checkFortranLibraries/maint, into
> maint?
> 
> I added glibc to my conda environment (conda install -c dan_blanchard
> glibc), and your branch ran well.
> 
> If you are interested, I attached the successful log file here.
> 
> Thanks,
> 
> Fande
> 
> On Sat, Mar 14, 2020 at 5:01 PM Fande Kong  wrote:
> 
> > Without touching the configuration file, the
> > option: --download-hypre-configure-arguments='LIBS="-lmpifort -lgfortran"',
> > also works.
> >
> >
> > Thanks, Satish,
> >
> >
> > Fande,
> >
> > On Sat, Mar 14, 2020 at 4:37 PM Fande Kong  wrote:
> >
> >> OK. I finally got PETSc compiled.
> >>
> >> "-lgfortran" was required by fblaslapack
> >> "-lmpifort" was required by mumps.
> >>
> >> However, I had to manually add the same thing for hypre as well:
> >>
> >> git diff
> >> diff --git a/config/BuildSystem/config/packages/hypre.py
> >> b/config/BuildSystem/config/packages/hypre.py
> >> index 4d915c312f..f4300230a6 100644
> >> --- a/config/BuildSystem/config/packages/hypre.py
> >> +++ b/config/BuildSystem/config/packages/hypre.py
> >> @@ -66,6 +66,7 @@ class Configure(config.package.GNUPackage):
> >>  args.append('--with-lapack-lib=" "')
> >>  args.append('--with-blas=no')
> >>  args.append('--with-lapack=no')
> >> +args.append('LIBS="-lmpifort -lgfortran"')
> >>  if self.openmp.found:
> >>args.append('--with-openmp')
> >>self.usesopenmp = 'yes'
> >>
> >>
> >> Why couldn't hypre pick up the LIBS option automatically?
> >>
> >>
> >> Thanks,
> >>
> >> Fande,
> >>
> >>
> >>
> >>
> >> On Sat, Mar 14, 2020 at 2:49 PM Satish Balay via petsc-users <
> >> petsc-users@mcs.anl.gov> wrote:
> >>
> >>> Configure Options: --configModules=PETSc.Configure
> >>> --optionsModule=config.compilerOptions --download-hypre=1
> >>> --with-debugging=no --with-shared-libraries=1 --download-fblaslapack=1
> >>> --download-metis=1 --download-ptscotch=1 --download-parmetis=1
> >>> --download-superlu_dist=1 --download-mumps=1 --download-scalapack=1
> >>> --download-slepc=git://https://gitlab.com/slepc/slepc.git
> >>> --download-slepc-commit= 59ff81b --with-mpi=1 --with-cxx-dialect=C++11
> >>> --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona
> >>> -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt 
> >>> -O2
> >>> -ffunction-sections -pipe -isystem
> >>> /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= LDFLAGS=-Wl,-O2
> >>> -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now
> >>> -Wl,--with-new-dtags=0 -Wl,--gc-sections
> >>> -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib
> >>> -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib
> >>> -L/home/kongf/workhome/rod/miniconda3/lib
> >>> AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar
> >>> --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran -lmpifort
> >>>
> >>> You are missing quotes around the LIBS option - and the libraries are
> >>> likely in the wrong order.
> >>>
> >>> Suggest using:
> >>>
> >>> LIBS="-lmpifort -lgfortran"
> >>> or
> >>> 'LIBS=-lmpifort -lgfortran'
> >>>
> >>> Assuming you are invoking configure from shell.
> >>>
> >>> Satish
> >>>
> >>> On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote:
> >>>
> >>> > to work around - you can try:
> >>> >
> >>> > LIBS="-lmpifort -lgfortran"
> >>> >
> >>> > Satish
> >>> >
> >>> > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote:
> >>> >
> >>> > > Its the same location as before. For some reason configure is not
> >>> saving the relevant logs.
> >>> > >
> >>> > > I don't understand saveLog() restoreLog() stuff. Matt, can you check
> >>> on this?
> >>> > >
> >>> > > Satish
> >>> > >
> >>> > > On Sat, 14 Mar 2020, Fande Kong wrote:
> >>> > >
> >>> > > > The configuration crashed earlier than before with your changes.
> >>> > > >
> >>> > > > Please see the attached log file when using your branch. The
> >>> trouble lines
> >>> > > > should be:
> >>> > > >
> >>> > > >  "asub=self.mangleFortranFunction("asub")
> >>> > > > cbody = "extern void "+asub+"(void);\nint main(int argc,char
> >>> > > > **args)\n{\n  "+asub+"();\n  return 0;\n}\n";
> >>> > > > "
> >>> > > >
> >>> > > > Thanks,
> >>> > > >
> >>> > > > Fande,
> >>> > > >
> >>> > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay 
> >>> wrote:
> >>> > > >
> >>> > > > > I can't figure out what the stack in the attached configure.log.
> >>> [likely
> >>> > > > > some stuff isn't getting 

Re: [petsc-users] About the initial guess for KSP method.

2020-03-17 Thread Xiaodong Liu
Thanks. It is very useful.

Xiaodong Liu, PhD
X: Computational Physics Division
Los Alamos National Laboratory
P.O. Box 1663,
Los Alamos, NM 87544
505-709-0534


On Tue, Mar 17, 2020 at 8:17 AM Matthew Knepley  wrote:

> On Mon, Mar 16, 2020 at 4:32 PM Xiaodong Liu  wrote:
>
>> Hi, all,
>>
>> I am testing KSPSetInitialGuessNonzero using the case
>>
>>
>> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex50.c.html
>>
>> This case is special. With a zero initial guess, 1 iteration can deliver
>> the exact solution. But when I set KSPSetInitialGuessNonzero to true, it
>> shows the same convergence history as with false.
>>
>> Does the code change KSPSetInitialGuessNonzero to false automatically?
>>
>
> For that flag to make any difference, you have to pass in a nonzero vector
> to KSPSolve(). This example does not do that.
> Were you doing that?
>
>   Thanks,
>
>  Matt
>
>
>> Take care.
>>
>> Thanks,
>>
>> Xiaodong Liu, PhD
>> X: Computational Physics Division
>> Los Alamos National Laboratory
>> P.O. Box 1663,
>> Los Alamos, NM 87544
>> 505-709-0534
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


Re: [petsc-users] About the initial guess for KSP method.

2020-03-17 Thread Matthew Knepley
On Mon, Mar 16, 2020 at 4:32 PM Xiaodong Liu  wrote:

> Hi, all,
>
> I am testing KSPSetInitialGuessNonzero using the case
>
>
> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex50.c.html
>
> This case is special. With a zero initial guess, 1 iteration can deliver the
> exact solution. But when I set KSPSetInitialGuessNonzero to true, it
> shows the same convergence history as with false.
>
> Does the code change KSPSetInitialGuessNonzero to false automatically?
>

For that flag to make any difference, you have to pass in a nonzero vector
to KSPSolve(). This example does not do that.
Were you doing that?
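
For example, a minimal generic sketch (not the ex50 code path; assumes ksp, b,
and x already exist, with ierr/CHKERRQ error handling as usual, and that x
holds your guess before the solve):

  ierr = KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);       /* any nonzero initial guess of your choosing */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);  /* x is used as the initial guess, then overwritten */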

  Thanks,

 Matt


> Take care.
>
> Thanks,
>
> Xiaodong Liu, PhD
> X: Computational Physics Division
> Los Alamos National Laboratory
> P.O. Box 1663,
> Los Alamos, NM 87544
> 505-709-0534
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] node DG with DMPlex

2020-03-17 Thread Matthew Knepley
On Mon, Mar 16, 2020 at 5:20 PM Yann Jobic  wrote:

> Hi all,
>
> I would like to implement a nodal DG with the DMPlex interface.
> Therefore, I must add the internal nodes to the DM (GLL nodes), with the
> constraints:
> 1) Add them as solution points, with correct coordinates (and keep the
> correct rotational ordering)
> 2) Find the shared nodes at faces in order to compute the fluxes
> 3) For parallel use, synchronize the ghost nodes at each time step
>

Let me get the fundamentals straight before advising, since I have never
implemented nodal DG.

  1) What is shared?

  We have an implementation of spectral element ordering
  (https://gitlab.com/petsc/petsc/-/blob/master/src/dm/impls/plex/examples/tutorials/ex6.c).
  Those elements share the whole element boundary.

  2) What ghosts do you need?

  3) You want to store real space coordinates for a quadrature?

  We usually define a quadrature on the reference element once.
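
For example, a minimal sketch of tabulating GLL nodes and weights once on the
reference interval (assuming the PetscDTGaussLobattoLegendreQuadrature()
interface used in that tutorial; N is a hypothetical point count):

  PetscReal     *nodes, *weights;
  PetscInt       N = 4;  /* number of GLL points = polynomial order + 1 */
  PetscErrorCode ierr;

  ierr = PetscMalloc2(N, &nodes, N, &weights);CHKERRQ(ierr);
  ierr = PetscDTGaussLobattoLegendreQuadrature(N, PETSCGAUSSLOBATTOLEGENDRE_VIA_LINEAR_ALGEBRA,
                                               nodes, weights);CHKERRQ(ierr);
  /* ... build the element tabulation from nodes/weights ... */
  ierr = PetscFree2(nodes, weights);CHKERRQ(ierr);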

  Thanks,

Matt


> I found partial answers in these threads:
> https://lists.mcs.anl.gov/pipermail/petsc-users/2016-August/029985.html
>
> https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2019-October/039581.html
>
> However, it's not clear to me where to begin.
>
> Quoting Matt, I should:
> "  DMGetCoordinateDM(dm, &cdm);
>
>   DMCreateLocalVector(cdm, &coordinatesLocal);
>   <compute the coordinate values>
>   DMSetCoordinatesLocal(dm, coordinatesLocal);"
>
> However, I will not create ghost nodes this way, and I'm not sure it
> keeps the correct ordering.
> This part should be implemented in the PetscFE interface, for high-order
> discrete solutions.
> I did not succeed in finding the part of the source that does this.
>
> Could you please give me some hints on how to begin these tasks correctly?
>
> Thanks,
>
> Yann
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Random initial states of EPSSolve

2020-03-17 Thread Jose E. Roman
You can set a different seed for the random number generator as follows (a
short code sketch is below the list):
- Use EPSGetBV() to extract the BV object
- Use BVGetRandomContext() to extract the PetscRandom object
- Use PetscRandomSetSeed() to set the new seed.
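
A minimal sketch of that sequence (assumes eps is an already-created EPS, with
ierr/CHKERRQ error handling as usual; seeding from the wall clock, which needs
<time.h>, is just one way to vary the seed between runs):

  BV          bv;
  PetscRandom rand;

  ierr = EPSGetBV(eps, &bv);CHKERRQ(ierr);
  ierr = BVGetRandomContext(bv, &rand);CHKERRQ(ierr);
  ierr = PetscRandomSetSeed(rand, (unsigned long)time(NULL));CHKERRQ(ierr);
  ierr = PetscRandomSeed(rand);CHKERRQ(ierr);  /* apply the new seed */
  ierr = EPSSolve(eps);CHKERRQ(ierr);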

Jose


> On 17 Mar 2020, at 13:23, Yang Bo (Asst Prof) wrote:
> 
> Hi everyone,
> 
> I am diagonalising a large symmetric real matrix for its null space (highly
> degenerate eigenstates with zero eigenvalues). I am using krylovschur, which
> is variational and is supposed to start with a set of random initial
> vectors. They will eventually converge to random vectors in the null space.
>
> The problem is that if I run EPSSolve with the same set of parameters
> (e.g. -eps_ncv and -eps_mpd), I always get the same eigenstates in the null
> space. This implies that the solver always starts with the same set of
> initial “random” vectors.
> 
> How does PETSc generate the initial random vectors for krylovschur? Is there
> a way for me to generate different random initial vectors every time I run 
> the diagonalisation (of the same matrix)?
> 
> Thanks, and stay safe and healthy!
> 
> Cheers,
> 
> Yang Bo
> 



[petsc-users] Random initial states of EPSSolve

2020-03-17 Thread Yang Bo (Asst Prof)
Hi everyone,

I am diagonalising a large symmetric real matrix for its null space (highly
degenerate eigenstates with zero eigenvalues). I am using krylovschur, which is
variational and is supposed to start with a set of random initial vectors.
They will eventually converge to random vectors in the null space.

The problem is that if I run EPSSolve with the same set of parameters (e.g.
-eps_ncv and -eps_mpd), I always get the same eigenstates in the null space.
This implies that the solver always starts with the same set of initial
“random” vectors.

How does PETSc generate the initial random vectors for krylovschur? Is there a
way for me to generate different random initial vectors every time I run the
diagonalisation (of the same matrix)?

Thanks, and stay safe and healthy!

Cheers,

Yang Bo

