---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex7 on a arch-cygw named LIZZYB with 1 processor, by John Wed Jul 27 15:47:39 2011
Using Petsc Release Version 3.1.0, Patch 7, Mon Dec 20 14:26:37 CST 2010

                         Max        Max/Min    Avg        Total
Time (sec):              1.312e+03  1.00000    1.312e+03
Objects:                 2.003e+04  1.00000    2.003e+04
Flops:                   2.564e+11  1.00000    2.564e+11  2.564e+11
Flops/sec:               1.955e+08  1.00000    1.955e+08  1.955e+08
Memory:                  1.029e+09  1.00000               1.029e+09
MPI Messages:            0.000e+00  0.00000    0.000e+00  0.000e+00
MPI Message Lengths:     0.000e+00  0.00000    0.000e+00  0.000e+00
MPI Reductions:          2.404e+04  1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flops
                          and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----   --- Messages ---  -- Message Lengths --  -- Reductions --
                       Avg     %Total     Avg      %Total    counts  %Total     Avg        %Total    counts  %Total
 0:      Main Stage: 1.3119e+03 100.0%  2.5645e+11 100.0%  0.000e+00   0.0%  0.000e+00       0.0%  2.002e+04  83.3%
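
The summary above lumps everything into the Main Stage. A minimal sketch of putting the solve into its own stage, so that -log_summary reports it separately (assuming PETSc 3.1-style logging calls and an EPS object named eps; the stage name is illustrative):

    PetscLogStage solve_stage;
    ierr = PetscLogStageRegister("EPS solve", &solve_stage);CHKERRQ(ierr);
    ierr = PetscLogStagePush(solve_stage);CHKERRQ(ierr);   /* events below are charged to this stage */
    ierr = EPSSolve(eps);CHKERRQ(ierr);
    ierr = PetscLogStagePop();CHKERRQ(ierr);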
On Wed, Jul 27, 2011 at 2:21 PM, Jose E. Roman <jroman at dsic.upv.es> wrote:
>
> Don't use "time" to measure performance; instead use -log_summary or
> PetscGetTime for the interesting part of the computation. In this case,
> computing the residuals will take a lot of time.
> Jose
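
A minimal sketch of timing only the interesting part with PetscGetTime (assuming the PETSc 3.1.x API, where the routine is still named PetscGetTime rather than the later PetscTime, and an EPS object named eps):

    PetscLogDouble t0, t1;
    ierr = PetscGetTime(&t0);CHKERRQ(ierr);
    ierr = EPSSolve(eps);CHKERRQ(ierr);                 /* time just the solve, not the residual checks */
    ierr = PetscGetTime(&t1);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD, "EPSSolve time: %g s\n", (double)(t1 - t0));CHKERRQ(ierr);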
>
>
>
> On 27/07/2011, at 19:28, John Chludzinski wrote:
>
> > $ time ./ex7.exe -f1 k.dat -f2 m.dat -eps_gen_hermitian -eps_type lapack
> > -eps_smallest_real > x.out 2>&1
> >
> > real 19m4.487s
> > user 18m19.650s
> > sys 0m1.762s
> >
> > ---John
> >
> >
> > On Wed, Jul 27, 2011 at 7:03 AM, Jose E. Roman <jroman at dsic.upv.es>
> > wrote:
> > Try running with -eps_gen_hermitian (since ex7 does not assume that the
> > problem is symmetric).
> > Jose
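
For a driver of your own, the flag corresponds to setting the problem type in code. A minimal sketch, assuming a SLEPc 3.1-style setup with an EPS object named eps and operators A (stiffness) and B (mass):

    ierr = EPSSetOperators(eps, A, B);CHKERRQ(ierr);       /* generalized problem A x = lambda B x */
    ierr = EPSSetProblemType(eps, EPS_GHEP);CHKERRQ(ierr); /* what -eps_gen_hermitian selects */
    ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);           /* still honors run-time options */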
> >
> >
> >
> > On 27/07/2011, at 12:51, John Chludzinski wrote:
> >
> > > I let the SLEPc code run for ~45 min., at which point it terminated with
> > > the same values I was getting from DSYGV in LAPACK.
> > >
> > > If I write code that calls LAPACK (i.e., DSYGV) directly, it takes ~3.93 min.
> > > What's up with this?
> > >
> > > ---John
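
A minimal sketch of the kind of direct DSYGV call described above (the array names K and M, the dimension, and the workspace size are illustrative assumptions, not taken from the original code; both arrays must be column-major, and the eigenvalues come back in ascending order in w):

    /* Fortran-style LAPACK binding for the generalized symmetric-definite problem */
    extern void dsygv_(int *itype, char *jobz, char *uplo, int *n,
                       double *a, int *lda, double *b, int *ldb,
                       double *w, double *work, int *lwork, int *info);

    int itype = 1, n = 4002, lda = n, ldb = n, lwork = 3*n - 1, info;
    char jobz = 'V', uplo = 'U';
    double *w    = malloc((size_t)n * sizeof *w);
    double *work = malloc((size_t)lwork * sizeof *work);

    /* itype = 1 solves K x = lambda M x; with jobz = 'V', K is overwritten by the eigenvectors */
    dsygv_(&itype, &jobz, &uplo, &n, K, &lda, M, &ldb, w, work, &lwork, &info);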
> > >
> > >
> > > On Wed, Jul 27, 2011 at 5:29 AM, John Chludzinski <jchludzinski at
> > > gmail.com> wrote:
> > > I'm trying to create dense matrices from values I'm reading from
> > > (binary) files. I tried the following code:
> > >
> > > Mat A;
> > > int n = SIZE;  // SIZE == 4002
> > > double *K = (double *)calloc( SIZE*SIZE, sizeof(double) );  /* element count first, then size */
> > > ...
> > > ierr = MatCreateSeqDense(PETSC_COMM_SELF, n, n, K, &A);CHKERRQ(ierr);  /* A wraps the array K */
> > > ierr = MatView(A, PETSC_VIEWER_BINARY_(PETSC_COMM_WORLD));CHKERRQ(ierr);  /* default binary file */
> > > ierr = PetscFinalize();CHKERRQ(ierr);
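
One way to write each matrix to a named file that ex7 can then read via -f1/-f2 is an explicit binary viewer. A minimal sketch, assuming PETSc 3.1.x calling conventions (later releases take the viewer's address in PetscViewerDestroy):

    PetscViewer viewer;
    ierr = PetscViewerBinaryOpen(PETSC_COMM_SELF, "k.dat", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
    ierr = MatView(A, viewer);CHKERRQ(ierr);          /* writes A in PETSc binary format */
    ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr);  /* PetscViewerDestroy(&viewer) in newer PETSc */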
> > >
> > > NOTE: I'm converting K from C row-major order to FORTRAN column-major
> > > order before I call MatCreateSeqDense(...).
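
A minimal sketch of that conversion step (the helper name and the separate output buffer are illustrative; for a symmetric K the reordering changes nothing, since a symmetric matrix equals its transpose):

    /* copy an n x n row-major array into a column-major buffer */
    static void row_to_col_major(const double *rowmaj, double *colmaj, int n)
    {
        int i, j;
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                colmaj[j*n + i] = rowmaj[i*n + j];
    }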
> > >
> > > This appears to work, but when I use the two matrices thus created with
> > > SLEPc ex7 (generalized eigenvalue problem), it never terminates. I run it
> > > with:
> > >
> > > ./ex7.exe -f1 k.dat -f2 m.dat -eps_type lapack -eps_smallest_real
> > >
> > > Am I creating the proper PETSc binary (canonical) format for my 2
> > > matrices?
> > >
> > > ---John
> > >
> > >
> > >
> >
> >
>