[petsc-users] Galerkin multigrid coarsening

2014-01-30 Thread Boris Kaus
Hi,


While doing multigrid it is possible to explicitly set restriction and 
prolongation/interpolation operators in PETSc using PCMGSetRestriction and 
PCMGSetInterpolation.  In many cases, the restriction (R) and prolongation (P) 
operators are simply the transpose of each other (R=P’). 

Yet, in certain cases, for example variable viscosity Stokes with a staggered 
finite difference discretization, it is advantageous if they are different (see 
http://arxiv.org/pdf/1308.4605.pdf).

According to the Wesseling textbook, Galerkin coarsening of the fine grid 
matrix is defined as:

Acoarse = R*A*P

Yet, as far as I can tell, PETSc seems to implement this as:

Acoarse = R*A*R’

even in the case that R is different from P’. 
Is there a way to make PETSc use the prolongation and restriction operators that 
are actually provided when computing the coarse-grid approximation?
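
For reference, a minimal C sketch of how distinct restriction and interpolation
operators can be attached to a two-level PCMG hierarchy (a sketch only: Pmat and
Rmat are assumed to be assembled elsewhere, error handling is trimmed, and the
PetscBool form of PCMGSetGalerkin shown here is the v3.4-era signature):

    #include <petscksp.h>

    /* Sketch only: attach a user-assembled interpolation (Pmat) and a separate
       restriction (Rmat) to level 1 of a two-level PCMG preconditioner.
       Pmat and Rmat are assumed to be created and assembled elsewhere. */
    PetscErrorCode SetupTwoLevelMG(KSP ksp, Mat Pmat, Mat Rmat)
    {
      PC             pc;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);
      ierr = PCMGSetLevels(pc, 2, NULL);CHKERRQ(ierr);        /* level 0 = coarse, level 1 = fine */
      ierr = PCMGSetInterpolation(pc, 1, Pmat);CHKERRQ(ierr); /* prolongation P */
      ierr = PCMGSetRestriction(pc, 1, Rmat);CHKERRQ(ierr);   /* restriction R, not necessarily P^T */
      ierr = PCMGSetGalerkin(pc, PETSC_TRUE);CHKERRQ(ierr);   /* have PETSc form the coarse operator */
      PetscFunctionReturn(0);
    }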

thanks!

Boris





___

Boris J.P. Kaus

Institute of Geosciences, 
Geocycles Research Center 
Center for Computational Sciences.
University of Mainz, Mainz, Germany
Office: 00-285
Tel:+49.6131.392.4527

http://www.geo-dynamics.eu
___




Re: [petsc-users] Galerkin multigrid coarsening

2014-01-30 Thread Jed Brown
Boris Kaus k...@uni-mainz.de writes:

 While doing multigrid it is possible to explicitly set restriction and
 prolongation/interpolation operators in PETSc using PCMGSetRestriction
 and PCMGSetInterpolation.  In many cases, the restriction (R) and
 prolongation (P) operators are simply the transpose of each other
 (R=P’).

 Yet, in certain cases, for example variable viscosity Stokes with a
 staggered finite difference discretization, it is advantageous if they
 are different (see http://arxiv.org/pdf/1308.4605.pdf).

 According to the Wesseling textbook, Galerkin coarsening of the fine
 grid matrix is defined as:

 Acoarse = R*A*P

 Yet, as far as I can tell, PETSc seems to implement this as:

 Acoarse = R*A*R’

Certainly, the code just had an outdated assumption.  I think I have
added support for this (it does R*A*P when R!=P), but I don't have a
test case readily available.  Could you try branch
'jed/pcmg-galerkin-rap' (if you were using v3.4/maint) or 'next' if
you've been tracking petsc-dev?
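
For anyone who wants to check the coarse operator independently of PCMG, a
hedged sketch of forming the triple product explicitly (A, R, and P are assumed
to be assembled elsewhere; this is not necessarily the code path PCMG uses
internally):

    #include <petscmat.h>

    /* Sketch: form Acoarse = R*A*P explicitly, e.g. to compare against the
       coarse operator PCMG builds.  A, R, and P are assembled elsewhere.
       When R = P^T, MatPtAP(A, P, ...) would be the usual choice instead. */
    PetscErrorCode FormGalerkinRAP(Mat R, Mat A, Mat P, Mat *Acoarse)
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = MatMatMatMult(R, A, P, MAT_INITIAL_MATRIX, PETSC_DEFAULT, Acoarse);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }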




Re: [petsc-users] Configuration of Hybrid MPI-OpenMP

2014-01-30 Thread Danyang Su
I made a second check on the initialization of PETSc and found that the 
initialization does not take effect. The code is as follows.


call PetscInitialize(Petsc_Null_Character,ierrcode)
call MPI_Comm_rank(Petsc_Comm_World,rank,ierrcode)
call MPI_Comm_size(Petsc_Comm_World,nprcs,ierrcode)

The values of rank and nprcs are always 0 and 1, respectively, no matter 
how many processors are used to run the program.


Danyang

On 29/01/2014 6:08 PM, Jed Brown wrote:

Danyang Su danyang...@gmail.com writes:


Hi Karli,

--with-threadcomm --with-openmp can work when configuring PETSc with
MPI-OpenMP. Sorry for making a mistake before.
The program can be compiled, but I got a new error while running my program.

Error: Attempting to use an MPI routine before initializing MPICH

This error occurs when calling MPI_SCATTERV. I have already called
PetscInitialize, and MPI_BCAST, which is called just before MPI_SCATTERV,
also works without throwing an error.

When PETSc is configured without openmp, there is no error in this section.

Are you calling this inside an omp parallel block?  Are you initializing
MPI with MPI_THREAD_MULTIPLE?  Do you have other threads doing something
with MPI?

I'm afraid we'll need a reproducible test case if it still doesn't work
for you.




Re: [petsc-users] Configuration of Hybrid MPI-OpenMP

2014-01-30 Thread Jed Brown
Danyang Su danyang...@gmail.com writes:

 I made a second check on the initialization of PETSc and found that the 
 initialization does not take effect. The code is as follows.

  call PetscInitialize(Petsc_Null_Character,ierrcode)
  call MPI_Comm_rank(Petsc_Comm_World,rank,ierrcode)
  call MPI_Comm_size(Petsc_Comm_World,nprcs,ierrcode)

 The values of rank and nprcs are always 0 and 1, respectively, no matter 
 how many processors are used to run the program.

The most common reason for this is that you have more than one MPI
implementation on your system and they are getting mixed up.
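
A minimal C sketch of the same check; when launched with the mpiexec that
matches the MPI library PETSc was linked against, each process should report a
distinct rank:

    #include <stdio.h>
    #include <petscsys.h>

    /* Sketch: print the rank and size seen through PETSC_COMM_WORLD.  If every
       process reports "rank 0 of 1" under mpiexec -n 4, the launcher and the
       MPI library PETSc was linked against are likely from different
       implementations. */
    int main(int argc, char **argv)
    {
      PetscErrorCode ierr;
      PetscMPIInt    rank, size;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
      ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);
      printf("rank %d of %d\n", (int)rank, (int)size);
      ierr = PetscFinalize();
      return ierr;
    }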




Re: [petsc-users] Galerkin multigrid coarsening

2014-01-30 Thread Kaus, Boris
Thanks Jed,

I'll give it a try and let you know. Might take a bit though.


Boris 

 On Jan 30, 2014, at 17:45, Jed Brown j...@jedbrown.org wrote:
 
 Boris Kaus k...@uni-mainz.de writes:
 
 While doing multigrid it is possible to explicitly set restriction and
 prolongation/interpolation operators in PETSc using PCMGSetRestriction
 and PCMGSetInterpolation.  In many cases, the restriction (R) and
 prolongation (P) operators are simply the transpose of each other
 (R=P’).
 
 Yet, in certain cases, for example variable viscosity Stokes with a
 staggered finite difference discretization, it is advantageous if they
 are different (see http://arxiv.org/pdf/1308.4605.pdf).
 
 According to the Wesseling textbook, Galerkin coarsening of the fine
 grid matrix is defined as:
 
 Acoarse = R*A*P
 
 Yet, as far as I can tell, PETSc seems to implement this as:
 
 Acoarse = R*A*R’
 
 Certainly, the code just had an outdated assumption.  I think I have
 added support for this (it does R*A*P when R!=P), but I don't have a
 test case readily available.  Could you try branch
 'jed/pcmg-galerkin-rap' (if you were using v3.4/maint) or 'next' if
 you've been tracking petsc-dev?


[petsc-users] Input arguments of DMPlexCreateFromDAG

2014-01-30 Thread Cedric Doucet
Hello, 
I tried to use the DMPlexCreateFromDAG function to create a DM structure from a 
hybrid mesh. 
To understand how it works, I looked at the ex5.c file. 
Unfortunately, there are some things that I do not understand. 
1. Why does coneSize list the faces of cells first, then the faces of vertices, and 
finally the faces of edges (in 2D)? Wouldn't it be simpler to list the faces of 
vertices, then the faces of edges, and finally the faces of cells? 
2. What does coneOrientations contain? For two counterclockwise-oriented 
triangles sharing an edge e={v0,v1}, I understand that {v1,v0} is the correctly 
oriented edge for the second triangle (and e for the first one), but what is the 
meaning of the value -2? I read that it is -(o+1) with o=1, but why does o equal 1 
in this case? 
Thank you very much for your help! 
Best regards, 
Cédric Doucet 



Re: [petsc-users] Input arguments of DMPlexCreateFromDAG

2014-01-30 Thread Matthew Knepley
On Thu, Jan 30, 2014 at 12:19 PM, Cedric Doucet cedric.dou...@inria.fr wrote:

 Hello,
 I tried to use the DMPlexCreateFromDAG function to create a DM structure from
 a hybrid mesh.
 To understand how it works, I looked at the ex5.c file.
 Unfortunately, there are some things that I do not understand.
 1. Why does coneSize list the faces of cells first, then the faces of vertices,
 and finally the faces of edges (in 2D)? Wouldn't it be simpler to list the faces
 of vertices, then the faces of edges, and finally the faces of cells?


This is really about the order in which you number points. I wanted to support
meshes with just cells and vertices, as well as those with faces and edges. I
also wanted to be able to convert between them. Thus it made sense to leave the
cell and vertex numbers invariant under this change. I still think this is the
best pragmatic alternative.


 2. What does coneOrientations contain? For two counterclockwise-oriented
 triangles sharing an edge e={v0,v1}, I understand that {v1,v0} is the correctly
 oriented edge for the second triangle (and e for the first one), but what
 is the meaning of the value -2? I read that it is -(o+1) with o=1, but why does
 o equal 1 in this case?


Right now, an orientation o encodes a sign and a start point:

  sign:  + means traverse in cone order
         - means traverse in the reverse of cone order

  start: the cone point on which to start the iteration;
         if the sign is -, the stored value o corresponds to start point -(o+1)

Thus -2 means start on point 1 and go in reverse order, so
for an edge that would be {1, 0}, which is what you want.
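
A small sketch of that decoding convention (a hypothetical helper for
illustration, not a PETSc API):

    #include <petscsys.h>

    /* Sketch (not a PETSc API): decode a DMPlex cone orientation value o into a
       start point and a traversal direction, following the convention above:
       o >= 0 starts at cone point o in cone order; o < 0 starts at point -(o+1)
       and traverses in reverse.  For example, o = -2 gives start = 1, reversed,
       so an edge {v0, v1} is traversed as {v1, v0}. */
    static void DecodeConeOrientation(PetscInt o, PetscInt *start, PetscBool *reverse)
    {
      if (o >= 0) { *start = o;        *reverse = PETSC_FALSE; }
      else        { *start = -(o + 1); *reverse = PETSC_TRUE;  }
    }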

What we should really have is for start to identify a group element from the
symmetry group of the point, and for sign to indicate inversion. However, that
would be a big rewrite and needs to be motivated by applications.

   Matt

Thank you very much for your help!
 Best regards,
 Cédric Doucet




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Configuration of Hybrid MPI-OpenMP

2014-01-30 Thread Danyang Su

On 30/01/2014 9:30 AM, Jed Brown wrote:

Danyang Su danyang...@gmail.com writes:


I made a second check on the initialization of PETSc and found that the
initialization does not take effect. The code is as follows.

  call PetscInitialize(Petsc_Null_Character,ierrcode)
  call MPI_Comm_rank(Petsc_Comm_World,rank,ierrcode)
  call MPI_Comm_size(Petsc_Comm_World,nprcs,ierrcode)

The values of rank and nprcs are always 0 and 1, respectively, no matter
how many processors are used to run the program.

The most common reason for this is that you have more than one MPI
implementation on your system and they are getting mixed up.
Yes, I have MPICH2 and Microsoft HPC on the same OS. PETSc was built 
using MPICH2. I will uninstall Microsoft HPC to see if it works.


Thanks,

Danyang


Re: [petsc-users] Galerkin multigrid coarsening

2014-01-30 Thread Torquil Macdonald Sørensen
On 30/01/14 17:45, Jed Brown wrote:
 Boris Kaus k...@uni-mainz.de writes:

 While doing multigrid it is possible to explicitly set restriction and
 prolongation/interpolation operators in PETSc using PCMGSetRestriction
 and PCMGSetInterpolation.  In many cases, the restriction (R) and
 prolongation (P) operators are simply the transpose of each other
 (R=P’).

 Yet, in certain cases, for example variable viscosity Stokes with a
 staggered finite difference discretization, it is advantageous if they
 are different (see http://arxiv.org/pdf/1308.4605.pdf).

 According to the Wesseling textbook, Galerkin coarsening of the fine
 grid matrix is defined as:

 Acoarse = R*A*P

 Yet, as far as I can tell, PETSc seems to implement this as:

 Acoarse = R*A*R’
 Certainly, the code just had an outdated assumption.  I think I have
 added support for this (it does R*A*P when R!=P), but I don't have a
 test case readily available.  Could you try branch
 'jed/pcmg-galerkin-rap' (if you were using v3.4/maint) or 'next' if
 you've been tracking petsc-dev?

This question was similar to my recent unanswered one:

http://lists.mcs.anl.gov/pipermail/petsc-users/2014-January/020382.html

I just tried jed/pcmg-galerkin-rap, and it seems to work fine. I verified
that the coarse system matrix is R*A*P for a two-level PCMG. I'm first
calling PCMGSetInterpolation() with a matrix P, and then
PCMGSetRestriction() with a matrix R. Previously R was ignored, and I got

A_c = P^T*A*P

With jed/pcmg-galerkin-rap, I'm getting:

A_c = R*A*P

So now I can experiment with using R or P^T, which is very nice!

I did not have an opportunity to test what happens if I don't set any
restriction, but I'm hoping it will then use P^T, as before.

Best regards
Torquil Sørensen



Re: [petsc-users] Galerkin multigrid coarsening

2014-01-30 Thread Jed Brown
Torquil Macdonald Sørensen torq...@gmail.com writes:
 This question was similar to my recent unanswered one:

 http://lists.mcs.anl.gov/pipermail/petsc-users/2014-January/020382.html

That's why I Cc'd you.

 I just tried jed/pcmg-galerkin-rap, and it seems to work fine. I verified
 that the coarse system matrix is R*A*P for a two-level PCMG. I'm first
 calling PCMGSetInterpolation() with a matrix P, and then
 PCMGSetRestriction() with a matrix R. Previously R was ignored, and I got

 A_c = P^T*A*P

 With jed/pcmg-galerkin-rap, I'm getting:

 A_c = R*A*P

 So now I can experiment with using R or P^T, which is very nice!

Thanks for testing.

 I did not have an opportunity to test what happens if I don't set any
 restriction, but I'm hoping it will then use P^T, as before.

Yes, it will.




Re: [petsc-users] Galerkin multigrid coarsening

2014-01-30 Thread Torquil Macdonald Sørensen
On 30/01/14 22:05, Jed Brown wrote:
 Torquil Macdonald Sørensen torq...@gmail.com writes:
 This question was similar to my recent unanswered one:

 http://lists.mcs.anl.gov/pipermail/petsc-users/2014-January/020382.html
 That's why I Cc'd you.


Thanks, I didn't notice that. I am actually very impressed with the
helpfulness of the PETSc developers on this mailing list.

- Torquil




[petsc-users] saving log info

2014-01-30 Thread Mohammad Mirzadeh
Hi guys,

I'd like to be able to post-process the values reported by -log_summary. Is
there any programmatic way to extract the data and save it manually to a
file?

If not, is there any parser that goes over the stdout dump and extracts the info?

Mohammad


Re: [petsc-users] Prometheus vs GAMG for elasticity/plasticity problems

2014-01-30 Thread Mark Adams
Try running Prometheus with -out_verbose 2
and GAMG with -pc_gamg_verbose 2
and send me the output.

Mark


On Thu, Jan 30, 2014 at 4:40 AM, Thomas Gross tgr...@ilsb.tuwien.ac.at wrote:

 Please find enclosed the output for GAMG using  -mg_levels_ksp_max_it 1:

 Prometheus:
 -ksp_type cg -pc_type prometheus -log_summary -ksp_monitor -ksp_view
 -options_left > Prometheus_Large_Log_New.txt

 Gamg:
 -ksp_type cg -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1
 -log_summary -ksp_monitor -ksp_view -options_left -mg_levels_ksp_type
 richardson -mg_levels_pc_type sor -mg_levels_ksp_max_it 1
  > GAMG_Rich_max_it.txt

 Best regards,
 Thomas




 On Jan 30, 2014, at 2:14 AM, Jed Brown j...@jedbrown.org wrote:

  Thomas Gross tgr...@ilsb.tuwien.ac.at writes:
 
  I have now compiled PETSc 3.2.0 with Prometheus and PETSc 3.4.3 with
 GAMG using the same compiler. Furthermore I am using the same MPI
 implementation for both runs. Still Prometheus is considerably faster (see
 attached log files).
 
  Prometheus:
  -ksp_type cg -pc_type prometheus -log_summary -ksp_monitor -ksp_view
 -options_left > Prometheus_Large_Log_New.txt
 
  Gamg:
  -ksp_type cg -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1
 -log_summary -ksp_monitor -ksp_view -options_left -mg_levels_ksp_max_it 1
  > GAMG_Large_max_it_log_New.txt
  -ksp_type cg -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1
 -log_summary -ksp_monitor -ksp_view -options_left -mg_levels_ksp_type
 richardson -mg_levels_pc_type sor > GAMG_Large_Rich_Log_New.txt
 
  Can you add -mg_levels_ksp_max_it 1 to this last configuration?  I see a
  lot of time in MatSOR.  We can actually speed that up slightly with some
  extra caching, which may be worthwhile.
 
  We're seeing a bit lower performance in MatMult with GAMG, perhaps
  because we are not using block formats specialized for elasticity.
 
  Mark, what else is different?  What does Prometheus do differently in
  setup (not the bottleneck here, but I'm curious).





Re: [petsc-users] Prometheus vs GAMG for elasticity/plasticity problems

2014-01-30 Thread Mark Adams
And is this just a simple cube problem in elasticity as it looks like from
your input file name?


On Thu, Jan 30, 2014 at 4:40 AM, Thomas Gross tgr...@ilsb.tuwien.ac.at wrote:

 Please find enclosed the output for GAMG using  -mg_levels_ksp_max_it 1:

 Prometheus:
 -ksp_type cg -pc_type prometheus -log_summary -ksp_monitor -ksp_view
 -options_left > Prometheus_Large_Log_New.txt

 Gamg:
 -ksp_type cg -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1
 -log_summary -ksp_monitor -ksp_view -options_left -mg_levels_ksp_type
 richardson -mg_levels_pc_type sor -mg_levels_ksp_max_it 1
  > GAMG_Rich_max_it.txt

 Best regards,
 Thomas




 On Jan 30, 2014, at 2:14 AM, Jed Brown j...@jedbrown.org wrote:

  Thomas Gross tgr...@ilsb.tuwien.ac.at writes:
 
  I have now compiled PETSc 3.2.0 with Prometheus and PETSc 3.4.3 with
 GAMG using the same compiler. Furthermore I am using the same MPI
 implementation for both runs. Still Prometheus is considerably faster (see
 attached log files).
 
  Prometheus:
  -ksp_type cg -pc_type prometheus -log_summary -ksp_monitor -ksp_view
 -options_left > Prometheus_Large_Log_New.txt
 
  Gamg:
  -ksp_type cg -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1
 -log_summary -ksp_monitor -ksp_view -options_left -mg_levels_ksp_max_it 1
  > GAMG_Large_max_it_log_New.txt
  -ksp_type cg -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1
 -log_summary -ksp_monitor -ksp_view -options_left -mg_levels_ksp_type
 richardson -mg_levels_pc_type sor > GAMG_Large_Rich_Log_New.txt
 
  Can you add -mg_levels_ksp_max_it 1 to this last configuration?  I see a
  lot of time in MatSOR.  We can actually speed that up slightly with some
  extra caching, which may be worthwhile.
 
  We're seeing a bit lower performance in MatMult with GAMG, perhaps
  because we are not using block formats specialized for elasticity.
 
  Mark, what else is different?  What does Prometheus do differently in
  setup (not the bottleneck here, but I'm curious).





Re: [petsc-users] saving log info

2014-01-30 Thread Jed Brown
Mohammad Mirzadeh mirza...@gmail.com writes:

 Hi guys,

 I'd like to be able to post-process the values reported by -log_summary. Is
 there any programmatic way to extract the data and save it manually to a
 file?

  If not, is there any parser that goes over the stdout dump and extracts the info?

I made a somewhat specialized parser and plotter here

  https://github.com/jedbrown/petscplot

Unfortunately, I never turned it into a library or easily-reusable tool.
These plots were for convergence performance with grid sequencing.

https://github.com/jedbrown/petscplot/wiki/PETSc-Plot


The parser is robust and easy to modify for whatever grammar you have
(depending on what you're logging).




Re: [petsc-users] saving log info

2014-01-30 Thread Mohammad Mirzadeh
Thanks Jed. This looks interesting.


On Thu, Jan 30, 2014 at 2:45 PM, Jed Brown j...@jedbrown.org wrote:

 Mohammad Mirzadeh mirza...@gmail.com writes:

  Hi guys,
 
  I'd like to be able to post-process the values reported by -log_summary.
 Is
  there any programmatic way to extract the data and save it manually to a
  file?
 
   If not, is there any parser that goes over the stdout dump and extracts the
  info?

 I made a somewhat specialized parser and plotter here

   https://github.com/jedbrown/petscplot

 Unfortunately, I never turned it into a library or easily-reusable tool.
 These plots were for convergence performance with grid sequencing.

 https://github.com/jedbrown/petscplot/wiki/PETSc-Plot


 The parser is robust and easy to modify for whatever grammar you have
 (depending on what you're logging).



Re: [petsc-users] Prometheus vs GAMG for elasticity/plasticity problems

2014-01-30 Thread Mark Adams



 We're seeing a bit lower performance in MatMult with GAMG, perhaps
 because we are not using block formats specialized for elasticity.


The block info seems to be there.


 Mark, what else is different?


* Prometheus seems to be coarsening slower and taking nearly twice the
iterations.
* GAMG is running slower, and so are its kernels.  This would indicate that
the partitions are different (not likely), or that the coarse grids are larger,
or something else.



 What does Prometheus do differently in
 setup (not the bottleneck here, but I'm curious).


The setup is quite different.  My RAP just does brute-force four nested
loops with a lot of unrolling.
My graph setup code is more highly optimized in Prometheus in some ways.
I would think GAMG is doing more work on coarse grids, but it seems to
have fewer of them.  The verbose output should shed some light on this.

These two runs do about the same number of flops; Prometheus runs at a
slightly higher flop rate and solves a little faster. The GAMG setup is
a lot slower, and that has a small effect on the total solve time.

SOR with two iterations is twice as expensive and twice as powerful, roughly.

So GAMG is just running slower.

Prometheus does repartition coarse grids. Perhaps this is a poor network
and/or the initial partitioning is poor, so the coarse-grid repartitioning
is a big help.


Re: [petsc-users] Prometheus vs GAMG for elasticity/plasticity problems

2014-01-30 Thread Jed Brown
Mark Adams mfad...@lbl.gov writes:

 We're seeing a bit lower performance in MatMult with GAMG, perhaps
 because we are not using block formats specialized for elasticity.


 The block info seems to be there.

I meant that GAMG isn't using BAIJ, but Prometheus is.  That affects
kernel efficiency.
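
As a hedged illustration of supplying that block structure on the user side
(the block size and preallocation counts are placeholders, and whether a given
GAMG version exploits BAIJ directly is a separate question):

    #include <petscmat.h>

    /* Sketch: create a BAIJ matrix with block size 3 (three displacement DOF per
       node) so that MatMult and the smoothers can use blocked kernels.  The
       preallocation counts (27 block nonzeros per block row) are placeholders.
       Alternatively, one can keep AIJ and call MatSetBlockSize(A, 3) so the
       block size is at least known to the preconditioner. */
    PetscErrorCode CreateElasticityMatrix(MPI_Comm comm, PetscInt nLocalNodes, Mat *A)
    {
      PetscErrorCode ierr;
      const PetscInt bs = 3;

      PetscFunctionBegin;
      ierr = MatCreateBAIJ(comm, bs, bs*nLocalNodes, bs*nLocalNodes,
                           PETSC_DETERMINE, PETSC_DETERMINE,
                           27, NULL, 27, NULL, A);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }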

 SOR with two iterations is twice as expensive and twice as powerful, roughly.

It should be less than twice as expensive because the zero initial guess
case is optimized.  If we also optimize triangular reuse in the
subsequent residual evaluation, it should be about the same cost.




Re: [petsc-users] saving log info

2014-01-30 Thread Barry Smith

  master now has a viewer that dumps the raw data as Python, which you can then 
process with Python scripts any way you want: -log_view 
ascii:filename:ascii_info_detail.  A sample Python script is in 
bin/pythonscripts/petsclogformat.py
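
A sketch of the programmatic equivalent (the file name is illustrative, and
logging must already be active, e.g. via -log_view):

    #include <petscviewer.h>
    #include <petsclog.h>

    /* Sketch: dump the raw log data to a file from within the program instead of
       using the -log_view command-line option.  The file name "petsc_log.py" is
       illustrative; logging must already have been started (e.g. by -log_view). */
    PetscErrorCode DumpLog(void)
    {
      PetscViewer    viewer;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = PetscViewerASCIIOpen(PETSC_COMM_WORLD, "petsc_log.py", &viewer);CHKERRQ(ierr);
      ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_INFO_DETAIL);CHKERRQ(ierr);
      ierr = PetscLogView(viewer);CHKERRQ(ierr);
      ierr = PetscViewerPopFormat(viewer);CHKERRQ(ierr);
      ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }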

  I personally think that parsing the output of -log_summary (which has already 
been formatted) is a silly, wrong-headed approach.

   Barry

On Jan 30, 2014, at 4:26 PM, Mohammad Mirzadeh mirza...@gmail.com wrote:

 Hi guys,
 
 I'd like to be able to post-process the values reported by -log_summary. Is 
 there any programmatic way to extract the data and save it manually to a file?
 
  If not, is there any parser that goes over the stdout dump and extracts the info?
 
 Mohammad



Re: [petsc-users] Prometheus vs GAMG for elasticity/plasticity problems

2014-01-30 Thread Mark Adams



 I meant that GAMG isn't using BAIJ, but Prometheus is.  That affects
 kernel efficiency.


Ah yes, that must be it.


[petsc-users] How can PETSc configure with MSMPI

2014-01-30 Thread Danyang Su

Hi All,

When I configure PETSc with MSMPI

./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' --with-cxx='win32fe cl'
  --download-f-blas-lapack --with-threadcomm --with-openmp
  --with-mpi-include=/cygdrive/c/Program Files/Microsoft HPC Pack 2008 R2/Inc
  --with-mpi-lib=/cygdrive/c/Program Files/Microsoft HPC Pack 2008 R2/Lib/amd64/msmpi.lib

I get the following error

--with-mpi-lib=['/cygdrive/c/Program', 'Files/Microsoft', 'HPC', 'Pack', 
'2008', 'R2/Lib/amd64/msmpi.lib'] and
--with-mpi-include=['/cygdrive/c/Program Files/Microsoft HPC Pack 2008 
R2/Inc'] did not work


Thanks,

Danyang


Re: [petsc-users] saving log info

2014-01-30 Thread Matthew Knepley
On Thu, Jan 30, 2014 at 5:56 PM, Barry Smith bsm...@mcs.anl.gov wrote:


   master now has a viewer that dumps the raw data as Python, which you can
 then process with Python scripts any way you want: -log_view
 ascii:filename:ascii_info_detail.  A sample Python script is in
 bin/pythonscripts/petsclogformat.py

   I personally think that parsing the output of -log_summary (which has
 already been formatted) is a silly, wrong-headed approach.


Cool, I will look at switching src/benchmarks/benchmarkExample.py to this.

   Matt



Barry

 On Jan 30, 2014, at 4:26 PM, Mohammad Mirzadeh mirza...@gmail.com wrote:

  Hi guys,
 
  I'd like to be able to post-process the values reported by -log_summary.
 Is there any programmatic way to extract the data and save it manually to a
 file?
 
   If not, is there any parser that goes over the stdout dump and extracts the
  info?
 
  Mohammad




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener