   The main problem with your solution is that, at the moment, pg for
   Coq needs to see the information that was present in the prompt
   before a command was issued in order to know how to back up to just
   before this command. Therefore, if we want to be able to backtrack
   inside the chunks, we need to read all prompts and link each of them
   to the corresponding command. If we hide the prompts, then
   backtracking inside chunks will be impossible.

Well, there is no backtracking to a position inside a proof once
you have finished the proof. So if a chunk is a whole proof, you
lose nothing. Otherwise, I would say, if you switch on the
speedup option for processing in chunks, then you lose the
ability to backtrack inside the chunks. If you want to go there,
you have to backtrack the whole chunk and then start asserting
again in smaller steps, similar to what you have to do now when
you want to go into the middle of a proof after finishing it.

   Dealing with errors, as David said, will be a mess.

Indeed. My plan would be the following: before processing a chunk,
I remember the chunk's start position somewhere. If there is an
error inside the chunk, I backtrack to the beginning of the chunk.
I am pretty sure that this can work out without creating a mess.
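
In Emacs Lisp terms, the idea is roughly the following (just an
illustrative sketch; my-send-chunk-region and my-retract-to are
made-up names, and real code would have to go through Proof
General's span machinery):

   (defvar my-chunk-start nil
     "Marker remembering where the current chunk started.")

   (defun my-process-chunk (beg end)
     "Send BEG..END as one chunk; on an error, back up to BEG."
     (setq my-chunk-start (copy-marker beg))
     (condition-case nil
         (my-send-chunk-region beg end)     ; made-up sender
       (error
        ;; Error somewhere inside the chunk: retract the whole
        ;; chunk and let the user re-assert in smaller steps.
        (goto-char my-chunk-start)
        (my-retract-to my-chunk-start))))   ; made-up retract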

   On the problem of speeding up pg for Coq: Hendrik, please consider
   waiting for the next big change in the Coq kernel, asynchronism, or
   at least lazy proof evaluation. I have seen a prototype of that (by
   Enrico Tassi) and it will surely make pg and any other interface
   much faster from the user's perspective.
   
I believe that these changes will speed up the compilation of
whole files, i.e., coqc. But I doubt they will speed up the
interactive processing of my 2000-line files. If Proof General is
too slow at feeding coqtop for the latter to saturate even one of
my 4 cores, how could I gain from more parallelism? On the
contrary, processing bigger chunks, especially whole proofs, would
certainly increase the chance of profiting from these improvements.

   These results are indeed surprising. As I don't see why sending
   commands separately would slow down Coq, I guess pg is slow at
   interpreting prompts (see below). Maybe I should have a look at my
   code for this.
   
It might be that I see this big speedup because my tactics are
too low level.

For the measurement, I disabled the parser cache and inserted the
following hack into coq-script-parse-cmdend-forward at the end,
where it returns 'cmd:

          ;; Hack: if the current line starts a proof ("Proof"), jump
          ;; forward to the terminating "Qed." and to the end of that
          ;; line, so the whole proof is treated as a single command
          ;; (limit is the search bound already used by the caller).
          (progn
            (if (save-excursion
                  (forward-line 0)
                  (looking-at " *Proof"))
                (progn
                  (re-search-forward "Qed\\." limit t)
                  (end-of-line)))
            'cmd)

Further, I inserted (message "QUEUE EMPTY at %s" (float-time))
inside proof-shell-filter-manage-output, in the if branch where
proof-shell-exec-loop returns t. I then processed the buffer with
M-x eval-expression and
   (progn (message "START %s" (float-time))
          (proof-process-buffer))
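
For reference, roughly the same end timestamps could be collected
without editing the Proof General sources, via an advice (a sketch,
assuming a recent Emacs with advice-add; unlike the hand edit above
it fires after every call to the filter, not only when the queue
becomes empty, so only the last "QUEUE EMPTY" message matters):

   (advice-add 'proof-shell-filter-manage-output :after
               (lambda (&rest _)
                 (message "QUEUE EMPTY at %s" (float-time))))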

The above measurement hack works only if
- you first delete all Qed's inside comments
- make sure each Qed / Defined is the last command on its line
- add ``(* Qed. *)'' after each Defined (see the sketch below)
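
For the last item, a throwaway command like the following would do
(my-add-fake-qed is just an illustrative name; it blindly appends
after every "Defined." and does not try to skip comments):

   (defun my-add-fake-qed ()
     "Append a dummy (* Qed. *) after every Defined. in the buffer."
     (interactive)
     (save-excursion
       (goto-char (point-min))
       (while (re-search-forward "Defined\\." nil t)
         (insert " (* Qed. *)"))))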

If you send me a file that you are interested in, I can easily
do the measurements.


Bye,

Hendrik