On Tue, 7 Dec 2010, Thomas Sewell wrote:
> I was recently reminded of one of the things on my wish-list for
> Isabelle: a "make -k" mode in which the system continues for as long as
> possible and reports as many errors as possible, rather than halting
> after the first one. I think this would be generally useful to Isabelle
> users, but it would be especially useful when fiddling with tactics that
> are already in use. The usual approach is to try several variations
> aiming to minimise incompatibility, but there's no easy way to get an
> idea of the level of incompatibility without repairing the first few
> errors.
>
> To clarify, I'd like to be able to type something like "isabelle
> testmake (arguments)", and the system would run through the same sources
> as on an "isabelle make", only errors in proof scripts would be
> suppressed. Once everything is loaded and any image to be saved is
> saved, the system would print all the errors encountered at the end of
> the log.
>
> It occurred to me how to implement this feature using the existing
> parallel apparatus. What's needed is a way to defer an exception raised
> in computing a future object forever, leaving the exception in the
> future and allowing all other independent work to conclude. Since
> parallel proofs are executed in future calculations that never need to
> be joined (unless someone is inspecting proof terms), this could provide
> a solution. I had a go at doing this myself but got nowhere. Does anyone
> familiar with the concurrency code think this is possible?
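[The deferred-exception idea above can be sketched outside Isabelle. The following is a minimal illustration in Python, not Isabelle/ML: `concurrent.futures` already has the property being asked for, in that an exception raised inside a task is stored in its `Future` rather than aborting the other tasks, so all independent work finishes and the errors can be collected at the end. The `check_proof` function and the example data are hypothetical stand-ins for running proof scripts.]

```python
# Sketch of "defer the exception, leave it in the future": a failing
# task does not stop its siblings; its exception stays stored in the
# Future until someone joins it, and is reported only at the end.
from concurrent.futures import ThreadPoolExecutor

def check_proof(name, ok):
    # Hypothetical stand-in for checking one proof script.
    if not ok:
        raise RuntimeError(f"proof failed: {name}")
    return name

scripts = [("thm_a", True), ("thm_b", False), ("thm_c", True)]

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {name: pool.submit(check_proof, name, ok)
               for name, ok in scripts}

# Everything has run to completion; collect all errors in one pass,
# as a "testmake" would at the end of the log.
errors = [str(f.exception())
          for f in futures.values() if f.exception() is not None]
print(errors)  # ['proof failed: thm_b']
```

Note that `Future.exception()` returns `None` for successful tasks, so the successful results remain available via `result()` even though one sibling failed.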
The strictly sequential treatment of errors is indeed an artifact of the
obsolete TTY model. I am still working hard on its replacement in
Isabelle/Scala and Isabelle/jEdit. Just some weeks ago, for the first
Isabelle course with the new interface, I also made some small
improvements concerning recovery after errors, though not as thoroughly
as would be required here.

The whole theory loading process is approaching a substantial reform.
During the summer I managed to remove lots of historical baggage. The
next step is to unify its parallel scheduler with the new interactive
document model.
> While I'm putting things on the concurrency wish-list, I have one more:
> a limit on the length of the work queue. The reason I request this is
> that some of our large sessions (1-2 hrs) seem to perform poorly with
> parallelism, and I suspect part of the problem is that the
> problem-parsing thread is getting so far ahead of the problem-solving
> threads that the majority of the contexts and problems from the session
> are living in memory waiting to be solved. I think the way you would
> implement this is by switching mode once the work queue is large enough
> and dropping any threads attempting further forks to the bottom of the
> priority list. Does anybody else think this might be a good idea? I
> guess I should try to conjure up an example session which demonstrates
> the performance problems.
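[The throttling idea above can also be sketched in miniature. This is again a hedged Python illustration, not the Isabelle scheduler, and it shows the simpler blocking variant of the same idea rather than the priority-demotion scheme proposed: a bounded queue makes the producer block once `maxsize` problems are pending, so the parser cannot race arbitrarily far ahead of the workers and keep every unsolved context in memory. All names here are hypothetical.]

```python
# Sketch: a bounded work queue as backpressure on a fast producer.
import queue
import threading

work = queue.Queue(maxsize=8)   # at most 8 pending problems in memory
solved = []
lock = threading.Lock()

def worker():
    while True:
        item = work.get()
        if item is None:        # sentinel: no more work
            break
        with lock:
            solved.append(item * item)   # stand-in for solving a problem
        work.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

# The "parsing thread": put() blocks whenever the queue is full,
# instead of materialising all 100 problems at once.
for n in range(100):
    work.put(n)

work.join()                     # wait until every problem is solved
for _ in threads:
    work.put(None)              # release the workers
for t in threads:
    t.join()

print(len(solved))  # 100
```

The design choice in the proposal differs: instead of blocking the forking thread outright, it would merely be pushed to the bottom of the priority list once the queue passes a threshold, which avoids stalling it when the workers are idle.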
Did you check my publications on that? There are some fine-grained charts
of the future scheduler state, with some explanations of how they were
produced.
Even that limited information can help in isolating bottlenecks. There
is always more than one surprise awaiting ...
Of course, publicly available proofs are easier to profile, but that is a
different story.
Makarius
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev