On 16/07/16 16:37, Lars Hupel wrote:
>> I am open to hear arguments why latex documents need to be continuously
>> available. So far there were none.
>
> That statement is unrelated to what you wrote afterwards, but let me
> give you an argument anyway: The generated PDFs are, as you probably
> agree, formal documents. In that sense, they need to be checked
> continuously.
I invented that notion of formal documents myself, but I don't see why the LaTeX runs need to be checked continuously. Nothing depends on the resulting PDFs. When something is wrong with them, it is usually just a matter of untyped/unscoped LaTeX macros and not formally relevant. So LaTeX could be run once a week or once a month, and very little would happen.

In contrast, a broken proof, definition, or ML programming interface causes a complete halt of the subsequent sessions building on it. Broken formal content is particularly annoying, because nothing can reasonably be "pushed on top of it". That would make the formal holes bigger and bigger.

> But the situation is completely clear. Jenkins gives you an overview of
> what exactly broke, by means of the "changes" page:
>
> <https://ci.isabelle.systems/jenkins/job/isabelle-repo/changes>
>
> It even provides handy links to the changesets.

I still have problems reading these Jenkins web pages: they seem to contain 90% irrelevant information.

> To give useful performance metrics, here are some questions off the top
> of my head:
>
> - Measure the whole build time or just sessions?

Separate session build times are important, but overall times are also useful to know.

> - Which sessions? What should be the minimum threshold for session
>   runtime to be considered useful?

There could indeed be a threshold, e.g. a minimum session build time of 10-20s before a session is shown at all.

> - Elapsed time or CPU time?

Both are important. Also the detailed parallel runtime parameters that are emitted every 0.5s while a session is running: number of active threads, pending futures etc.

> - What about sessions that grow in size over time?

That is indeed important, although we have historically just ignored it. The AFP/Collections sessions are pretty close to the inevitable concrete wall.
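The threshold and elapsed-vs-CPU points above could be sketched roughly as follows. This is a minimal illustration only: the tuple-based input format and the sample session names/timings are made up for the example and do not reflect Isabelle's actual build log format.

```python
# Sketch: filter per-session build times by a minimum threshold before
# reporting, and show both elapsed and CPU time. Hypothetical data format;
# Isabelle's real build logs differ.

THRESHOLD_S = 15.0  # e.g. a 10-20s minimum, as suggested above

# (session name, elapsed seconds, CPU seconds) -- invented sample data
timings = [
    ("HOL", 180.4, 610.2),
    ("Pure", 25.1, 40.8),
    ("Tools", 3.2, 4.0),          # below threshold: not worth charting
    ("Collections", 950.0, 3100.5),
]

def reportable(rows, threshold=THRESHOLD_S):
    """Keep only sessions whose elapsed time reaches the threshold."""
    return [r for r in rows if r[1] >= threshold]

for name, elapsed, cpu in reportable(timings):
    # Reporting both figures is useful: the CPU/elapsed ratio hints at
    # how well the session exploits parallelism.
    print(f"{name}: {elapsed:.1f}s elapsed, {cpu:.1f}s CPU "
          f"(speedup ~{cpu / elapsed:.1f}x)")
```

The same filtered rows could then feed a trend graph over changesets, which is where the "sessions growing over time" question would become visible.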
It would be better to see in a nice graph how the approach towards hard limits progresses, and how attempts to postpone hitting them show up as changed performance figures.

	Makarius
_______________________________________________
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev