Re: [isabelle-dev] NEWS: elimination rules for recursive functions and new command fun_cases
On 09/30/2013 02:41 PM, Makarius wrote:
> On Mon, 30 Sep 2013, Manuel Eberl wrote:
>> On 30/09/13 11:49, Makarius wrote:
>>> On Mon, 23 Sep 2013, Manuel Eberl wrote:
>>>> I sent my changes to Alexander Krauss last Wednesday so that he can review them.
>>> We are now getting very close to the fork-point for the release. So can you just post the changeset here, or send it to me privately?
>> Of course. Here you go.
> Thanks. This is now Isabelle/d8f7f3d998de.

Thanks for taking care of this. When looking at the original code earlier, I was also a bit irritated by the use of term patterns, but then it got lost somehow. Nevertheless, it is also a bit irritating that there seems to be no proper way of adding support for wildcards to fun_cases (and inductive_cases). This should be addressed at some point, but for now we are fine.

Alex

___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
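[For readers unfamiliar with the command under discussion: fun_cases derives elimination rules for equations of a function defined with fun, analogously to inductive_cases. A minimal sketch — the function and rule names here are illustrative, not from the thread:

```isabelle
fun add :: "nat ⇒ nat ⇒ nat" where
  "add 0 n = n"
| "add (Suc m) n = Suc (add m n)"

(* derive an ad-hoc elimination rule for one equation shape *)
fun_cases add_0_elim: "add m n = 0"
(* yields, roughly:
   add m n = 0 ⟹ (m = 0 ⟹ n = 0 ⟹ P) ⟹ P *)
```

Note that, as discussed above, wildcard patterns on the left-hand side are not supported.]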
Re: [isabelle-dev] Testboard
On 09/27/2013 11:49 AM, Lars Noschinski wrote:
> It might be a good idea to implement a strategy which tests the existing heads in reverse chronological order (commits pushed last get tested first), but I am not sure whether this information is available in Mercurial (we have the commit date, but this is not necessarily related to the push-date).

Such a strategy is easy to implement for a single repository. But for multiple repositories (Isabelle+AFP) there is no useful notion of heads. (The obvious lifting to products does work theoretically, but not practically, since there are just too many of them...)

Alex
Re: [isabelle-dev] Monad_Syntax
On 09/11/2013 05:04 PM, Makarius wrote:
> On Tue, 20 Aug 2013, Christian Sternagel wrote:
>> any opinions on making the type of monadic bind more general (see the attached patch)?
> This thread seems to be still open. I now pushed the rebased change as eeff8139b3d8. Do monadic people have a standard Unicode point to render that operator?

I would expect that most monadic people don't care very much about Unicode and are happy with latex and ascii...

Alex
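[For context, the constant in question is the syntactic bind behind do-notation. A minimal sketch of its use, instantiated at the option type — the theory name and lemma are mine, not from the thread:

```isabelle
theory Scratch
  imports "~~/src/HOL/Library/Monad_Syntax"
begin

(* do-notation desugars to the overloaded bind,
   here resolved to Option.bind *)
definition add_opt :: "nat option ⇒ nat option ⇒ nat option" where
  "add_opt a b = do { x ← a; y ← b; Some (x + y) }"

lemma "add_opt (Some 1) (Some 2) = Some 3"
  by (simp add: add_opt_def)

end
```
]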
Re: [isabelle-dev] functions over abstract data
Hi Chris,

> [...] When defining a function f, whose result type is T, it might be necessary to peek under the hood in order to show termination. When doing this manually, we destroy the abstraction and always have to reason about the raw level, and even worse, all the existing constants with result type T have to be deconstructed in the definition.

I discussed similar ideas with John Matthews around 2007/8, where we also had a recursive specification of a value that could internally be expressed as a recursive function, even though it was not of function type itself. The (single) motivation at the time was the attempt to define infinite streams, modelled basically as nat ⇒ 'a. However, I abandoned the approach as too complicated.

For streams, I believe they should be more naturally defined corecursively. The same might hold for your parsers if you find a good way of expressing it: your par definition is well-formed because the recursive call is wrapped into something else... This looks more like a productivity argument than a termination one, even though, when unfolding, one can be seen as the other. Do you know the work of Nils Anders Danielsson on parsers, in particular http://www.cse.chalmers.se/~nad/publications/danielsson-parser-combinators.pdf ? I'm not sure what this would mean in HOL, but it is certainly relevant.

> [...] Here comes my suggestion [...]

What you are proposing would add substantial complexity to pretty much everything in the function package. While it might be possible to do such a thing (no obvious deficiencies), you would pay later when maintaining the stuff. This is too much for what seems to me like a one-man feature.

> PS: I started to dive into the function package. My first hurdle is that for f not of function type (as in par), no recursive calls are generated, since Function_Ctx_Tree.mk_tree says that "No cong rules could be found."
The analysis in the function package assumes basically a form where the arguments of recursive calls can be extracted explicitly. This is a hard assumption, and I see no chance of getting rid of it. The only sensible way of lifting this restriction is to build some sort of wrapper that reduces some other format to a normal function definition.

Alex
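[To illustrate what such a reduction amounts to: once the argument is made explicit, a value of type nat ⇒ 'a can be given by an ordinary fun definition, since the arguments of the recursive calls are then visible to the analysis. The example is mine, not from the thread:

```isabelle
(* a "stream" over 'a, represented as nat ⇒ 'a; recursion is on the
   explicit index argument, so the function package accepts it *)
fun iterate :: "('a ⇒ 'a) ⇒ 'a ⇒ nat ⇒ 'a" where
  "iterate f x 0 = x"
| "iterate f x (Suc n) = iterate f (f x) n"
```

A wrapper as described above would have to turn a specification of the value itself into this applied form behind the scenes.]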
Re: [isabelle-dev] The coming release
On 08/17/2013 04:05 PM, Makarius wrote:
> in the past few weeks the coming release has been mentioned in passing several times. So far the precise schedule is not clear, but just from the distance to Isabelle2013 and the amount of material that is about to be finished for Isabelle2013-1, it has to be rather soon after the summer. Since Isabelle is a huge and complex system, things that are relevant for a release need to be known well in advance.

There is a small extension to the function package pending, which was written by Manuel Eberl. It produces elimination rules of a new format, and also provides a fun_cases command for ad-hoc elimination that is analogous to inductive_cases. Since there is some user demand for it and it is already functionally complete, I'd like to integrate this when I am back from vacation in two weeks, even if there are some minor things to be sorted out.

Alex
Re: [isabelle-dev] Monad_Syntax
Hi Chris,

> any opinions on making the type of monadic bind more general (see the attached patch)?

Generalizing bind itself would rather be a topic for ICFP or POPL, and I cannot comment on that :-) Concerning the constant that represents it syntactically, I would say that if it does not break anything then it is fine. After all, this is just ad-hoc overloading, so generalizations can also be ad-hoc.

> I tested the change against IsaFoR (which makes heavy use of Monad_Syntax). Unfortunately, running JinjaThreads (which also uses Monad_Syntax) timed out on my machine (hopefully not due to the patch). Could anybody with access to a more powerful machine check this please?

Pushed to testboard, which should run it on decent hardware: http://isabelle.in.tum.de/testboard/Isabelle/rev/eeff8139b3d8

Alex
Re: [isabelle-dev] New (Co)datatypes: Status Plan (FYI)
Hi Jasmin,

On 08/06/2013 01:59 AM, Jasmin Blanchette wrote:
>> - The package registers a datatype interpretation to generate congruence rules for the case combinator for each type.
> I guess it would make sense to have the package do that, or is there a specific reason why it is better to do it in a function-related extension?

It is just about decoupling things: the current picture is that datatype and the core function command do not know each other. fun (which is an extra layer on top of function) combines them by means of the datatype interpretation.

[...]

>> - The generation of size functions (which was moved into the Function/ subdirectory at some point, but without any real reason) is extremely cryptic. This is mostly because it does too complex a task, essentially reversing the nested-to-mutual conversion in this particular instance. It appears that this should be re-done completely based on the infrastructure of the new package, and it would probably become much simpler.
> Probably. But Dmitriy told me you had a different idea on how to do this, based on well-founded relations instead of size functions? Do you remember what that could be and if so would you be willing to elaborate?

The point was just that size functions to nat are too weak for some cases, which is the only conceptual point where fun does not subsume primrec. So theoretically it would be better to switch to either ordinals or well-founded relations (with some sort of composition principle). However, one would need very robust automation for things like ordinal arithmetic to make the termination prover really work. This is overkill, just for the sake of these rare examples involving infinitely-branching recursion. So we should leave it as it is. In principle, the datatype package could also just generate a wf relation for use in manual termination proofs. This might make sense, but you clearly have more important things to do just now.
Alex
Re: [isabelle-dev] New (Co)datatypes: Status Plan (FYI)
Hi Jasmin,

On 07/30/2013 05:40 PM, Jasmin Christian Blanchette wrote:
> A rough, optimistic time plan follows. [...]

Wow, I'm looking forward to it! Let me quickly review the dependencies of the function package on the datatype package:

- The package registers a datatype interpretation to generate congruence rules for the case combinator for each type.

- The pattern splitting code and the pat_completeness method both retrieve the constructors for a given type, using Datatype.get_constrs.

- The pat_completeness method also uses the exhaust theorem.

- partial_function also destructs the case combinator and looks up rewrite rules for it.

- The automated pattern proofs implicitly invoked by fun (essentially, the auto method) rely on various things in the simpset, in particular the distinctness and injectivity simp rules. This is the only situation where freeness is used.

- The generation of size functions (which was moved into the Function/ subdirectory at some point, but without any real reason) is extremely cryptic. This is mostly because it does too complex a task, essentially reversing the nested-to-mutual conversion in this particular instance. It appears that this should be re-done completely based on the infrastructure of the new package, and it would probably become much simpler.

In summary, except for the size functions, which are somewhat special (and only used for termination proofs), the function package never uses any inductiveness and would happily swallow codatatypes as well. Thus, I would propose to model the concept of a type made of constructors explicitly as a declaration which other tools can look up and add further interpretations to (in the sense of datatype interpretations). The function package would only depend on that. If I understand correctly, that declaration would essentially be rep_datatype without the induction rule.
The function package (and I assume other tools, such as case syntax generation) can then rely on that alone and thus treat datatypes and codatatypes uniformly. But according to your schedule these considerations become important only after the coming release, so I'll shut up for the moment.

> The corecursive counterpart to fun, corecursion up-to, is something Andrei urges us to work on, but where Dmitriy and I are applying the brakes because of the pile of work that remains to be done on the new (co)datatype package before we can consider adding such nice features. In fact, inspired by Leino et al.'s (co)recursion mixture in Dafny [*], Andrei has some ideas to support some recursion in conjunction with corecursion up-to.

I'd like to learn about corecursion up-to, which I have not come across so far. Are there any good references for this? Or should I rather have a beer with Andrei?

Alex
Re: [isabelle-dev] Subscripts within identifiers
On 07/10/2013 07:25 PM, Makarius wrote:
> Even simpler would be to invent a name for some variant of \omega that is not considered a letter, and use that with \^sup as before. That would be analogous to \epsilon vs. \some.

Does the mapping from Isabelle symbols to Unicode code points have to be injective? Otherwise, this would be an easy way to escape the problem: just have \omegasym, which displays as omega, but is not a letter...

Alex
Re: [isabelle-dev] adhoc overloading
Hi Chris,

> I'm currently (as of changeset e6433b34364b) investigating whether it would be possible to allow adhoc overloading (implemented in ~~/src/Tools/adhoc_overloading.ML) inside local contexts.

Nice!

> For completeness find my current adhoc_overloading.ML attached

Apart from the various formalities pointed out by Makarius, here is another concern, which relates to Florian's earlier question about how local you want to be... Currently, Adhoc_Overloading replaces constants with other constants, which makes it global-only. Your current version tries to deal with Frees similarly, but it's not just that. Recall the example you gave previously:

  consts FOO :: ... (some syntax)
  setup {* Adhoc_Overloading.add_overloaded @{const_name FOO} *}

  locale foo =
    fixes foo :: ...
    assumes ...
  begin
  local_setup {* Adhoc_Overloading.add_variant @{const_name FOO} @{term foo} *}

Now consider the following interpretation:

  interpretation foo some term proof

Now, some term should become a variant of FOO, so at least the variant side of the overloading relation must be an arbitrary term. With your current code, the overloading would only survive interpretations where foo happens to be mapped to a constant.

So, I guess that the overloaded constants should remain constants, since they are just syntactic anyway. It seems that localizing those would be rather artificial. But the variants must be properly localized and subject to the morphism.

As a step towards proper localization, you could start with the global code, and first generalize it to accept arbitrary terms as variants. Note that this touches in particular the uncheck phase, which can no longer use Term_Subst.map_aterms_same (or Term.map_aterms), since it becomes general rewriting. One must resort to the more general Pattern.rewrite_term here. When all this is working again, the actual localization is then a mere formality, of which you have already discovered the ingredients.
Makarius wrote:
> * Same.function seems to be a left-over from the version by Alex Krauss. He had that once in his function package, copied from somewhere else (probably old code of mine).

No, it has nothing to do with the function package, which never used any Same stuff. It just arose rather naturally from the requirement to return NONE when nothing changes... So it was not meant as an optimization.

> There is no point to do such low-level optimizations in the syntax layer. It is for hardcore kernel operations only

So should I manually check the result for equality, or does the framework do this for me?

Alex
Re: [isabelle-dev] Spec_Check
On 05/30/2013 03:51 PM, Makarius wrote:
> BTW, I've seen really good sources recently: ACL2. They have a *strict* 80 char limit, and really good writing style of essays, not code documentation.

This sounds interesting. Can you point to some examples of such essays?

Alex
Re: [isabelle-dev] [PATCH 0 of 1] Fix top-level printing of exception messages containing forced-line breaks
Hi David,

On 04/03/2013 09:18 AM, David Greenaway wrote:
> On 02/04/13 21:17, Makarius wrote:
>> before you send more patches, can you please go back to the very start of the mail thread from last time, which contains a lot of hints how things are done, including pointers to the documentation. I am not going to spend such an amount of time again, especially when it looks like it is being wasted.
> My apologies if your time has been wasted. It was my hope that sending a bug report, along with an analysis of its cause and a suggested fix, would waste far less of your time than simply sending through just the bug report with nothing more.

I personally found the patch helpful in clarifying what you think is the source of the issue that you are seeing. So I merely read it as "This is what I tried and it made the problem go away for me" (even though you wrote "Please review and possibly apply"). Such a patch is a constructive proof of some analysis that you made, which I think is a good thing (and it can always easily be ignored when you are totally on the wrong track).

However, most problems are deeper rooted and not easily patched away, so people are usually very skeptical about such superficial fixes. The usual practice is that the experts in the relevant areas look at the problem (not so much the proposed solution) and implement a suitable solution if and when possible. Note that this can take some time, as there are other, more pressing things. Here it may help to clarify why the issue is a problem for you in real-life applications. The area of your issue (interaction with the underlying ML system) is maintained by Makarius pretty much exclusively.

> I also fear that your hints have been too subtle for me.
> I have re-read README_REPOSITORY (which appears to be mostly a HG tutorial, with a short paragraph describing the desired commit message content) and chapter 0 of the implementation manual (which, amongst other things, includes a longer discussion of the desired ML style, variable names, etc). Despite this, I must confess that I am still not sure what I am doing wrong. Does my 4-line patch violate the Isabelle style guidelines, or have I missed something about the correct etiquette for submitting patches? I would greatly appreciate it if someone could let me know what I am doing wrong, so I can avoid wasting both your time and my time in the future.

Some points that I noticed (but this is neither exhaustive nor authoritative):

- Architecture violation: Isabelle/ML code may not refer to the PolyML structure directly, since it must use the compatibility layer. Your code does not compile with SML/NJ, which is still formally supported. Possibly, something similar could be done in the compatibility layer, but one has to consider the consequences very carefully.

- Process: You are assuming/proposing a fix, but there has been no discussion whether the behavior you are seeing is actually something that should be fixed. In particular, you seem to be expecting the display of low-level exceptions to be as convenient as user-level pretty printing. AFAIK, this is not the case in general. Due to the complexity of the system, there are many grey areas that are neither right nor wrong. I would say that pretty printing of low-level exceptions is such a case. So you should describe your actual real-life problem, and we can then also look for other solutions. It might also be interesting to know if the problem has always been there or has been introduced by some change. I personally saw quite some long CTERM exceptions and never noticed strange printing, so it might also be a new issue (just speculating here).
- Style: Your commit message is indeed too verbose, according to the usual standards, and it has the wrong format. The question whether Isabelle's terse style is good or bad is a different issue, but the conventions are like that; see README_REPOSITORY.

Hope this helps a bit...

Alex
Re: [isabelle-dev] jEdit: Automatic popup menus on hovers
On Sat, 9 Mar 2013, Lars Noschinski wrote:
> This behaviour only popped up after working with one session for a longer time, and jEdit was having frequent hiccups then, so I guess this was due to memory pressure (max memory usage was near the limit of 1600m set for the JVM).

On 03/18/2013 01:29 PM, Makarius wrote:
> I've spent some days pondering various possibilities and reading sources of Java/Swing libraries. Not everything is clear, but there are various possibilities of memory leaks and timing problems when popping up a new window for each tooltip (in the sense of regular getToolTipText of Swing components).

Concerning actual memory leaks, I have made the experience that these are rather easy to track down on the JVM: take a heap dump and analyze it using a suitable tool such as the Eclipse Memory Analyzer (http://www.eclipse.org/mat/). The tool has an automated analysis which can usually point out the main sources of memory consumption immediately.

Alex
Re: [isabelle-dev] Debugging low-level exceptions
On 03/18/2013 12:47 PM, Makarius wrote:
> On Sat, 16 Mar 2013, Florian Haftmann wrote:
>> you have to watch the »Raw output« buffer (via Plugins > Isabelle).
> Toplevel.debug and Runtime.debug are the same flag, with slightly varying purpose over the years, but now it is just for exception_trace wrapping for Isar transactions. The underlying PolyML.exception_trace works via low-level stdout, so the Raw Output panel is indeed the way to see that in Isabelle/jEdit. One also needs to guess what is the right trace, since it is not related to formal document content --- just physical output on a side-channel.

Thanks, that helped. I don't know why my first attempt didn't work (of course I tried raw output), but now I am getting the expected traces.

Alex
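[As described above, the flag effectively wraps transactions in Poly/ML's exception tracing. For ad-hoc debugging one can also invoke the trace directly — a sketch for interactive use only, since regular Isabelle/ML code should not refer to the PolyML structure:

```isabelle
ML ‹
  (* prints an exception trace for a single computation on stdout;
     in Isabelle/jEdit the output appears in the Raw Output panel *)
  PolyML.exception_trace (fn () =>
    hd ([] : int list))  (* raises Empty, with a trace *)
›
```
]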
[isabelle-dev] Debugging low-level exceptions
Hi all,

What is the current (as of, say, d5c95b55f849) method for getting exception traces? I am trying to debug a low-level exception such as

  exception Match raised (line 287 of term.ML)

which happens somewhere below partial_function. Some years ago there used to be a Toplevel.debug flag, which would activate printing traces of any top-level errors. The closest to this I found in the current codebase is Runtime.debug : bool Unsynchronized.ref, but putting

  ML {* Runtime.debug := true; *}

into a theory does not seem to have any effect (or I did not look in the right place). This is Isabelle/jEdit. Should this work? Is there something else I should use?

Alex
Re: [isabelle-dev] Pushing to testboard fails
> hg push -f testboard
> pushing to ssh://nosch...@lxbroy10.informatik.tu-muenchen.de//home/isabelle-repository/repos/testboard
> searching for changes
> remote: abort: could not lock repository /home/isabelle-repository/repos/testboard: Permission denied
> abort: unexpected response: empty string

It seems the write permissions for group isabelle got lost. No idea when this happened. Now it seems to work again.

Alex
Re: [isabelle-dev] Repository Trouble
On 12/20/2012 06:22 PM, Makarius wrote:
> I just had a long phone call with Franz Huber, the local system admin person. All the macbroy20..29 and lxlabbroyX machines involved here use the same OpenSuse 12.2 with that hg 2.4. So just empirically that looks like the problem -- breakdowns started approx. at the time of update of several of these machines.

I made a few experiments to reduce the room for wild speculations and myths. (Bottom line: everybody use lxbroy10 for pushes and we are safe.)

SETUP
=====

The attached crude shell script pushes single changesets to a repository in a loop. As source, I used a clone of the Isabelle repository. As destination, I used an empty repository in the same format as our central repository. I ran the script concurrently on various combinations of machines and with different versions of Mercurial.

This test turns out to be fairly selective: either the repository gets corrupted in the first 1,000 pushes, or it stays intact for 20,000 pushes, where I stopped the test. My assumption is that the corruption seen here is the same that we have had in production use. The error message is the same.

RESULTS
=======

| No | host A     | hg A   | host B     | hg B   | Conc. | NFS | Result |
|----|------------|--------|------------|--------|-------|-----|--------|
| 1  | lxlabbroy5 | 2.4    | lxbroy7    | 2.4    | Yes   | Yes | BAD    |
| 2  | lxlabbroy5 | 2.4    | -          | -      | No    | Yes | OK     |
| 3  | lxlabbroy6 | 2.2.2* | lxbroy7    | 2.1.1* | Yes   | Yes | BAD    |
| 4  | lxlabbroy7 | 2.4    | lxlabbroy7 | 2.4    | Yes   | Yes | OK     |
| 5  | lxlabbroy6 | 2.2.2* | lxlabbroy6 | 2.2.2* | Yes   | No  | OK     |
| 6  | lxlabbroy8 | 2.2.2* | lxlabbroy9 | 2.2.2* | Yes   | Yes | BAD    |
| 7  | lxlabbroy8 | 2.2.2* | lxlabbroy9 | 2.2.2* | No    | Yes | BAD    |
| 8  | lxbroy7    | 2.1.1* | lxbroy8    | 2.1.1* | No    | Yes | OK     |
| 9  | lxbroy6    | 2.1.1* | lxbroy9    | 2.1.1* | Yes   | Yes | OK     |
| 10 | lxlabbroy7 | 2.1.1  | lxlabbroy8 | 2.1.1  | Yes   | Yes | BAD    |
| 11 | macbroy20  | 2.2.2* | macbroy21  | 2.2.2* | Yes   | Yes | BAD    |
| 12 | lxbroy6    | 2.4    | lxbroy9    | 2.4    | Yes   | Yes | OK     |
| 13 | macbroy20  | 2.1.1  | macbroy21  | 2.1.1  | Yes   | Yes | BAD    |

*) Version from the system's installation. Otherwise, Mercurial was compiled from source.

Conc.:
  Yes: Different processes (on the two hosts) push concurrently.
  No:  Only one process, but via ssh through two different hosts (here, I used a slightly different script). Exception: Run #2.

NFS: Does the destination repository live on NFS?

Result:
  OK:  Can do 20,000 pushes without seeing a corruption.
  BAD: Repository corruption before 1,000 pushes.

INTERPRETATION
==============

- There is no correlation with the Mercurial version in use. Breakages occur with older and newer versions alike, and the same version is OK in other circumstances.

- The error only occurs when the repository is accessed from different hosts. The access does not need to be concurrent (which excludes a problem with Mercurial's locking mechanisms). This is also similar to the situation we had in production use, where concurrent pushes are fairly unlikely.

- At least one of the hosts involved must be lxlabbroy* or macbroy*, the OpenSuSE machines. The Gentoo servers are not affected.

I would say that this points to the SUSE NFS client driver as the source of the problem.
If we use lxbroy10 exclusively for pushes, we should be safe until the issue is fixed.

Alex

#!/bin/bash
SRC=./src
DEST=./dest

function fail {
  echo "$1"
  exit 1
}

[ -n "$1" ] || fail "Abort: path to mercurial must be given as first argument"
HG=$1
echo "Using mercurial $HG"

while :
do
  # Get tip revision from DEST
  DEST_TIP=$($HG -R $DEST tip --template '{rev}')
  let NEXT=$DEST_TIP+1

  # Push one changeset
  echo "Pushing revision $NEXT"
  $HG -R $SRC push -f -r $NEXT $DEST > /dev/null

  # quick integrity check
  $HG -R $DEST tip > /dev/null || fail "hg tip failed. Broken repository!?!"

  sleep 0.2
done
Re: [isabelle-dev] Repository Trouble
On 12/20/2012 12:20 AM, Alexander Krauss wrote:
> I am now writing this up for the hg mailing list, since we now may have enough information to get help tracking it down...

I posted a question here: http://www.selenic.com/pipermail/mercurial/2012-December/044783.html and there are some answers, which suggest that NFS or other physical issues are responsible for this.

On 12/20/2012 06:22 PM, Makarius wrote:
> I just had a long phone call with Franz Huber, the local system admin person. All the macbroy20..29 and lxlabbroyX machines involved here use the same OpenSuse 12.2 with that hg 2.4. So just empirically that looks like the problem -- breakdowns started approx. at the time of update of several of these machines.

This is another possibility, different from the speculations by O'Sullivan/Mackall above. Have there been any changes in the NFS infrastructure this year? Any changes in the underlying hardware?

> He will tell me later about further moves. The current situation is this:
>
> (1) lxbroy10 is the recommended ssh login server for pushes right now, since it uses a completely different Mercurial installation and Linux version. You merely need to reconfigure your .hg/hgrc like this:
>
>   [paths]
>   default = ssh://wenzelm@lxbroy10//home/isabelle-repository/repos/isabelle
>
> in analogy to usual host switching, using your own login name, of course.

This is a good idea in any case. But if the problem no longer occurs then, it does not necessarily mean that the Mercurial version is the cause of the problem. It can also be a different version of the kernel/NFS client etc.

> (2) The local sysadmins are working on a replacement of the Mercurial 2.4 from SuSE 12.2, which is potentially the cause of the problems here.

Replacement with what? Going to an older version is no solution in the long run. The changes between 2.4 and 2.4.1 do not look relevant to me.

> (3) We watch closely what happens, and think again later.

Yes. Anybody who notices anything unusual, please report it on this thread.
(4) From http://www.selenic.com/pipermail/mercurial/2012-December/044784.html, Brian O'Sullivan wrote:
> I have a few questions around your report. What is your NFS server setup? What OS is the server running, and what version of NFS and transport are you using?

@Franz: can you give answers to these questions?

Alex
Re: [isabelle-dev] AFP devel broken
On 12/05/2012 10:50 PM, Gerwin Klein wrote:
> I am unsure how to proceed debugging these hangs. I will try some other machines and see if it is correlated with MacOS X or if it is something else. Since no log file gets produced and no information comes out of tty mode, it's not even clear to me which stage exactly hangs. It's entirely possible that it's just hanging trying to load the parent image, for instance.

Recently I noticed strange effects when trying to read larger files from NFS (larger meaning 1-2 GB). My situation was different: the reading was from within an Isabelle session (using basically File.fold_lines), which would eventually (and predictably) fail at some point during the processing with an exception going back to a bad file descriptor from the OS. This happened both on lxbroy7 and lxbroy10, and it went away when I copied the file to the local file system and read it from there. I don't know what this is, but I could imagine something strange happening when the same occurs while polyml is loading an image. This is just guessing, though.

The fact that mira builds still work also suggests NFS issues: the mira workbench sits under /tmp, which is local. Could one think about moving isatest's $ISABELLE_HOME_USER to a local disk?

Alex
Re: [isabelle-dev] Unable to push to Isabelle repo
Quoting Clemens Ballarin balla...@in.tum.de:

> [...]
> pushing to ssh://balla...@macbroy2.informatik.tu-muenchen.de//home/isabelle-repository/repos/isabelle
> searching for changes
> remote: Not trusting file /mnt/nfsbroy/home/isabelle-repository/repos/isabelle/.hg/hgrc from untrusted user wenzelm, group isabelle
> remote: abort: could not lock repository /home/isabelle-repository/repos/isabelle: Permission denied
> abort: unexpected response: empty string
>
> I wonder whether this is related to the recently 'broken' repository, or are my permissions degrading?

This may be a side effect of Makarius' re-cloning. Maybe try another host as gateway? macbroy2 has a somewhat non-standard setup. Do you get the same result when using, e.g., macbroy20?

Alex
[isabelle-dev] mira database migration
Hi all, Today, I've completed the (long overdue) migration of the mira database from Florian's old machine to a proper server (hosted on isabelle.in.tum.de). Since the replication feature had some problems (probably due to the version mismatch), I moved the data over by dumping to a file and reloading at the other end. This means that about a day of reports have not been migrated, since they came in after doing the dump. I now decided to leave it at that and not spend extra hours on moving these particular reports as well... If anybody notices issues with the mira infrastructure, please tell me soon. I'll be on vacation (without laptop) for 3 weeks starting this Saturday, so I'll have to fix any problems before that. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] Repository trouble -- again
Hi all, It seems that the main Isabelle repository got corrupted again and is currently unavailable. Since the error seems to be the same as last time, I expect to be able to fix it quickly. Apparently, the error is correlated with me pushing some changes in, so I guess I'll have to try to reproduce it with a clone and track it down... Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Repository trouble -- again
It seems that the main Isabelle repository got corrupted again and is currently unavailable. OK, it's back for now... Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Problem with http://isabelle.in.tum.de/repos/isabelle
On 08/06/2012 12:06 AM, Makarius wrote: There is presently a problem with http://isabelle.in.tum.de/repos/isabelle around rev ebbd70082e65 -- also with local pushes or pulls, but it seems to work via ssh (at least for me). The problem (a repository corruption for reasons we don't know yet) is now solved (by stripping and re-pushing some recent changes), and the repository operates normally again. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] NEWS: Isabelle sessions and build management
On 08/01/2012 11:52 AM, Makarius wrote: Is there any mira expert who is not on vacation? Otherwise I have to start understanding its setup myself. I am already looking into this. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Future and maintainance of ~isabelle/contrib_devel at TUM NFS
On Thu, 28 Jun 2012, Tjark Weber wrote: Directories would be more amenable to version control than tarballs, if that makes a difference. These are build artifacts, not sources, so SCM a la Mercurial is a conceptual mismatch: There is no real notion of change (just monotonic addition of new components), no diffs, no merges etc. On 06/28/2012 09:54 PM, Makarius wrote: Proper versioning (with Mercurial) would solve several problems: * canonical identification of *the* true content of component repository * Unix permissions done right in shared space * built-in ssh/http pull I looked briefly at Mercurial's largefiles extension, but I don't think it is suitable: The extension helps in situations when the repository history consumes significantly more space than the working directory due to large binaries that are part of previous revisions but not the current ones, since they were changed or removed. Since our component store grows monotonically, there is no benefit from largefiles, since the size of the tip revision basically equals the size of the whole repository. In practice, this means that a pull would be cheap, but an update would be prohibitively expensive. The above assumes that we simply try to manage the flat directory as a hg repository. Trying to capture the evolution of components (i.e., different versions) does not really make sense, since each component evolves completely independently, and it would not be clear what a given revision of the component repository would mean. All this is just the conceptual difference between a source code repository such as Mercurial and build artifact repositories such as Sonatype Nexus or Artifactory. These basically implement a monotonic file store (plus integration with certain build frameworks which is quite irrelevant for us). Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
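The size argument in this mail can be illustrated with toy numbers (all figures invented): in a monotonically growing store, every file is added once and never modified or removed, so the space taken by the full repository history equals the space taken by the tip revision — exactly the situation where largefiles buys nothing, since updating to tip must fetch everything that a full clone would.

```shell
# Toy model (MB figures are made up): each "release" only adds new
# component tarballs; nothing is ever modified or removed, so each
# blob is stored exactly once in the history.
tip=0        # size of a full checkout of the tip revision
history=0    # size of the whole repository history
for added in 100 250 400; do
  tip=$((tip + added))
  history=$((history + added))
done
echo "tip checkout: ${tip} MB, full history: ${history} MB"
```

Since the two numbers coincide, a largefiles `pull` (metadata only) is cheap, but the subsequent `update` downloads the entire store anyway.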
Re: [isabelle-dev] Future and maintainance of ~isabelle/contrib_devel at TUM NFS
On 06/29/2012 12:27 AM, Jasmin Christian Blanchette wrote: Since our component store grows monotonically, The way I see it, obsolete components should be removed, so as to minimize confusion. No reason for confusion here, since Admin/components tells you what you need (and I am assuming there is a little script that will download it for you if you are not sitting on NFS). Removing obsolete (wrt. what? The repository tip? The latest release?) versions raises the question of how to easily retrieve the matching revisions for older Isabelle versions. There must be a formal link to the matching revision in the component repository. So here is another implementation alternative for all this: - This assumes universal components. - Instead of Admin/components as a file with references, there is a hg repository (using the largefiles extension), which evolves along with Isabelle and always contains exactly the components that would now be mentioned in Admin/components. - The Isabelle repository contains a file with the hash of the matching revision in the component repository, which serves as a formal link. - Thus, to get a fully working installation for a given Isabelle revision, it is enough to update the component repository to the matching revision, which will automatically download the relevant files. This looks pretty much equivalent to the other proposed solutions, at least at this late hour. But it needs some testing with real data to get some experience with the largefiles extension. Moreover, I am not sure if Mercurial 2.0+ can really be assumed to be available everywhere. My Debian stable installation still has 1.6.4. Currently, Isabelle development works just fine even with very old Mercurial. there is no benefit from largefiles, since the size of the tip revision basically equals the size of the whole repository. In practice, this means that a pull would be cheap, but an update would be prohibitively expensive. 
I'm not sure I'm following this part about pull vs. update. Sure, the components are really big and should arguably live in a different Mercurial repository than Isabelle, but how is pull vs. update more expensive than copying files around using scp? This was assuming that we do not remove old components. All this is just the conceptual difference between a source code repository such as Mercurial and build artifact repositories such as Sonatype Nexus or Artifactory. These basically implement a monotonic file store (plus integration with certain build frameworks which is quite irrelevant for us). How easy are these to use? Are they worth learning in addition to Mercurial? They are easy to use, since there are not many ways how we would use them: Component upload can be done via a web interface, and download is via http, following some path conventions. Not sure about installation and maintenance. They do impose some conventions and strange terminology on you though, which come from the Maven way of organizing stuff. What are the arguments in favor of a monotonic store as opposed to removing obsoleted components? Well, in the end these are different implementations of the same thing, so the question is which one is simpler to use and maintain in the end... I don't know. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
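The "formal link" scheme proposed in this thread can be mocked up in a few lines of shell. Everything here is hypothetical: the file name COMPONENT_REV, the hash, and the repository path are invented for illustration, and the actual hg command is shown only as a comment since it needs real repositories.

```shell
# Mock of the proposed scheme: the Isabelle repository carries a file
# naming the matching revision of a separate component repository.
work=$(mktemp -d)
printf 'a1b2c3d4e5f6\n' > "$work/COMPONENT_REV"   # invented hash
rev=$(cat "$work/COMPONENT_REV")
# A setup script would now run something like:
#   hg -R "$COMPONENT_REPO" update -r "$rev"
# which, with largefiles, downloads exactly the needed component files.
echo "would update component repository to revision $rev"
rm -rf "$work"
```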
Re: [isabelle-dev] Future and maintainance of ~isabelle/contrib_devel at TUM NFS
On 06/27/2012 02:03 PM, Makarius wrote: On Wed, 27 Jun 2012, Florian Haftmann wrote: Not sure what the status of »/home/isabelle/public_components« is. See also this thread on the same topic: http://www.mail-archive.com/isabelle-dev@mailbroy.informatik.tu-muenchen.de/msg02157.html /home/isabelle/public_components was my (incomplete) attempt to solve the remote distribution problem, while /home/isabelle/contrib now solves the local distribution problem. Currently the difference is that public_components contains tarballs (not suitable for in-place use), whereas contrib contains directories (not suitable for download via HTTP/wget: slow, and file permissions would get lost). I think we actually need to keep the content in both formats, but one directory should be automatically generated from the other. Which one should be the master? Intuitively, I like the mindset 1 component = 1 package = 1 tarball, but regarding the directories as the master and using tarballs as a mere distribution mechanism is equally valid IMO. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Future and maintainance of ~isabelle/contrib_devel at TUM NFS
On 06/16/2012 02:11 PM, Jasmin Christian Blanchette wrote: a) Subdirectories for each platform /home/isabelle/contrib/ x86-linux/ x86_64-linux/ x86-cygwin/ ... Then, the universal component packages must be copied, symlinked or hardlinked. b) Different packages for different platforms, roughly as it is now... /home/isabelle/contrib/ jdk-6u31_x86_64-linux/ jdk-6u31_x86-linux/ Then we need a /Admin/contributed_components file for each platform, which lists the components relevant for that platform. I would prefer both indeed: a) architecture-sensitive organisation, but with universal components directly under contrib (as is the case now) b) separate component files for different platforms I'm a bit puzzled here. What does preferring both mean exactly? And why does point (a) talk about universal components, when you wrote that the time of platform-universal components is gone? What are the precise implications for the (universal) components I'm packaging (Kodkodi, CVC3, E, SPASS, Z3)? the time is gone just means that we can no longer assume that every component is platform-universal. You can continue building universal components, and I would say this is still the preferred way wherever it can be done with reasonable effort. @Florian: so your suggestion would be that there are several component files in Admin, say Admin/contributed_components_x86-linux containing contrib/x86-linux/jdk-6u31_x86-linux contrib/e-1.5 ... Then the extra path component is redundant, and I think I would rather go without it, since the risk of confusion is high and the invariant is easy to violate. The directories-for-platforms convention also breaks down when, say, some component is universal across linux and macos, but needs a special case for cygwin. Where would you put this then? I now realized that having separate component files has the advantage that a single installation can easily be used from different platforms without changing symlinks. 
I think this is important enough to not consider variant a) further. So I think I now prefer a flat directory as component repository, and a component file for each platform. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Redundant equations in function declarations
I may as well be a bit more explicit. About seven Cambridge MPhil students were given an assignment that included converting the following ML code into HOL and proving theorems about it. [...] OK. See now e7e647949c95, as a reaction to this tragic story. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] spass: Internal error: exception Empty raised (line 458 of library.ML)
Hi, I just got the following internal error from spass while trying out the monolithic Windows bundle from Makarius: theory Scratch imports Main begin lemma "A ⟶ B ⟹ A ⟹ B" sledgehammer Sledgehammering... spass: Internal error: exception Empty raised (line 458 of library.ML) remote_vampire: Try this: by metis (12 ms). remote_e_sine: Try this: by metis (13 ms). e: Try this: by metis (14 ms). z3: A prover error occurred: The SMT solver Z3 is not activated. To activate it, set the environment variable Z3_NON_COMMERCIAL to "yes". See also /cygdrive/c/Users/alexander.krauss/Downloads/Isabelle_23-Apr-2012/contrib/z3-3.2/etc/settings ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Isabelle release test website
On 04/23/2012 05:22 PM, Makarius wrote: Here is an update of the test website for warming up a bit more http://www4.in.tum.de/~wenzelm/test/website/ From the monolithic Windows App (with Windows 7): The font in the little symbol replacement popup seems to be wrong: When I enter ==, I just see a little box in that popup. In the main buffer, the symbol is displayed correctly. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Publishing contributions as an external
A completely different question is whether we can open testboard to externals. This might reduce some communication overhead we are seeing at the moment (I'm currently testing..., I have pushed..., etc.). Essentially, this is just a matter of setting up a proper push-via-https repository. Since the apache/hg setup on TUM machines is really awkward and error-prone, I simply outsourced all these problems and created a mirror on Bitbucket: https://bitbucket.org/akrauss/isabelle-testboard Interested externals can therefore submit changes for testing and review: - Create an account at bitbucket - Send me a short email so that I can give you push permissions. - Push your changes to the above url. A cron job at TUM will automatically pull these changes into testboard. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] - and --
On 04/17/2012 05:41 PM, Tobias Nipkow wrote: This is what I would call unwieldy: instead of typing -- or - (and having them converted to the appropriate symbols) you always type -, but then you have to select from a menu. I don't see the progress. Could not agree more. These arrows are very common, and I want to be able to type them without menu interaction. The selection idea is the equivalent of: instead of having to use the shift key, you simply type the letter and then select from the menu whether you want the capital or the small letter. Of course we should not forget that the eager replacement done by PG/Emacs is also problematic: try to type ~~/src and see how many keystrokes you need :-) Maybe the Isabelle Keyboard from 2008 needs a revival in jEdit: It has an extra modifier key (a modded Windows key) which opens up a whole range of characters. But this is probably hard to do cross-platform. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Publishing contributions as an external
On 04/16/2012 11:52 AM, Makarius wrote: @Lukas: Thanks for pointing me to mercurial queues which are really a great tool. Using queues it should be easily possible (even as an external) to avoid the long pile-up of private changes and public commits. The queues became quite popular early for Isabelle/Mercurial experts. So far I've never tried it myself. I wonder if the more recent rebase extension would do similar things in a more basic way. Are there any users of it? Yes. For the occasional rebasing, the rebase extension is better. Moreover, the patch queue working model breaks down when you get complicated merges (basically you then get patches that won't apply any longer, and are thrown back to merging manually (unless you use further extensions, IIRC)). But patch queues have other advantages... In very recent Mercurials, there is also the hg graft command, which has a similar purpose but is part of the core. The bottom line is that you need some form of rebasing for anything that you do not manage to serve while it is hot. How you do it is up to you. I personally violate the don't keep changes private principle considerably: I have a few patches lying around that are more than a year old. This is not a problem as such, as long as I finally rebase before I push, but it brings some overhead. A completely different question is whether we can open testboard to externals. This might reduce some communication overhead we are seeing at the moment (I'm currently testing..., I have pushed..., etc.). Essentially, this is just a matter of setting up a proper push-via-https repository and managing a list of authorized people (we cannot make testboard completely open since we execute code that is pushed there. But it should be ok to give access to people known to us). Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] HOL/ex/Set_Algebras
Hi all, while backporting HOL/Library/Set_Algebras to use type classes again (a remaining point of the 'a set transition), I noticed that there is now a clone of that file in HOL/ex. The changelog says: changeset: 41581:c34415351b6d user:haftmann date:Sat Jan 15 20:05:29 2011 +0100 summary: experimental variant of interpretation with simultaneous definitions, plus example Unfortunately, nothing in the file itself states what it should demonstrate. Instead, the original comments remain, so there is plenty of opportunity for getting totally confused. What should we do with the clone? Are there maybe other examples that can demonstrate interpretations with simultaneous definitions, so that we can simply remove it? Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Spare cycles on compute server
our system administrators just told me that our Munich compute server (lxbroy10) still has many spare cycles, which we could use for more testing and other measurements. At the moment, there are two processes: one checking isabelle_makeall on the testboard, another checking AFP_fast on the testboard. Any suggestions what we should test more? I've been using lxbroy10 for some importer-related benchmarks, which require(d) lots of GB of RAM but were only single-threaded. With recent improvements I'll probably be able to move back to lxbroy7-8 or even macbroy2x... At a later stage, benchmarking a parallelized import may be interesting, but we are not there yet. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] New HOL/Import
On 03/28/2012 11:33 PM, Cezary Kaliszyk wrote: We have been working on a modernized reimplementation of HOL/Import, for HOL Light. We think we are at a point where it could be pushed to the main Isabelle repository replacing the old one. This has happened now, cf. 4c7548e7df86. A component with a proof bundle exported from HOL Light (and postprocessed further) is available at http://isabelle.in.tum.de/devel/components/hol-light-bundle-0.5-126.tar.gz It contains HOL-Light's equivalent of Main, i.e., the lemmas that you get when you use hol.ml in HOL Light. When this component is available, it will be imported as part of the HOL-Import session. You'll know that it happens when that session takes 30 seconds to run instead of 3 (on my laptop). Of course, this is not the whole story, as we are actually targeting flyspeck :-) Further refinements will happen at some point: we'll be looking at parallelization, and also try to track down remaining bottlenecks. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] Printing terms for debugging (Re: implicit beta normalization in the pretty-printer)
On 01/18/2012 03:34 PM, Ondřej Kunčar wrote: It's a bit annoying if you want to do debugging on the ML level and you have to inspect every term by inspecting its ML representation (which is really tedious for larger terms). This is a common problem, and everybody has come up with private hacks. Mine goes roughly like this:

theory Dollar imports Main begin

consts dollar :: "('a::{} => 'b::{}) => 'a => 'b" (infixl "$" 50)
consts box :: "'a::{} => 'a"
notation (output) box ("_")

ML {*
fun mk_dollar s t =
  let val sT as Type (_, [tT, resT]) = fastype_of s in
    Const (@{const_name box}, resT --> resT) $
      (Const (@{const_name dollar}, sT --> tT --> resT) $ s $ t)
  end;

fun dollarize (s $ t) = mk_dollar (dollarize s) (dollarize t)
  | dollarize (Abs (x, T, b)) = Abs (x, T, dollarize b)
  | dollarize t = t;

fun raw_print ctxt = writeln o Syntax.string_of_term ctxt o dollarize;
*}

ML {*
map (raw_print @{context})
  [@{term "λx ((y,z),w). P"}, @{term "x + y + z"}, @{term "{f x | x. P x}"},
   @{term "λx. x"} $ @{term a}, @{term "λx. f x"}, @{term "5::nat"}];
*}

end

Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] case syntax
On 01/12/2012 03:43 PM, Stefan Berghofer wrote: Quoting Makarius makar...@sketis.net: Just to illuminate the general situation a bit, can you point to relevant parts of your thesis, also the one of Konrad Slind for the old version? Stefan had mentioned that at some point as everybody knows, Konrad used to do it like that without giving a reference. you can find Konrad's thesis on the web at http://www.cl.cam.ac.uk/~ks121/papers/phd.html The pattern matching algorithm is described in Section 3.3 (p. 62 - 71) The steps of the algorithm are summarized in Figure 3.4. Here is the younger history: Konrad's algorithm sometimes leads to a blowup in the number of cases, which was always seen as problematic. In 2006, I thought I had found an algorithm that actually produces the minimal number of cases, but I didn't try to prove it. It was completely wrong of course, as I discovered later. The current pattern_split.ML is based on these ideas. In 2007/8, I worked out how to actually optimize the number of cases, but the results are not practical: You get a relatively complex optimization problem (worse than NP-hard), and it is hard to predict the results, which makes it unsuitable for use in a package. Furthermore, it does not actually solve the underlying problem: even when minimizing, the number of cases is large (it can be exponential). But the theory is nice and interesting (ch. 4 of my thesis has all the details). So, in short, it seems that Konrad's algorithm is basically the most appropriate we can get. When Stefan adapted it to the syntax translations, he also thought about adding some heuristics that improve it (by trying to guess the right variable to split on, instead of just picking the first), but I don't think that any such modifications went in. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] JinjaThreads
On 01/13/2012 06:24 PM, Makarius wrote: I haven't been aware of that. The configuration goes back to myself, in private communication with Alex. I did not check it later. In 4a892432e8f1 it is now more conventional, also tested manually to some extent. 4a892432e8f1 merely modifies the setup for the Isabelle_makeall run. The AFP settings are here http://afp.hg.sourceforge.net/hgweb/afp/afp/file/3dcc6b9eae2b/admin/mira.py#l33 and they match the ones from regular isatest. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Standard component setup (Re: NEWS)
On 01/05/2012 10:22 AM, Makarius wrote: I think one could publish ~isabelle/contrib_devel via HTTP, although it would require some clean up and tuning, say to expand symlinks. Another question is how to export the actual directory structure, without maintaining explicit index.html and tar.gz versions of everything. I had a look at this. There does not seem to be an apache extension that would let us publish a directory structure as a downloadable tarball. Also, the admins won't let us use the Apache autoindex facility, but they gave me a PHP script that does the same thing. So a low-tech suggestion with only minor overhead would be: * Components live in /home/isabelle/public_components as tarballs. The directory is regarded as append-only, possibly with infrequent garbage collection, which moves outdated packages to some archive location. The packages should be files, not symlinks (to make the directory more self-contained). I'll volunteer to provide the initial content based on contrib_devel and other sources, if people agree with the general idea. * The directory is accessible at http://isabelle.in.tum.de/devel/components. As the path indicates, this is mainly for developers. There could be a script in Admin/ that helps to bootstrap the local clone from this source. The more general question behind this is about an official Isabelle component repository, like major projects have (Eclipse, Netbeans, etc., even jEdit). But this needs substantial extra effort to maintain. The above gives us a poor man's repository with several advantages over the current state: - single place to look for stuff - uniform usage (download, untar, init_component, that's it). Jasmin wrote: Could we instead provide a little script (or Isabelle tool) that turns a tarball/zip downloaded from upstream into a packaged Isabelle component? In the case of Yices and Vampire, the gain would be minimal: Without [...] Ok... let's forget about non-free stuff for now... 
Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] Standard component setup (Re: NEWS)
On 01/04/2012 10:30 PM, Makarius wrote: Is there any reference to these details in some documentation (README, manual, …)? A grep for Kodkod over the sources did not look very promising. A good starting point is the semi-official Admin/contributed_components file, which seems to be also used officially for Mira tests. It documents potentially relevant external components for tests. The versions given there are not necessarily the latest ones, but the simplest ones to install from the last official release bundle (directory contrib/). E.g. version edd50ec8d471 of the file still refers to contrib/scala-2.8.1.final from Isabelle2011-1, although the latest snapshots already use scala-2.8.2.final, while the next release will potentially move forward to scala 2.9.x -- just the usual entertainment. The idea behind Admin/contributed_components is to list the recommended versions that people should use unless they know better for some reason. I added the hints about components to the HOWTO on the Wiki, which is becoming somewhat complete now. https://isabelle.in.tum.de/isanotes/index.php/Working_with_the_repository_version_of_Isabelle However, there is one open question: - There is no standard way of actually getting hold of the components. The HOWTO recommends reusing the ones from the last release, which is usually a good idea. However, some components do not come with the release (vampire, yices, jedit_build). Should we simply have a directory at TUM which is served via http and where developers can get components? Maybe simply serve /home/isabelle/contrib_devel for that (for jedit_build this should be unproblematic, but I am not sure about the licensing situation for the other stuff). More specific information about isatest settings is in Admin/isatest/settings/, but that is much less comprehensive. I can't say myself on the spot if every add-on is really tested. It seems that sledgehammer is not tested, since no ATPs are initialized. 
Maybe one should use the same strategy as in Mira: Symlink $ISABELLE_HOME/contrib to /home/isabelle/contrib_devel and activate all components from Admin/contributed_components that are actually found. How about the many variations on Predicate_Compile, code generator etc? Good question. What are the further packages required here? (I heard some talking about Prolog a while ago, but how serious was it?) Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
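The mira strategy mentioned here — activate exactly those components from Admin/contributed_components that are actually present on disk — can be illustrated with a self-contained mock. The directory names and the components file content are invented, and the echo stands in for Isabelle's actual init_component settings function.

```shell
# Mock component activation: read a components file and activate only
# the entries that exist on disk. "activating" stands in for Isabelle's
# init_component shell function; all paths are made up.
base=$(mktemp -d)
mkdir -p "$base/contrib/e-1.5"
printf 'contrib/e-1.5\ncontrib/spass-3.8\n' > "$base/components"
while read -r c; do
  if [ -d "$base/$c" ]; then
    echo "activating $c"
  else
    echo "skipping missing $c"
  fi
done < "$base/components"
rm -rf "$base"
```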
Re: [isabelle-dev] NEWS
On 01/04/2012 10:19 PM, Makarius wrote: The difference of a fully integrated release bundle and a development snapshot is increasing more and more -- since the bundles are getting so advanced. I would not like to see the clear distinction between production quality releases and arbitrary snapshots diluted. Some users might even think that a snapshot is always the latest and greatest thing, while in reality it ages much faster than proper releases. This raises the question what the role of the old development snapshots is. I have not used one myself for many years, and I think their existence is mostly historic (In the old days, a TUM account was required to be able to check out the CVS repository, so the snapshots were the only way of sharing non-releases with the outside world). So the general tendency would be to deemphasize these snapshots and recommend the actual source repository for developers (and AFP maintainers) and packaged releases for everybody else. But Larry's concern is still valid: Perhaps the solution is to provide a release candidate early on, together with an announcement of the changes. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] NEWS
Hi Florian, On 12/26/2011 05:33 PM, Florian Haftmann wrote: 'set' is now a proper type constructor. Definitions mem_def and Collect_def have disappeared. Good news. (https://isabelle.in.tum.de/isanotes/index.php/Having_'a_set_back%23Roaring_ahead). Do not expect stability before this list has boiled down. Not stability, but a working isatest would be nice... I assume you have a plan for how to achieve this :-) Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] Repository Howto
Hi all, Some people might still remember this good old web page from Clemens, which tells us how to work with the CVS version of Isabelle: http://www4.in.tum.de/~ballarin/isabelle/repository.html I now made an attempt to produce a modernized version of this, in order to simplify the start for new developers and AFP maintainers, or anybody who must follow the development repository. https://isabelle.in.tum.de/isanotes/index.php/Working_with_the_repository_version_of_Isabelle Currently, it only explains how to clone the repository and get it to run. Producing new changes is not covered. Any feedback is appreciated, preferably in the form of Wiki edits :-) Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Float
On 11/14/2011 08:43 PM, Johannes Hölzl wrote: Am Montag, den 14.11.2011, 19:27 +0100 schrieb Florian Haftmann: Hi Johannes, two remarks: * http://isabelle.in.tum.de/reports/Isabelle/rev/e828ccc5c110 With `notepad` you can prove everything without a trace in theory, which eliminates the need for fiddling around with quick_and_dirty. I want to enforce that the user needs to activate quick_and_dirty in ROOT.ML, to remind him that there is something dirty. This is rather non-standard and also breaks the test, cf. http://isabelle.in.tum.de/reports/Isabelle/report/d6e6227d86634c02890a3e60e508abc3 (Tests are run without quick_and_dirty, and rightfully so.) lemma False sorry is indeed a strange kind of markup... I remember that the code generator has a notion of different targets (e.g., SML, Haskell, Eval,...). @Florian: Is it imaginable to add such unsound setup in a way that it does not infect the evaluation oracle by default? Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Failure semantics for isabelle sessions
Lars Noschinski nosch...@in.tum.de wrote: On 19.10.2011 23:36, Jasmin Christian Blanchette wrote: On 19.10.2011 at 22:34, Alexander Krauss wrote: Does anybody know if there is a straightforward translation of the error codes 134/137 into English? Just Google Unix exit codes. E.g. 134 = The job is killed with an abort signal, and you probably got core dumped, 137 = The job was killed because it exceeded the time limit, and 139 = Segmentation violation. Usually, the exit codes above 128 presented to you by the shell are 128 + (number of the signal). When wait()ing on a process, you can also differentiate between being killed by a signal and a program just returning error codes above 128. Cool, thanks. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
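[Editorial note: the 128 + signal-number convention mentioned above can be demonstrated with a small script. This is just an illustration of the POSIX/shell convention, nothing Isabelle-specific:]

```python
# Illustration of the 128 + signal-number convention for shell exit codes:
# SIGABRT = 6 -> 134, SIGKILL = 9 -> 137, SIGSEGV = 11 -> 139.
import signal
import subprocess

for sig in (signal.SIGABRT, signal.SIGKILL, signal.SIGSEGV):
    # The child shell kills itself with the given signal.
    proc = subprocess.run(["sh", "-c", "kill -%d $$" % sig.value])
    # subprocess encodes death-by-signal as a negative return code;
    # a shell reports 128 + signal number instead.
    assert proc.returncode == -sig.value
    print("%s: shell would report exit code %d" % (sig.name, 128 + sig.value))
```

So the 134/137 failures above are Poly/ML dying from SIGABRT and SIGKILL respectively, however those signals came about.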
[isabelle-dev] Failure semantics for isabelle sessions
Hi all, Many of us have already seen isatest and other failures of the following form:

/tmp/mira/workbench/26748-140130062513920/Isabelle/lib/scripts/run-polyml: line 77: 13588 Killed $POLY -q $ML_OPTIONS ...
make: *** [/tmp/mira/workbench/26748-140130062513920/Isabelle/heaps/polyml-5.4.0_x86_64-linux/log/HOL-Regular-Sets.gz] Error 137

or

/tmp/mira/workbench/42947-139804334458624/Isabelle/lib/scripts/run-polyml: line 77: 58024 Aborted $POLY -q $ML_OPTIONS ...
make: *** [/tmp/mira/workbench/42947-139804334458624/Isabelle/heaps/polyml-5.4.0_x86_64-linux/log/HOL-Word-JinjaThreads.gz] Error 134

Does anybody know if there is a straightforward translation of the error codes 134/137 into English? It would be really useful if we had a table of the most common failure situations, and I wonder if some of this knowledge is already floating around somewhere... Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] mira on lxbroy10
On 10/16/2011 02:53 PM, Florian Haftmann wrote: On lxbroy10, something is utterly wrong: http://isabelle.in.tum.de/testboard/Isabelle/report/89d77033f6eb4dc196c199893871ae6d Is anyone taking care of this? Just tried to fix it with Isabelle/efc2e2d80218. In general, Lukas and Lars now also feel responsible for the mira installation at TUM, so there is no reason to panic :-) Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] isabelle-release repository
On 09/26/2011 11:38 PM, Makarius wrote: isatest will also test http://isabelle.in.tum.de/repos/isabelle-release within the next few weeks. (In the past I used to have a minimal isatest for http://isabelle.in.tum.de/repos/isabelle but that was superseded by http://isabelle.in.tum.de/reports/Isabelle/ last time. Can we count on this again?) Yes. Our mira setup will run Isabelle_makeall as usual both for official changes and stuff pushed to testboard. There are also AFP tests etc. Note that before Isabelle and the AFP are released, changes to Isabelle are basically limited to ones that do not break the AFP (unless Gerwin plans to fork, too). Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] NEWS - Redundant lemmas
On 09/22/2011 05:00 PM, Brian Huffman wrote: I actually like Peter's idea of a deprecated flag better than my Legacy.thy idea. We might implement it by adding a new deprecated flag to each entry in the naming type implemented in Pure/General/name_space.ML. Deprecated names would be treated identically to non-deprecated names, except that looking up a deprecated name would cause a legacy warning message to be printed. I don't think it would be necessary to modify any other tools or packages. Well... taking this seriously would mean doing it not only for facts but for all sorts of name space entries. Packages would then have to make sure that the legacy flag is propagated, e.g., from a constant to its characteristic theorems (unless otherwise indicated by the user, I suppose). This is the same as for the concealed flag, which is still not handled uniformly throughout the system. As with conceal, one would want to make sure that legacy stuff does not show up in search, and is not otherwise suggested to users by the system, e.g. via sledgehammer. There is in fact quite a bit of similarity with concealed. If one has both legacy and concealed, and possibly more once we get serious about modular namespaces (e.g., private to limit visibility to some scope), it might be worth trying to generalize those into some sort of general namespace annotation concept... My impression is that the traditional approach is to clear out old material quickly, so that the issue only arises in a weak sense. For mere renamings or simple generalizations, we should just go ahead, making sure that the conversion table is in the NEWS. Having an extra legacy phase here only creates extra work with no benefit for anyone. With a new release, people usually have to upgrade their theories anyway, so a few search/replace items can piggy-back on that work easily, and postponing them is no better. The longer legacy process should only be for things that are not as trivial for users to replace...
Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
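[Editorial note: Brian's proposed lookup-with-warning behaviour can be sketched generically; the following is a hypothetical illustration in Python rather than Isabelle/ML, with all names invented:]

```python
# Hypothetical sketch of a name space whose entries carry a 'deprecated'
# flag: lookup succeeds as usual, but emits a legacy warning.
import warnings

class NameSpace:
    def __init__(self):
        self.entries = {}  # name -> (value, deprecated flag)

    def define(self, name, value, deprecated=False):
        self.entries[name] = (value, deprecated)

    def lookup(self, name):
        value, deprecated = self.entries[name]
        if deprecated:
            warnings.warn("Legacy name: " + name, DeprecationWarning)
        return value

ns = NameSpace()
ns.define("foo_simps", "thm")
ns.define("foo_rules", "thm", deprecated=True)  # old alias, kept for compatibility
ns.lookup("foo_simps")   # silent
ns.lookup("foo_rules")   # same result, but triggers a DeprecationWarning
```

Alex's objection is precisely that this flag would then have to be propagated by every package, and respected by search and sledgehammer, just like conceal.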
Re: [isabelle-dev] Testboard reset: Old Testboard changesets are removed
Hi Lukas, I reset the testboard repository to start with a clean clone from the development repository and now everything should work again. If anyone is eager to restore the old testboard's changesets, we are happy for any assistance, otherwise we will discard them within the next week, if no one objects. I've had the same issue (for the same reason :-) ) once or twice before. Usually, running hg verify on the repository will tell you the first corrupted revision (usually the one you were pulling when running out of space/quota), and you can then clone everything else, which gets you back where you started. While it may be a good idea to clean up testboard once in a while (I am not sure how well hg scales to thousands of heads), we should try to archive the changesets somewhere. Otherwise it will be impossible to read old threads on the list, which sometimes refer to them. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Towards release
partial_function (option) foo :: "nat list ⇒ unit option"
where "foo x = foo x"

works, but

partial_function (option) foo :: "'a list ⇒ unit option"
where "foo x = foo x"

yields the following error message:

*** exception THM 0 raised (line 1175 of thm.ML): instantiate: type

Alex seems to have fixed this issue with changeset 8b74cfea913a. Yes, many thanks for reporting this one. It came in when I added the generation of induction rules (which is still somewhat unfinished) in df41a5762c3d. This also shows that partial_function isn't terribly well-tested at the moment. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Referring to unfolded chained facts
Here's an example. After unfolding null_def, the user invoked Sledgehammer and asked for an Isar proof. The proof - ... qed block after that is generated by Sledgehammer:

lemma "null xs ==> tl xs = xs"
proof -
  assume nx: "null xs"
  show "tl xs = xs" using `null xs` unfolding null_def
  proof -
    have "tl xs = []" by (metis `xs = []` tl.simps(1))
    thus "tl xs = xs" by (metis `xs = []`)
  qed

What about just generating assume "xs = []" after the proof - to properly introduce the unfolded proposition in the text? If you give it a name, it'll also be less verbose than the current version, since you don't have to repeat the proposition... Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] Summary: WHY have 'a set back?
Florian Haftmann wrote: envisaged working plan -- a) general * continue discussion about having set or not, end with a summary (for the mailing list archive)

Among all the technical details about the HOW, the WHY part of this discussion seems to have come to a halt. Here is my attempt at a summary of what we currently know.

1. In favour of an explicit set type:

1.1. Type discipline (for users): The notational conventions seem to be a matter of taste (some users like to explicitly distinguish between sets and functions, others prefer to write just P instead of {x. P x} in a set context). However, our library has many results about both sets and predicates, but almost none about arbitrary mixtures of the two. Thus, mixing them is usually a bad idea. Inspection of theories in the distribution and AFP has shown that such a mixture usually requires unfolding mem_def later on to finish the proof. Example: http://isabelle.in.tum.de/repos/isabelle/rev/510ac30f44c0 (in Multivariate_Analysis, mem_def and Collect_def disappear from proofs when the mixture is eliminated)

1.2. Type discipline (for tools): Even when all specifications and lemmas respect the set/pred separation, it is occasionally lost by applying seemingly harmless lemmas such as subset_antisym ([intro!] by default), or set_eqI, whose assumptions contain set notation, while the conclusion is generic. Thus, users will be forced into awkward situations, and the only way to recover from them is to unfold mem_def etc. The only way to avoid this is to disable the automatic use of such lemmas (with dramatic losses) or to introduce the type discipline. In other instances, some simplification rules cannot be enabled by default, since (even though they are perfectly sensible) they would make the confusion pervasive. The simplifier's eta-expansive behaviour can make this worse (turning A into %x. A x where A is a set), but the problem exists independently from this.
Example: http://isabelle.in.tum.de/repos/isabelle/file/fd520fa2fb09/src/HOL/Complete_Lattice.thy#l823 (Here, mem_def is required, since the initial simp call does not preserve the type discipline) http://isabelle.in.tum.de/repos/isabelle/file/fd520fa2fb09/src/HOL/Equiv_Relations.thy#l367 (similar situation)

1.3. Higher-order unification: Higher-order unification sometimes fails on type 'a => bool where it works for explicit 'a set, which is first-order. The reasons for this effect are unclear and were not investigated in 2008, since HOU and its implementation in Isabelle are rather intricate. Examples (from the set elimination in 2008): http://isabelle.in.tum.de/repos/isabelle/rev/d334a6d69598 http://isabelle.in.tum.de/repos/isabelle/rev/6a4d5ca6d2e5 http://isabelle.in.tum.de/repos/isabelle/rev/c0fa62fa0e5b

1.4. Coercions: Sets can be declared as covariant for the sake of coercion inference, which makes more sense for subtyping. (A similar issue exists for Nitpick's monotonicity inference, but there it is solved independently already.)

1.5. Class instances (+,*): Sets can get their own class instances differing from functions. The only known instance where this is practically useful is in HOL/Library/Sets_and_Functions.thy, where one can then define A + B = { x + y | x y. x : A & y : B }. So this is rather minor.

1.6. Code generation: Code generation for sets is simplified by eliminating the need for an explicit constructor (as in Library/Cset.thy) as an intermediate type. However, for other types (maps), the data refinement problem persists.

2. Contra an explicit set type:

2.1. Quite a lot of work for users out there: Cleaning up set/pred confusion from our own theories and the AFP is already taking significant time. Some newer AFP entries where confusion occurs also in definitions and lemmas (in particular DataRefinementIBP and GraphMarkingIBP) require significant reengineering. (The original author, Viorel Preoteasa, kindly agreed to help us here.)
But even for those theories it seems that the changes improve overall readability and automation.

2.2. (Fairly) small loss in sledgehammer's performance, according to Jasmin's measurements.

(summary ends here, my own opinion follows:)

So, it seems that we can conclude that instead of the nice unified development that we hoped for in 2008, we got quite a bit of confusion (points 1.1 and 1.2), in addition to the drawbacks that were already known (1.3, 1.5-1.6). If we had to choose between the two versions now (with no status quo to consider), the case would be pretty clear. So the question is whether our users out there will tolerate having to fix quite a number of theories. I think I am pro 'a set in the end, assuming we can fix the remaining technical issues. Please remind me of any important points I have missed. Alex ___
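[Editorial note: point 1.1 above can be illustrated outside Isabelle. The following Python analogy, purely illustrative, shows how collections encoded as characteristic functions (the 'a => bool view) lose structure that a dedicated set type provides:]

```python
# Analogy (not Isabelle): sets as characteristic functions vs a set type.
def charfun_of(xs):
    """Encode a collection as its membership predicate."""
    return lambda x: x in xs

a = charfun_of([1, 2, 3])
b = charfun_of([3, 2, 1])

assert a(2) and b(2)            # pointwise, the two behave identically
assert not (a == b)             # but the encodings are not equal as functions
assert {1, 2, 3} == {3, 2, 1}   # the dedicated set type compares structurally
```

The analogy is loose (HOL's predicates do satisfy extensionality), but it conveys why results proved about one representation do not automatically help with the other, so mixing them mid-proof leads to the mem_def unfoldings observed above.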
Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation
The global axiom is EX x. x : A ==> type_definition Rep Abs A, allowing local typedefs whenever non-emptiness of A is derivable, even using assumptions in the context. This is an interesting discussion, but totally off-topic in this thread (whose main content is already growing very large.) Please start a separate thread, if you want to continue... Thanks, Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation
On 08/24/2011 03:36 PM, Lawrence Paulson wrote: I've just been trying to get the proofs working, not to simplify them or even to understand them. Incidentally, let there be no illusions about people accidentally stumbling into a mixture of sets and predicates. Some of these theories were clearly designed from the ground upwards on the premise that sets and predicates were the same thing. I removed most of the duplication now in the main afp repos, which compiles again. I'll have a closer look at these theories in the next few days. Here are Larry's changesets, for reference: https://www4.in.tum.de/~krauss/hg/afp-set-GraphMarking-paulson/ Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Status of recdef?
Hi, This took me a bit longer to answer properly, but anyway: On 07/24/2011 05:05 PM, Makarius wrote: The old recdef package is another ancient duplicate that is mostly used in old manuals now. Is there a scheme for eventual removal?

The situation is a bit subtle. Unfortunately, the function package, due to its general notion of pattern matching, has a notable efficiency problem, which becomes fatal when specifications are larger and have certain nested and overlapping patterns. In theory, the specification itself may explode exponentially when the patterns are made disjoint. This is inherent in the problem and also applies to recdef, but does not seem to occur in practice. However, we do frequently see a quadratic blowup, e.g. when we have a datatype T with many constructors C1 ... Cn, and when the matching has the form

function f :: "T => T => bool" where
  "f C1 C1 = True"
| "f C2 C2 = True"
| ...
| "f Cn Cn = True"
| "f _ _ = False"

The catch-all pattern at the bottom is then transformed into O(n^2) equations. Unlike recdef, the function package implements this transformation as a preprocessor and works with the O(n^2) equations internally. Then it produces proof obligations stating that these equations are indeed pairwise disjoint. The number of proof obligations is thus O(n^4). The constant factor is also quite bad, since each proof obligation is solved using auto by default. At the time, this seemed to be a fair price to pay for the extra generality, but since in practice most definitions use plain datatype constructor patterns, it is actually quite a pain. The only plausible solution to this issue is to add special treatment for the common case, which avoids the proof obligations entirely. This could be done in different ways, but is a rather complex architectural change. So far I have not attacked it, since there were always other things that were more pressing, easier, or scientifically more rewarding. But it is still on my list, and I intend to look at this again soon.
Now that a number of other issues have accumulated with the function package, it is a good time to revisit some of the internals. Since it has already been marked as legacy_feature for quite some time, one could move it to src/HOL/Library/Old_Recdef.thy, for example. I did this now. The uses of recdef still present in the distribution and the AFP remain because of the problem mentioned above; I have eliminated all other uses.

krauss@lapbroy38:~/wd/isabelle/src/HOL$ grep -lr '^recdef' .
./Decision_Procs/Cooper.thy
./Decision_Procs/MIR.thy
./Decision_Procs/Parametric_Ferrante_Rackoff.thy
./Decision_Procs/Ferrack.thy

krauss@lapbroy38:~/wd/afp/thys$ grep -lr '^recdef' .
./Simpl/Language.thy

Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
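[Editorial note: the growth described above is easy to tabulate. A hypothetical counting sketch, assuming the catch-all clause is split into one equation per remaining constructor pair:]

```python
# Sketch of the blowup: n clauses "f Ci Ci = True" plus a catch-all
# "f _ _ = False". Making the patterns disjoint turns the catch-all into
# an equation for every pair (Ci, Cj) with i != j, so n^2 equations in
# total, and pairwise-disjointness obligations over them are O(n^4).
def internal_equations(n):
    return n + (n * n - n)      # diagonal clauses + split catch-all

def disjointness_obligations(n):
    m = internal_equations(n)
    return m * (m - 1) // 2     # one obligation per unordered pair

for n in (5, 10, 20):
    print(n, "constructors:", internal_equations(n), "equations,",
          disjointness_obligations(n), "disjointness obligations")
```

With each obligation discharged by auto, it is easy to see how a 20-constructor datatype already becomes painful.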
Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation
On 08/20/2011 01:18 AM, Florian Haftmann wrote: A typical example for what I was alluding to can be found at: http://www4.in.tum.de/~haftmann/cgi-bin/hgweb.cgi/isabelle_set/rev/6e3e9c6c5062#l1.37 Observe the following proof:

lemma part_equivpI:
  "(∃x. R x x) ⟹ symp R ⟹ transp R ⟹ part_equivp R"
apply (simp add: part_equivp_def) -- {* (I) sane proof state *}
apply auto -- {* (II) insane proof state *}
apply (auto elim: sympE transpE) -- {* (III) does not even terminate; adding simp add: mem_def succeeds! *}

The second »auto« imposes a »∈« on predicates, although mem_def, Collect_def or similar do not appear in the proof text (a push into the »wrong world«). Worse, the final auto does not even terminate; this is what I had in mind when referring to an enlarged search space. Here, the way the system is built forces me to use »simp add: mem_def«.

Thanks for clarifying. This is a very good example. In essence, it boils down to this:

lemma "(A :: 'a => bool) = B"
apply (rule) (* subset_antisym, introduces set connectives *)

I have predicates in mind (and in the context, A and B may be used as predicates), but this is not visible from the goal state, which contains only equality. Then, the system will apply subset_antisym, which is incorrect. subset_antisym is declared [intro!], so this happens with auto, too. Then the user has no choice but to unfold mem_def, to get back into a sane state. So I would say this is not so much about search space but about the fact that automated tools actually need the proper type information to behave correctly!

* The logical identification of sets and predicates prevents some reasonable simp rules on predicates: e.g. you cannot have (A ⊓ B) x = A x & B x because then expressions like set xs ∩ set ys behave strangely if they are eta-expanded (e.g. due to induction).
This sounds more like a problem with the underlying infrastructure that should be fixed, rather than working around the problem by introducing new type constructors. Can you give an example of a proof by induction, where eager eta-expansion leads to an unnecessarily complicated proof?

theory Scratch
imports Main ~~/src/HOL/Library/Lattice_Syntax
begin

declare List.set_append [simp del]
thm sup_apply
declare sup_apply [simp]

lemma set_append: "set (xs @ ys) = (set xs ∪ set ys)"
apply (induct xs)
apply simp_all
apply auto -- {* non-termination! *}

Nice example again. Actually, there's a fundamental inconsistency in the current setup in that some operations (like Un) are identified with the lattice operations, whereas other operations like membership or comprehension are syntactically distinct. So whenever only connectives of the first kind occur in the goal, the automation can apply rules of both types, which possibly specializes the type to one of the two variants. It looks very much like one cannot really do without the type information... I wonder if similar effects can occur in other HOLs...

The current implementation of subtyping allows one to declare only one map function for a type constructor. Thus, we can have either the contravariant or the covariant map for the function type, but not both at the same time. The introduction of set as its own datatype would resolve this issue.

This is an interesting observation which also applies to the instantiation of sets for type classes: in 'a => bool, you have to think about 'a in a contravariant fashion, whereas in 'a set it is covariant. The situation in monotonicity checking (as part of Nitpick) is exactly this covariant vs. contravariant issue. We resolved it by re-inferring the whole thing. So, from the subtyping point of view, sets and predicates are really two different beasts... Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation
On 08/19/2011 01:31 AM, Gerwin Klein wrote: On 19/08/2011, at 5:56 AM, Alexander Krauss wrote: * Similarly, there is a vast proof search space for automated tools which is not worth exploring since it leads to the »wrong world«, making vain proof attempts last longer instead of just failing. Can this claim be made more concrete (or even quantitative)? Or is it merely a conjecture? From what Jasmin wrote, this does not seem to hold for sledgehammer, and simp/auto/blast should not run into the wrong world when configured properly. I understand that this is true for naive users... but automated tools??? Can't really quantify it, but I'm seeing this all the time from not-so-novice users over here. Mixing sets and predicates (e.g. using intersection on predicates) works often enough that people like it and overuse it. Sooner or later you will see unfolding of mem_def. As opposed to unfolding conjunction, unfolding mem_def proves things that otherwise simp/auto/etc. fail on. So your point is actually a different one from Florian's: users unfold mem_def in order to gain automation, which may work in that moment, but is usually a bad idea later on. I understand this point. (Aside: it would be interesting to explore why users actually do this. Is there something missing in the automation for sets that works better for predicates?) My understanding of Florian's point above was that sets-as-predicates actually hinder automation by enlarging the search space. But it seems to me that this is only a theoretical consideration, and that we do not actually observe this effect in practice. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation
On 08/18/2011 02:16 PM, Florian Haftmann wrote: * (maybe) hide_fact (open) Set.mem_def Set.Collect_def to indicate that something is going on with these maybe also declare them [no_atp], to avoid sledgehammer-generated proofs that we know are going to break one release later...? Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation
Hi all, Here are some critical questions/comments towards two of Florian's initial arguments pro 'a set. [...] * Similarly, there is a vast proof search space for automated tools which is not worth exploring since it leads to the »wrong world«, making vain proof attempts last longer instead of just failing. Can this claim be made more concrete (or even quantitative)? Or is it merely a conjecture? From what Jasmin wrote, this does not seem to hold for sledgehammer, and simp/auto/blast should not run into the wrong world when configured properly. I understand that this is true for naive users... but automated tools??? * The logical identification of sets and predicates prevents some reasonable simp rules on predicates: e.g. you cannot have (A ⊓ B) x = A x & B x because then expressions like »set xs ∩ set ys« behave strangely if they are eta-expanded (e.g. due to induction). * The missing abstraction for sets prevents straightforward code generation for them (for the moment, the most pressing, but not the only issue). I want to emphasize that the limitation of the code generator mentioned here not only applies to sets-as-predicates but also to maps-as-functions and other abstract types that are often specified in terms of functions (finite maps, almost constant maps, etc.). Thus, having good old 'a set back does not fully solve this problem as a whole, just one (important) instance of it. My view on this whole topic is outlined in the report I recently sent around (http://www4.in.tum.de/~krauss/papers/refinement.pdf, not updated since last time). In the long run, I would prefer to see flexible transport machinery to move stuff between isomorphic types. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
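[Editorial note: the maps-as-functions limitation mentioned above can be sketched generically. A Python analogy, with all names hypothetical: an abstract map given only as a function is usable pointwise but is not executable data, which is exactly what data refinement has to mediate:]

```python
# Analogy: an abstract map as a lookup function vs a concrete assoc list.
def map_of(pairs):
    """Abstraction function: assoc list -> lookup function."""
    def lookup(key):
        for k, v in pairs:
            if k == key:
                return v
        return None
    return lookup

m = map_of([("a", 1), ("b", 2)])
assert m("a") == 1 and m("c") is None  # fine pointwise
# But the bare function m cannot be inverted generically: its entries
# cannot be enumerated, so executable code needs the concrete assoc-list
# representation together with the abstraction function made explicit.
```

This is the shape of the problem for 'a set as well as for finite maps: an explicit constructor (or transport machinery, as suggested above) has to bridge the abstract and the executable view.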
Re: [isabelle-dev] performance regression for simp_all
I couldn't update ~isatest/hg-isabelle manually, because some files in ~isatest/hg-isabelle/.hg/store belong to user krauss and isatest doesn't seem to have enough permissions. Alex, could you please fix these? Oops... Restored permissions and updated. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation
On 08/12/2011 07:51 AM, Tobias Nipkow wrote: The benefits of getting rid of set were smaller than expected but quite a bit of pain was and is inflicted. Do you know of any more pain, apart from what Florian already mentioned? Sticking with something merely to avoid change is not a strong argument. This time we know what we go back to and the real benefits (and the few losses). Do we really know? What are, actually, the losses when going back? This has not yet been touched by this thread at all (except the compatibility/import issue mentioned by Brian), and at least myself I wouldn't feel comfortable answering this question just now... I am not arguing against 'a set, but please bring the facts to light! That we have to discuss this now is mainly since the last switch was made without fully evaluating the consequences (even though they were known already at the time). Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation
On 08/12/2011 01:01 PM, Lawrence Paulson wrote: (I'm trying to imagine some sort of magic operator to ease the transition between sets with various forms of tupling and relations.) These tools exist to some extent, as the attributes [to_set] and [to_pred]. They are used a few times in the development of HOL, but otherwise not very well-known:

krauss@lapbroy38:~/wd/isabelle/src/HOL$ egrep -r '\[to_(pred|set)\]' .
./List.thy:lemmas lists_IntI = listsp_infI [to_set]
./List.thy:lemmas append_in_lists_conv [iff] = append_in_listsp_conv [to_set]
./List.thy:lemmas in_lists_conv_set = in_listsp_conv_set [to_set]
./List.thy:lemmas in_listsD [dest!,no_atp] = in_listspD [to_set]
./List.thy:lemmas in_listsI [intro!,no_atp] = in_listspI [to_set]
./Transitive_Closure.thy:lemmas rtrancl_mono = rtranclp_mono [to_set]
./Transitive_Closure.thy:lemmas rtrancl_induct [induct set: rtrancl] = rtranclp_induct [to_set]
./Transitive_Closure.thy:lemmas converse_rtrancl_into_rtrancl = converse_rtranclp_into_rtranclp [to_set]
./Transitive_Closure.thy:lemmas rtrancl_idemp [simp] = rtranclp_idemp [to_set]
./Transitive_Closure.thy:lemmas rtrancl_subset = rtranclp_subset [to_set]
./Transitive_Closure.thy:lemmas rtrancl_Un_rtrancl = rtranclp_sup_rtranclp [to_set]
./Transitive_Closure.thy:lemmas rtrancl_reflcl [simp] = rtranclp_reflcl [to_set]
./Transitive_Closure.thy:lemmas rtrancl_converseD = rtranclp_converseD [to_set]
./Transitive_Closure.thy:lemmas rtrancl_converseI = rtranclp_converseI [to_set]
./Transitive_Closure.thy:lemmas converse_rtrancl_induct = converse_rtranclp_induct [to_set]
./Transitive_Closure.thy:lemmas converse_rtranclE = converse_rtranclpE [to_set]
./Transitive_Closure.thy:lemmas trancl_into_rtrancl = tranclp_into_rtranclp [to_set]
./Transitive_Closure.thy:lemmas rtrancl_into_trancl1 = rtranclp_into_tranclp1 [to_set]
./Transitive_Closure.thy:lemmas rtrancl_into_trancl2 = rtranclp_into_tranclp2 [to_set]
./Transitive_Closure.thy:lemmas trancl_induct [induct set: trancl] = tranclp_induct [to_set]
./Transitive_Closure.thy:lemmas trancl_trans_induct = tranclp_trans_induct [to_set]
./Transitive_Closure.thy:lemmas rtrancl_trancl_trancl = rtranclp_tranclp_tranclp [to_set]
./Transitive_Closure.thy:lemmas trancl_into_trancl2 = tranclp_into_tranclp2 [to_set]
./Transitive_Closure.thy:lemmas trancl_converseI = tranclp_converseI [to_set]
./Transitive_Closure.thy:lemmas trancl_converseD = tranclp_converseD [to_set]
./Transitive_Closure.thy:lemmas trancl_converse = tranclp_converse [to_set]
./Transitive_Closure.thy:lemmas converse_trancl_induct = converse_tranclp_induct [to_set]
./Transitive_Closure.thy:lemmas tranclD = tranclpD [to_set]
./Transitive_Closure.thy:lemmas converse_tranclE = converse_tranclpE [to_set]
./Transitive_Closure.thy:lemmas reflcl_trancl [simp] = reflcl_tranclp [to_set]
./Transitive_Closure.thy:lemmas rtranclD = rtranclpD [to_set]
./Transitive_Closure.thy:lemmas trancl_rtrancl_trancl = tranclp_rtranclp_tranclp [to_set]
./Wellfounded.thy:lemmas wfPUNIVI = wfUNIVI [to_pred]
./Wellfounded.thy:lemmas wfP_induct = wf_induct [to_pred]
./Wellfounded.thy:lemmas wfP_eq_minimal = wf_eq_minimal [to_pred]
./Wellfounded.thy:lemmas wfP_trancl = wf_trancl [to_pred]
./Wellfounded.thy:lemmas wfP_subset = wf_subset [to_pred]
./Wellfounded.thy:lemmas wfP_acyclicP = wf_acyclic [to_pred]
./Wellfounded.thy:lemmas acyclicP_converse [iff] = acyclic_converse [to_pred]
./Wellfounded.thy:lemmas acc_induct = accp_induct [to_set]
./Wellfounded.thy:lemmas acc_downward = accp_downward [to_set]
./Wellfounded.thy:lemmas not_acc_down = not_accp_down [to_set]
./Wellfounded.thy:lemmas acc_downwards_aux = accp_downwards_aux [to_set]
./Wellfounded.thy:lemmas acc_downwards = accp_downwards [to_set]
./Wellfounded.thy:lemmas acc_wfI = accp_wfPI [to_set]
./Wellfounded.thy:lemmas acc_wfD = accp_wfPD [to_set]
./Wellfounded.thy:lemmas wf_acc_iff = wfP_accp_iff [to_set]
krauss@lapbroy38:~/wd/isabelle/src/HOL$

krauss@lapbroy38:~/wd/afp/thys$ egrep -r '\[to_(pred|set)\]' .
./Simpl/AlternativeSmallStep.thy:by (rule renumber [to_pred])
./Simpl/AlternativeSmallStep.thy:by (rule renumber [to_pred])
./Simpl/SmallStep.thy:by (rule renumber [to_pred])
./Simpl/SmallStep.thy:by (rule renumber [to_pred])
krauss@lapbroy38:~/wd/afp/thys$

Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation
On 08/12/2011 02:16 PM, Tobias Nipkow wrote: Surely we want to maintain both inductive and inductive_set? This is what Florian's experiment does. It basically reverts the 2008 transition, but not the 2007 one. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation
Florian: Is your modified Isabelle repo available for cloning, so we can play with it? If so, I might be able to find an answer to my own question... You can clone directly from the http:// location: hg clone http://www4.in.tum.de/~haftmann/cgi-bin/hgweb.cgi/isabelle_set or, faster, clone isabelle locally and pull the extra revision. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] Mira up again
On 08/07/2011 02:53 PM, Alexander Krauss wrote: The mira framework, which serves our continuous builds, is currently down, since I messed something up while updating the data representation in the database. We're back to normal operation. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] Mira outage
Hi all, The mira framework, which serves our continuous builds, is currently down, since I messed something up while updating the data representation in the database. You can still browse the reports as usual, but no new tests are being run at the moment. I hope that I can get this fixed tomorrow. Of course, the repository itself is not affected by this. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Odd failure to match local statement with pending goal.
On 08/03/2011 12:34 PM, Lawrence Paulson wrote:
> Many thanks for your analysis. It looks like an interaction involving fix and bound variables.

Not too fast :-) We must now peel off the various layers of Isar (recall that show is just a normal refinement step), to get closer to the problem. The attached theory exposes the same issue in a bare-bones HOL, using plain rule application. Note that after the apply (rule R), the two arguments of P have been over-unified, which makes the following assumption step fail. Using the alpha-equivalent rule R2, the same works nicely.

The same can be done in low-level ML, using just rtac, which suggests that the error is somewhere in the underlying Thm.biresolution etc. So far, I have not looked any further... I could reproduce the same behaviour in Isabelle 2005, so the issue has been around for a while... possibly much longer.

Alex

theory Odd_Failure
imports HOL
begin

typedecl nat

consts
  P :: "nat ⇒ nat ⇒ bool"
  Q :: "nat ⇒ bool"

lemma R: (* triggers the error *)
  fixes s t :: nat
  assumes "P s t"
  shows "∀(s::nat) (t::nat). Q t"
  sorry

lemma R2: (* works as expected, with (unused) bound variable renamed. *)
  fixes s t :: nat
  assumes "P s t"
  shows "∀(s'::nat) (t::nat). Q t"
  sorry

(* both rules are alpha-equivalent: *)
ML {* (op aconv o pairself prop_of) (@{thm R}, @{thm R2}) *}

lemma "⋀(a::nat) (b::nat). P a b ⟹ ∀(c::nat) (c::nat). Q c"
  apply (rule R)  (* wrong step here *)
  (*apply assumption*)  (* fails here *)
  oops

text {* The same thing in ML without any Isar management: *}

ML {*
let
  val goal = @{cprop "⋀(a::nat) (b::nat). P a b ⟹ ∀(c::nat) (c::nat). Q c"}
  val rule = @{thm R}
  val st = Goal.init goal
in
  st |> rtac rule 1 |> Seq.hd
end
*}

___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Code generation in Isabelle
> > Anyway, the standard procedure for removal of old stuff is to start marking it as old or legacy in the NEWS, and emitting suitable legacy_warnings.
> I will follow that standard procedure, once all occurrences of the old code generator invocations are replaced.

Please put in the legacy warnings already now, as they will alert users who accidentally type the wrong commands (remember our experience with the methods "evaluation" vs. "eval" last week). You don't have to wait with this until everything else is resolved.

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Countable instantiation addition
> datatype foo = deriving countable, finite,

Tobias also mentioned set_of, map_of, etc. But since these aren't class instantiations (we have no constructor classes such as functor), we end up with the good old generic datatype interpretation (roughly: datatype -> theory -> theory). So it seems like we simply want named datatype interpretations that are explicitly invoked via deriving (stretching the original Haskell notion quite a bit...)

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] [isabelle] Countable instantiation addition
Hi Matthieu, I have written a little ML library allowing to automatically prove, in most cases, instantiations of types (typedefs, records and datatypes) as countable (see ~~/src/HOL/Library/Countable). It seems that for datatypes we now have such functionality, implemented four weeks ago by Brian Huffman: http://isabelle.in.tum.de/repos/isabelle/rev/15df7bc8e93f It comes in the form of a method, so it has to be invoked explicitly, but like this it doesn't produce any penalty when it is not used. If you want to contribute an extension to records/typedefs, you are welcome, but you probably want to study Brian's implementation first. I assume that he is also the most appropriate person to review patches for this functionality. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Countable instantiation addition
On 07/21/2011 09:47 PM, Gerwin Klein wrote:
> The idea has potential for generalisation. Could we turn this into something similar to Haskell's deriving? The command would take a datatype and a list of instantiation methods that each know how to instantiate a datatype for a specific type class, e.g. enum, countable, order, etc. An instantiation method would be basically a usual proof method plus some small part that knows how to set up the specific instantiation goal (probably the same code for pretty much all applications of this) plus some background theory that provides the basic setup.

I like this approach. You ask for something explicitly and then you get it, but you don't have to remember a new obscure syntax for every bit of it. You would just write

  datatype foo = deriving countable, finite,

Several existing things fit into this scheme: When I define a fresh datatype foo = Bar in Main, it automatically becomes an instance of the following type classes:

- equal (executable equality, for code generation)
- term_of (reifiable term structure, for code generation)
- size (size function, for termination proofs)
- full_exhaustive (for exhaustive quickcheck)

(there is actually a large zoo of type classes for quickcheck...)

This doesn't mean that we have to make every last thing explicit, but a mechanism would make sense. Note that in general, a proof method is not enough, since we have to define overloaded constants. Here, countable is an exception since the class has no parameters.

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Code generation in Isabelle
On 07/21/2011 04:25 PM, Steven Obua wrote:
> Actually, there is a third code generator hidden somewhere in Isabelle.

If you are talking about what I know under the name Compute Oracle, then rest assured that it is hidden well enough that the chance of some user accidentally confusing it with the mainstream code generator is negligible. Interested parties can discover it using grep, of course.

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] Testing HOL/Import
Hi all, with Cezary's guidance, I set up mira configurations for building the proofs bundle from the HOL Light repository and for running the HOL-Generate-HOLLight session with that bundle, cf. http://isabelle.in.tum.de/reports/Isabelle/report/ed3813d5499d44f6be414a5f051e85f3 for the first successful run. I experienced some fairly reproducible segmentation faults on some machines, but lxbroy5-9 seem to work. This is still prior to Cezary's major cleanup in 6ca79a354c51, so there is hope for improvements here. Now my question is: What do we actually know when HOL-Generate-HOLLight completes without error? Should the generated file be compared with the version in the repository, and should the test fail when they are not identical? Are there other things that should be checked? Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Occur-Check Problem in Isabelle
[...] Can you tell me how I should do in the proof of the lemma continues to Isabelle runs through here? It was already pointed out that such questions are off-topic for this list. Please repost to isabelle-us...@cl.cam.ac.uk ! Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Modest proposal for image tagging
On 07/12/2011 01:18 PM, Makarius wrote:
> On Tue, 12 Jul 2011, Alexander Krauss wrote:
> > sed -i 's/THE_VERSION/$(hg id)/g' version.ML
> > isabelle usedir ...
> Actually, a similar thing happens when an isabelle distribution is built from a repository clone. Alex needs to do this because he is crunching on the official sources.

Just to avoid misunderstandings: I am not doing this, but was referring to the makedist script: http://isabelle.in.tum.de/repos/isabelle/file/Isabelle2011/Admin/makedist#l171. The mira framework works more like approach B), as it keeps metadata in its own database.

> val my_id = getenv MY_ID;
> In the latter case you have your own settings (potentially via user components with etc/settings) to ensure that the environment variable is present at build time.

This is probably the easiest approach discussed so far.

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
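For the record, the sed-based stamping discussed above can be sketched as follows. The file name and THE_VERSION placeholder are as in the discussion; the revision string is faked here, where makedist would use "$(hg id)" (GNU sed; BSD sed would need -i ''):

```shell
# Synthetic version.ML with a placeholder, as in the makedist approach.
mkdir -p /tmp/makedist_demo
cat > /tmp/makedist_demo/version.ML <<'EOF'
val version = "THE_VERSION";
EOF

# Stand-in for "$(hg id)": a fixed, made-up revision id.
REV="1a2b3c4d5e6f tip"
sed -i "s/THE_VERSION/$REV/g" /tmp/makedist_demo/version.ML
cat /tmp/makedist_demo/version.ML
```

After the substitution the placeholder is gone and the build carries the revision id it was built from.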
Re: [isabelle-dev] [isabelle] Status of HOL/Import
On 07/12/2011 11:15 PM, Florian Haftmann wrote:
> Hi Cezary (et al), first, thanks a lot for your efforts, this is a valuable contribution! There are a number of obvious things that could be done with it, like you mention Isabelle settings but also proper use of local theories (this would avoid the "Stale theory" errors), split conjunction theorems and look them up separately, etc. However I am not sure there is enough interest.

The interest, I guess, is there; only the degree of suffering has so far always been below the critical line. I totally agree with Makarius that improvements will fall into disrepair as long as there is no regression test for the imports. Maybe one of the TUM guys would be willing to invest some time and effort in this? This doesn't seem so hard once we know how to build all this. I just tried to redo this to see how it works:

$ svn checkout http://hol-light.googlecode.com/svn/trunk/ hol_light
$ cd hol_light/Proofrecording/hol_light
$ make

and it failed with

[...]
convert type.ml ...
convert update_database.ml ...
convert wf.ml ...
warning: ignoring 'lemma1' in wf.ml
warning: ignoring 'pth' in wf.ml
warning: ignoring 'pth' in wf.ml
done. 1622 named proofs found.
make: Warning: File `pa_j_3.04.ml' has modification time 0.015 s in the future
if test `ocamlc -version | cut -c3` = 0 ; \
then cp pa_j_`ocamlc -version | cut -c1-4`.ml pa_j.ml ; \
else cp pa_j_3.1x_`camlp5 -v 2>&1 | cut -f3 -d' ' | cut -c1`.xx.ml pa_j.ml; \
fi
if test `ocamlc -version | cut -c1-4` = 3.10 -o `ocamlc -version | cut -c1-4` = 3.11 ; \
then ocamlc -c -pp camlp5r pa_lexer.cmo pa_extend.cmo q_MLast.cmo -I +camlp5 pa_j.ml; \
else ocamlc -c -pp camlp4r pa_extend.cmo q_MLast.cmo -I +camlp4 pa_j.ml; \
fi
File pa_j.ml, line 104, characters 10-13:
Parse error: :: or ] expected after [sem_expr_for_list] (in [expr])
File pa_j.ml, line 1, characters 0-1:
Error: Preprocessor error
make: *** [pa_j.cmo] Error 2

Any hints? Also, what is the HOL Light release policy? Is everybody really just using the svn head?

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] NEWS: embedded YXML syntax
On 07/11/2011 05:45 PM, Makarius wrote: *** ML *** * The inner syntax of sort/type/term/prop supports inlined YXML representations within quoted string tokens. By encoding logical entities via Term_XML (in ML or Scala) concrete syntax can be bypassed, which is particularly useful for producing bits of text under external program control. I'm having trouble understanding this... Do you have an example / intended use? Thanks, Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Feedback from an Isabelle tutorial
Hi all,

Now that this topic is already active, here is more: In a small course here at TUM (http://www4.in.tum.de/lehre/vorlesungen/perlen/SS11/) we are also using jedit exclusively for the first time, and I can confirm the observation that it makes it slightly easier for newbies to get started. In particular, the "How can I copy and paste?" and "How do I open a file?" questions all go away, since the editor commands/key sequences are less obscure. There are also fewer installation issues (we have no virtualbox image, but a custom bundle, which I built from a development version).

Here are two (minor) issues I noticed, which do get in the way and may be easy to fix:

1) Command completion: This may be one of the features that look nice but are useless in practice and often get in the way: Some frequent commands are prefixes of other rather obscure commands, which will then be suggested by the auto-completion: When I type "apply" and pause for a moment to think, the editor suggests "apply_end", which many users don't even know about. This steals (a) the focus, as I cannot move the cursor up/down at this point, and (b) my attention.

Another instance, which comes up in an exploratory/teaching context, is the following scenario:

  lemma "x = y"   -- some false conjecture
  try             -- see if it works; counterexample found immediately
     ^
     cursor is here

At this point I would like to go up and correct the lemma, but I cannot, since the editor suggests "try_methods" as a completion of "try". I have to press escape first. Of course, in an ideal world, I would not have to type "try" in the first place, but currently this is our way of working.

Suggestion: Simply kill completion of commands (not symbols)???

2) Since entering non-ascii versions of arrows takes two extra keystrokes and some attention, students tend to just use the ascii variants. I don't know if this is good or bad, but when I give them a file for editing that has nice arrows, after editing it usually has both versions, and I have to explain that they are the same. In principle, this could happen with PG/Emacs too, but it normally wouldn't, because of the automatic substitution.

> It is a known issue (if an issue at all) that in Isabelle/jEdit it is sometimes necessary to cut-and-paste some lines in order to synchronize.

This behaviour is indeed getting in the way in practice.

> It works according to the specification of the editing model, but

What is actually the specification of the editing model? I am just curious here, and if you can explain it quickly, it may give me an intuition of what's happening when something goes wrong (i.e., as specified).

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] New Testing Infrastructure -- status report
hg push -f testboard I use queues a lot and usually do all testing before I qfinish the queued patches. Is there a Mercurial trick to push all the applied queues without qfinishing them first? hg push -f actually does push applied mq patches as normal changesets, so it should do exactly what you want. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] unhelpful low-level error message from primrec
Hi Brian,

> I just noticed this error message from primrec if you write a specification that is not primitive recursive. Here is a simplified example:
>
>   primrec foo :: "nat ⇒ nat" where
>     "foo 0 = 1"
>   | "foo (Suc n) = foo 0"
>
>   *** Extra variables on rhs: foo
>   *** The error(s) above occurred in definition foo_def:
>   *** foo ≡ λa. nat_rec 1 (λn na. foo 0) a
>   *** At command "primrec"
>
> Apparently, the primrec package gets as far as trying to register the non-recursive definition; then the definition command fails, and the error is not caught by primrec.

This is a regular source of confusion for newbies, and I have wanted to see it improved for a long time. IIRC, the problem is that the primrec schema is not so easy to check (in the presence of mutual/nested types etc.), and thus the package simply defers that check to the lower-level layers. Catching the low-level error and assuming that it is due to a violation of the primrec schema is not really a good alternative, since more serious confusion will arise in cases where that assumption does not hold...

If anybody wants to investigate this in detail and work out what the missing check should look like, I'll be happy to look at patches...

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] unhelpful low-level error message from primrec
What is the status of primrec anyway, in the light of fun(ction)? It is still used (a) for didactical reasons, to teach primitive recursion over datatypes, (b) for bootstrapping purposes within HOL-Plain, (c) since it is faster than fun, as it merely instantiates a combinator, and (d) for higher-order datatypes (think Ordinals), which fun's automated termination provers cannot handle, since they only use measures into nat. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
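For illustration, here is how the two commands compare on a toy datatype (a generic sketch, not tied to any particular session mentioned above):

```isabelle
datatype tree = Leaf | Node tree tree

(* primrec: one equation per constructor; recursive calls only on the
   immediate subterms, so no termination proof is needed at all *)
primrec size_tree :: "tree ⇒ nat" where
  "size_tree Leaf = 1"
| "size_tree (Node l r) = size_tree l + size_tree r"

(* fun: free pattern matching; termination is established by the
   automated provers, which use measures into nat *)
fun mirror :: "tree ⇒ tree" where
  "mirror Leaf = Leaf"
| "mirror (Node l r) = Node (mirror r) (mirror l)"
```

For a higher-order datatype in the Ordinals style, only the primrec version goes through automatically, which is point (d) above.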
Re: [isabelle-dev] AFP statistics
Can anybody say where AFP is tested and if there are statistics accumulating? The AFP test is currently still running in Sydney and accumulating the usual data. I've copied the logs over to ~/afp/log on macbroy* There are also some runs being performed at TUM, but they are unofficial and incomplete. However, they can already be useful sometimes to get an overview in retrospect when the situation is unclear. http://isabelle.in.tum.de/reports/Isabelle (the second column of traffic lights is for the AFP, click on individual changesets to see a listing of detailed reports, which include the usual logs) The plan is to move the afp test back to Munich when I'm in Munich in about two weeks. Then we should be back to the old routine. I'd prefer to run the test on a Linux machine. Is there one available with around 4 cores and lots of memory? Do 10GB already qualify? Then we can use lxbroy7/8, which are currently looking for a new job. Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Mirabelle and load path
On 03/22/2011 10:17 PM, Makarius wrote:
> It seems to have recovered, e.g. in 626fcf4a803e. Are you now the Mirabelle maintainer?

I wouldn't go that far, as I know nothing about most Mirabelle internals. I just needed to use it, so I fixed it with Sascha's help. This is part of another attempt to make the Judgement Day benchmarks continuous.

> MIRABELLE_OUTPUT_PATH=/tmp/mirabelle
> MUTABELLE_OUTPUT_PATH=/tmp/mutabelle
> Such strong assumptions about exclusive access to global resources --- hardwired into the default settings --- are apt to cause more surprises.

I tried to fix this in 6a147393c62a for Mirabelle. If it doesn't produce new crashes, we can do the same for Mutabelle.

I also wonder about "mirabelle -q", which produces strange messages in the regular makeall setup:

http://isabelle.in.tum.de/repos/isabelle/file/tip/src/HOL/Mirabelle/lib/scripts/mirabelle.pl#l134
http://isabelle.in.tum.de/repos/isabelle/file/tip/src/HOL/Mirabelle/lib/scripts/mirabelle.pl#l139

Is this really intended, or just due to the confusing "ne" test? Maybe Sascha can comment on this (cf. f20cc66b2c74). If in doubt, we might just kill the two lines.

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] Mirabelle and load path
Dear all,

Unfortunately, Mirabelle is broken since aa8dce9ab8a9 ('discontinued legacy load path;'). It relied on the implicit path, since it generates a modified version of a theory file somewhere in /tmp, and then processes that file using 'use_thy' in a raw isabelle_process. The imports of the theory were found via the current working directory and the implicit load path '.'.

Sascha and I tried three fixes which did not work:

a) Add ML_val {* Thy_Load.set_master_path ... *} to the generated file. This works when processing the file interactively in PG, but fails in batch mode, which seems to disallow ML commands before the theory headers (and for a good reason, now that I think about it...)

b) Set the master path in the corresponding ROOT.ML, before the use_thy command. This has no effect, since use_thy of course resets the master path.

c) Pipe the modified theory into 'isabelle tty', to simulate an interactive session. Now setting the master path works, but the position information from the original theory is lost. Mirabelle relies on the line numbers to organize its results.

So I am wondering if the system could provide a variant of use_thy(s), which takes an explicit master path and basically interprets the given theory as if it would reside in that path. Probably, similar functionality is already available for PG interaction. Of course any other solution would be fine as well... If all else fails, we could create the modified theory in the same location as the original one, but poking in the original source directories is likely to produce new problems, so it would be nice to find another solution.

Any thoughts?

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Mirabelle and load path
> > So I am wondering if the system could provide a variant of use_thy(s), which takes an explicit master path and basically interprets the given theory as if it would reside in that path. Probably, similar functionality is already available for PG interaction. Of course any other solution would be fine as well...
>
> It should be sufficient to include the path in the use_thy invocation: use_thy "/tmp/A.thy"; Alternatively there is Thy_Info.use_thys_wrt: Path.T -> string list -> unit

You misunderstood my question. Finding theory A is not the problem, but finding its dependencies, which are not in /tmp: if theory A imports B (without explicit path), the system will look for /tmp/B.thy instead of the original location.

What I was asking for is the possibility to load a theory file A.thy from location X (here: the location of the modified theory file in /tmp) with a master path pointing to location Y (here: the original location of the theory file). Then the dependencies of A will be found in path Y.

Currently, the only chance of running A.thy successfully (in batch mode) is to physically place it in the directory where its dependencies are. This is somewhat rigid and problematic for Mirabelle.

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
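In ML terms, the requested functionality would be a variant along the following lines. To be clear: `use_thy_in` and its signature are made up here for illustration; nothing with this interface exists at this point, which is exactly the problem.

```sml
(* Hypothetical interface: load the theory file found at thy_file, but
   resolve its imports relative to master_dir instead of the directory
   containing the file itself. *)
val use_thy_in: Path.T (* master_dir for imports *) -> string (* thy_file *) -> unit

(* Intended Mirabelle usage: the instrumented copy lives in /tmp,
   while the imports are found at the original location (the path
   below is a made-up example): *)
(* use_thy_in (Path.explode "~~/src/HOL/ex") "/tmp/Mirabelle_Mod.thy" *)
```

This would keep the position information of the /tmp copy intact (unlike the tty workaround c) while still resolving B.thy and friends at their original location.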
Re: [isabelle-dev] Mercurial failing as always
Lawrence Paulson wrote: Does anybody know what to do here? Larry ~/isabelle/Repos: hg push pushing to http://isabelle.in.tum.de/repos/isabelle searching for changes http authorization required realm: Mercurial repositories user: Nothing is failing. It is just that pushing over http is disabled. This is explained in README_REPOSITORY, here: http://isabelle.in.tum.de/repos/isabelle/file/cbfba0453b46/README_REPOSITORY#l109 Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] [Fwd: doc test failed]
(please keep postings to isabelle-dev in English)

Tobias Nipkow wrote:
> Seit Wochen wenn nicht Monaten bekomme ich alle paar Tage diese Fehlermeldung: [For weeks if not months, I have been getting this error message every few days:]

More precisely, the error occurs (almost) daily since January 11th, according to my email archive.

> Fuehlt sich hier jemand zustaendig? [Does anybody here feel responsible?]

So far, I didn't really feel responsible, since I don't know how the doc test works and what its implicit assumptions are. It seems though that the IsaMakefile for IsarRef assumes that the images are already present, whereas, e.g., in doc-src/ZF/IsaMakefile it is rebuilt explicitly.

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] NEWS
* New term style "isub" as ad-hoc conversion of variables x1, y23 into subscripted form x\<^isub>1, y\<^isub>2\<^isub>3. For use in document preparation, e.g.:

  lemma foo: "P x1 x2" proof

  text {* @{thm (isub) foo} *}

produces output with subscripts. Converts names of Frees, Vars and Bounds.

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Proof General 4.1pre
The question is if PG 4.1 converges sufficiently fast for Isabelle2011, and if we should switch to the PGIP update for floating point settings. This would mean to discontinue 4.0 and 3.x altogether. I've been using PG 4.1 for a while now from cvs, and it works nicely, except that I've regularly experienced sync losses in connection with the undo-on-edit feature. I couldn't track this down enough to come up with a useful bug report yet, but it seems that switching off this feature solves the problem. Thus, if 4.1 is going to be used, I would recommend having undo-on-edit switched off by default (maybe this is the case anyway, I am not sure...) Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Isabelle release
Hi all,

(must add my 2ct, too)

Concerning the content of the next release, Brian wrote:
> What exactly makes it major? Judging by the NEWS file, it looks like 2009-2 introduced about as many new features as the upcoming release will. Is there any new feature in particular that is considered a major change?

There are certainly many interesting things: Coercions, partial_function (though that is still lacking induction rules so it's a bit unfinished), smt in sledgehammer, big HOLCF cleanup etc. But I find it impossible to assign a 'major' predicate consistently. And what is major for one user is irrelevant for the other.

I have the same feeling as Gerwin concerning release naming. When Isabelle 2009-2 came out, several users asked me about the name and I found it hard to give them a consistent answer. Also, in this particular situation, having an Isabelle 2010 release in January looks a bit like they didn't make their planned release date :-)

Larry wrote:
> I'm afraid that I originated the custom of not always linking the release name to the calendar year. The idea was to indicate that the new release consisted of little more than patches from the previous one.

Larry, would you say that this is still adequate given how Isabelle evolves today? Are we talking 'just patches' here, given the changes mentioned above (plus countless small improvements everywhere as usual)? Or what is it? To me it is just the next small step in a sequence, and after 10 years we look back and are surprised about the 'major' changes that happened.

Michael wrote:
> Or dispense with year numbers entirely. Even Microsoft gave up on that idea for Windows.

But then we need a Marketing division to come up with a new name every 8 months :-}. Year numbers are very comfortable.

Alex
___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] JinjaThreads
Clemens Ballarin wrote: JinjaThreads doesn't seem to run out of the box (on macbroy2, with Poly/ML 5.3.0). It seems to run out of memory. I use ML_OPTIONS=-H 500, but I would assume the AFP sets this appropriately. Probably this is a known issue, but I don't know where to check for the automatic AFP logs. The logs are in ~isatest/afp-log. I would assume that you do not need special settings on macbroy2. On smaller machines one must turn off parallelism, I was told. I am not sure where the settings for the AFP tests come from, though... Alex ___ isabelle-dev mailing list isabelle-...@in.tum.de https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
[isabelle-dev] Testing on smlnj
Dear all,

I want to report on the current state of smlnj testing. Currently, the nightly isatest only covers the HOL image and HOL-Library. More wasn't really feasible in a nightly-build setting on a machine that is used interactively during the day. Now we have a dedicated machine (lxlabbroy15, 3.06GHz Xeon, 2 CPUs, 2GB RAM) for running smlnj exclusively, and the new testing framework is not restricted to a nightly-build mode. But we must decide on the strategy that we use. So I first ran it on everything to get a picture of the state of affairs. The revision I tested is 97bf008b9cfe.

The big surprise is that HOL-Proofs can actually be built with smlnj (in ten hours), but HOL-ex (which I would have expected to be smaller) cannot. I don't know why... the memory requirements of different sessions are largely unknown anyway.

Sessions that work -- (but look at the timing!)

Finished HOL (3:51:13 elapsed time, 3:51:04 cpu time, factor 0.99)
Finished HOL-Auth (5:17:32 elapsed time, 5:17:27 cpu time, factor 0.99)
Finished HOL-Boogie (0:03:51 elapsed time, 0:03:45 cpu time, factor 0.97)
Finished HOL-Boogie-Examples (0:05:54 elapsed time, 0:05:53 cpu time, factor 0.99)
Finished HOL-Hahn_Banach (0:11:26 elapsed time, 0:11:24 cpu time, factor 0.99)
Finished HOL-Hoare (0:24:42 elapsed time, 0:24:41 cpu time, factor 0.99)
Finished HOL-Hoare_Parallel (2:45:23 elapsed time, 2:45:22 cpu time, factor 0.99)
Finished HOL-IMP (0:13:31 elapsed time, 0:13:29 cpu time, factor 0.99)
Finished HOL-IMPP (0:06:05 elapsed time, 0:06:03 cpu time, factor 0.99)
Finished HOL-IOA (0:01:41 elapsed time, 0:01:40 cpu time, factor 0.99)
Finished HOL-Imperative_HOL (2:03:01 elapsed time, 2:02:58 cpu time, factor 0.99)
Finished HOL-Import (0:16:31 elapsed time, 0:16:30 cpu time, factor 0.99)
Finished HOL-Induct (0:23:05 elapsed time, 0:23:04 cpu time, factor 0.99)
Finished HOL-Isar_Examples (0:11:37 elapsed time, 0:11:35 cpu time, factor 0.99)
Finished HOL-Lattice (0:02:24 elapsed time, 0:02:24 cpu time, factor 1.00)
Finished HOL-Library (3:31:17 elapsed time, 3:31:11 cpu time, factor 0.99)
Finished HOL-Matrix (0:32:20 elapsed time, 0:32:18 cpu time, factor 0.99)
Finished HOL-Metis_Examples (0:29:46 elapsed time, 0:29:44 cpu time, factor 0.99)
Finished HOL-NanoJava (0:12:46 elapsed time, 0:12:45 cpu time, factor 0.99)
Finished HOL-Number_Theory (1:07:40 elapsed time, 1:07:39 cpu time, factor 0.99)
Finished HOL-Old_Number_Theory (0:59:20 elapsed time, 0:59:19 cpu time, factor 0.99)
Finished HOL-Quotient_Examples (0:19:52 elapsed time, 0:19:51 cpu time, factor 0.99)
Finished HOL-Prolog (0:00:26 elapsed time, 0:00:25 cpu time, factor 0.96)
Finished HOL-Proofs (10:06:25 elapsed time, 10:06:14 cpu time, factor 0.99)
Finished HOL-Proofs-ex (0:00:24 elapsed time, 0:00:19 cpu time, factor 0.79)
Finished HOL-Proofs-Lambda (3:14:35 elapsed time, 3:14:33 cpu time, factor 0.99)
Finished HOL-SET_Protocol (1:06:13 elapsed time, 1:06:10 cpu time, factor 0.99)
Finished HOL-Word-SMT_Examples (0:08:18 elapsed time, 0:08:14 cpu time, factor 0.99)
Finished HOL-Statespace (0:04:51 elapsed time, 0:04:50 cpu time, factor 0.99)
Finished HOL-Subst (0:02:37 elapsed time, 0:02:36 cpu time, factor 0.99)
Finished TLA (0:04:00 elapsed time, 0:03:56 cpu time, factor 0.98)
Finished TLA-Buffer (0:00:34 elapsed time, 0:00:33 cpu time, factor 0.97)
Finished TLA-Inc (0:01:36 elapsed time, 0:01:34 cpu time, factor 0.97)
Finished TLA-Memory (0:13:11 elapsed time, 0:13:10 cpu time, factor 0.99)
Finished HOL-UNITY (1:39:58 elapsed time, 1:39:55 cpu time, factor 0.99)
Finished HOL-Unix (0:09:11 elapsed time, 0:09:10 cpu time, factor 0.99)
Finished HOL-Word-Examples (0:00:26 elapsed time, 0:00:24 cpu time, factor 0.92)
Finished HOL-ZF (0:12:44 elapsed time, 0:12:42 cpu time, factor 0.99)
Finished HOL-Algebra (2:26:16 elapsed time, 2:26:10 cpu time, factor 0.99)
Finished HOL-Multivariate_Analysis (3:19:36 elapsed time, 3:19:28 cpu time, factor 0.99)
Finished HOL-NSA (0:27:29 elapsed time, 0:27:24 cpu time, factor 0.99)
Finished HOL4 (0:51:33 elapsed time, 0:51:27 cpu time, factor 0.99)
Finished HOL-Main (2:54:51 elapsed time, 2:54:45 cpu time, factor 0.99)
Finished HOL-Plain (0:29:25 elapsed time, 0:29:22 cpu time, factor 0.99)
Finished Sequents (0:00:38 elapsed time, 0:00:37 cpu time, factor 0.97)
Finished Sequents-LK (0:00:11 elapsed time, 0:00:10 cpu time, factor 0.90)
Finished Pure-WWW_Find (0:00:00 elapsed time, 0:00:00 cpu time)
Finished HOL-Nominal (0:39:28 elapsed time, 0:39:23 cpu time, factor 0.99)
Finished HOL-Word (0:34:12 elapsed time, 0:34:07 cpu time, factor 0.99)
Finished HOL-Bali (4:20:45 elapsed time, 4:20:43 cpu time, factor 0.99)

(not sure why FOL/ZF is not in my logs, probably it built even before, and I didn't do make clean. So we can expect that they work)

I haven't added the numbers, but it must be a few days of machine time.

Out of memory -- The following sessions failed with out-of-memory.
Re: [isabelle-dev] Exception. fun oddity in Isabelle 2009-2
Hi Lucas,

Nice failure :-)

  datatype Ta = C_2 nat nat | C_1 Ta nat

  fun f where
    "f (C_2 a b) c = c"
  | "f (C_1 a b) c = f a (f a (f a (f (C_1 a b) b)))"

  (* ... after a long time ...
     constants f :: Ta ⇒ nat ⇒ nat
     Exception. At command fun. *)

This is the termination prover looping: it keeps applying the psimp rules, which are still in the simpset in Isabelle2009-2. I took them out in 150f831ce4a3, since they caused other confusion, so this is no longer a problem in the next release. For now, you can work around it by taking them out yourself.

I am not sure about the exception. It could be some physical interrupt due to hitting some resource bound, which aborts the whole toplevel transaction, so you cannot handle it. But I am just guessing.

Alex
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
Re: [isabelle-dev] Updating the current theory
Hi Michael,

> Thanks, Alex. Indeed, the effect I'm after is like using setup {* *}, but ultimately I'd like this effect to be produced by calling a tactic, i.e. let a tactic make updates to the current theory when invoked using 'apply (tac...)'.

AFAIK, this is impossible. Tactics/methods cannot update the theory; they just operate on goals. To change the theory, you need to issue a declaration, either via setup or a command of your own, or by using attributes.

Don't try to do imperative things behind Isar's back, and don't mess with the Toplevel. It will almost certainly break some fundamental invariants and not solve your problem. Instead, try to move the theory transformation elsewhere.

Alex
Re: [isabelle-dev] Extracting dependencies from theory headers
Makarius wrote:
> Here are some examples for the isabelle scala toplevel: [...]

Thanks, these are good pointers. Unfortunately, this is (once again) harder than I thought. Here are the issues/questions:

- Relative paths are not resolved by the current simple parser. I remember that there used to be some oddities in PG related to relative paths, but I am not sure what the situation is now. What is the meaning of a relative path in an imports or uses declaration?

- Related to the above: dynamic load-path modifications via add_path (as e.g. in HOL/Plain.thy) are a show-stopper, since they make it impossible to see what an import refers to just by looking at headers. These would have to be replaced by something static, possibly a property of the session. Question: what are typical uses of add_path, other than the two instances in the current distribution (which set the library path, once for HOL and once for HOLCF)?

> After our very brief excursion into distributed make and queue management last year, the main new aspect from this year was http://hudson-ci.org/. Did anybody take a look at that? At least the website looks nice and simple. It is all JVM-based and seems to support Mercurial projects, too.

I looked at it, but I found the design fairly inflexible. Its Mercurial support is limited to testing the tipmost revision when it comes in. Aggregation is nice (weather icons etc.), but all the data seems to be time-indexed rather than revision-indexed. It could be a replacement for the current isatest if somebody manages to set it up properly, but it will not do any of the following:

- developer-initiated tests of unpublished changes
- automatic bisects
- multi-repository compatibility tracking (Isabelle vs. AFP)

Florian recently spent some time with our own testing tool, which is now in a better state. I still hope that this becomes reality in the not-too-distant future. But that is another story.
Alex
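To make the relative-path problem above concrete, here is a minimal, purely illustrative sketch of the kind of textual header scan being discussed. This is Python, not Isabelle's actual Scala parser; the function name and regexes are my own. A scan like this is exactly what cannot resolve relative paths or add_path effects on its own:

```python
import re

# Illustrative only: extract the names listed after "imports" in a
# theory header by pure text scanning.
def theory_imports(header: str) -> list[str]:
    # Drop (* ... *) comments, which may appear inside a header.
    text = re.sub(r"\(\*.*?\*\)", " ", header, flags=re.DOTALL)
    m = re.search(r"\btheory\b.*?\bimports\b(.*?)\b(?:uses|begin)\b",
                  text, flags=re.DOTALL)
    if not m:
        return []
    # Import names are either quoted paths or plain identifiers.
    names = re.findall(r'"[^"]+"|[\w./~-]+', m.group(1))
    return [n.strip('"') for n in names]

header = 'theory Foo imports Main "~~/src/HOL/Library/Multiset" begin'
print(theory_imports(header))
```

Note that a quoted path such as "~~/src/HOL/Library/Multiset" comes back as an uninterpreted string: deciding which file it denotes requires the load path, which is precisely the static information that add_path makes unavailable to a header-only tool.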
[isabelle-dev] Extracting dependencies from theory headers
Dear list (and Makarius in particular :-)),

I remember some offline discussions last year about having an Isabelle tool that extracts file dependencies from theory sources (probably starting from some special session file, which specifies the root theories) and outputs them in a simple textual format. I also remember that Makarius already had the important ingredients for such a tool. How far is it from the present state to getting this working?

I am asking because Lars, Florian and I had this discussion again today. If we had such a tool, we would actually be willing to spend some time trying to replace the user-written (rather: copied) IsaMakefiles in the AFP with a single generated one. (Lars seems to be a make expert, and he had some realistic ideas for simplifying the whole setup.) In particular, this would allow parallel builds of the AFP using make -j.

Alex
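For concreteness, here is a hedged sketch of the "single generated makefile" idea: assuming a dependency-extraction tool emitted a session-to-parents map, turning it into ordinary make rules is a few lines, and make -j can then build independent sessions in parallel because the rules encode exactly the ordering constraints. The function name and the build-session command are placeholders, not real Isabelle tools:

```python
# Hypothetical generator: session -> parents map in, Makefile text out.
def make_rules(deps: dict[str, list[str]]) -> str:
    lines = [f"all: {' '.join(sorted(deps))}", ""]
    for session in sorted(deps):
        # Each session depends on its parent sessions; make -j runs
        # sessions whose parents are done, in parallel.
        lines.append(f"{session}: {' '.join(deps[session])}".rstrip())
        lines.append(f"\tbuild-session {session}")  # placeholder command
        lines.append("")
    return "\n".join(lines)

print(make_rules({"Jinja": ["HOL"], "Flyspeck-Tame": ["HOL"], "HOL": []}))
```

Under this scheme Jinja and Flyspeck-Tame would build concurrently once HOL has finished, which is the parallelism the hand-copied per-entry IsaMakefiles cannot express.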