On Tue, May 31, 2022 at 7:43 PM 'Philip White' via Metamath <[email protected]> wrote:
> > FYI: I tried loading the page, I get "Loaded N" where N rapidly
> > counts up to 42869 and then nothing happens. This happens even after
> > a reload; set.txt is cached, so that doesn't seem to be the issue.
>
> Yup. The text above the loading count gives you the keyboard shortcuts
> for doing anything. I know it's a terribly undiscoverable UI at the
> moment. The description wasn't there until an hour or two ago, when I
> added it so the page would meet the bare minimum of shareability.
> Anyway, if the description doesn't help, just press the forward slash
> key, and a fuzzy-finder window should pop up in which you can search
> through all the assertion labels that have been loaded.

Thanks, I got the basic functionality working. A few more bug reports:
I haven't been able to figure out any conditions under which the
(shift-)hjkl keys do anything, and the fuzzy finder does not prioritize
exact matches, which makes it difficult to jump to specific theorems.

> > This kind of approach is fine for toy projects, but for 'real'
> > projects that you want the community to use, this is
> > not-invented-here syndrome. Even if you don't want to write Rust
> > code, one thing you can do is document your implementation strategy
> > and feature set so that the code can be ported by others.
>
> I agree that it's a good idea to document the interesting parts of the
> project so that other efforts can benefit as well. Some of my
> motivation is intentionally NIH, since making things helps me learn
> how they work, and I also find using libraries that I've written more
> fun than using libraries other people have written. I disagree that
> building a new library when one already exists is always a bad idea;
> if you don't think that either, then maybe we only disagree on the
> criteria for when to do it.

Right, I have nothing against writing something new just so that you
can understand it (I do that all the time). But you do have to watch
out for potential long-term time sinks, because if it becomes a proper
project that multiple people put time into, then we collectively just
end up getting half as much done relative to the big-league proof
assistants.

> Interesting, sounds like I'm mistaken. The incrementality I'm talking
> about is pretty pervasive. If you can figure out how the demo I linked
> works, then you'll be able to step forward and backward through proofs
> and see the stack and heap growing and shrinking as the steps are
> executed. The stack and heap are represented with immutable maps that
> share most of their structure between each step, so that you can step
> forward with minimal recomputation of the virtual-dom tree; that is,
> you only need to compute the newly added items on the stack instead of
> recomputing the whole page again.
>
> Does mm-knife have incrementality during the execution of a single
> proof? I could be wrong, but my approach sounds like it would not
> achieve the lightning speeds that I've heard mm-knife is capable of.
> In contrast, my verifier takes about 13 seconds to verify set.mm.

You need to choose some clever data structures in order to support both
very fast verification and incremental updates, but it does actually
have this. (By the way, I don't know if your "rust is low level"
impression might have been caused by reading the mm-knife code base,
but there are some tricks in there that are pretty low-level
performance optimizations based on actual profiling. Rust code doesn't
have to look like that, and luckily it is confined to just a few core
data structures and mostly hidden behind an API barrier for "regular"
code not on the verification hot path.)

Regarding incrementality during execution of a single proof, that's up
to you. The way you step through a proof is actually
user-(of-library)-configurable, and I would generally expect a UI to
implement its own data structures which support the required stepping
operations without moving a lot of data. Personally, I wouldn't try to
do this with a UI framework at all. General-purpose virtual-dom diffing
is just objectively worse from a performance standpoint. I would just
have stepping send change events that push and pop things from the
stack and/or heap. (But also, this is not at all the bottleneck - you
can do pretty much anything you want in this space and your debugger
will be performant enough.)
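To make that concrete, here is a minimal OCaml sketch of the scheme
described above - persistent stack and heap, change events driving the
steps - with hypothetical types and names, and ignoring metamath's
substitution and unification machinery entirely (this is not code from
either project):

    (* Each state is a persistent stack (a list) plus a persistent heap
       (a Map). Stepping forward builds a new state that shares almost
       all of its structure with the previous one. *)
    module Heap = Map.Make (Int)

    type expr = string  (* placeholder for a parsed formula *)

    type state = { stack : expr list; heap : expr Heap.t }

    type step =
      | Push of expr    (* push a hypothesis or axiom conclusion *)
      | Save of int     (* save the top of the stack to the heap *)
      | Recall of int   (* push a previously saved subproof *)

    let exec (s : state) : step -> state = function
      | Push e -> { s with stack = e :: s.stack }
      | Save i ->
        (match s.stack with
         | top :: _ -> { s with heap = Heap.add i top s.heap }
         | [] -> invalid_arg "Save on empty stack")
      | Recall i -> { s with stack = Heap.find i s.heap :: s.stack }

    (* Keep the whole history of states: forward is one exec, backward
       is popping the history, and memory stays small because unchanged
       stack tails and heap nodes are physically shared. *)
    let run (steps : step list) : state list =
      let extend hist st =
        match hist with
        | cur :: _ -> exec cur st :: hist
        | [] -> assert false
      in
      List.rev
        (List.fold_left extend [ { stack = []; heap = Heap.empty } ] steps)

Stepping backward is then just moving a cursor through the resulting
state list; nothing is recomputed, and only the newly pushed items need
fresh virtual-dom nodes.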
> There's an argument to be made that virtual-dom + persistent data
> structures + all these other incrementality abstractions +
> js_of_ocaml's slow compilation scheme are making my program more
> complicated and slow. Maybe if I were to just do DOM manipulations it
> would be faster and less code. Maybe, but I'm skeptical, and I also
> don't think I would enjoy working on it as much, and I would probably
> write more bugs.

There's nothing wrong with trying out alternative ideas to see if they
work. It sucks when you put a bunch of effort into a performance
optimization only to find out after profiling that it is a negative
win. (I've been there.) In your setting (OCaml + js_of_ocaml +
virtual-dom), it may very well be that incrementality and persistent
data structures are cheaper than the alternative, and I can certainly
believe that they are easier to get correct, especially if the
virtual-dom implementation is outsourced to a good library. All I can
say is, don't forget to actually measure.

> > If you are compiling to web assembly, I don't think the system
> > support really matters; you are ultimately constrained by the
> > browser in the end. Besides, I don't think target support is an
> > issue these days no matter what language you choose.
> >
> > Interestingly, if you are using web assembly (EDIT: or compiling to
> > javascript, if the *.bc.js files are any indication), you might not
> > even need to link them together directly, since they can both just
> > separately talk to javascript.
>
> Yeah, that's right. I'm using js_of_ocaml, which translates the
> bytecode output of the OCaml compiler into javascript. I'm sure you're
> right that it could be done. I would still contend that having
> bindings is inconvenient, because you have to think about how your
> language's runtime will represent different pieces of data. Like, you
> probably can't define a variant type in Rust and have it be translated
> to a variant type in OCaml (unless you write that marshalling code
> yourself).

Actually you can: I haven't used the 'ocaml' Rust crate, but it seems
to have some pretty great support for marshalling complex types,
including variant types:
https://docs.rs/ocaml-interop/0.8.8/ocaml_interop/macro.impl_to_ocaml_variant.html
The usual way would be to put a #[derive(FromValue, ToValue)] attribute
on the type definition, as shown in the first example here:
https://docs.rs/ocaml/latest/ocaml/index.html

If the communication were indirect via webassembly, it probably
wouldn't be as nice as this, since you would have to marshal your types
into JS and from there to Rust (so it would be using a WASM <-> Rust
bindgen instead). Still, you could probably encode variant types to
JSON and get essentially fully automatic FFI for complex types, as long
as both ends use the same encoding conventions (and I know the Rust
deserializer is configurable to other conventions even then). The worst
part is really not the communication itself (these bridges are pretty
well developed) but rather the overhead of all the conversions. It's
best if you can avoid sending large data across languages, and keeping
things in one house eliminates that cost.
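To illustrate the JSON route, here is a minimal OCaml-side sketch. It
assumes the ppx_deriving_yojson preprocessor (my choice for the
example, not something either project is known to use) and a
hypothetical step type; any JS or Rust peer that agrees on the encoding
convention can round-trip these values:

    (* [@@deriving yojson] generates proof_step_to_yojson and
       proof_step_of_yojson, so the variant can cross the language
       boundary as plain JSON text. *)
    type proof_step =
      | Push of string   (* label of a hypothesis or assertion *)
      | Save of int
      | Recall of int
    [@@deriving yojson]

    let () =
      let step = Push "ax-mp" in
      (* Serializes to something like ["Push","ax-mp"] *)
      let json = Yojson.Safe.to_string (proof_step_to_yojson step) in
      print_endline json;
      (* The receiving side decodes and dispatches on the constructor *)
      match proof_step_of_yojson (Yojson.Safe.from_string json) with
      | Ok (Push label) -> Printf.printf "got Push %s\n" label
      | Ok _ -> print_endline "got some other step"
      | Error msg -> prerr_endline msg

The fiddly part, as noted above, is making both ends agree on the
encoding of constructors, not the transport itself.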
> > Metamath *verifiers* are easy to write, but metamath *proof
> > assistants* not so much. I think the need for centralizing our
> > fairly small pool of manpower is because the proof-assistant-styled
> > features are not nearly powerful enough in any system except, to
> > some extent, mmj2 and MM-PA, both of which are severely hampered by
> > losing their principal architect / maintainer.
>
> Fair point, although all the "fracturing" that's been mentioned seems
> to be about the proliferation of verifiers, and not proof assistants.

I don't think proliferation of verifiers is a problem at all. I would
like to see metamath verifiers on Rosetta Code, written in 200
languages. A verifier is a weekend project, so it doesn't carry the
same risks, unless you want to start adding fancy features like math
parsing or markup rendering, or anything that is not just read-only on
the .mm file.

> My experiment is currently just in the direction of an interface for
> visualization and exploration, which I *think* is orthogonal to
> building a proof assistant?

Having seen what the program does now, and assuming that the debugger
thing is the end goal and not just one part of a complex interface
(which is probably the direction you are thinking ATM), I think it
would be interesting to try implementing it in a way that could fit on
top of the usual HTML theorem presentation, and in particular without
having the entirety of set.mm loaded. I would probably do that by
implementing the debugger in JS (possibly compiled from another
language), then putting the data required for that one theorem on the
page, with a precompiled data structure holding the formatting
information in a separate file.
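To sketch what that split might look like (all type and field names
here are hypothetical, purely to make the architecture concrete):

    (* Embedded in the theorem's HTML page, say in a
       <script type="application/json"> element: just enough
       information to replay this one proof, with no need to load all
       of set.mm. *)
    type theorem_payload = {
      label : string;                      (* e.g. "2p2e4" *)
      proof : string;                      (* compressed proof string *)
      statements : (string * string) list; (* label -> statement, for
                                              each assertion and
                                              hypothesis the proof
                                              references *)
    }

    (* Shipped once as a separate precompiled file and shared by every
       page: how each math token should be typeset. *)
    type formatting_table = (string * string) list  (* token -> HTML *)

The debugger script then only has to parse the small embedded payload,
so load time stays proportional to the one theorem on the page rather
than to the 42869 assertions counted at load time above.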
Mario