Re: [fonc] To fork or not to fork? (was: Hacking Maru)
Since we're on the topic of forking... About a year and a half ago, I basically took maru-2.1 apart and rebuilt it from scratch as a learning experiment. Also, in the spirit of fonc and the sciences of the artificial, I wanted to write experiments against the maru system. These of course boil down to expressing a new system / language IN maru (I've called this set of experiments / language / system "modernity").

Given the recent interest in trying to understand maru, I figured I'd share my heavily documented experience doing this: https://github.com/strangemonad/modernity

In many ways this was an adventure in software forensics and trying to put myself in Ian's mind / frame of thinking.

> (That's a personal gripe and might be blowing my dislike of github, et al., out of all proportion. :)

Or maybe a gripe against the way the Linux kernel devs organize their work?

> Here is how I would imagine my dream world. It would be a central repository with:
> - A toy Maru, optimised for clarity.

Hopefully, what you'll find in modernity/tools fits this. Let me know if it doesn't.

> - A tutorial for writing your own toy.

That is basically the intent with the modernity system in modernity/src. You'll see a few references to books. My hope is to have a rich enough GUI to load an active-essay that is the description of the system, similar to the Physically Based Rendering book (http://www.pbrt.org/).

> - The hand-written bootstrap compilers (for understanding, and the Trusting Trust problem).

See modernity/tools/maru-bootstrap (for the C version) and modernity/tools/maru for the maru-in-maru implementation.

I'd be curious to see what parts are more understandable and what parts are still confusing to folks. Since I spent a good 4-5 months steeped in this, much of it makes a lot of sense to me, but I'm sure there's still lots to do in the way of handholding for a newcomer.

shawn

_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
Re: [fonc] To fork or not to fork? (was: Hacking Maru)
Terrific work! I have just cloned your git repository, I will check it out. But first, I need to crack generalised Earley parsing. I love OMeta, but the hack it uses to get around PEGs' limitation on left recursion is ugly (meaning, not fully general). I basically want PEGs that run on Earley parsing. If we consider functions that return rules as infinite sets of rules, I believe this should work. The main difficulty is that the rules in the states are now effectively closures, and we still need to compare them.

Loup

On Wed, Oct 23, 2013 at 11:36:27AM -0400, shawnmorel wrote:
> Since we're on the topic of forking...
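Loup's point about left recursion is easy to demonstrate. Below is a minimal sketch of an Earley recogniser (plain Python, my own illustration, not code from maru or modernity; the toy grammar is made up) showing that a directly left-recursive rule, the case packrat PEG parsers must hack around, needs no special treatment:

```python
GRAMMAR = {                     # hypothetical toy grammar: E -> E "+" "n" | "n"
    "E": [["E", "+", "n"], ["n"]],
}

def earley_recognise(grammar, start, tokens):
    # An item is (head, body, dot, origin): a dotted rule plus the input
    # position where this attempt at `head` began.
    states = [set() for _ in range(len(tokens) + 1)]
    for body in grammar[start]:
        states[0].add((start, tuple(body), 0, 0))
    for i in range(len(tokens) + 1):
        changed = True
        while changed:                      # run predict/scan/complete to a fixpoint
            changed = False
            for head, body, dot, origin in list(states[i]):
                if dot < len(body):
                    sym = body[dot]
                    if sym in grammar:      # predict (memoised per position:
                        for rule in grammar[sym]:   # this is why left recursion terminates)
                            item = (sym, tuple(rule), 0, i)
                            if item not in states[i]:
                                states[i].add(item); changed = True
                    elif i < len(tokens) and tokens[i] == sym:   # scan
                        states[i + 1].add((head, body, dot + 1, origin))
                else:                       # complete: advance every item waiting on `head`
                    for h2, b2, d2, o2 in list(states[origin]):
                        if d2 < len(b2) and b2[d2] == head:
                            item = (h2, b2, d2 + 1, o2)
                            if item not in states[i]:
                                states[i].add(item); changed = True
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in states[len(tokens)])

print(earley_recognise(GRAMMAR, "E", ["n", "+", "n", "+", "n"]))  # → True
```

The per-position state sets are what make this terminate: predicting `E` at position `i` adds each `E` rule at most once, so the left-recursive alternative cannot loop the way it does in a naive recursive-descent or packrat parser.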
Re: [fonc] To fork or not to fork? (was: Hacking Maru)
Loup,

By 'fork' I meant to imply creating a publicly-visible repository that pops up in google searches and prevents you from finding the place where the progress is being made, unless you happen to spot the tiny icon hidden in the corner that takes you to the repo from which your current page was forked. (That's a personal gripe and might be blowing my dislike of github, et al., out of all proportion. :) Cloning a repo and experimenting/breaking/repairing in order to understand is not the same, nor is using your local Mercurial repository clone to work locally and then contribute back to a parent repo. If that's what Faré meant by fork then I'm all for it.

On Oct 21, 2013, at 08:36 , Loup Vaillant-David wrote:
> I'm now doing the same with Earley Parsing[3].

The Wikipedia article's presentation is not the clearest, and it is about the minimum needed, with some reading between the lines, to make a working recogniser. Earley's thesis and original papers are known to contain errors. I recommend you get hold of Parsing Techniques: A Practical Guide (Grune and Jacobs, Springer, 2008), which presents lots of parsing algorithms (including several chart parsers) clearly and concisely. There are a few papers building on Earley's work that contain clear presentations of the original algorithm, parse tree reconstruction and their compact representations; e.g., SPPF-Style Parsing from Earley Recognisers (Elizabeth Scott, Elsevier, 2008) and Practical Earley Parsing (Aycock and Horspool, The Computer Journal, 45(6), 2002). I agree entirely that after noticing that following the causality of predict and scan steps (backwards from the final states) gives all the derivations, the rest is relatively easy.

> - Read scientific papers. I gathered a surface understanding of some principles, but nothing solid yet.
> - Build a toy from scratch. I'll probably do that, since it worked so far.

These two are fun to do in parallel. They feed each other very well.
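To make the causality point concrete, here is a deliberately naive sketch (my own illustration, not the SPPF construction of Scott's paper): each Earley item simply carries the partial derivation built so far, so any completed start item in the last state set *is* a parse tree. The grammar and all names are made up for the example.

```python
GRAMMAR = {                     # hypothetical toy grammar: E -> E "+" "n" | "n"
    "E": [["E", "+", "n"], ["n"]],
}

def earley_parse(grammar, start, tokens):
    # Item: (head, body, dot, origin, kids) -- kids is the tuple of
    # children (terminals or subtrees) recognised so far.
    states = [set() for _ in range(len(tokens) + 1)]
    for body in grammar[start]:
        states[0].add((start, tuple(body), 0, 0, ()))
    for i in range(len(tokens) + 1):
        changed = True
        while changed:                      # run predict/scan/complete to a fixpoint
            changed = False
            for head, body, dot, origin, kids in list(states[i]):
                if dot < len(body):
                    sym = body[dot]
                    if sym in grammar:      # predict: start the sub-rule here
                        for rule in grammar[sym]:
                            item = (sym, tuple(rule), 0, i, ())
                            if item not in states[i]:
                                states[i].add(item); changed = True
                    elif i < len(tokens) and tokens[i] == sym:   # scan: record the token
                        states[i + 1].add((head, body, dot + 1, origin, kids + (sym,)))
                else:                       # complete: hand the finished subtree back
                    tree = (head,) + kids
                    for h2, b2, d2, o2, k2 in list(states[origin]):
                        if d2 < len(b2) and b2[d2] == head:
                            item = (h2, b2, d2 + 1, o2, k2 + (tree,))
                            if item not in states[i]:
                                states[i].add(item); changed = True
    return [(head,) + kids
            for head, body, dot, origin, kids in states[len(tokens)]
            if head == start and dot == len(body) and origin == 0]

print(earley_parse(GRAMMAR, "E", ["n", "+", "n"]))  # → [('E', ('E', 'n'), '+', 'n')]
```

Carrying whole trees in items duplicates work badly on ambiguous grammars, which is exactly why real implementations record back-pointers along the predict/scan/complete causality and share structure (the SPPFs of Scott's paper); but as a way of seeing that the causality *contains* every derivation, the naive version is enough.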
> Here is how I would imagine my dream world. It would be a central repository with:
> - A toy Maru, optimised for clarity.
> - A tutorial for writing your own toy.
> - A serious Maru, lifted up from the toy.
> - A tutorial for lifting your own toy up.
> - The hand-written bootstrap compilers (for understanding, and the Trusting Trust problem).
> Does this dream world sound possible? Is it even a good idea? I hope so, and I think so.

At some point you could consider literate programming. Jones Forth is one example of how this can be attempted even from the very first point (http://rwmj.wordpress.com/2010/08/07/jonesforth-git-repository). By the time you're on the third step, the above hierarchy could begin to support source code representations intended for ease of understanding.

Regards
Ian