On 01/31/2012 11:14 PM, Erik Christiansen wrote:
> On 30.01.12 07:54, Kenneth Lerman wrote:
>> On 01/30/2012 12:28 AM, Erik Christiansen wrote:
>>> What is being missed here is that the present parser does all that you
>>> fear above, just without the maintainability and documentation benefits
>>> conferred by a higher level implementation, using powerful tools.
>>>
>>> Erik
>>>
>> No. I don't think so.
>>
>> The current implementation does it; not the current parser. If we go
>> back to the compilation/execution analogy, some error conditions are
>> detected at run time; not at compile time.
> There is no compile time. Both the current and future parsers are
> interpreters only, AIUI.
>
>> I don't see how the parser can require that G1 has an Fn clause
>> defined on the same or some previously executed line.
> Nor can I. It doesn't. AIUI, gcode executes with whatever value of that
> modality is current. It does that now, and any new interpreter easily
> does the same. The grammar then _permits_ an Fn clause where we choose.
>
>> The parser knows nothing about execution order; only about lexical
>> order. Since the Fn might be hidden away in some subroutine, the parse
>> might not have seen it. I would think that knowing whether an Fn is
>> active is a difficult problem when looking from the outside, but a
>> simple problem from the inside of the run time environment. (Of
>> course, feel free to prove me wrong.)
> Any "need" to know the run-time state of a modality before run-time is
> illusory. That which needs to be known at run-time needs to be known at
> run time, not before. It is worth understanding that the run-time value
> of a modality is not part of the grammar. I'm not sure what you're
> basing these imaginary concerns on, but I can't relate them to reality,
> despite some effort :-)
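[Erik's distinction above -- the grammar merely permits an F word, while
the *active* feed rate is run-time interpreter state -- can be sketched
in a few lines. This is an illustrative toy, not LinuxCNC's actual
interpreter; all class and method names here are invented.]

```python
# Toy sketch of run-time modal state. The parser can accept "G1 X10"
# with or without an F word on the line; whether a feed rate is active
# is a question only the executing interpreter can answer, because the
# F word may have been set earlier, e.g. inside a subroutine.

class ModalState:
    def __init__(self):
        self.feed_rate = None  # no F word executed yet

    def execute(self, words):
        """words: dict mapping gcode letters to values, for one line."""
        if "F" in words:
            self.feed_rate = words["F"]  # update the modality
        if words.get("G") == 1:
            # Run-time semantic check, impossible to do lexically:
            if self.feed_rate is None:
                raise ValueError("G1 with no active feed rate (F word)")
            return ("feed_move", words.get("X"), self.feed_rate)
        return None

state = ModalState()
state.execute({"F": 200.0})            # F set on an earlier line
move = state.execute({"G": 1, "X": 10.0})
```

The check lives in the execution path, not the grammar, which is Ken's
point: from "inside the run time environment" it is a one-line test.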
In the past you've implied this, and roughly three or four posts later,
you bemoan the fact that Haberler's trial grammar "is devoid of any
explicit gcode grammar." My concerns were based on previous statements
that you thought this should be done.

While you could put some grammar rules in place, it is my contention
that no matter how good the grammar, you will need some semantic
analysis. Once we have that requirement, I believe that a framework that
tests for appropriate semantics (including, for example, that there MUST
have been a previous -- in the execution sense, not the lexical sense --
Fword for G1) will be necessary. If that exists, and I believe that it
must, there is little extra benefit to having the formal grammar handle
some of the cases.

In short, I'm suggesting that:

1 -- An automated lexer would be useful.

2 -- An automated parser might be useful (if it can give reasonable
error messages, etc., AND be reasonably modified). If minor changes
require digging through shift/reduce conflicts and trying to resolve
them, that might be reason to avoid such technology.

3 -- A semantic analyzer (whether rule based or coded) will be
necessary.

Regards,

Ken

>> Don't get me wrong. I agree that we need a better definition of the
>> grammar and a more structured implementation. In general, though, I
>> prefer recursive descent parsers such as the present parser that is used
>> for each line. I consider the ability to generate excellent diagnostic
>> and error messages to be worth the effort of hand coding.
> We usually prefer what we're good at. I'm as guilty of that as the next
> bloke. The actual merits of the alternatives have been kicked about
> upthread.
>
> I wouldn't propose replacing the current parser in the foreseeable
> future.
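[As an illustration of point 1 in Ken's list, a minimal gcode word
lexer can be little more than a single regular expression. This is a
toy sketch of the idea only: it handles simple letter/number words and
ignores comments, expressions like [#1+2], and o-words entirely.]

```python
# Toy gcode word lexer: split a line into (letter, value) tokens.
import re

WORD = re.compile(r"\s*([A-Za-z])\s*([+-]?\d+\.?\d*)")

def lex_line(line):
    tokens, pos = [], 0
    for m in WORD.finditer(line):
        if m.start() != pos:
            # Gap between matches means an unrecognized character.
            raise SyntaxError(f"bad character at column {pos + 1}")
        tokens.append((m.group(1).upper(), float(m.group(2))))
        pos = m.end()
    if pos != len(line.rstrip()):
        raise SyntaxError(f"bad character at column {pos + 1}")
    return tokens
```

A generated lexer (flex, or Python's own tokenizing tools) buys little
at this scale, which is part of why the lexer question is the easy one
of the three.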
> Since there is interest in a more readable input syntax,
> expressed several times per year by a subset of LinuxCNC users, I have
> upthread already discussed implementing a filter which supplements the
> existing parser, but does not replace it. That way, there is scope for
> pleasing two groups.
>
>> I recognize that my control structure (o-word) implementation leaves a
>> lot to be desired -- to say the least. About its only saving grace is
>> that it enables us to do a lot of things we couldn't do before. It must
>> be redone in a way that is obviously correct and maintainable.
> As they say, "The perfect is the enemy of the good." An available
> practical implementation is superior to any imagined "perfection" which
> does not yet exist. If the limitations of the current parser have forced
> clutter upon the user, just to get the parser to work, not to improve
> readability, then no-one could do a better job with the current tools.
>
> And I sincerely want to express my thanks for the working gem that is
> LinuxCNC. It is so infuriatingly easy for someone to come along, after
> all the hard work of making something good out of nothing, and say "Ya
> know, we could improve this bit here." But it isn't done to be a PITA.
>
> There is a large user base which is happy with the status quo. That is
> worth infinitely more than any amount of talk about making the syntax
> prettier. The current implementation satisfactorily makes swarf around
> the globe.
>
> I honestly don't think it "must be redone". If we make a filter which
> pleases the "new look" enthusiasts, it'll just generate your o-word
> code, for input into the current parser. Whether anyone then ever goes
> so far as to merge the two, partly depends on how valuable a fully
> documented interpreter grammar is.
>
>> I haven't looked closely at modern automated tools for doing this in a
>> few decades.
>> If they let us generate effective diagnostic information in
>> a straightforward way, we should be using them. On the other hand, the
>> grammar should be simple enough that a hand generated recursive descent
>> parser should do fine.
> In practice, the grammar has to be the same as now for the "old look",
> just with optional "#< .>" etc., and optional "Rapid" instead of "G0" for
> the new. There cannot be substantial differences, or it's not just
> prettified gcode.
>
> And in your last sentence, s/hand generated/auto generated/.
> There is no need to spend time or effort on actually writing code for
> the parser. Only code snippets to implement the actions of each leaf,
> i.e. to spit out what we want after interpretation.
>
> The O'Reilly "Lex & Yacc" book is an easy introduction to using tools to
> generate lexers and parsers with minimal effort. Admittedly, it takes
> time and effort to tame them, but that's the case when doing serious
> work with any tool.
>
> Maybe I should strip my embryonic filter of its more "way-out"
> translation, and start again with just documenting the existing
> grammar, with provision for optional use of decluttered syntax.
> The filter would pass legal gcode unaltered, and put back any "#< .>"
> etc. missing from the input. It would also convert "Rapid" to "G0", so
> nothing but good old gcode would be passed on to LinuxCNC.
>
> It would take a while, not least because the current legal syntax is
> obscure. A practical way to proceed might be to initially pass anything
> not explicitly handled, prettify enough for some users to find it
> useful, and add error messages later, if/when there was some prospect of
> the formal parser entering prime time.
>
> I'm doubtful that we have enough collegiate enthusiasm to merit getting
> carried away, but we're not having much summer anyway, so I could begin
> to swing what I have around to approximately meeting the desires
> expressed to date.
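[The filter Erik describes -- pass legal gcode through untouched,
rewrite "Rapid" to "G0" before the real interpreter sees it -- could
look roughly like the toy below. The alias table and function names are
invented for illustration; no such filter is implied to exist in
LinuxCNC.]

```python
# Toy "prettified gcode" pre-filter: classic gcode passes through
# unchanged; a few hypothetical "new look" spellings are rewritten to
# the G-words the existing interpreter already understands.
import re

ALIASES = {          # hypothetical new-look word -> classic gcode
    "rapid": "G0",
    "feed":  "G1",
    "dwell": "G4",
}

def translate(line):
    pattern = r"\b(" + "|".join(ALIASES) + r")\b"
    return re.sub(pattern,
                  lambda m: ALIASES[m.group(0).lower()],
                  line,
                  flags=re.IGNORECASE)

# A real filter would loop over sys.stdin and write to sys.stdout, so
# it can sit in front of the existing parser as a plain pipe stage.
```

Because unrecognized text passes through verbatim, this matches the
"initially pass anything not explicitly handled" strategy: correctness
for old-style input is free, and translations can be added one at a
time.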
> (I'm as curious as anyone, to see how it pans out,
> once academic prognostications shuffle into the next room and have to
> become a working implementation. I can only go on past experience.)
>
> Erik

_______________________________________________
Emc-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/emc-users
