Some points I hope will be useful: First, the "equivalence" in the Stack Exchange article you cite is "equivalence of languages". In most cases you aren't simply recognizing a language (a set of strings), but parsing it according to a grammar. Two language-equivalent CFGs need not assign the same structure to the strings they recognize, and therefore for most practical purposes are not equivalent at all.
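To make the distinction concrete, here is a minimal sketch, assuming Marpa::R2 is installed (the grammars and the input string are invented for illustration). Both SLIF grammars recognize exactly the same language -- n, n-n, n-n-n, and so on -- yet they parse "n-n-n" into differently shaped trees, so any semantics attached to the rules would compute different results:

use strict;
use warnings;
use Data::Dumper;
use Marpa::R2;

# Left-recursive grammar: groups "n-n-n" as ((n-n)-n).
my $left_dsl = <<'DSL';
:default ::= action => [name, values]
lexeme default = latm => 1
E ::= E '-' N
E ::= N
N ::= 'n'
DSL

# Right-recursive grammar: same language, groups "n-n-n" as (n-(n-n)).
my $right_dsl = <<'DSL';
:default ::= action => [name, values]
lexeme default = latm => 1
E ::= N '-' E
E ::= N
N ::= 'n'
DSL

my $input = 'n-n-n';
for my $dsl ( $left_dsl, $right_dsl ) {
    my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );
    # parse() dies on failure; both grammars accept this input,
    # but the trees they return have different shapes.
    print Dumper( ${ $grammar->parse( \$input ) } );
}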
Second, there is theoretical correctness and correctness of implementation. Theoretical correctness depends on what you're targeting. When I do JSON parsers, for example, there's a BNF, and it's the BNF that is the standard -- the grammar is correct if I type it in OK. There's also a test suite, but that is a check that I got the *implementation* right.

Third, I find that most of the problem is getting the semantics right. I don't think either of the articles addresses that -- they treat correctness as a matter of recognizing a set of strings. You might want to glance at my recently posted timeline <http://jeffreykegler.github.io/Ocean-of-Awareness-blog/individual/2016/08/timeline2.html>, which touches on these issues.

Hope this helps, jeffrey

On Thu, Aug 25, 2016 at 11:02 PM, <[email protected]> wrote:

> While I realize the question is really in reference to particulars with
> Marpa, I'd wondered how to "test grammars" as well. I'd initially thought
> about proving equivalence, but here's a reasonable explanation of why
> that's not possible:
>
> http://math.stackexchange.com/questions/231187/an-efficient-way-to-determine-if-two-context-free-grammars-are-equivalent
>
> However, the discussion points to "Comparison of Context-free Grammars
> Based on Parsing Generated Test Data" <http://slps.github.io/testmatch/>:
>
>> In the present paper, we leverage systematic test data generation, by
>> which we mean that test data sets are generated by effective enumeration
>> methods for the coverage criteria of interest. These methods do not
>> require any configuration. Also, these methods imply minimality of the
>> test data sets in both an intuitive and a formal sense.
>
> Has anyone any experience with this, or similar, efforts?
>
> On Tuesday, August 9, 2016 at 7:21:55 PM UTC-7, [email protected] wrote:
>>
>> I know in advance that my target grammar is complex, so I would like to
>> start at the lower, simpler levels and test my lexeme and grammar rules
>> as I write them.
>>
>> * Can I change the starting rule of a (SLIF) grammar at runtime? I would
>> like to test very basic rules -- the kind that I'll only see in slices
>> far into a file -- (bottom-up) before defining the grammar from the top
>> down. If I can specify a rule and a string (to G->parse or R->read), I
>> can write easy regression tests that each rule recognizes valid strings
>> and rejects invalid strings. I could modify the grammar file(s) for each
>> test, but that seems like a bad idea. I'm all ears if there's a better
>> way to do this.
>>
>> * At a higher level, can you point me to any great examples of
>> regression tests for a Marpa grammar?
>>
>> * Even more generally, how do people develop and test a Marpa grammar?
>>
>> Thanks!
>>
>> - Ryan
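On Ryan's question about changing the starting rule of a SLIF grammar at runtime: as far as I know there is no runtime switch, because a Scanless::G is precomputed when it is constructed. But the SLIF source is just a Perl string, so a test harness can prepend a different ':start' statement for each test case. A minimal sketch along those lines, assuming Marpa::R2 and Test::More are installed (the toy key=value grammar and the test strings are invented for illustration):

use strict;
use warnings;
use Test::More;
use Marpa::R2;

# Toy base grammar. 'inaccessible is ok by default' suppresses warnings
# about rules left unreachable when the start symbol is moved down into
# the grammar.
my $base_dsl = <<'DSL';
inaccessible is ok by default
lexeme default = latm => 1
top   ::= pairs
pairs ::= pair+
pair  ::= key '=' value
key   ~ [\w]+
value ~ [\d]+
DSL

# Build a grammar that starts at $symbol by prepending a ':start' statement.
sub grammar_starting_at {
    my ($symbol) = @_;
    my $dsl = ":start ::= $symbol\n" . $base_dsl;
    return Marpa::R2::Scanless::G->new( { source => \$dsl } );
}

# parse() dies on failure, so wrapping it in eval gives accept/reject.
sub accepts {
    my ( $symbol, $input ) = @_;
    return eval { grammar_starting_at($symbol)->parse( \$input ); 1 } ? 1 : 0;
}

ok accepts( 'pair',  'a=1' ),    q{'pair' accepts a key=value pair};
ok !accepts( 'pair', 'a=b' ),    q{'pair' rejects a non-numeric value};
ok accepts( 'pairs', 'a=1b=2' ), q{'pairs' accepts a sequence of pairs};
ok !accepts( 'top',  '=1' ),     q{'top' rejects a pair with no key};
done_testing();

The same trick addresses the "modify the grammar file(s) for each test" worry: the base grammar lives in one place, and only the one-line ':start' header varies per test.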
