<snip/>

> I understand the Dragon books point about LR grammars vs LL grammars. I made
> a reference to it in an earlier message.
Sorry. I had missed that reference.

> > To put it another way, an LR parser works by following the *reverse* of
> > a right-recursive derivation. This allows each input token to be a
> > handle that can be used as part of the shift/reduce parsing process.
> >
> > An LL parser, on the other hand, effectively needs to read in the entire
> > file before it can start applying any reductions to avoid ambiguities,
> > leading to increased memory requirements and slower parsing.
>
> Your conclusion about LL parsing efficiency is inaccurate IMHO. LL parsing
> isn't inefficient compared to LR parsing (both can be implemented as
> table lookups or procedurally) unless the language in question requires
> copious amounts of lookahead/backtracking. Most languages don't. C++
> doesn't. C# doesn't.

My mistake. That was a gross over-generalization of something we went over in
class. In the *worst case*, an LL parser would need to read in the whole file,
but that seldom occurs in programming languages (likely because no compiler
developer in their right mind would design such a grammar).

<snip/>

> Many (most?) of the well-known commercial compilers use hand-coded
> recursive descent (RD) parsers. The ease of development, flexibility
> (especially in error handling), and performance of RD parsers based on
> LL(k) grammars is an oft-cited reason. All the same benefits that ANTLR
> provides.

I haven't heard much about what commercial compilers use. I do know that GCC
uses bison, though it has a hand-coded recursive descent parser for C++. C is
apparently still using the previous, bison-generated parser.

Thank you for your commentary. I appreciate the feedback.

- Jon

_______________________________________________
Mono-list maillist  -  [EMAIL PROTECTED]
http://lists.ximian.com/mailman/listinfo/mono-list
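P.S. To make the LL(1) point concrete, here is a minimal sketch of a hand-coded
recursive-descent parser for a toy arithmetic grammar (my own example, not
taken from GCC, ANTLR, or any compiler mentioned above). A single token of
lookahead decides every production, so it never needs to read ahead in the
input, let alone buffer the whole file:

```python
# Toy LL(1) recursive-descent parser/evaluator, one token of lookahead.
# Grammar (illustrative only):
#   expr   -> term ('+' term)*
#   term   -> factor ('*' factor)*
#   factor -> NUMBER | '(' expr ')'
import re

def tokenize(src):
    # Split input into numbers and the operators/parens we care about.
    return re.findall(r"\d+|[+*()]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        # The single token of lookahead that makes this LL(1).
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def expr(self):
        # expr -> term ('+' term)*
        value = self.term()
        while self.peek() == '+':
            self.eat('+')
            value += self.term()
        return value

    def term(self):
        # term -> factor ('*' factor)*
        value = self.factor()
        while self.peek() == '*':
            self.eat('*')
            value *= self.factor()
        return value

    def factor(self):
        # factor -> NUMBER | '(' expr ')'
        if self.peek() == '(':
            self.eat('(')
            value = self.expr()
            self.eat(')')
            return value
        return int(self.eat())

print(Parser(tokenize("2+3*(4+1)")).expr())  # multiplication binds tighter
```

Note how precedence falls out of the grammar structure (term is nested inside
expr), which is part of why hand-written RD parsers are considered easy to
develop and extend.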
