Orlando Hill wrote:
> Right, I forgot to mention what happens to individual characters. I'm
> using a scanner-less design so each character that hasn't been voided
> is added to the AST as a leaf node.
> 
> To me, staying away from larger tokens is very important. Having to
> deal with the problems caused by interactions between lexers and
> parsers is something I want to avoid.

Yes, PEG is a scannerless parsing method. But I don't see how that leads 
to adding every character to the AST. That would seem to force whatever 
program consumes the AST to effectively re-parse the original input.

> Were there many grammars that used an 'n' greater than 2?

You're so right: 2 is the magic number.

> I'll gladly be wrong on this but, when only memoizing non-terminals,
> doesn't the following grammar give you O(n^2)?
> 
> e.g.
> 
> This grammar:
> 
> S <- +A
> 
> A <- *'a' 'b' | 'a'
> 
> Parsing this input:
> 
> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

Yes, it does. To get linear complexity, you need to memoize terminals as 
well.
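
To make that concrete, here is a rough Python sketch (just an 
illustration, not your implementation) of a recursive-descent recognizer 
for the grammar above, with the repetition written as an inline loop. It 
memoizes only the non-terminal A, keyed on the starting position, and 
counts single-character match attempts so the growth is visible.

def count_matches(text):
    memo = {}        # ('A', position) -> end position, or None on failure
    attempts = 0     # number of single-character match attempts

    def match_char(pos, ch):
        # Terminal match; every attempt is counted, cached or not.
        nonlocal attempts
        attempts += 1
        if pos < len(text) and text[pos] == ch:
            return pos + 1
        return None

    def parse_A(pos):
        # A <- *'a' 'b' | 'a'   (memoized per starting position)
        key = ('A', pos)
        if key in memo:
            return memo[key]
        # First alternative: *'a' 'b'
        p = pos
        while True:
            q = match_char(p, 'a')
            if q is None:
                break
            p = q
        q = match_char(p, 'b')
        if q is None:
            # Second alternative: 'a'
            q = match_char(pos, 'a')
        memo[key] = q
        return q

    def parse_S(pos):
        # S <- +A
        end = parse_A(pos)
        if end is None:
            return None
        while True:
            nxt = parse_A(end)
            if nxt is None or nxt == end:   # guard against zero-length matches
                return end
            end = nxt

    parse_S(0)
    return attempts

for n in (10, 20, 40, 80):
    print(n, count_matches('a' * n))

On an input of n 'a's, each call to A first re-scans the rest of the 
input looking for the 'b' before falling back to the single 'a' 
alternative, so the attempt count grows quadratically (roughly n^2/2): 
doubling the input length roughly quadruples the count, which is exactly 
the behaviour you describe. Memoizing A doesn't help here, since A is 
only ever tried once per position.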

I'd be interested to know, though, how much your strategy of memoizing 
only non-terminals really costs in practice.

Bob
> 
> Orlando.