Very good.

Isn't it also a more elegant way to solve the "Prevent naïve longest 
token matching in Marpa::R2::Scanless" question 
<http://stackoverflow.com/questions/17773976/prevent-naive-longest-token-matching-in-marpar2scanless>?
As an extreme case, if all lexemes are given the forgiving adverb, isn't 
it a way to bypass the "maxmatch" userland lexing hook?
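
For concreteness, here is a minimal sketch of how I read the new syntax 
from the gists. The "forgiving => 1" spelling of the adverb on the 
:lexeme pseudo-rule is my assumption from those examples, since the 
adverb is not documented yet:

#!/usr/bin/env perl
use strict;
use warnings;
use Marpa::R2;

# ASSUMPTION: the adverb is spelled "forgiving => 1" on a :lexeme
# pseudo-rule, as I read it from the two gists; it is not documented.
my $source = <<'END_OF_SOURCE';
:default ::= action => ::array
pair     ::= key (':') value
key      ~ [a-z]+
# <value> can gobble the colon, so at position 0 it is the longest
# match; under strict LTM it would be chosen, rejected by G1 (which
# wants <key> first), and the parse would fail.  Declared forgiving,
# its rejection is forgiven and the shorter <key> match is used.
:lexeme  ~ value forgiving => 1
value    ~ [a-z:]+
END_OF_SOURCE

my $grammar = Marpa::R2::Scanless::G->new( { source => \$source } );
my $recce   = Marpa::R2::Scanless::R->new( { grammar => $grammar } );
my $input   = 'foo:bar';
$recce->read( \$input );
my $value_ref = $recce->value();
print defined $value_ref ? "parse ok\n" : "no parse\n";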

Regards, Jean-Damien.

On Wednesday, January 8, 2014, at 08:25:46 UTC+1, Jeffrey Kegler wrote:
>
>  I've uploaded a new developer's version, 2.079_007 
> <https://metacpan.org/release/JKEGL/Marpa-R2-2.079_007>, this time with 
> some clearly visible added value: forgiving tokens.  Forgiving tokens 
> are tokens that are declared to be exceptions to the usually rigid 
> longest-tokens-match (LTM) discipline.  The usual (and traditional) LTM 
> discipline insists that the token at any point be the longest match.  
> If the LTM token is not acceptable to the grammar, then the parse fails.
>
> A forgiving token declares itself to be an exception to the LTM 
> discipline.  If a forgiving token is the longest match, but it is rejected 
> by G1, the rejection will be "forgiven" and the SLIF will look for shorter 
> tokens that it can accept.
>
> I've uploaded as Github gists two examples that previously were 
> difficult lexing situations, redone using forgiving tokens: one is 
> Ruslan Zakirov's <https://gist.github.com/jeffreykegler/8312534> and 
> the other is Peter Stuifzand's 
> <https://gist.github.com/jeffreykegler/8312524>.  I have not documented 
> the forgiving adverb yet, but the syntax is easy enough to figure out 
> from the two examples.
>  
