Is this GitHub gist
<https://gist.github.com/pstuifzand/1745d559300b1c53d459> it? -- jeffrey
On 01/07/2014 02:50 PM, Peter Stuifzand wrote:
Maybe I can find the link tomorrow. The example was posted to the
list on Dec 2. The title is something like "G0 is L0 now".
Peter
On Jan 7, 2014 11:42 PM, Jeffrey Kegler
<[email protected]> wrote:
@Peter: If you could locate it and could offer to work it up once
I provide the syntax, that and Ruslan Z.'s example will be enough
to justify making "forgiving tokens" my top priority. -- jeffrey
On 01/07/2014 02:35 PM, Peter Stuifzand wrote:
Some time ago I had an example where LTM did an unexpected thing. It
should be on the list somewhere. It could be a starting point for
a test case, perhaps.
Peter
On Jan 7, 2014 11:25 PM, Jeffrey Kegler
<[email protected]> wrote:
I call the "backtrack" mode "forgiving" mode. The term
"backtrack" is overloaded in the parsing context.
Marpa::R2 has come very close to allowing a "forgiving" flag
for tokens. In a previous version it was "implemented", but
not tested or documented. I put "implemented" in quotes,
because documentation and testing often reveal that an
already-implemented feature is not quite as fully implemented
as I'd imagined.
If someone will commit to writing a test case for forgiving
mode, I will put other stuff aside and make implementing it
my next priority. Wrt the test case: make it a good one, but
don't worry about the packaging -- I'll redo all that anyway
when I put it in the test suite. Also, you may not want to
start on it until I settle on the syntax -- just let me know
that you're interested in doing it.
-- jeffrey
On 01/07/2014 01:47 PM, Ruslan Zakirov wrote:
On Wed, Jan 8, 2014 at 1:16 AM, Ron Savage
<[email protected] <mailto:[email protected]>> wrote:
Sometimes I catch myself assuming that LTM failures
backtrack and try a shorter match, the same way I assume
it for regexps.
And sometimes I don't catch myself assuming that :-((.
Well, regexps have a no-backtrack mode, so the tokenizer can have
an optional backtrack mode.
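
Here is a minimal sketch of that surprise, assuming Marpa::R2's SLIF
with plain longest-token-match lexing (the behavior under discussion;
later releases may lex differently). The grammar and inputs are made
up for illustration:

use strict;
use warnings;
use Marpa::R2;

# Toy grammar: the keyword 'say' is a prefix of the <word> lexeme.
my $dsl = <<'END_OF_DSL';
statement  ::= 'say' word   action => ::array
word        ~ [a-z]+
:discard    ~ whitespace
whitespace  ~ [\s]+
END_OF_DSL

my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );

# 'say hello' parses: 'say' and "hello" are read as separate tokens.
my $good = 'say hello';
print "parsed: $good\n" if $grammar->parse( \$good );

# Under plain LTM, 'sayhello' does not parse: the longest match at
# position 0 is the word "sayhello", the grammar cannot accept it,
# and the lexer never retries with the shorter, acceptable 'say'.
my $bad = 'sayhello';
my $ok  = eval { $grammar->parse( \$bad ); 1 };
print "LTM rejected: $bad\n" if not $ok;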
I played a little with "longest expected token match" in Repa, and
it helped me get rid of a grammar workaround I had, so it has proven
itself useful. Combining this with a per-token flag may be even more
powerful.
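
For what it's worth, here is a rough sketch of that selection logic
with a per-token flag, in plain Perl. It is not Repa's or Marpa's
actual interface; the lexeme table, the "forgiving" flag name, and
next_token() are all made up to show the idea:

use strict;
use warnings;

# Hypothetical lexeme table: a regex anchored at the start of the
# remaining input, plus a per-token "forgiving" flag.
my @lexemes = (
    { name => 'say',  re => qr/^say/,    forgiving => 1 },
    { name => 'word', re => qr/^[a-z]+/, forgiving => 1 },
);

# Pick the token to read at $pos, given the set of lexeme names the
# parser currently expects.  A forgiving lexeme competes only when it
# is expected; a non-forgiving one competes on length alone, which is
# plain longest-token matching.
sub next_token {
    my ( $input, $pos, $expected ) = @_;
    my ( $best, $best_len );
    for my $lex (@lexemes) {
        my $rest = substr $input, $pos;
        next unless $rest =~ $lex->{re};
        my $len = $+[0];    # length of the anchored match
        next if $lex->{forgiving} and not $expected->{ $lex->{name} };
        ( $best, $best_len ) = ( $lex, $len )
            if not defined $best_len or $len > $best_len;
    }
    return ( $best, $best_len );
}

# At the start of "sayhello" only 'say' is expected.  Because <word>
# is forgiving, its longer match is passed over and 'say' wins; with
# forgiving => 0 on <word>, the longer match would win on length and
# the parse would then fail, as in the LTM example above.
my ( $token, $length ) = next_token( 'sayhello', 0, { say => 1 } );
print "read: $token->{name} (length $length)\n" if $token;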
--
Best regards, Ruslan.