I was referring to the constant factor; the paper claims O(n|G|), via "our
grammars blew up exponentially, so we had to compact them continuously; now
they seem not to, though a pathological case can probably be constructed".
Which isn't the same as "the complexity class has been proven", sure.
But my main goal here is to understand marpa, and in this case specifically
where the intuition that
start = (alt (cat 'a' 'b') (cat 'a' <start> 'b'))
= D_a => (alt 'b' (cat <start> 'b'))
= D_a => (alt null (cat (alt 'b' (cat <start> 'b')) 'b'))
       => (cat (alt 'b' (cat <start> 'b')) 'b')
= D_b => (cat (alt eps (cat null 'b')) 'b') => 'b'
(writing null for the empty language and eps for the empty string)
is equivalent to
0.  {[start -> •a b]:0  [start -> •a start b]:0}
1.a {[start -> a•b]:0  [start -> a•start b]:0  [start -> •a b]:1  [start -> •a start b]:1}
2.a {[start -> a•b]:1  [start -> a•start b]:1  [start -> •a b]:2  [start -> •a start b]:2}
3.b {[start -> a b•]:1  [start -> a start•b]:0}
modulo tree/by-reference structure, is off base.
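(For what it's worth, the Earley sets above can be reproduced mechanically.
Here's a toy recognizer sketch in Python -- my own throwaway code, not
Marpa's implementation; the grammar encoding and item printing are my
assumptions -- run on start -> a b | a start b over the input "aab":

```python
# Toy Earley recognizer: items are (lhs, rhs, dot, origin) tuples,
# sets[i] is the Earley set after reading i tokens.
GRAMMAR = {"start": [("a", "b"), ("a", "start", "b")]}

def earley_sets(grammar, start, tokens):
    sets = [set() for _ in range(len(tokens) + 1)]
    for rhs in grammar[start]:
        sets[0].add((start, rhs, 0, 0))
    for i in range(len(tokens) + 1):
        changed = True
        while changed:  # iterate predict/scan/complete to a fixpoint
            changed = False
            for lhs, rhs, dot, origin in list(sets[i]):
                if dot < len(rhs):
                    sym = rhs[dot]
                    if sym in grammar:  # predict
                        for prod in grammar[sym]:
                            if (sym, prod, 0, i) not in sets[i]:
                                sets[i].add((sym, prod, 0, i))
                                changed = True
                    elif i < len(tokens) and tokens[i] == sym:  # scan
                        sets[i + 1].add((lhs, rhs, dot + 1, origin))
                else:  # complete: advance parents waiting on lhs at origin
                    for plhs, prhs, pdot, porig in list(sets[origin]):
                        if pdot < len(prhs) and prhs[pdot] == lhs:
                            adv = (plhs, prhs, pdot + 1, porig)
                            if adv not in sets[i]:
                                sets[i].add(adv)
                                changed = True
    return sets

def show(item):
    lhs, rhs, dot, origin = item
    return "[%s -> %s.%s]:%d" % (lhs, " ".join(rhs[:dot]), " ".join(rhs[dot:]), origin)

for i, s in enumerate(earley_sets(GRAMMAR, "start", list("aab"))):
    print(i, sorted(show(it) for it in s))
```

Set 3 comes out as {[start -> a b.]:1, [start -> a start.b]:0} -- the only
thing still expected is a single 'b', which is the same residual language
the derivative chain ends at.)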
On Tuesday, January 12, 2016, Jeffrey Kegler <[email protected]> wrote:
> I didn't study Might-Darais closely -- Russ Cox did, and I'm relying on
> his observations.
>
> You can (and I do) think of packratting and backtracking as the same
> approach; it's just a decision whether to take the efficiency hit in space
> or in time.
>
> My idea is that, if you want a strong parser, you should use a parse
> engine designed (like Earley's) to be strong from the ground up. Almost
> everybody else starts with a more or less weak parser, and hacks on the
> power, knowing that's worst-case inefficient, but hoping it will work out
> in enough cases of interest. Since Earley's is O(n) for every
> deterministic CFG, I don't see why folks insist on going for the weaker
> algorithms. But my approach is definitely the minority one so far.
>
> Re "2 seconds to parse a 31-line Python file" -- I really don't pay much
> attention to specific speed claims in theoretical papers -- as opposed to
> big-O worst-case results. With Marpa, you don't have to hope your grammar
> is O(n) -- you can know for sure. Resorting to claims for specific
> grammars on specific processors to my mind shows the approach has run out
> of steam.
>
> Note that I *do* do benchmarks to compare implementations in my blog, but
> I don't report those in theoretical contexts. Readers of theory papers
> should care how fast your algorithm is, not how fast your current processor
> is.
>
> On Tue, Jan 12, 2016 at 9:53 AM, Anton Dyudin <[email protected]> wrote:
>
>> Would I be correct to infer that memoization is a form of backtracking,
>> and packratting is what the "fixpoint" computation is doing? And is the
>> issue with those being expensive on real-world size grammars/input? 2
>> seconds to parse a 31-line Python file seems a bit steep, though not
>> exponential.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "marpa parser" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
>>
>