Markus Schiltknecht wrote:
> Tom Lane wrote:
>> You mean four different object types.  I'm not totally clear on bison's
>> scaling behavior relative to the number of productions.
>
> You really want to trade parser performance (which is *very*
> implementation specific) for ease of use?

Bison generates a LALR [1] parser, whose tables depend quite a bit on the number of productions. But AFAIK that dependency mostly affects memory consumption for the internal symbol sets, not so much runtime complexity. I didn't find hard facts about the runtime complexity of LALR, though (pointers are very welcome).

Processing one token in any LR(1) parser in the worst case needs to
 a) Do a lookup in the action table with the current (state, token) pair,
 b) Do a lookup in the goto table with a (state, rule) pair, and
 c) Push one state onto the stack, and pop n states, with
    n being the number of symbols (terminals or nonterminals) on the
    right-hand side of a rule.

a) and b) should be O(1). Processing one token pushes at most one state
onto the stack, so overall no more than N states can be popped off again,
making the whole algorithm O(N), with N being the number of tokens in the
input stream.

AFAIK the only difference between SLR, LALR and LR(1) lies in the
generation of the goto and action tables.

> Are there any ongoing efforts to rewrite the parser (i.e. using another
> algorithm, like a recursive descent parser)?

Why would you want to do that?

greetings, Florian Pflug

---------------------------(end of broadcast)---------------------------
TIP 3: Have you checked our extensive FAQ?
