On 29 Apr 2007, at 21:48, Ramón García wrote:
> Those are two applications, but they are caused by the same
> problem: the merging of states in such a way that the valid tokens
> are pushed to a state that may emerge only later, after some
> reductions have been made.
>
> I think that the issue of merging states for error handling is
> essentially solved in the paper on Burke-Fisher error correction.
> Mail me if you do not have the paper.
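
To make that concrete, here is a minimal grammar of my own (the token
names are invented purely for illustration, not taken from any paper;
the states can be inspected with "bison -v"):

  /* After reading "A E", the LALR(1) parser sits in a state whose only
     item is  expr: E .  with the merged lookahead set {X, Y}.  Both X
     and Y therefore look acceptable in that state, but Y merely
     triggers the reduction; the parser then reaches the state with the
     item  start: A expr . X , where Y is an error.  The true valid
     set, {X}, appears only in the state that emerges after the
     reduction, as you describe.  */
  %token A B E X Y
  %%
  start: A expr X
       | B expr Y
       ;
  expr:  E
       ;
  %%

So a completion list read off the current LALR state would wrongly
offer Y at that point.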
John Levine, the moderator of the Usenet newsgroup comp.compilers,
pointed out, though, that with today's highly interactive compilers,
the most important thing is to get the errors accurately pinned down,
rather than to do some complicated error recovery. This is what pure
LR(k) does, it seems. In addition, "essentially solving the problem"
may not be enough for getting a correct set of lookahead token
completions.
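
For what it is worth, canonical LR(1) keeps those two contexts in
separate states, and one can mimic that effect in the LALR(1) tables
of the little grammar above by splitting the nonterminal per context
(again only my own sketch):

  /* With one nonterminal per context, the LR(0) cores of the two
     states already differ, so LALR(1) cannot merge them: after "A E"
     the state is  expr_a: E .  with lookahead {X} only, and the
     valid-token set read off that state is exact, just as in
     canonical LR(1).  */
  %token A B E X Y
  %%
  start:  A expr_a X
       |  B expr_b Y
       ;
  expr_a: E ;
  expr_b: E ;
  %%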
I haven't read the paper, so I do not know what it does. It may be of
interest to add to Bison. But it may not actually solve the problem of
getting a correct lookahead token set. Just some thoughts. :-)
Hans Aberg