Daan Leijen <[EMAIL PROTECTED]> wrote,

> >Has anyone had experience with Parsec vs. -- the Parser combinators
> >supplied 
> >with Manuel M. T. Chakravarty's Compiler Toolkit: 
> >Self-optimizing LL(1) parser combinators? 
> 
> Although I can't be objective since I wrote the Parsec
> library, I do have a lot of experience with lots of
> different parser libraries.

So, let me as the author of the other library in question
here add my subjective opinion, too.

> The reason for writing the parsec library was my frustration with the
> existing libraries; they are all technically excellent but
> since they are all from research, they normally lack documentation, 

You definitely have more documentation.

> examples, 

I have two, you have two.  At least, in the hslibs
repository - I just checked to be sure - I can only find
Henk and Mondrian.  One of mine is C, which is clearly more
real-life than Henk and Mondrian.  In fact, I would venture
so far as to say that parsec (in its present form) won't
work for C.  The reason?  This dreaded lexer/scanner
dependency that our friend Dennis Ritchie has added to make
life a bit harder for compiler writers.
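
To make that dependency concrete, here is a toy sketch (the
names `Phrase` and `classify` are invented for illustration,
not taken from either library): the token stream of `T * p;`
is a declaration when `T` is a typedef name and a
multiplication expression otherwise, so classifying the
tokens needs feedback from the parsing context.

```haskell
import qualified Data.Set as Set

-- The same C token stream means different things depending on
-- an environment of typedef names gathered while parsing.
data Phrase = Declaration | Expression
  deriving (Show, Eq)

classify :: Set.Set String -> [String] -> Phrase
classify typedefs (ident : "*" : _)
  | ident `Set.member` typedefs = Declaration  -- "T * p;" declares p
  | otherwise                   = Expression   -- "T * p;" multiplies
classify _ _ = Expression

main :: IO ()
main = do
  print (classify (Set.fromList ["T"]) ["T", "*", "p", ";"])
  print (classify Set.empty            ["T", "*", "p", ";"])
```

A parser without a separate lexer interface has to thread
this environment down into the lexical level, which is
exactly the feedback loop that makes C unpleasant for
combinator libraries.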

> extensive support libraries, 

What do you mean by this?  The Compiler Toolkit is more
extensive than the stuff that comes with parsec as far as I
can see.

> good error messages and sometimes raw speed. 

Doaitse Swierstra, whose techniques my library uses, has taken
care of these two.

> All these things however are needed to make a parser library truly useful in
> "the real world"

Agreed, but I don't see that parsec really is ahead here.
However, the fact that parsec doesn't work for C indicates
that it works for nicely designed languages (like most FP
languages), but not for dirty, bad, and ugly real-world
languages.

> I would advise you to try Parsec; it has been written from day one as a
> "real world" parser tool. It is simple (with simpler type
> errors), it doesn't need a separate lexer pass, it is
> faster than all other libraries,

Would you please prove this point about speed?  Did you run
it against Doaitse's and my code?  On the parsec page, it
says,

  It is also fast, doing over 10,000 lines/sec for simple
  grammars and about 5,000 lines/sec for complex ones like
  Haskell, which makes it an acceptable alternative to
  bottom-up parser generators like Happy [timings done on a
  Pentium 400 MHz using GHC 4.05].

I am sorry, but a lines/sec figure on its own is completely
useless.  Is this one character per line, or a densely
filled line?  Moreover, certain constructs like comments
usually are parsed a lot faster than dense token-rich
structures.  So, with lines/sec, you can basically get any
figure you want.  Giving both the size of the input in bytes
*and* the token count (excluding whitespace and comments)
would provide a rough indication of the complexity of the
input, but even that is a crude estimate.
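
To illustrate with a contrived pair of inputs (the figures in
the comments come from simply running this toy program, not
from any benchmark):

```haskell
-- Two inputs with the same line count but very different
-- token density; a lines/sec figure treats them as equal work.
sparse, dense :: String
sparse = "-- a comment line\n-- another comment\n"
dense  = "f (g x) y = a + b * c\ng z = case z of { [] -> 0; (x:xs) -> 1 }\n"

stats :: String -> (Int, Int)   -- (line count, rough token count)
stats s = (length (lines s), length (words s))

main :: IO ()
main = do
  print (stats sparse)   -- (2,7)
  print (stats dense)    -- (2,24)
```

Measured in lines/sec, the two inputs rate identically, yet
the second contains more than three times as many tokens as
the first.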

So, how can you say on these grounds that your library is
"faster than all other libraries"?

You have done some fine work with parsec, so I don't really
see why you have to resort to such forms of "marketing."

Cheers,
Manuel

