Hi,
I'm writing a program for which a major part of both the code and (I
think) the execution time will be taken up by a parser (written using
parser combinators a la Hutton & Meijer's report on monadic parser
combinators). In order to catch silly slips through type checking I
wanted to use newtypes for various building blocks. Consequently I have
a few lines of the form

pEmbed (\x -> LABEL x) unlabelledParser

where unlabelledParser returns [String], say, which should become
LABEL [String], and pEmbed applies a function to the result of a parser,
primarily in order to change its type and/or shape.
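
For concreteness, roughly what the relevant definitions look like (the
Parser type is the usual Hutton & Meijer one; unlabelledParser's body
here is just a stand-in for illustration):

newtype Parser a = Parser (String -> [(a, String)])

-- apply f to the result of a parser, mainly to change its type/shape
pEmbed :: (a -> b) -> Parser a -> Parser b
pEmbed f (Parser p) = Parser (\inp -> [ (f x, rest) | (x, rest) <- p inp ])

newtype Label = LABEL [String]

unlabelledParser :: Parser [String]
unlabelledParser = Parser (\inp -> [(lines inp, "")])  -- stand-in body

labelledParser :: Parser Label
labelledParser = pEmbed (\x -> LABEL x) unlabelledParser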

My question is: how much is this redundancy going to cost? Clearly the
lambda abstraction is just id, but less obviously (pEmbed (\x -> LABEL x))
is now also id. Presumably none of the Haskell compilers can figure
this out, though? (I'm currently developing my (incomplete) program with
Hugs (1.4). As a test I took out all the newtypes and associated
redundancy so far and then ran both versions on a small,
NON-PATHOLOGICAL example. Nothing else was changed and all precautions
were taken (e.g., preevaluating CAFs to their limits). The original
version took 24001 reductions and 42541 heap cells, whilst the stripped
version took 22093 reductions and 38285 cells. So FOR HUGS this has
added 8 1/2% to reductions and 11% to heap cells. I suspect that, being
an interpreter, Hugs has to implement `newtype' as `data', inflating the
figures, but also can't do some optimisations compilers can, deflating
the figures.)
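
For what it's worth, the lambda at least eta-reduces to the bare
constructor, which is the form I'd hope a compiler could see through:

-- eta-reduced form: since LABEL is a newtype constructor it is
-- operationally the identity, so in principle the whole pEmbed
-- application could be compiled away
labelledParser' :: Parser Label
labelledParser' = pEmbed LABEL unlabelledParser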


Am I writing my code in an obtuse manner? Is there a better way?
Thanks,
David Tweed
-------------------------------------------------------------------
homepage: http://www.cs.bris.ac.uk/~tweed/ work tel: (0117) 9545104
"... we think mathematics is fun and we aren't ashamed to admit the
 fact."           -- Donald Knuth, Ronald Graham and Oren Patashnik



