> lex/flex generate tokenisers, which match specific regular expressions
> in specific contexts, and execute arbitrary code which can reference
> the text which was matched.
>
> yacc/bison generate parsers, which build a parse tree from sequences
> of tokens. The token stream would normally be generated by a tokeniser
> built with lex.

In most cases there is some choice between a simple tokeniser with a
complex parser, or a complex tokeniser with a simpler parser; which way
to go depends on how much freedom you have in choosing the set of
tokens.

In lex/flex (at least in flex, which is what I've worked with), the
scanner-state mechanism ("start conditions") can be used to implement
simple parsers "on top of" the scanner. I personally prefer using bison
when in doubt, but the flex-only strategy might possibly yield a faster
scanner. (Rough sketches of both approaches are appended below the
sign-off.)

Generally, the importance of using such tools for handling textual
(language-like) input can hardly be overstated, IMHO.

Good references are the docs that come with (at least the official GNU
distributions of) flex and bison (I don't know about the lex/yacc docs),
and of course the Dragon book: "Compilers: Principles, Techniques, and
Tools" by Aho, Sethi, and Ullman, from Addison-Wesley.

Regards,
Niels HP
[EMAIL PROTECTED]
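
P.S. To make the division of labour concrete, here is a rough, untested
sketch of the usual flex + bison pairing: a tiny calculator. The file
names (calc.y, calc.l) and the grammar are my own invention, not
anything taken from the flex/bison docs.

    /* calc.y -- hypothetical minimal bison grammar; the tokens come
       from the flex scanner below.  Build with (GNU tools assumed):
         bison -d calc.y && flex calc.l
         cc calc.tab.c lex.yy.c -o calc                              */
    %{
    #include <stdio.h>
    int yylex(void);
    void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
    %}

    %token NUM
    %left '+' '-'
    %left '*' '/'

    %%
    input : /* empty */
          | input expr '\n'   { printf("= %d\n", $2); }
          ;
    expr  : NUM
          | expr '+' expr     { $$ = $1 + $3; }
          | expr '-' expr     { $$ = $1 - $3; }
          | expr '*' expr     { $$ = $1 * $3; }
          | expr '/' expr     { $$ = $1 / $3; }
          ;
    %%

    int main(void) { return yyparse(); }

    /* calc.l -- the matching tokeniser: turns characters into the
       tokens the grammar above expects (same made-up example)      */
    %{
    #include <stdlib.h>
    #include "calc.tab.h"   /* token definitions from bison -d */
    %}
    %option noyywrap
    %%
    [0-9]+       { yylval = atoi(yytext); return NUM; }
    [ \t]+       ;                      /* skip blanks */
    \n|[-+*/]    { return yytext[0]; }  /* single-char tokens */
    %%

Note how little the scanner does here: all the structure lives in the
grammar, which is the "simple tokeniser, complex parser" end of the
trade-off mentioned above.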

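P.P.S. And the flex-only idea: a hypothetical scanner that uses an
exclusive start condition to "parse" lines of the form  name = value
with no bison at all. Again an untested sketch; the key/value example
is made up.

    /* kv.l -- start conditions as a poor man's parser (made-up example) */
    %{
    #include <stdio.h>
    %}
    %option noyywrap
    %x VALUE

    %%
    [a-zA-Z_][a-zA-Z0-9_]*  { printf("key:   %s\n", yytext); }
    "="                     { BEGIN(VALUE); }   /* switch context  */
    <VALUE>[ \t]+           ;                   /* trim blanks     */
    <VALUE>[^ \t\n][^\n]*   { printf("value: %s\n", yytext); }
    <VALUE>\n               { BEGIN(INITIAL); } /* back to keys    */
    [ \t\n]+                ;                   /* skip whitespace */
    .                       ;                   /* ignore the rest */
    %%

    int main(void) { return yylex(); }

Here the start condition plays the role a grammar rule would play in
bison: the same "=" character means something different depending on
the state the scanner is in.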