Thanks for letting me know. I had intended to use a real lexer for a long time but somehow only got around to it recently.

Konstantin, Yang and I are trying to figure out how to speed up the initial part of C-Reduce when it is given a very large C++ file. The line-based passes are just not that great.

Maybe you can give us feedback on our current idea, which is to remove function bodies. This can be done either by replacing a definition with a declaration, or simply by stripping everything out of the function body (except for an appropriate "return" statement, obviously).
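
For concreteness, here is a hypothetical before/after on a toy function (the code and names are made up, just to illustrate the two variants):

    // Original definition:
    int clamp(int x, int lo, int hi) {
        if (x < lo) return lo;
        if (x > hi) return hi;
        return x;
    }

    // Variant 1: replace the definition with a declaration.
    int clamp(int x, int lo, int hi);

    // Variant 2: strip the body down to an appropriate return.
    int clamp(int x, int lo, int hi) { return 0; }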

Concretely, we would reuse the line-based pass logic: first try to delete all function bodies at once, then the first half of them, then the second half, then the first quarter, and so on.
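
In case a sketch helps, here is roughly the chunk schedule I have in mind, written out in C++. This is just an illustration with made-up names, not actual C-Reduce code, and a real pass would also rescan and restart after each successful removal:

    #include <cstdio>
    #include <functional>

    // Largest chunks first: try deleting all n bodies at once, then
    // each half, then each quarter, and so on, mirroring the
    // line-based passes. 'test' stands in for the interestingness
    // test on the variant with bodies [begin, end) stripped; it is a
    // stub here.
    void reduce_bodies(int n, const std::function<bool(int, int)> &test) {
        for (int chunk = n; chunk >= 1; chunk /= 2) {
            for (int begin = 0; begin < n; begin += chunk) {
                int end = begin + chunk < n ? begin + chunk : n;
                if (test(begin, end))
                    std::printf("stripped bodies [%d, %d)\n", begin, end);
            }
        }
    }

    int main() {
        // Dummy test: pretend only bodies in the second half can go.
        reduce_bodies(8, [](int begin, int) { return begin >= 4; });
    }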

I think that if this is implemented wisely, a large speedup may be possible. Does this seem reasonable?

John

On 7/12/13 7:31 AM, Konstantin Tokarev wrote:
12.07.2013, 17:18, "Konstantin Tokarev" <[email protected]>:
Hi all,

I'm curious why a custom flex-generated parser was used to implement the
rm-toks-* passes instead of clang, which is already used in the project.
Is clex faster than clang::Preprocessor?

BTW, thank you very much for this group of passes - it's useful for almost
any reduction, but it's really invaluable when reducing under a "no
warnings" condition!
