>> I think some speedup will be gained even without PCH's.
>
> I think I've figured out a good way to achieve the speedups you want,
> Konstantin:
>
> You will invoke C-Reduce with a new option that tells C-Reduce how to
> run the preprocessor. For example:
>
>   --cpp='gcc small.cpp -Dfoo -E > tmp ; mv tmp small.cpp'
>
> When C-Reduce sees this option, it will:
>
> 1. run its initial passes on the non-preprocessed code. This should be
> fast since the code is a lot smaller. Also, the line delta passes will
> naturally eliminate any #include directives that are not needed.
>
> 2. run your preprocessor command (and make sure the result is still
> interesting)
>
> 3. run the rest of the C-Reduce passes, as usual, on the preprocessed code
>
> I don't see any reason why this scheme won't compose easily with PCH --
> no C-Reduce support for PCH should be needed at all.
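
For concreteness, assuming the option lands as proposed, and taking test.sh
as a stand-in name for the interestingness test, the flow above would look
roughly like this:

  # invoking C-Reduce with the proposed (not yet existing) option:
  creduce --cpp='gcc small.cpp -Dfoo -E > tmp ; mv tmp small.cpp' ./test.sh small.cpp

  # sketch of what step 2 would do internally:
  cp small.cpp small.cpp.bak                 # keep a copy to fall back to
  gcc small.cpp -Dfoo -E > tmp ; mv tmp small.cpp
  ./test.sh || mv small.cpp.bak small.cpp    # keep the preprocessed file
                                             # only if it is still interesting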
I'd like to avoid manually splitting the file into two pieces. Instead, I'd
like to add one or several markers at the places where it makes sense to
split the file, so that creduce splits it at these points, reducing the
bottom piece each time (a rough sketch of a marked-up file follows at the
end of this message). In the future it might even be possible to add
heuristics that find these points automatically.

Also, it's not enough to run only the initial passes on the small piece. We
need to run all of the passes, except maybe pass_peep::b and the final ones,
in order to reduce the dependencies between the source and the header.

--
Regards,
Konstantin
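
A minimal sketch of such a marked-up file, assuming the marker is a magic
comment (the name CREDUCE_SPLIT below is made up for illustration):

  // small.cpp -- input annotated with a hypothetical split marker
  #include <vector>
  #include <string>

  // CREDUCE_SPLIT  <- made-up marker: creduce would cut the file here,
  //                   keep the piece above fixed, and reduce the piece
  //                   below first, re-checking interestingness each time

  std::vector<std::string> words;

  int main() {
      // uses of the "header" piece above; reducing this piece first
      // shrinks its dependencies on that header
      return static_cast<int>(words.size());
  }

With several markers, the bottom-most piece would presumably be reduced
first, with the split point then moving up one marker at a time.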
