The current version of C-Reduce on GitHub can take advantage of multiple
cores, and in fact it does so by default. You can control the degree of
parallel execution using the $NPROCS variable (this interface is just
temporary, of course).
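For example, assuming $NPROCS is read from the environment (the test
script and file names here are placeholders), an invocation might look
like:

  NPROCS=8 creduce ./test.sh foo.c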
I had to make one other change that may be visible to end users. In
order to run multiple delta tests in parallel, each test has to run in
its own subdirectory, so you can no longer rely on test scripts being
run in the directory where you invoked C-Reduce. Your test script
should assume that the file being reduced is in the CWD, but that no
other files are present. This should make no difference for most people.
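For concreteness, here is what such a test might look like as a Python
script; the file name foo.c, the compiler invocation, and the
diagnostic string it checks for are all made up, and a real test would
look for whatever behavior you are reducing toward:

  #!/usr/bin/env python3
  # Hypothetical interestingness test. It refers only to the file
  # being reduced (assumed here to be named foo.c) in the current
  # working directory, so it keeps working when C-Reduce runs it in
  # a fresh subdirectory.
  import subprocess
  import sys

  result = subprocess.run(["gcc", "-c", "foo.c", "-o", "foo.o"],
                          capture_output=True, text=True)
  # Exit 0 ("interesting") iff the compiler still produces the bug we
  # are tracking; any other outcome is uninteresting.
  sys.exit(0 if "internal compiler error" in result.stderr else 1)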
The parallelization strategy that I ended up with is speculation along
the "failed" branch of the search tree. What C-Reduce does is to keep,
at all times, $NPROCS concurrent delta tests running along this branch.
As long as transformations fail to produce interesting results, this is
all good. As soon as an interesting result is found (i.e., C-Reduce
prints "success"), all speculative processes are killed and a fresh
batch is forked off.
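As a rough illustration of the control flow, here is a Python sketch
(not C-Reduce's actual implementation; ./test.sh, the variant
directories, and the polling loop are all illustrative):

  import os
  import subprocess
  import time

  NPROCS = int(os.environ.get("NPROCS", "4"))

  def first_interesting(variants):
      # variants: an iterator of subdirectories, each holding the next
      # speculative transformation along the "failed" branch.
      running = {}                      # Popen -> variant directory
      variants = iter(variants)
      exhausted = False
      while True:
          # Top the pipeline back up to NPROCS concurrent tests.
          while not exhausted and len(running) < NPROCS:
              d = next(variants, None)
              if d is None:
                  exhausted = True
              else:
                  running[subprocess.Popen(["./test.sh"], cwd=d)] = d
          if not running:
              return None               # every speculative test failed
          # Poll for finished tests; exit status 0 means "interesting".
          for p in list(running):
              if p.poll() is None:
                  continue
              d = running.pop(p)
              if p.returncode == 0:
                  for q in running:     # kill the losing speculations
                      q.kill()
                  return d
          time.sleep(0.05)

A real implementation would block on child exits instead of polling,
but the shape is the same: fail cheaply in parallel, and restart the
whole batch on the first success.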
This strategy will work best when the interestingness test is slow,
since the test is the only thing that runs in parallel; the bulk of
C-Reduce still runs serially.
Anyway, there is much room for refining all of this. I would be
particularly interested to know what sort of speedups (if any) are
observed by people like Konstantin who are reducing large C++ programs.
John