I was looking at cleaning up a bit the situation with dataflow analysis for
Cmm. In particular, I was experimenting with rewriting the current
`cmm/Hoopl/Dataflow` module:
- To only include the functionality to do analysis (since GHC doesn’t seem
  to be using the rewriting part).
- Code simplification (we could remove a lot of unused code).
- Makes it clear what we’re actually using from Hoopl.
- To have an interface that works with transfer functions operating on a
basic block (`Block CmmNode C C`).
  This means that it would be up to the user of the algorithm to traverse
  the nodes within the block.
- Further simplifications.
- We could remove the `analyzeFwdBlocks` hack, which AFAICS is just a copy
  of `analyzeFwd` that ignores the middle nodes (probably for efficiency of
  analyses that only look at the blocks).
- More flexible (e.g., the clients could know which block they’re processing,
  we could consider memoizing some per-block information, etc.).
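For concreteness, here is a rough sketch (not the actual branch code!) of what a block-oriented forward analysis interface could look like. The `Label`, `Block`, `Lattice`, `Transfer`, and `fixpointFwd` names below are made up for illustration; in GHC the block would be a `Block CmmNode C C` and labels would be Hoopl `Label`s. The point is just that the transfer function consumes a whole block at a time, so traversing the nodes inside it is the client's job:

```haskell
import qualified Data.Map.Strict as M

-- Simplified stand-ins for the real GHC/Hoopl types.
type Label = Int
data Block = Block { blockLabel :: Label, blockSuccs :: [Label] }

-- A join semi-lattice of facts: a bottom element plus a join that also
-- reports whether the old fact changed.
data Lattice f = Lattice
  { bot      :: f
  , joinFact :: f -> f -> (Bool, f)  -- old -> new -> (changed?, joined)
  }

-- The proposed interface: the transfer function maps a whole block plus
-- its incoming fact to an outgoing fact.
type Transfer f = Block -> f -> f

-- Naive forward fixpoint: re-run all blocks until no fact changes.
fixpointFwd :: Lattice f -> Transfer f -> Label -> f -> [Block] -> M.Map Label f
fixpointFwd lat xfer entry entryFact blocks = go initFacts
  where
    initFacts = M.insert entry entryFact $
                M.fromList [ (blockLabel b, bot lat) | b <- blocks ]
    go facts
      | anyChanged = go facts'
      | otherwise  = facts
      where
        (anyChanged, facts') = foldl step (False, facts) blocks
        step (changed, fs) b =
          let inFact  = M.findWithDefault (bot lat) (blockLabel b) fs
              outFact = xfer b inFact
              -- Join the block's out-fact into each successor's fact.
              propagate (ch, m) s =
                let old        = M.findWithDefault (bot lat) s m
                    (ch', new) = joinFact lat old outFact
                in (ch || ch', M.insert s new m)
          in foldl propagate (changed, fs) (blockSuccs b)
```

As a toy usage example, a reachability analysis would be `Lattice False (\old new -> let j = old || new in (j /= old, j))` with the identity transfer function.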
What do you think about this?
I have a branch that implements the above:
It introduces a second, parallel implementation (a `cmm.Hoopl.Dataflow2`
module), so that it’s possible to run ./validate while comparing the
old implementation with the new one.
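A comparison like that could be wired up with a small shim along these lines (a hypothetical sketch, not the branch code; `oldAnalysis` and `newAnalysis` stand for whatever the two entry points with a common result type end up being):

```haskell
-- Hypothetical comparison shim: run both dataflow implementations on
-- the same input and fail loudly (e.g. during ./validate) if they
-- disagree; otherwise return the new implementation's result.
checkAgainstOld :: (Eq r, Show r) => (g -> r) -> (g -> r) -> g -> r
checkAgainstOld oldAnalysis newAnalysis graph
  | oldRes == newRes = newRes
  | otherwise        = error ("dataflow mismatch: old = " ++ show oldRes
                              ++ ", new = " ++ show newRes)
  where
    oldRes = oldAnalysis graph
    newRes = newAnalysis graph
```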
Second question: how could we merge this (assuming that people are ok
with the approach)? Some ideas:
- Change the cmm/Hoopl/Dataflow module itself along with the three analyses
  that use it, in one step.
- Introduce the Dataflow2 module first, then switch the analyses over, then
  remove any unused code that still depends on the old Dataflow module, and
  finally remove the old Dataflow module itself.
(Personally I'd prefer the second option, but I'm also ok with the first one.)
I’m happy to export the code to Phab if you prefer - I wasn’t sure what the
recommended workflow is for code that’s not ready for review…
ghc-devs mailing list