I was looking at some of the stuff in util, and it occurred to me that the m5 utility program is cross-compiled using different Makefiles for different architectures. Statetrace used to be like that (sort of), but recently I converted it over to scons and set up some configuration variables that make it easier to work with. It would be nice to share some of that with the m5 utility program, although I don't remember its build being all that complicated.
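For what it's worth, here's roughly the shape a unified scons build for the utility could take. This is a hypothetical SConscript fragment, not what statetrace or util/m5 actually does; the cross-compiler prefixes and source file names are made-up examples:

```python
# Hypothetical SConscript: build the m5 utility for several ISAs from one
# build description. Prefixes and source names below are assumptions.
cross_prefixes = {
    'alpha': 'alpha-unknown-linux-gnu-',
    'arm':   'arm-none-linux-gnueabi-',
    'x86':   '',  # assume a native x86 toolchain
}

for isa, prefix in cross_prefixes.items():
    env = Environment(CC=prefix + 'gcc', AS=prefix + 'gcc')
    # Give each ISA's objects distinct names so the per-ISA environments
    # don't fight over the same targets in one build tree.
    objs = [env.Object('m5-%s' % isa, 'm5.c'),
            env.Object('m5op-%s' % isa, 'm5op_%s.S' % isa)]
    env.Program('m5.%s' % isa, objs)
```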
Anyway, it seems like it would be useful to have multiple binaries that can be built by scons, specifically the utility stuff and the unit tests. That way we could avoid having a hodgepodge of small build systems which are either isolated from each other or connected in not quite the right ways. I know some of Nate's recent changes suggested this was going to get easier. Could you quickly summarize what that's all about, Nate?

Speaking of which, the regressions are still broken. Since fixing that is taking a little while, would you mind backing out the problem change?

Also, I was thinking a little while back about how to handle the dependencies/generated files/custom languages issue, and what I kept coming back to were schemes where scons would use a cache of dependency information, regenerating it whenever any of the input files that determine the outputs and/or dependencies changed. The problem is that scons would need to run once to possibly regenerate its cache, and then run again to do the actual build. Is this sort of multi-pass setup possible somehow without major hacks?

To explain what I'm getting at, let's say you have an input file foo.isa which, when processed, generates the files foo_exec.cc and bar_exec.cc. You'd have a file like foo.isa.dep which would describe what would happen, and that file would depend on foo.isa. The first time you run, scons would see that foo.isa.dep doesn't exist. During its build phase, it would run foo.isa through the system, see that it generated foo_exec.cc and bar_exec.cc, and record that in foo.isa.dep (as actual SConscript-type code, or flat data, or...). When scons ran a second time, it would read in foo.isa.dep, extract the dependencies, and build them into the graph. It wouldn't construct foo.isa.dep again since all of its inputs were unchanged, but it would still capture all those dependencies.
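The first-run/second-run behavior described above can be sketched in plain Python. This is only an illustration of the caching idea, not scons code; the names outputs_for and load_deps, the "def" markers standing in for ISA parsing, and the JSON .dep format are all made up for the example:

```python
# Sketch of the .dep cache idea. outputs_for() stands in for the expensive
# tool (SLICC, the ISA parser, ...); load_deps() only reruns it when the
# input is newer than the cache.
import json
import os

def outputs_for(isa_path):
    """Pretend ISA processor: report which .cc files isa_path generates.
    Here we just derive names from 'def' markers in the file."""
    stems = [line.split()[1] for line in open(isa_path)
             if line.startswith("def ")]
    return [stem + "_exec.cc" for stem in stems]

def load_deps(isa_path):
    """Return the cached list of generated files, regenerating the .dep
    cache only when isa_path is newer than the cache file."""
    dep_path = isa_path + ".dep"
    if (not os.path.exists(dep_path) or
            os.path.getmtime(dep_path) < os.path.getmtime(isa_path)):
        deps = outputs_for(isa_path)      # "first pass": run the tool
        with open(dep_path, "w") as f:
            json.dump(deps, f)            # cache for later runs
        return deps
    with open(dep_path) as f:             # "second pass": cheap read
        return json.load(f)
```

A real version would hang this off the dependency graph rather than call it directly, which is exactly the multi-pass question: something has to run the first pass before the graph for the main binary can be completed.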
On this second pass, the larger binary would see that it depends on foo_exec.cc and bar_exec.cc, and that those depend on foo.isa.dep (a convenient aggregation point for all the *.isa files involved). If foo.isa changed later, foo.isa.dep would be out of date and would have to be regenerated, then foo_exec.cc and bar_exec.cc, and then the main binary. The net effect is that the tool that processes the .isa files would run only when necessary. In our current setup, that would mean SLICC wouldn't have to be run for every build, only for builds where the SLICC input files changed.

The problem here is that scons would basically need to invoke a nested instance of itself on foo.isa.dep, let that build a dependency tree and run its build phase, then process foo.isa.dep in the parent's dependency phase, and finally run the parent's build phase. It could literally call scons from scons (though that seems like a major hack), or, if scons has a facility for it, it could do some sort of fancy multi-pass thing. This is related to the first point (additional targets), because the dependency cache files are somewhat like independent targets with their own invocations of scons.

Also related to scons: those .pyc files that end up scattered around the source tree. I know I asked about them a long, long time ago, but why are they there? Why don't they end up in the build directories?

Gabe
_______________________________________________
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev