Alan Burlison wrote:
> Peter Memishian wrote:
>
>> While I certainly support the initiative, I'd like to hear a bit more
>> about what's planned. For instance, the earlier thread talked mainly
>> about parallelizing the build, but in my experience, a lot of the time
>> is wasted because of bad algorithms inside the tools themselves rather
>> than lack of parallelism. For instance, 6357412 showed how an
>> undersized hash table inside lint added about 30 minutes to a 90
>> minute build.
>
> I seem to remember the CTF tools were right hogs at one point as well
> - I don't know if that is still the case. The main problem with the
> build wasn't that it had too little parallelism - in fact in many
> parts of the build it had *way* too much, and in other parts it had
> way too little. This led to a distinctly 'lumpy' load profile on the
> build. What's probably needed is some sort of queueing mechanism so
> we can ensure that the load on a build machine is more constant. As
> far as I can tell, this isn't something that our current make offers.

I just ran a vanilla nightly on a dual-core AMD64 box and logged the
number of instances of dmake running during the build. The average for
the first half of the build was 18 (the logger crapped out at that
point with 'fork: Not enough space') and the system load average was
about 8. In my opinion, that is far too many for the machine to build
efficiently. To get the best out of this box, one dmake instance with 4
jobs is ideal; once it goes beyond 4, the build times increase. So it
does look like there is *way* too much parallelism.
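(For anyone who wants to repeat the measurement: a sampler along the
lines below would do the job. This is only a sketch in Python, not the
logger actually used for the numbers above; the process name "dmake",
the 10-second interval and the log file name are assumptions you may
need to adjust.)

#!/usr/bin/env python
# Sketch: count running dmake processes at a fixed interval and append
# the counts to a log file until interrupted.
# Assumes pgrep(1) is on the PATH and the build's make processes are
# named "dmake".

import subprocess
import time

LOG = "dmake-count.log"      # arbitrary log file name
INTERVAL = 10                # seconds between samples

def count_dmake():
    # pgrep -x prints one PID per line for exact name matches and
    # exits non-zero when nothing matches, so an empty stdout means 0.
    try:
        result = subprocess.run(["pgrep", "-x", "dmake"],
                                capture_output=True, text=True)
    except OSError:
        return 0
    return len(result.stdout.split())

def main():
    with open(LOG, "a") as log:
        while True:
            # Record a timestamp and the current dmake count.
            log.write("%d %d\n" % (int(time.time()), count_dmake()))
            log.flush()
            time.sleep(INTERVAL)

if __name__ == "__main__":
    main()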
Ian