Just to be clear: you're saying that the test suite runs as one long operation, updating a single target, and the recipe invokes one test script, right? I can see how that environment might be problematic for this new feature; it works much better with lots of smaller targets.
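For illustration, here is a sketch (hypothetical target and script names, not from this thread) of the difference between the monolithic layout described above and the many-small-targets layout that output sync handles best:

```make
# Monolithic layout: one long-running recipe, one output stream.
# Under -Otarget, all of its output is held until the recipe finishes.
check-monolithic:
	./run-all-tests.sh

# Per-test layout: each test is its own target, so -Otarget can
# release each test's output as soon as that test completes.
TESTS := t1 t2 t3
check: $(TESTS:%=check-%)
check-%:
	./run-one-test.sh $*

.PHONY: check check-monolithic $(TESTS:%=check-%)
```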
However, you could avoid this issue if you mark the target to be built as a sub-make, for example by prefixing the recipe line with the "+" operator. If you do that and use -Otarget or -Ojob, make will not capture the output from that job. Of course, this also has the unfortunate side effect that running "make -n" will still try to run the test scripts. Hm.

On Tue, 2013-04-30 at 11:48 +0200, Stefano Lattarini wrote:

> So please don't ever make that option the default; if you really
> really want to, at least put in place some smart checks that only
> enable '-O' when the output is *not* a tty, so that we can have the
> best of both worlds (useful feedback for interactive runs, more
> readable logs for batch runs).

This is certainly a possibility. I have to say that my experience with parallel builds hasn't been as wonderful as that of others here. I often get output which is corrupted, and not just by intermixing whole lines but also by having individual lines intermixed (that is, the beginning of a line is one job's output, then the end of the line is another job's output, etc.). This kind of corruption is often completely unrecoverable, and I simply re-run the build without parallelism enabled. I think it depends on how much output the jobs emit and how many you are running at the same time. It could also be that Windows is better about avoiding interrupted writes to the same device.

Tim Murphy <tnmur...@gmail.com> writes:

> What I mean is that:
>   ./make -Otarget
> might be a good interactive default rather than -Omake.

I always intended that, and never suggested -Omake would be the default. I think -Omake is only really useful for completely automated, background builds. It wouldn't ever be something someone would use if they wanted to watch the build interactively.

> I haven't tested to see if this is how the new feature works or not. I
> don't think it's completely necessary to keep all output from one
> submake together,
> so turning that off might make things more interactive,

Per-target syncing is a valid compromise, and it is the default. If you use -Otarget (or -Ojob), then what is supposed to happen is that make uses the same sub-make detection algorithms used for the jobserver, etc., and if it determines that a job it's going to run is a sub-make, it does NOT collect its output.

However, I have suspicions that this is not working properly. I have a make-based cross-compilation environment (building gcc + tools) that I've been using for years, and I tried it with the latest make and -O and saw similar problems (sub-make output collected into one large log instead of per target). Thinking about it right now, I think I might know what the problem is, actually. I'll look at this later.

Stefano Lattarini <stefano.lattar...@gmail.com> writes:

> I wasn't even aware of those differences; as of latest Git commit
> 'moved-to-git-46-g19a69ba', I don't see them documented in either
> the help screen, the manpage, the texinfo manual, nor the NEWS file.

I don't see where that code comes from. There is no Git commit in the standard GNU make archive with a SHA g19a69ba. The current HEAD on master is:

  19a69ba Support dynamic object loading on MS-Windows.

At any rate, the new option and its option arguments are documented in the manual, and there is an overview including the option arguments in the man page. The NEWS file doesn't discuss the individual option arguments, only the option itself. Ditto the --help output.

_______________________________________________
Bug-make mailing list
Bug-make@gnu.org
https://lists.gnu.org/mailman/listinfo/bug-make
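[Editorial illustration, not part of the thread.] The two sub-make detection cases discussed above can be sketched as follows (hypothetical directory and script names); under -Otarget, make is supposed to leave the output of both jobs uncaptured:

```make
# A recipe that invokes $(MAKE) is detected as recursive
# automatically, so -Otarget does not buffer its output.
subdir:
	$(MAKE) -C subdir

# The "+" prefix forces the same recursive treatment for an
# arbitrary command, with the side effect noted above: the
# line also runs under "make -n".
check:
	+./run-tests.sh

.PHONY: subdir check
```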