On Mon, Apr 04, 2016 at 15:55:14 +0000, Erik Brinkman wrote:
> As far as I know, tup buffers all output because it can run several
> commands in parallel. It only outputs stdout and stderr when a command has
> finished. If you really want to get around this I see two options:
>
> 1. You could try to patch tup so that if it's run with a single job it
> doesn't buffer. I have no clue how hard this would be to patch, but it's an
> option.
> 2. You could have your command output the progress some other way e.g.
> write to a file that tup didn't care about, and then read from it with
> `tail -f` or similar. I don't necessarily recommend this, but it does
> "work".
What ninja does is have a concept of "pools", which let you cap how many rules within a pool may run in parallel. So you may do a -j10 build, but if a rule is in a pool "mem_intensive_link" with a depth of 2, only 2 "mem_intensive_link" rules will be allowed to run at once, even if there are 8 such links pending. There is then the special "console" pool with a depth of 1, whose rules get direct access to the tty; while a "console" rule is running, all other output is buffered until it completes. This is useful to implement things like, e.g., "ninja menuconfig" (if the kernel build system used ninja).
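
Roughly, that looks something like this in a build.ninja (the rule names and
commands here are just invented for illustration; "console" is the pool ninja
itself predefines with depth 1):

pool mem_intensive_link
  depth = 2

# At most 2 heavy_link commands run at once, regardless of -j.
rule heavy_link
  command = c++ -o $out $in
  pool = mem_intensive_link

# A rule in the built-in "console" pool gets the terminal directly;
# output from everything else is buffered while it runs.
rule menuconfig
  command = make menuconfig
  pool = console

build app1: heavy_link app1.o
build app2: heavy_link app2.o
build app3: heavy_link app3.o

--Ben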
