Hello.
I really like what I read about tup, and I am considering it as the 
next-generation build system for a large project. 
The project takes thousands of files from possibly hundreds of directories, 
totaling multiple GB, and produces thousands of files in a dozen 
directories, about 10 GB of data. Some files may be 100-200 MB. Dozens of 
pre-built tools and Python, Perl, and Tcl scripts perform the input -> output 
conversions (or "compiles"). Certain chunks of processing must be staged 
after others. Updates may involve anything from a single-file fix to 300 
files changing. A build from scratch may take several hours, but many tasks 
could build in parallel. 

*Q1:* I've read some posts about performance issues, particularly on 
multicore machines. Can anyone say from experience what sort of 
performance difference I might expect for this type of project compared to 
building with makefiles?

*Q2:* A somewhat related question: the outputs must go into the same 
directory as the Tupfile (I wish that weren't the case!). To avoid mixing 
sources and outputs, I am planning to always keep the Tupfile in a 
"generator" subdirectory of the related source files. I would then need to 
copy or move the outputs to the output tree, which is where I actually want 
them. Copying 10 GB will take significant time, plus extra disk space. 
Moving them would trigger a full rebuild on the next "tup upd" (I assume). 
What is the recommended approach to reduce the penalty in this situation?
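
To make the layout concrete, here is roughly what I have in mind (a minimal 
sketch; the directory names and the gen.py script are just placeholders, and 
the second Tupfile exists only to copy results into the output tree):

    # src/foo/generator/Tupfile -- conversion runs next to the Tupfile
    : foreach ../*.in |> ./gen.py %f > %o |> %B.out

    # output/foo/Tupfile -- copy the generated files into the output tree
    : foreach ../../src/foo/generator/*.out |> cp %f %o |> %b

The second rule is the copy penalty I am asking about.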

Regards,
Evgenii
