Hi Mike,

Thank you for your great reply. I hadn't expected it to be that simple to 
check how much time the monitor can save.
To put the following numbers in context, this is my PC:
Hewlett-Packard HP ZBook 14 G2
Intel Core i7-5600U CPU @ 2.6GHz with 16.0 GB RAM
Samsung SSD 860 QVO 1TB

Mid-size project:
4901 files in 1650 folders -> Scan took 0.7702s

Big project:
18835 files in 4238 folders -> Scan took 2.0722s

The scan time is the average of 5 scans.
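
(For reproducibility: I collected the five timings with a small loop along 
these lines, assuming a POSIX-ish shell such as Git Bash on Windows, and 
averaged the results by hand. It just repeats the 'time tup scan' 
measurement Mike suggested below:

$ for i in 1 2 3 4 5; do time tup scan; done
)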

Our current build system needs the following times for a build *without any 
modification*:
Preprocessing, compile and link, postprocessing -> 13 minutes
Only compile and link -> 4 minutes
But I get at least the same times if a single C file is touched :-D.

I think there is big potential in TUP for us :-P.

Regards

Andreas


[email protected] wrote on Monday, April 19, 2021 at 07:23:37 UTC+2:

> On Fri, Apr 16, 2021 at 8:20 AM Andreas Schnell <[email protected]> 
> wrote:
>
>> Hello,
>>
>> I moved a small test project to TUP to explore the possibilities, and I'm 
>> very happy with it.
>> The next step would be the migration of a project with around 16000 files 
>> on Windows.
>> From what I understand, TUP gains most of its speed from the monitor, 
>> which is not available on Windows.
>> What's your opinion/experience on Windows?
>> Will TUP be faster than MAKE on Windows?
>> Will NINJA be faster than TUP on Windows?
>>
>> I know that it depends on the project, but I'd like to hear a bit about 
>> your experience.
>>
>>
> Hi Andreas,
>
> Some background: tup was designed with the file monitor in mind, with the 
> idea that the build system should already know which files the user changed 
> ahead of time, and then the dependency information can be structured in a 
> way to take advantage of that information. I was expecting that 'make' 
> spends too much time calling stat() on a bunch of unnecessary files, and 
> that this would be the bulk of the time savings. But in reality, make also 
> spends a lot of time parsing the Makefiles and processing all the funky 
> variable constructions that people end up with (like a bunch of $(call) and 
> $(eval) and such). Tup separates parsing into its own phase, and so most 
> of the time that can all be skipped. The partial DAG that tup 
> constructs is also much smaller than a full DAG that a make-like system 
> would use, so it needs to do less work overall there.
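>
> (A rough way to see make's parsing overhead in isolation, for comparison: 
> a no-op dry run. make still reads all the Makefiles, expands the variables, 
> and stat()s the declared files, but doesn't execute any commands:
>
> $ time make -n > /dev/null
>
> On a fully built tree, nearly all of that time is parsing plus stat() 
> calls.)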
>
> In practice, I almost never use the file monitor. Without the file 
> monitor, tup defaults to "scanning mode", where it simply walks the whole 
> tree from the top of the tup hierarchy (ie: where 'tup init' was run, and 
> the .tup directory exists). So you can determine exactly how much the 
> monitor would save you by running 'time tup scan'. On Linux at least, even 
> for a reasonably large project (72k files, roughly 5k commands), the scan 
> takes 260ms on my machine, which is noticeable but still low enough that I 
> don't bother running the monitor on it most of the time.
>
> Of course, Windows is pretty notorious for having much worse file I/O than 
> Linux. You can get an idea for the scanning overhead very easily (without 
> writing any Tupfiles) by going to the top of your project directory and 
> doing something like:
>
> $ tup init
> # Run tup once to initialize the database:
> $ tup
> # Then run it again and note the scan time:
> $ tup
> [ tup ] [0.000s] Scanning filesystem...
> [ tup ] [0.261s] Reading in new environment variables...
> ...
>
> The time on the second line (0.261s in this case) is the time it took to 
> scan, which is all the time the monitor would save.
>
> Note that in comparison to something like make or ninja, it is very easy 
> to construct a case where the scanning mode of tup performs very poorly. 
> Make or Ninja will only look at the files that you declare in the 
> Makefile/ninja files, while tup will scan the whole tree, even if you don't 
> have any Tupfiles at all. Tup could certainly be improved in this case by 
> only tracking files that are mentioned in the Tupfiles and/or are detected 
> as dependencies, but again that would only help when scanning, and the 
> upper bound of the benefit is the time the scan takes to run. In my 
> experience (again, on Linux), the time tup saves by using its more 
> intelligent partial-DAG construction more than makes up for the time spent 
> in the naive scanning algorithm. Not to mention the peace of mind of having 
> automated dependency checking!
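>
> (If you're curious what that partial DAG looks like, 'tup graph' can dump 
> it in graphviz format. For example, with graphviz installed, and with 
> some/output.o as a stand-in for whatever target you're interested in:
>
> $ tup graph some/output.o | dot -Tpng -o dag.png
> )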
>
> I'm curious what your results are and if you find the scanning time 
> prohibitive for your project.
>
> -Mike
>
