There are quite a few situations in rdmd and dmd generally when we
compute a dependency structure over sets of files. Based on that, we
write new files that overwrite old, obsoleted files. Those changes in
turn cause other dependencies to go stale, so more building is done, and so on.
The simplest case: a source file changes, therefore a new object
file is produced, therefore a new executable is produced.
And it only gets more involved from there.
We've discussed before using a simple method to avoid unnecessary stale
dependencies when it's possible that a certain file won't, in fact, change:
1. Do all work on the side in a separate file e.g. file.ext.tmp
2. Compare the new file with the old file file.ext
3. If they're identical, delete file.ext.tmp; otherwise, rename
file.ext.tmp into file.ext
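The three steps above can be sketched as follows. This is an illustrative Python sketch, not the rdmd implementation; the helper name `update_via_tmp` is made up for the example:

```python
import filecmp
import os

def update_via_tmp(path, data):
    """Write `data` to `path` only if the contents actually differ.

    Hypothetical helper illustrating the tmp-file scheme: do all work
    on the side in path + ".tmp", compare with the old file, and only
    rename on a real change, leaving an unchanged file's mtime intact.
    Returns True if the file was (re)written, False if left alone.
    """
    tmp = path + ".tmp"
    # Step 1: do all the work on the side, in a separate file.
    with open(tmp, "wb") as f:
        f.write(data)
    # Step 2: compare the new file with the old file.
    if os.path.exists(path) and filecmp.cmp(tmp, path, shallow=False):
        # Step 3a: identical -- delete the tmp file, keep the old mtime.
        os.remove(tmp)
        return False
    # Step 3b: different (or new) -- promote the tmp file.
    os.replace(tmp, path)
    return True
```

Downstream tools that key off the "last modified" time then see no change when the content is identical, so nothing further is rebuilt.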
There is actually an even better way at the application level. Consider
a function in std.file:
updateFile(S, Range)(S name, Range data);
updateFile does something interesting: it opens the file "name" for
reading AND writing, then reads data from the Range _and_ the file. For
as long as the data and the contents in the file agree, it just moves
reading along. At the first difference between the data and the file
contents, it starts writing the data into the file through the end of the data.
So this makes zero writes (and leaves the "last modified time" intact)
if the file has the same content as the data. Better yet, if it so
happens that the file and the data have the same prefix, there's less
writing going on, which IIRC is faster for most filesystems. Saving on
writes happens to be particularly nice on new solid-state drives.
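A minimal sketch of that idea, in Python rather than D (so this is not the std.file API; `update_file` here just illustrates the algorithm): open for reading and writing, skim over the matching prefix in chunks, and rewrite only from the first differing chunk on, truncating any leftover tail.

```python
import os

def update_file(name, data):
    """Illustrative sketch of the updateFile idea.

    `data` is a bytes object standing in for the input range.  The file
    is opened for reading AND writing; as long as the file contents and
    the data agree, we just move reading along.  At the first
    difference (at chunk granularity here) we seek back and write the
    rest of the data, then truncate.  If nothing differs, zero writes
    happen and the mtime stays intact.
    """
    mode = "r+b" if os.path.exists(name) else "w+b"
    with open(name, mode) as f:
        pos = 0
        chunk = 4096
        while pos < len(data):
            old = f.read(min(chunk, len(data) - pos))
            if not old or old != data[pos:pos + len(old)]:
                break                    # EOF or first differing chunk
            pos += len(old)
        if pos == len(data) and f.read(1) == b"":
            return                       # identical: zero writes
        f.seek(pos)
        f.write(data[pos:])              # rewrite from the difference on
        f.truncate()                     # drop any stale tail
```

Note the chunked comparison rewrites from the start of the first differing chunk rather than the exact differing byte; a real implementation could narrow that down, but the savings profile is the same.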
Who wants to take this on, with testing, measurements etc.? It's a cool mini-project.