Using a cached build sounds like a good idea on paper.

I honestly have only worked on projects that do not use a cached build. For a 
very long time I wanted to move to one in order to speed up the build. After a 
few years, I have come to the conclusion that a cached build is not a good 
idea.

The number one reason for not using a cached build: reliability.
  - Case 1: A problem is solved just by doing a clean build.
  - Case 2: A problem is hidden because a leftover build product from the 
cache gets picked up.

A false failure like Case 1, where the build fails but can be fixed with a 
clean build, is annoying but not a critical problem as long as there is a way 
to force a clean build. The big problem is Case 2, where the build passes only 
because it picked up something from the cache. That error will likely not be 
caught until after it is merged. I agree both problems are unusual, but they 
still happen.

I am all for speeding up the build, but I suspect adding a cached build would 
cause more problems than it's worth.

The only things it may make sense to cache are build products from outside 
libraries that are not changing: things like gtest, cbor, etc. If we can 
figure out how to cache only those parts of the build, it may be worth looking 
at.
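
If we did go down that path, here is a rough sketch of what the scoping might 
look like in the SConstruct. CacheDir() and NoCache() are real SCons calls, 
but the idea that our SConscripts return their target lists is hypothetical, 
as is the cache path:

    # Sketch only: cache is on, but the project's own objects are excluded,
    # so only the external-library build products are stored or fetched.
    CacheDir('/path/to/shared/cache')                      # made-up path

    ext_targets = SConscript('extlibs/gtest/SConscript')   # cacheable as-is
    own_targets = SConscript('resource/SConscript')        # hypothetical return
    NoCache(own_targets)                                   # never cache these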

Just my thoughts.

George Nash


-----Original Message-----
From: iotivity-dev@lists.iotivity.org [mailto:iotivity-dev@lists.iotivity.org] 
On Behalf Of Mats Wichmann
Sent: Monday, October 29, 2018 4:30 PM
To: IoTivity Developer List <iotivity-dev@lists.iotivity.org>
Subject: [dev] caching build objects in scons


Hi, folks.  I'm still around even if not very active, and wanted to run 
something by you all.

While working on the scons project, something else I do on the side, I notice 
people keep asking about the caching capabilities of SCons. Some are able to 
use it to considerable benefit.

TL;DR version of caching: if there exists in the cache a file which could be 
used instead of building a target, use the cached version.

The longer version of caching: for every target, scons calculates a 
cryptographic hash based on a variety of contributors, which include the 
hashes of the things the target depends on as well as some of the build 
environment (the specific compiler has a hash, the arguments, etc.).  This 
goes into the .sconsign.dblite file (if you care to look, the sconsign program 
can dump it if you give the file name as an argument; there is no default 
file).  If you enable caching during a build, the derived files are put into 
the specified cache directory using the signature as the filename.  There is 
also a cache-warming option to scons which populates the cache from a built 
tree.  On a subsequent build, the normal behavior is that if the computed 
signature for a derived file (target) matches the signature recorded for the 
file at the target location, no rebuilding has to be done for that file.  If 
caching is enabled, there is an additional check: if a file named by that 
signature exists in the cache, the file is copied from the cache instead of 
being built.
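
To make that concrete, a minimal sketch of what enabling it looks like (the 
cache path here is made up):

    # In the top-level SConstruct:
    CacheDir('/srv/scons-cache')    # derived files are stored/fetched here

    # Dump the stored signatures (the sconsign tool has no default file,
    # so pass the name explicitly):
    #   sconsign .sconsign.dblite

    # Warm the cache from an already-built tree:
    #   scons --cache-force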

This is maybe mildly interesting to an individual IoTivity developer.
If you rebuild after making a change, scons should only rebuild the affected 
targets, like any decent build system, and the cache is no help.  If you 
cleaned in between builds, the cache would help.

When the builds are done by the CI system (Jenkins), a fresh environment is 
used every time, so there are no existing files from a previous build and you 
have to pay the price of a complete build every time.  In checking with LF, it 
should be possible to provision the OpenStack instances in a way that they 
mount a persistent location from outside the instance which could contain an 
SCons cache.  A quick local test here suggests that using cached data speeds 
things up quite a bit: a relatively parallel build with 8 threads takes 7 
minutes or so on my machine; the same build with a preloaded cache takes about 
1/4 of the time.  My setup is fairly quick; the performance gains in other 
scenarios could be a whole lot more.  Or not; we won't really know until it's 
tried.
If an impressive ratio holds, that could mean a substantial speedup in the 
turnaround for a "vote" from the CI system on whether a patch passes builds or 
not.
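
If we go that route, one possible wiring (just a sketch; SCONS_CACHE_DIR is an 
invented name, not an existing option) is to let builders opt in through the 
environment and point at the mounted location:

    # In SConstruct: enable the cache only when the builder asks for it.
    import os
    cache_dir = os.environ.get('SCONS_CACHE_DIR')    # hypothetical variable
    if cache_dir:
        CacheDir(cache_dir)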

So what is the purpose of this email?  To ask whether this is something the 
team wants to pursue.

It's not a slam dunk: if we could just turn on caching in the project and 
that's it, the experiment would be painless.  But there are plenty of 
infrastructure questions - setting up the sharing as mentioned above, whether 
to share one cache between all builders or use one for each type (there are 
unlikely to be commonalities among builder types - a Linux
x64 object will always be different from, and even have a different name 
than, a Windows object, and the ARM objects for Tizen, Android, or iOS will 
also always differ), how to keep the cache fresh and not growing endlessly 
over time, etc.  And it's not clear that the "Jenkins vote" will actually come 
any faster: a couple of the builders run unit tests, and the actual running of 
the tests is not affected by the cache at all; if the majority of the time is 
spent running the tests, caching doesn't help them.  I don't know how much the 
unit test run time gates the overall full CI completion time.
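
On the "not growing endlessly" point: as far as I know scons does not prune 
the cache itself, so something outside the build would have to do it.  A rough 
sketch of a cron-style cleanup (the path and age are made up):

    # Delete cache entries that have not been accessed in the last 30 days.
    import os, time

    CACHE_ROOT = '/srv/scons-cache'       # hypothetical cache location
    MAX_AGE = 30 * 24 * 3600              # seconds

    now = time.time()
    for dirpath, _, names in os.walk(CACHE_ROOT):
        for name in names:
            path = os.path.join(dirpath, name)
            if now - os.path.getatime(path) > MAX_AGE:
                os.remove(path)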

Thoughts?



