Re: [yocto] sstate-cache clobbering

2013-07-04 Thread Trevor Woerner
Hi Khem,

On 4 July 2013 01:13, Khem Raj raj.k...@gmail.com wrote:
> hmm, you need to compare signatures (bitbake-diffsigs is your friend);
> maybe there is a variable causing unnecessary rebuilds that should be
> added to the ignore list.

Thanks for the tip, I'll take a look.

> Secondly, why are you copying sstate? Set up an NFS or HTTP server on
> machine A and let it serve sstate to machine B.

I'm simulating what participants in a training session would be doing.
We're trying to provide the downloads directory and some sstate-cache
directories as an advance download, to help speed up builds and reduce
network use during the session.
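
For reference, a minimal local.conf sketch for pointing a fresh build at
the pre-seeded directories (the paths here are hypothetical; they should
match wherever the participants unpack the archives):

  # Reuse the pre-downloaded source archives instead of fetching them
  DL_DIR = "/home/student/downloads"

  # Reuse the pre-built sstate objects shipped before the session
  SSTATE_DIR = "/home/student/sstate-cache"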


[yocto] sstate-cache clobbering

2013-07-03 Thread Trevor Woerner
Let's say I have two build machines (A and B), both running the exact
same version of a given distribution. Also assume both machines have a
fully populated downloads directory but otherwise have not performed
any OE/Yocto builds.

On machine A I perform a fresh "bitbake core-image-minimal" and end up
with an sstate-cache of 733MB and a message saying 324/1621 tasks
didn't need to be (re)run.

I then take this sstate-cache directory, copy it to machine B, and
perform a "bitbake core-image-minimal". This build takes under 5
minutes, and 1374/1621 tasks didn't need to be rerun.

So far this makes good sense: the sstate-cache is getting hit quite a
lot, and the second build is considerably faster than the first build
on machine A as a result.

Now I wipe machine B, then get it ready for a fresh build (i.e. put the
downloads directory in place, make sure it has all the necessary host
packages, etc.). Then on machine A I perform a "bitbake
core-image-minimal -c populate_sdk", which leaves machine A with an
sstate-cache that is 1.6GB in size. I take machine A's 1.6GB
sstate-cache, copy it to machine B, and on machine B perform a "bitbake
core-image-minimal".

I would have expected this build on machine B to take the same "under 5
minutes" and to skip the same 1374/1621 tasks. But instead I find this
build takes 27 minutes, and only 781/1621 tasks didn't need to be
rerun.

Doesn't it seem strange that a larger sstate-cache covering the same
base image has a much lower sstate-cache hit rate?


Re: [yocto] sstate-cache clobbering

2013-07-03 Thread Khem Raj

On Jul 3, 2013, at 10:06 PM, Trevor Woerner trevor.woer...@linaro.org wrote:

> Let's say I have two build machines (A and B), both running the exact
> same version of a given distribution. Also assume both machines have a
> fully populated downloads directory but otherwise have not performed
> any OE/Yocto builds.
>
> On machine A I perform a fresh "bitbake core-image-minimal" and end up
> with an sstate-cache of 733MB and a message saying 324/1621 tasks
> didn't need to be (re)run.
>
> I then take this sstate-cache directory, copy it to machine B, and
> perform a "bitbake core-image-minimal". This build takes under 5
> minutes, and 1374/1621 tasks didn't need to be rerun.
>
> So far this makes good sense: the sstate-cache is getting hit quite a
> lot, and the second build is considerably faster than the first build
> on machine A as a result.
>
> Now I wipe machine B, then get it ready for a fresh build (i.e. put
> the downloads directory in place, make sure it has all the necessary
> host packages, etc.). Then on machine A I perform a "bitbake
> core-image-minimal -c populate_sdk", which leaves machine A with an
> sstate-cache that is 1.6GB in size. I take machine A's 1.6GB
> sstate-cache, copy it to machine B, and on machine B perform a
> "bitbake core-image-minimal".
>
> I would have expected this build on machine B to take the same "under
> 5 minutes" and to skip the same 1374/1621 tasks. But instead I find
> this build takes 27 minutes, and only 781/1621 tasks didn't need to
> be rerun.

hmm, you need to compare signatures (bitbake-diffsigs is your friend);
maybe there is a variable causing unnecessary rebuilds that should be
added to the ignore list.
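
For example, something like this (the recipe and task names are just
placeholders, and the -t form assumes a reasonably recent bitbake):

  # compare the two most recent signatures recorded for one task
  bitbake-diffsigs -t glibc do_configure

  # or diff two specific sigdata files, e.g. one saved from each machine
  bitbake-diffsigs A/do_configure.sigdata.<hash> B/do_configure.sigdata.<hash>

If the diff points at a stray variable, adding it to BB_HASHBASE_WHITELIST
(the signature ignore list) in local.conf should keep it from perturbing
the task hashes.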

Secondly, why are you copying sstate? Set up an NFS or HTTP server on
machine A and let it serve sstate to machine B.
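
A sketch of that setup, assuming machine A exports its sstate-cache over
HTTP at a hypothetical URL; machine B's local.conf would then point at it
with something like:

  # try machine A's sstate mirror before rebuilding anything locally
  SSTATE_MIRRORS ?= "file://.* http://machine-a.example.com/sstate-cache/PATH"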


 
> Doesn't it seem strange that a larger sstate-cache covering the same
> base image has a much lower sstate-cache hit rate?
