Fathi Boudra <[email protected]> writes:
> On 18 May 2013 04:06, Kevin Hilman <[email protected]> wrote:
>> Fathi Boudra <[email protected]> writes:
>>
>> [...]
>>
>>>> Is there a way for us (linaro folks) to see more of the Jenkins setup
>>>> for these jobs (including the scripts.) There appears to be some useful
>>>> add-ons being used. Read-only access to the detailed configuration of
>>>> the jenkins jobs would be very useful.
>>>
>>> You've got access and can even view/modify the setup.
>>> I used a multi-configuration project for these jobs.
>>> If you have more questions, just ask.
>>
>> Now that I have permission on the Linaro Jenkins, I started
>> experimenting with getting something useful for ARM maintainers, and I
>> created a basic job[1] for building all ARM defconfigs (120+) in
>> linux-next.
>>
>> This is a simple build-only job: no toolchain downloads, Ubuntu
>> packaging, LAVA testing, etc. like the other Linaro CI jobs do.
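Roughly speaking, the job just loops over the defconfigs. A simplified
sketch of what it does (the actual job script differs in the details):

  # cross toolchain assumed to be in PATH already
  export ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-
  for conf in $(cd arch/arm/configs && ls *_defconfig); do
      mkdir -p build/$conf
      make O=build/$conf $conf >/dev/null
      make O=build/$conf -j$(nproc) > build/$conf/build.log 2>&1 \
          || echo "FAILED: $conf"
  done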
>
> It would be great if we could keep using the latest Linaro GCC. We copy
> a tarball from the master to the slave and extract it only once; it adds
> less than a minute to the full build time.
OK.
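Just to check my understanding, I assume that step boils down to
something like the below on the slave (tarball name and paths are
placeholders, not the real locations):

  # one-time per slave: copy the tarball over from the master, then
  tar -C $HOME -xf gcc-linaro-arm-linux-gnueabihf.tar.xz
  export PATH=$HOME/gcc-linaro-arm-linux-gnueabihf/bin:$PATH
  export CROSS_COMPILE=arm-linux-gnueabihf-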
> We probably want to keep the boot testing part optional; there are
> several ways to implement it without impacting the main build testing
> job. IMO, something to investigate.
Yes, the build, packaging, boot test, etc. should all be separate jobs
that are independent, but could be chained together as needed.
>> Just kernel builds, and output that should make sense to kernel developers.
>> IMO, this is the starting point for having some basic sanity testing
>> for maintainers.
>>
>> I have several questions (and suggestions) now on how to speed this up
>> based on configuration of the slaves, as well as several questions
>> around best practice for using the slaves, how workspaces/tools/scripts
>> are (or aren't) shared between slaves, etc.
>>
>> The first suggestion is to speed up the git clones/fetches. Even a
>> shallow git clone (--depth=1) is taking > 3 minutes on the slaves. What
>> I do on my home jenkins box is to have a local repo (periodically
>> updated via cron), and then use the advanced options under jenkins git
>> SCM config to point to it using the "Path of the reference repo to use
>> during clone" option. That makes the git clones/fetches very fast since
>> they're (almost) always from a local repo.
>
> The difference between the slaves and your home box is that the slaves
> are ephemeral instances (terminated after a defined timeout) while
> your box is always up.
> We'll need to move to a persistent slave (stopped instead of
> terminated), opening the door to proper caching (local mirror).
Yes, that sounds better. I'll be glad to test.
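For reference, what I do on my home box is roughly the following (the
mirror path is just an example):

  # one-time: local mirror of linux-next
  git clone --mirror \
      git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git \
      /srv/git/linux-next.git

  # crontab entry to keep it fresh
  0 */2 * * *  cd /srv/git/linux-next.git && git fetch --prune

and then the "Path of the reference repo to use during clone" advanced
option in the Jenkins git SCM config points at /srv/git/linux-next.git,
so a fresh clone only fetches a handful of new objects over the network.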
> We have such a setup for the OpenEmbedded builds, and other tricks in
> our toolbox for the Android builds.
>
>> Another suggestion is to have ccache installed on the slaves. Since
>> this job is building much of the same code over and over (120+
>> defconfigs), ccache would dramatically speed this up, and probably make
>> it more sensible to run all the builds sequentially on the same slave.
>
> ccache is already installed, but due to the ephemeral nature of the
> current instances, it isn't exploited. Once again, moving to a
> persistent slave will resolve the issue.
OK
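Once the persistent slaves are there, I'd expect wiring ccache into the
kernel builds to be as simple as something like this (cache location and
size are just examples):

  export CCACHE_DIR=$HOME/.ccache   # somewhere that survives between builds
  ccache -M 5G                      # cap the cache size
  make ARCH=arm CROSS_COMPILE="ccache arm-linux-gnueabihf-" -j$(nproc)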
> Also, the EC2 instances aren't I/O optimized.
Yes, I've definitely noticed that.
Here's a quick comparison of my home build box (6 x i7, 16G RAM) and
the Linaro build slaves. Build times in seconds:
                         mine   linaro
  allnoconfig              18      128
  exynos4_defconfig        32      201
  imx_v6_v7_defconfig      72      703
  omap2plus_defconfig     100        ?
That's roughly a 10x difference, and that's only comparing the build
time, not the time it takes for the git clone, toolchain copy/unpack, etc.
However, since the builds run in parallel across the slaves, all 120+
defconfigs still finish quicker there than on my single box at home.
> On some jobs, I create a tmpfs directory where I build. It dramatically
> reduces build time.
Do you have some examples of how to do that as a normal Jenkins user?
I see there's a /run directory on the slaves mounted as tmpfs, but it's
only 1.5G and accessible only by root.
I'd be willing to run some experiments with a local tmpfs. I'm sure
that will make a huge difference.
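What I had in mind is something along these lines, assuming we can get a
big enough, user-writable tmpfs mounted on the slaves (mount point and
size below are just examples):

  # needs root, or an fstab entry set up ahead of time
  sudo mount -t tmpfs -o size=8G,mode=1777 tmpfs /mnt/build
  mkdir -p /mnt/build/$JOB_NAME
  make ARCH=arm O=/mnt/build/$JOB_NAME omap2plus_defconfig
  make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- \
      O=/mnt/build/$JOB_NAME -j$(nproc)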
Kevin
_______________________________________________
linaro-dev mailing list
[email protected]
http://lists.linaro.org/mailman/listinfo/linaro-dev