Hi,
On Sat, Jul 30, 2016 at 2:39 PM, khmarbaise <[email protected]> wrote:
> so have you really measured the time you are "wasting" ? Furthermore which
> Maven version and plugin versions do you use? Are you using freestyle or
> Maven job type...
Yes, we have measured, both with the Tesla Maven Profiler and more
crudely by hand. The profiler led us to remove a lot of stuff that we
had attached to the build lifecycle and push it into dedicated jobs
(e.g. pathological but necessary m-enforcer-p enforcement, among other
things). For example, on a typical project, running a clean install
with all tests skipped versus a full clean install yields only about a
10-20% reduction in wall clock time. That, to me, means we repeat
about 80-90% of the work when we execute the build again just to run
the integration tests, and again for the acceptance tests. Those
timings were taken on my development machine.
We do not run the Maven job type anymore. I used to be a big fan of
it, but in order to get supremely fast flow we abandoned it; a lot of
the nice Maven-Jenkins integration just isn't useful enough to justify
its cost. I realize that is a sensitive topic and I don't want to get
into it further. All the research I performed is at least a year old,
I'm not looking to start a discussion on this, and I'm certainly not
willing to go through another round of research. We use Freestyle, and
that is unlikely to change unless someone can demonstrate very sizable
performance wins.
We always run the latest Maven version, currently 3.3.9, and all
dependencies and plugins are kept up to date (thanks to the
versions:display-{dependency,plugin}-updates mojos, which we run
weekly before performing updates). Finally, most developers on our
side use --threads 2.0C, but on Jenkins we do not, because we strictly
control the number of slaves per physical machine and don't like
surprises about how much load a slave will actually generate. Since we
moved to RHEL 7 with systemd and cgroups this probably isn't an actual
issue anymore and we should revisit it, but for simplicity -T 1 is
used on all slaves.
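The invocations described above can be sketched roughly like this (the
flags are the standard Maven/versions-plugin ones; the exact project
layout and percentages are from our setup, so treat it as illustrative):

```shell
# Rough timing comparison on a typical multi-module project:
# a full build versus one that skips test execution.
mvn clean install                 # full build, all tests
mvn clean install -DskipTests     # skips test execution
                                  # (only ~10-20% faster wall clock for us)

# Weekly check for stale dependencies and plugins, before updating.
mvn versions:display-dependency-updates versions:display-plugin-updates

# Developer machines parallelize across modules; Jenkins slaves stay
# single-threaded for predictable load per slave.
mvn -T 2.0C clean install         # developer machines
mvn -T 1 clean install            # Jenkins slaves
```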
> On Saturday, July 30, 2016 at 8:13:14 PM UTC+2, Jesse Farinacci wrote:
>>
>> My team faces similar challenges, and I agree with almost everything
>> said so far. I definitely echo the sentiment that it feels like there
>> is a lot of wasted repetition when invoking discrete phases
>> separately. We currently do this via separate jobs now, which often
>> run at different frequencies. However, we have been able to configure
>> Maven such that we really only have two repetitious phases, across
>> these multiple jobs, which luckily are not costly for us. Maven
>> generate-resources and compile phases, as we can skip over tests and
>> deployments which are not part of the logical unit being executed. In
>> the end, even though we know we are wasting some execution time, it
>> turns out not to be that much, and isn't yet worthy of any kind of
>> optimization or special attention.
>>
>> So right now our (~2min) compile+unit test runs every commit and only
>> runs Junit tests.
>
> 2 minutes... OK, how much time is "wasted"? (The Timestamper plugin, and
> the mojo execution view in Jenkins if you are using the Maven job type, can
> help here.)
> BTW: How many modules does your project consist of?
Please see above for timing calculation process and results. A typical
project for us has 25 Maven modules.
>> Our (~8min) compile+integration test runs almost
>
> You might run jobs in parallel ?
It would be kind of crummy if we burned 8 minutes on integration tests
when a 2-minute unit test run would have aborted the build pipeline
earlier. Our integration tests do not run the unit tests, and our user
acceptance tests run neither the integration nor the unit tests. We
basically check out the project, update the version to
1.0.$BUILD_NUMBER, run the unit tests, and, if they pass, push the
branch to GitLab. That kicks off a ton of compliance jobs, all in
parallel (yay!). After those all pass, we move to the integration test
stage. That stage just runs our Arquillian tests, and given how costly
it and the phases after it are in time and resources, we go back to
blocking the pipeline on this job.
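For what it's worth, the flow above could be sketched as a Scripted
Pipeline roughly like this. The job names, the branch naming, and the
exact shell commands are invented for illustration; only the overall
shape (unit, then parallel compliance, then blocking integration)
reflects what we actually do:

```groovy
// Hypothetical sketch of our stage ordering; names are made up.
node {
    stage('Unit') {
        checkout scm
        sh 'mvn versions:set -DnewVersion=1.0.$BUILD_NUMBER'
        sh 'mvn clean install -DskipITs'           // unit tests only
        sh 'git push origin HEAD:build-$BUILD_NUMBER' // triggers compliance
    }
    stage('Compliance') {
        // These run in parallel and do not block each other.
        parallel(
            'static-analysis': { build job: 'project-sonar' },
            'coverage':        { build job: 'project-jacoco' },
            'legal':           { build job: 'project-legal' }
        )
    }
    stage('Integration') {
        // Costly Arquillian phase: the pipeline blocks here.
        build job: 'project-arquillian-it'
    }
}
```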
>> Our (~15min) compile+user acceptance test runs
>> every day and only runs Selenium tests. Then we have lots of on the
>> side jobs which ensure legal compliance, code coverage, file encoding
>> fixes, code style fixes, static analysis, etc etc, which run hourly,
>> daily, or weekly, and are mostly fast (<5min).
>>
>> We are exploring Jenkins Pipeline in order to chain, both sequential
>> where necessary and then parallel where it makes sense, these discrete
>> jobs in such a way that we will have a continuous delivery available
>> at the end which has run the full gauntlet of all our testing and best
>> practices verification for every commit without doing unnecessary and
>> expensive steps, failing the pipeline as soon as possible. Each commit
>> triggers a build which triggers a new git branch which is passed to
>> subsequent phases,
>>
>> if any stage fails, the branch is deleted and makes
>> no further progress. Additionally, we are trying to leverage separate
>> Jenkins jobs for each discrete job so that we can trigger it on demand
>> in order to do some quick checking, we also find that we can get
>> easier cross-project reuse using that technique. We're using Git,
>> Apache Maven, Jenkins CI, Jacoco, Arquillian, SonarQube, Checkstyle,
>> PMD, FindBugs, and many others.
>>
>> There are only two critiques we can find with this style: 1) the
>> aforementioned modest amount of wasted generate-resources and compile
>> phases across multiple jobs in the same pipeline,
>> and 2) each phase is
>> producing artifacts to be tested and not reusing the ones built and
>> verified at previous stages, thus there may be a hypothetical
>> difference between what different stages are actually testing, though
>> we think the risk of this is not consequential.
>
>
> You could create Maven repositories in Jenkins to provide the artifacts to
> the other steps, or have Jenkins package them and hand them to the next
> step...
> So it sounds like you don't trust your version control? If you check out
> the same SHA-1 you will always get the same state, and if you build from
> that you will always get the same result, won't you? Or should I say you
> don't trust the build system to create the same result from the same
> source state?
I like the idea; I would also consider using the archive and unarchive
steps of Jenkins Pipeline. The problem, it seems to me, is how to run
unit and integration tests against previously built .jar files: skip
the compile phase for src/main/java and run just src/test/java and
src/it/java against some archived copy of the previously compiled
.jar. That didn't seem possible to me. We could definitely skip
packaging the web applications used for the Selenium user acceptance
tests, but we don't use Docker, as it isn't actually available on our
target platform.
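One approach we might try, though I haven't verified it end to end: if
the earlier stage archives the whole target/ directory and a later
stage restores it, the compiler plugin's skip property could avoid
recompiling the main sources. The properties below are the real
compiler/surefire/failsafe switches, but the workflow itself is only a
sketch, not something we have running:

```shell
# Sketch: reuse previously built classes instead of recompiling.
# Assumes target/ was restored from the earlier stage (e.g. via the
# Pipeline archive/unarchive steps) and that the plugin versions in
# use honor these skip properties.

# Compile and run only the tests, skipping src/main/java compilation.
mvn test-compile surefire:test -Dmaven.main.skip=true

# For the integration-test stage, run failsafe against the restored build.
mvn failsafe:integration-test failsafe:verify -Dmaven.main.skip=true
```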
>> This technique
>> otherwise seems like a holy grail to me and the team. I welcome ideas
>> about it.
>
> If it is already the holy grail I can't say anything about it...
:-D
--
You received this message because you are subscribed to the Google Groups
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/jenkinsci-users/CAArU9iZVr4i%3DcvngPNTm63kKr2LAQtGYiGLP-WBxuZ%3DmVKSLVg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.