Re: performance labs

2015-10-14 Thread Michael Neale
If broadening the scope a bit, I would like to include memory footprint
measurements too (something I spend time thinking about). Stability is of
more pressing importance, I agree.
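
For example, even crudely sampling the master's resident set size during a
test run would be a start. A minimal Linux-only sketch (the pgrep pattern,
interval and single-process assumption are placeholders, not a finished tool):

    # Sample the Jenkins master's resident memory once a minute (Linux only).
    # Assumes exactly one local process matching "jenkins.war".
    import subprocess, time

    pid = subprocess.check_output(["pgrep", "-f", "jenkins.war"]).split()[0].decode()
    while True:
        with open("/proc/%s/status" % pid) as f:
            rss = next(line for line in f if line.startswith("VmRSS:"))
        print(time.strftime("%H:%M:%S"), rss.strip())  # e.g. "VmRSS: 1234567 kB"
        time.sleep(60)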

On Thu, 15 Oct 2015 at 12:04 AM, Andrew Bayer <andrew.ba...@gmail.com>
wrote:

> Yeah, stability is my biggest concern, and an even harder thing to test
> for than performance. Might be worth scraping through JIRA to find examples
> of behavior that tends to trigger instability in various ways to come up
> with some ideas...
>
> A.
>
> On Wed, Oct 14, 2015 at 1:36 PM, Artur Szostak <aszos...@partner.eso.org>
> wrote:
>
>> The thread has been focusing on performance in terms of speed. But let me
>> add another performance dimension that honestly is much more important to
>> me right now (and causing me a lot of pain):
>> performance as in stability.
>>
>> The following kinds of tests might go a long way in first quantitatively
>> evaluating how stable Jenkins is and fixing these problems down the line.
>> - Perform continual start/stop cycles of the Jenkins master under various
>> loads (system stress).
>> - Perform continual build slave start/build/stop cycles under various
>> loads of the system and network. Ideally one would add simulations of
>> intermittent network failure and check that Jenkins follows the expected
>> error path.
>>
>> I don't know about other people's experience, but I see that above a
>> handful of build slave nodes one starts seeing a lot of connectivity and
>> start up / shutdown issues. I also suspect there are a number of race
>> conditions in there.
>>
>> --
>> *From:* jenkinsci-dev@googlegroups.com [jenkinsci-dev@googlegroups.com]
>> on behalf of Michael Neale [michael.ne...@gmail.com]
>> *Sent:* 14 October 2015 03:52
>> *To:* Jenkins Dev
>> *Subject:* Re: performance labs
>>
>> Ok, so it sounds like exhuming Stephen's scalability stuff (not sure if it
>> did startup time, but it doesn't sound like it would be hard to add) would
>> be a great place to start. Like you said, turning the dials and seeing what
>> happens is super useful. Even on VMs (vs bare metal) it would be informative,
>> as it's increasingly common to run Jenkins not on bare hardware.
>> On Wed, 14 Oct 2015 at 2:42 AM, Jesse Glick <jgl...@cloudbees.com> wrote:
>>
>>> On Tue, Oct 13, 2015 at 7:29 AM, Michael Neale <michael.ne...@gmail.com>
>>> wrote:
>>> > Looking at the variation of times people see, I am questioning the
>>> utility
>>> > of a generic test suite. Things vary so much there may be too many
>>> variables
>>> > at play to make something like this useful right now.
>>>
>>> Well, a generic test suite is not going to predict any given
>>> installation’s performance, of course. But it can serve as a controlled
>>> baseline by which you can measure the effects of changes. And many
>>> widely applicable bugs, like the ones Google engineers found, can be
>>> reproduced this way. When Stephen and I were poring over results from
>>> sample tests using his scalability framework, which did really generic
>>> stuff—run lots of builds from lots of jobs, each build producing gobs
>>> of output—it was immediately clear what was broken. You set n=10 and
>>> all is well. You set n=500 and things start to look worse. You set
>>> n=1000 and the system basically hangs, and you look at a thread dump,
>>> and oh yes a thousand threads are waiting on this one lock for no good
>>> reason. So you fix that problem and rerun the test and you find the
>>> next problem.
>>>

RE: performance labs

2015-10-14 Thread Artur Szostak
The thread has been focusing on performance in terms of speed. But let me add 
another performance dimension that honestly is much more important to me right 
now (and causing me a lot of pain):
performance as in stability.

The following kinds of tests might go a long way in first quantitatively 
evaluating how stable Jenkins is and fixing these problems down the line.
- Perform continual start/stop cycles of the Jenkins master under various loads 
(system stress).
- Perform continual build slave start/build/stop cycles under various loads of 
the system and network. Ideally one would add simulations of intermittent 
network failure and check that Jenkins follows the expected error path.

I don't know about other people's experience, but I see that above a handful of 
build slave nodes one starts seeing a lot of connectivity and start up / 
shutdown issues. I also suspect there are a number of race conditions in there.
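
To make the first kind of test concrete, a minimal start/stop cycle harness
might look like the sketch below (the war path, port, JENKINS_HOME and cycle
count are placeholder assumptions; a start that times out or a shutdown that
hangs is exactly the kind of stability finding one is after):

    # Repeatedly boot and stop a Jenkins master; flag cycles that hang.
    import os, subprocess, time, urllib.request

    def up(url):
        try:
            # Jenkins serves 503 while starting; 200 means fully up.
            return urllib.request.urlopen(url, timeout=5).status == 200
        except OSError:  # URLError/HTTPError/timeouts all derive from OSError
            return False

    env = dict(os.environ, JENKINS_HOME="/tmp/perf-home")
    for cycle in range(100):
        proc = subprocess.Popen(
            ["java", "-jar", "jenkins.war", "--httpPort=8080"], env=env)
        deadline = time.time() + 600
        while time.time() < deadline and not up("http://localhost:8080/"):
            time.sleep(2)
        started = time.time() < deadline
        proc.terminate()
        try:
            proc.wait(timeout=120)
            print(cycle, "ok" if started else "START TIMED OUT")
        except subprocess.TimeoutExpired:
            print(cycle, "SHUTDOWN HUNG")
            proc.kill()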


From: jenkinsci-dev@googlegroups.com [jenkinsci-dev@googlegroups.com] on behalf 
of Michael Neale [michael.ne...@gmail.com]
Sent: 14 October 2015 03:52
To: Jenkins Dev
Subject: Re: performance labs

Ok, so it sounds like exhuming Stephen's scalability stuff (not sure if it did
startup time, but it doesn't sound like it would be hard to add) would be a
great place to start. Like you said, turning the dials and seeing what happens
is super useful. Even on VMs (vs bare metal) it would be informative, as it's
increasingly common to run Jenkins not on bare hardware.
On Wed, 14 Oct 2015 at 2:42 AM, Jesse Glick 
<jgl...@cloudbees.com<mailto:jgl...@cloudbees.com>> wrote:
On Tue, Oct 13, 2015 at 7:29 AM, Michael Neale 
<michael.ne...@gmail.com<mailto:michael.ne...@gmail.com>> wrote:
> Looking at the variation of times people see, I am questioning the utility
> of a generic test suite. Things vary so much there may be too many variables
> at play to make something like this useful right now.

Well, a generic test suite is not going to predict any given
installation’s performance, of course. But it can serve as a controlled
baseline by which you can measure the effects of changes. And many
widely applicable bugs, like the ones Google engineers found, can be
reproduced this way. When Stephen and I were poring over results from
sample tests using his scalability framework, which did really generic
stuff—run lots of builds from lots of jobs, each build producing gobs
of output—it was immediately clear what was broken. You set n=10 and
all is well. You set n=500 and things start to look worse. You set
n=1000 and the system basically hangs, and you look at a thread dump,
and oh yes a thousand threads are waiting on this one lock for no good
reason. So you fix that problem and rerun the test and you find the
next problem.



Re: performance labs

2015-10-14 Thread Andrew Bayer
Yeah, stability is my biggest concern, and an even harder thing to test for
than performance. Might be worth scraping through JIRA to find examples of
behavior that tends to trigger instability in various ways to come up with
some ideas...

A.

On Wed, Oct 14, 2015 at 1:36 PM, Artur Szostak <aszos...@partner.eso.org>
wrote:

> The thread has been focusing on performance in terms of speed. But let me
> add another performance dimension that honestly is much more important to
> me right now (and causing me a lot of pain):
> performance as in stability.
>
> The following kinds of tests might go a long way in first quantitatively
> evaluating how stable Jenkins is and fixing these problems down the line.
> - Perform continual start/stop cycles of the Jenkins master under various
> loads (system stress).
> - Perform continual build slave start/build/stop cycles under various
> loads of the system and network. Ideally one would add simulations of
> intermittent network failure and check that Jenkins follows the expected
> error path.
>
> I don't know about other people's experience, but I see that above a
> handful of build slave nodes one starts seeing a lot of connectivity and
> start up / shutdown issues. I also suspect there are a number of race
> conditions in there.
>
> --
> *From:* jenkinsci-dev@googlegroups.com [jenkinsci-dev@googlegroups.com]
> on behalf of Michael Neale [michael.ne...@gmail.com]
> *Sent:* 14 October 2015 03:52
> *To:* Jenkins Dev
> *Subject:* Re: performance labs
>
> Ok, so it sounds like exhuming Stephen's scalability stuff (not sure if it
> did startup time, but it doesn't sound like it would be hard to add) would
> be a great place to start. Like you said, turning the dials and seeing what
> happens is super useful. Even on VMs (vs bare metal) it would be informative,
> as it's increasingly common to run Jenkins not on bare hardware.
> On Wed, 14 Oct 2015 at 2:42 AM, Jesse Glick <jgl...@cloudbees.com> wrote:
>
>> On Tue, Oct 13, 2015 at 7:29 AM, Michael Neale <michael.ne...@gmail.com>
>> wrote:
>> > Looking at the variation of times people see, I am questioning the
>> utility
>> > of a generic test suite. Things vary so much there may be too many
>> variables
>> > at play to make something like this useful right now.
>>
>> Well, a generic test suite is not going to predict any given
>> installation’s performance, of course. But it can serve as a controlled
>> baseline by which you can measure the effects of changes. And many
>> widely applicable bugs, like the ones Google engineers found, can be
>> reproduced this way. When Stephen and I were poring over results from
>> sample tests using his scalability framework, which did really generic
>> stuff—run lots of builds from lots of jobs, each build producing gobs
>> of output—it was immediately clear what was broken. You set n=10 and
>> all is well. You set n=500 and things start to look worse. You set
>> n=1000 and the system basically hangs, and you look at a thread dump,
>> and oh yes a thousand threads are waiting on this one lock for no good
>> reason. So you fix that problem and rerun the test and you find the
>> next problem.
>>

Re: performance labs

2015-10-13 Thread Michael Neale
Can you tell us more about the hardware used?
On Tue, 13 Oct 2015 at 9:29 PM, Michael Neale 
wrote:

> Yes, recently I have heard rumblings about fingerprints and other accreted
> files over time.
>
> Looking at the variation of times people see, I am questioning the utility
> of a generic test suite. Things vary so much there may be too many
> variables at play to make something like this useful right now. It's
> certainly useful to profile specific cases when people have a problem, and
> it's great there have been recent improvements (e.g. the Apache example), but it
> may be a bit hard to justify right now.
>
>
> On Fri, 9 Oct 2015 at 8:29 PM, Robert Sandell 
> wrote:
>
>> A theory of mine is that startup times can depend on how "old" your build
>> records are: if there needs to be a lot of conversion of old data
>> structures in new plugin versions, that could have a measurable impact;
>> maybe even OldDataMonitor gets involved and slows things down.
>>
>> So there could be a difference in generated test data vs. "real world"
>> data where it has grown over time.
>>
>> /B
>>
>> On Fri, Oct 9, 2015 at 11:15 AM, James Nord  wrote:
>>
>>> So I actually tried creating test data a year or so ago (maven job type
>>> with a large number of sub modules) and creating several of them in folders
>>> - but I never saw the issues (3 hour cold startup time) I was seeing on the
>>> production instance :(
>>>
>>> Maven project is available at
>>> https://github.com/jtnord/maven-test-project if you want to experiment.
>>>
>>> It may well have been around fingerprinting, as my fingerprint file on
>>> production was > 2GB,
>>> but I invested in some better storage and got the startup to under 3
>>> minutes, so I no longer had the inclination to try any further...
>>>
>>> On Wednesday, October 7, 2015 at 12:29:47 AM UTC+2, Michael Neale wrote:

 Yes, that would be quite interesting. A standalone tool could be
 useful. There are lots of things to measure, but generating a lot of noise
 and jobs would be a great start. When you say "job upload" what were you
 measuring?
 On Fri, 2 Oct 2015 at 9:59 PM, Vojtech Juranek 
 wrote:

> Hi,
>
> > Is this something people would be interested in?
>
> yes, sounds interesting
>
> > Having either large sample JENKINS_HOME specimens or test code that
> can
> > generate pathological data would be required, as well as automation
> around
> > running it on a variety of machines (not necessarily cloud, ideally
> want to
> > be testing code not cloud infrastructure).
>
> IMHO it's better to have some code to generate various types of
> workspaces and
> loads - same as in the mentioned presentation, you should check
> performance
> characteristics for various job types, log sizes, number of plugins
> used etc.
> Using one big workspace can be harder to understand, as it can combine
> multiple
> issues together and you can end up with a tuned Jenkins which works fine
> with
> this use case, but performs not that well with other use cases.
>
> I did a very simple job generator of freestyle jobs [1] for PerfCake [2]
> to
> measure responsiveness of job upload in the past. If you are
> interested, I can
> update it to generate various jobs, or it can be done in any other
> tool you
> prefer (or a standalone application if you like).
>
> Cheers
> Vojta
>
> [1]
> https://github.com/vjuranek/jenkins-perf-tests/blob/master/perfcake/src/main/resources/scenarios/create-freestyle.xml
> [2] https://www.perfcake.org/
>

Re: performance labs

2015-10-13 Thread Jesse Glick
On Tue, Oct 13, 2015 at 7:29 AM, Michael Neale  wrote:
> Looking at the variation of times people see, I am questioning the utility
> of a generic test suite. Things vary so much there may be too many variables
> at play to make something like this useful right now.

Well, a generic test suite is not going to predict any given
installation’s performance, of course. But it can serve as a controlled
baseline by which you can measure the effects of changes. And many
widely applicable bugs, like the ones Google engineers found, can be
reproduced this way. When Stephen and I were poring over results from
sample tests using his scalability framework, which did really generic
stuff—run lots of builds from lots of jobs, each build producing gobs
of output—it was immediately clear what was broken. You set n=10 and
all is well. You set n=500 and things start to look worse. You set
n=1000 and the system basically hangs, and you look at a thread dump,
and oh yes a thousand threads are waiting on this one lock for no good
reason. So you fix that problem and rerun the test and you find the
next problem.
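
For anyone wanting to reproduce that kind of dial-turning, a rough sketch of
the loop (the base URL and generated config.xml are assumptions, and a real
master would also need auth/CSRF crumb handling):

    # Create n jobs, build them all, and time how long the queue takes to
    # drain; on a hang, take a thread dump (jstack <pid>) and find the hot lock.
    import time, urllib.request

    BASE = "http://localhost:8080"
    CONFIG = open("freestyle-config.xml", "rb").read()  # a generated job config

    def post(path, data):
        req = urllib.request.Request(BASE + path, data=data, method="POST")
        req.add_header("Content-Type", "text/xml")
        urllib.request.urlopen(req)

    for n in (10, 500, 1000):
        for i in range(n):
            post("/createItem?name=perf-%d-%d" % (n, i), CONFIG)
        start = time.time()
        for i in range(n):
            post("/job/perf-%d-%d/build" % (n, i), b"")
        while b'"items":[]' not in urllib.request.urlopen(
                BASE + "/queue/api/json").read():
            time.sleep(5)
        print("n=%d: queue drained in %.0fs" % (n, time.time() - start))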



Re: performance labs

2015-10-13 Thread Michael Neale
Ok, so it sounds like exhuming Stephen's scalability stuff (not sure if it
did startup time, but it doesn't sound like it would be hard to add) would
be a great place to start. Like you said, turning the dials and seeing what
happens is super useful. Even on VMs (vs bare metal) it would be informative,
as it's increasingly common to run Jenkins not on bare hardware.
On Wed, 14 Oct 2015 at 2:42 AM, Jesse Glick  wrote:

> On Tue, Oct 13, 2015 at 7:29 AM, Michael Neale 
> wrote:
> > Looking at the variation of times people see, I am questioning the
> utility
> > of a generic test suite. Things vary so much there may be too many
> variables
> > at play to make something like this useful right now.
>
> Well, a generic test suite is not going to predict any given
> installation’s performance, of course. But it can serve as a controlled
> baseline by which you can measure the effects of changes. And many
> widely applicable bugs, like the ones Google engineers found, can be
> reproduced this way. When Stephen and I were poring over results from
> sample tests using his scalability framework, which did really generic
> stuff—run lots of builds from lots of jobs, each build producing gobs
> of output—it was immediately clear what was broken. You set n=10 and
> all is well. You set n=500 and things start to look worse. You set
> n=1000 and the system basically hangs, and you look at a thread dump,
> and oh yes a thousand threads are waiting on this one lock for no good
> reason. So you fix that problem and rerun the test and you find the
> next problem.
>


Re: performance labs

2015-10-09 Thread James Nord
So I actually tried creating test data a year or so ago (maven job type 
with a large number of sub modules) and creating several of them in folders 
- but I never saw the issues (3 hour cold startup time) I was seeing on the 
production instance :(

Maven project is available at https://github.com/jtnord/maven-test-project 
if you want to experiment.

It may well have been around fingerprinting, as my fingerprint file on 
production was > 2GB,
but I invested in some better storage and got the startup to under 3 
minutes, so I no longer had the inclination to try any further...

On Wednesday, October 7, 2015 at 12:29:47 AM UTC+2, Michael Neale wrote:
>
> Yes, that would be quite interesting. A standalone tool could be useful. 
> There are lots of things to measure, but generating a lot of noise and jobs 
> would be a great start. When you say "job upload" what were you measuring?
> On Fri, 2 Oct 2015 at 9:59 PM, Vojtech Juranek wrote:
>
>> Hi,
>>
>> > Is this something people would be interested in?
>>
>> yes, sounds interesting
>>
>> > Having either large sample JENKINS_HOME specimens or test code that can
>> > generate pathological data would be required, as well as automation 
>> around
>> > running it on a variety of machines (not necessarily cloud, ideally 
>> want to
>> > be testing code not cloud infrastructure).
>>
>> IMHO it's better to have some code to generate various types of workspaces 
>> and
>> loads - same as in the mentioned presentation, you should check 
>> performance
>> characteristics for various job types, log sizes, number of plugins used 
>> etc.
>> Using one big workspace can be harder to understand, as it can combine 
>> multiple
>> issues together and you can end up with a tuned Jenkins which works fine 
>> with
>> this use case, but performs not that well with other use cases.
>>
>> I did a very simple job generator of freestyle jobs [1] for PerfCake [2] to
>> measure responsiveness of job upload in the past. If you are interested, 
>> I can
>> update it to generate various jobs, or it can be done in any other tool 
>> you
>> prefer (or a standalone application if you like).
>>
>> Cheers
>> Vojta
>>
>> [1] 
>> https://github.com/vjuranek/jenkins-perf-tests/blob/master/perfcake/src/main/resources/scenarios/create-freestyle.xml
>> [2] https://www.perfcake.org/
>>


Re: Re: performance labs

2015-10-09 Thread Vojtech Juranek
On Tuesday 06 October 2015 22:29:31 Michael Neale wrote:
> When you say "job upload" what were you measuring?

it was actually a measurement of how Jenkins' responsiveness to POST requests 
changes when you switch the underlying web server, and it was done by one of my 
students as part of his master's thesis; see [1] for more details

(the perf tests were very basic - the main goal of the thesis was to implement 
the Winstone -> Undertow switch in Jenkins, as this thesis was started before 
Kohsuke implemented the switch to Jetty)

[1] https://groups.google.com/forum/#!topic/jenkinsci-dev/7dOnX2mNaw0



Re: performance labs

2015-10-09 Thread Robert Sandell
A theory of mine is that startup times can depend on how "old" your build
records are: if there needs to be a lot of conversion of old data
structures in new plugin versions, that could have a measurable impact;
maybe even OldDataMonitor gets involved and slows things down.

So there could be a difference in generated test data vs. "real world" data
where it has grown over time.

/B

On Fri, Oct 9, 2015 at 11:15 AM, James Nord  wrote:

> So I actually tried creating test data a year or so ago (maven job type
> with a large number of sub modules) and creating several of them in folders
> - but I never saw the issues (3 hour cold startup time) I was seeing on the
> production instance :(
>
> Maven project is available at https://github.com/jtnord/maven-test-project
> if you want to experiment.
>
> It may well have been around fingerprinting, as my fingerprint file on
> production was > 2GB,
> but I invested in some better storage and got the startup to under 3
> minutes, so I no longer had the inclination to try any further...
>
> On Wednesday, October 7, 2015 at 12:29:47 AM UTC+2, Michael Neale wrote:
>>
>> Yes, that would be quite interesting. A standalone tool could be useful.
>> There are lots of things to measure, but generating a lot of noise and jobs
>> would be a great start. When you say "job upload" what were you measuring?
>> On Fri, 2 Oct 2015 at 9:59 PM, Vojtech Juranek 
>> wrote:
>>
>>> Hi,
>>>
>>> > Is this something people would be interested in?
>>>
>>> yes, sounds interesting
>>>
>>> > Having either large sample JENKINS_HOME specimens or test code that can
>>> > generate pathological data would be required, as well as automation
>>> around
>>> > running it on a variety of machines (not necessarily cloud, ideally
>>> want to
>>> > be testing code not cloud infrastructure).
>>>
>>> IMHO it's better to have some code to generate various types of
>>> workspaces and
>>> loads - same as in the mentioned presentation, you should check
>>> performance
>>> characteristics for various job types, log sizes, number of plugins used
>>> etc.
>>> Using one big workspace can be harder to understand, as it can combine
>>> multiple
>>> issues together and you can end up with a tuned Jenkins which works fine
>>> with
>>> this use case, but performs not that well with other use cases.
>>>
>>> I did a very simple job generator of freestyle jobs [1] for PerfCake [2] to
>>> measure responsiveness of job upload in the past. If you are interested,
>>> I can
>>> update it to generate various jobs, or it can be done in any other tool
>>> you
>>> prefer (or a standalone application if you like).
>>>
>>> Cheers
>>> Vojta
>>>
>>> [1]
>>> https://github.com/vjuranek/jenkins-perf-tests/blob/master/perfcake/src/main/resources/scenarios/create-freestyle.xml
>>> [2] https://www.perfcake.org/
>>>



-- 
Robert Sandell
*Software Engineer*
*CloudBees Inc.*



Re: performance labs

2015-10-08 Thread Michael Neale
Wow, fantastic. Actually, 3 minutes means that the changes are pretty
successful - I doubt there would be a whole lot left to optimise in that case,
right? Or could it be even more lazily loaded? Still, probably a great example.
Taking that base and then adding more plugins and config changes to the mix
would also shed light on when things suddenly go bad.

Are there publicly available tarball backups of that JENKINS_HOME, or are
there secrets in it?

On Wed, Oct 7, 2015 at 11:33 PM Andrew Bayer  wrote:

> So builds.apache.org is like 1500 jobs plus another ~30k Maven modules
> (stupid Maven project type!), and $JENKINS_HOME is somewhere around 1 TB. Until
> recently, startup time was a good 15 minutes or so, but going from
> 1.565 to 1.609 seems to have made a *massive* difference in startup time -
> down to like three minutes.
>
> A.
>
> On Fri, Oct 2, 2015 at 2:18 AM, Michael Neale 
> wrote:
>
>> Oh wow - they may be a perfect test workload. Do you know if boot up
>> times are in the many many minutes for those instances? Some data on the
>> jenkins_home dir sizes?
>> It would be ideal to use open-source workloads (even if it is a point in
>> time) vs something contrived, or a scrubbed version of a private user's data
>> that has been donated; however, it would want to be pretty hefty (not
>> necessarily the 2TB jenkins_homes that I have heard of, or 40 minute boot up,
>> but something up there would be nice).
>>
>> I guess the next step is an initial scope of what we want to measure. To
>> keep things focussed I am thinking of boot-up-to-job-load time, and listing
>> a few things.
>>
>>
>>
>>
>> On Thursday, October 1, 2015 at 5:55:08 PM UTC+10, Andrew Bayer wrote:
>>>
>>> ...and I can most likely provide builds.apache.org's
>>> jobs/builds/load/etc as a use case.
>>>
>>> A.
>>>
>>> On Thu, Oct 1, 2015 at 9:54 AM, Andrew Bayer 
>>> wrote:
>>>
 +1 - that'd be fantastic. I'd love to help with that.

 A.

 On Thu, Oct 1, 2015 at 4:50 AM, Michael Neale 
 wrote:

> Hey all - I have thought it would be a great idea to have some quasi
> formal "performance lab" setups for Jenkins.
>
> Recently around Jenkins 2.0 planning threads there have been lots of
> comments around performance challenges. Often things like launch time
> (talking many minutes to an hour for large workspaces - launch times are
> probably a good proxy for a whole lot of issues, but there are other 
> issues
> too).
>
> At JUC west there was an excellent talk by Akshay Dayal from Google,
> on scaling jenkins. I highly recommend flicking through the slides
> 
> or watching the talk
> 
> if you have time.
>
> Basically, they had some performance goals and started by setting up
> measurements and test scenarios to validate their progress - both around
> scalability of slaves (an interesting issue) but also on bootup time (time
> to recovery) which is very interesting. It reminded me that to improve
> something like this you kind of need easily repeatable measurements in
> controlled environments, which currently I don't think the Jenkins project
> has set up? (correct me if wrong).
>
> I know Stephen Connolly did some work a few years back on slave
> scalability which was interesting (building out a test suite
> infrastructure), but I am not aware of subsequent efforts.
>
> Is this something people would be interested in?
>
> Having either large sample JENKINS_HOME specimens or test code that
> can generate pathological data would be required, as well as automation
> around running it on a variety of machines (not necessarily cloud, ideally
> want to be testing code not cloud infrastructure).
>
>

Re: performance labs

2015-10-07 Thread Andrew Bayer
So builds.apache.org is like 1500 jobs plus another ~30k Maven modules
(stupid Maven project type!), and $JENKINS_HOME is somewhere around 1 TB. Until
recently, startup time was a good 15 minutes or so, but going from
1.565 to 1.609 seems to have made a *massive* difference in startup time -
down to like three minutes.

A.

On Fri, Oct 2, 2015 at 2:18 AM, Michael Neale 
wrote:

> Oh wow - they may be a perfect test workload. Do you know if boot up times
> are in the many many minutes for those instances? Some data on the
> jenkins_home dir sizes?
> It would be ideal to use open-source workloads (even if it is a point in
> time) vs something contrived, or a scrubbed version of a private user's data
> that has been donated; however, it would want to be pretty hefty (not
> necessarily the 2TB jenkins_homes that I have heard of, or 40 minute boot up,
> but something up there would be nice).
>
> I guess the next step is an initial scope of what we want to measure. To
> keep things focussed I am thinking of boot-up-to-job-load time, and listing
> a few things.
>
>
>
>
> On Thursday, October 1, 2015 at 5:55:08 PM UTC+10, Andrew Bayer wrote:
>>
>> ...and I can most likely provide builds.apache.org's
>> jobs/builds/load/etc as a use case.
>>
>> A.
>>
>> On Thu, Oct 1, 2015 at 9:54 AM, Andrew Bayer  wrote:
>>
>>> +1 - that'd be fantastic. I'd love to help with that.
>>>
>>> A.
>>>
>>> On Thu, Oct 1, 2015 at 4:50 AM, Michael Neale 
>>> wrote:
>>>
 Hey all - I have thought it would be a great idea to have some quasi
 formal "performance lab" setups for Jenkins.

 Recently around Jenkins 2.0 planning threads there have been lots of
 comments around performance challenges. Often things like launch time
 (talking many minutes to an hour for large workspaces - launch times are
 probably a good proxy for a whole lot of issues, but there are other issues
 too).

 At JUC west there was an excellent talk by Akshay Dayal from Google, on
 scaling jenkins. I highly recommend flicking through the slides
 
 or watching the talk
 
 if you have time.

 Basically, they had some performance goals and started by setting up
 measurements and test scenarios to validate their progress - both around
 scalability of slaves (an interesting issue) but also on bootup time (time
 to recovery) which is very interesting. It reminded me that to improve
 something like this you kind of need easily repeatable measurements in
 controlled environments, which currently I don't think the Jenkins project
 has set up? (correct me if wrong).

 I know Stephen Connolly did some work a few years back on slave
 scalability which was interesting (building out a test suite
 infrastructure), but I am not aware of subsequent efforts.

 Is this something people would be interested in?

 Having either large sample JENKINS_HOME specimens or test code that can
 generate pathological data would be required, as well as automation around
 running it on a variety of machines (not necessarily cloud, ideally want to
 be testing code not cloud infrastructure).


Re: performance labs

2015-10-06 Thread Michael Neale
Agree on all counts, as long as it is sufficiently large/complicated
to exercise enough code that it represents what people are observing.

Certainly the result would have to be a trend of some meaningful statistic
over time, with permutations of versions. I think sticking to launch time
(or time to launch/render some important page) is worth measuring.

I guess some tool people can clone and try themselves (which implies
something that generates a large-enough workspace, though downloading a
tar.gz of a large one could also do) would be great to
encourage experimentation and ultimately profiling. Obviously, to get a
trend over time we need somewhere to run it regularly against versions and
permutations of plugins. It quickly gets complicated.

So where to start: a repo with some parametrised launch script people can
try? Use Jenkins itself to test launch times and calculate and establish
trends?
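
As a strawman for that launch script: boot the master against a given
JENKINS_HOME, poll until the root page stops returning 503, and append the
result to a trend file (the paths, port and version label are placeholders):

    # Strawman boot-to-ready timer; appends one row per run to a trend CSV.
    import csv, os, subprocess, time, urllib.request

    env = dict(os.environ, JENKINS_HOME="/var/perf/jenkins_home")
    proc = subprocess.Popen(
        ["java", "-jar", "jenkins.war", "--httpPort=8080"], env=env)
    t0 = time.time()
    while True:
        try:
            # Jenkins answers 503 while still loading jobs; 200 = ready.
            if urllib.request.urlopen(
                    "http://localhost:8080/", timeout=5).status == 200:
                break
        except OSError:
            pass
        time.sleep(1)
    elapsed = time.time() - t0
    with open("boot-trend.csv", "a", newline="") as f:
        csv.writer(f).writerow([time.strftime("%Y-%m-%d"), "1.609", elapsed])
    proc.terminate()
    print("boot-to-200: %.1fs" % elapsed)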


On Friday, October 2, 2015 at 7:12:24 PM UTC+10, Artur Szostak wrote:
>
> Hi,
>
> You do not necessarily need very large setups to do these performance 
> tests. What you need to be doing is performing proper measurements on the 
> right things. I would even go so far as to say that large setups might 
> actually make things difficult to disentangle.
>
> Since it sounds like the project has nothing in terms of systematic 
> performance measurements, I would advise starting smaller. You certainly 
> want larger setups to be part of the mix, but I think the first focus 
> should be on the smaller setups for a baseline. Also, one should check 
> behaviour when you have a large mix of plugins.
>
> Again, the more important point is measurement. To anyone who is setting 
> this up who is not a physicist by training (maybe you need reminding if you 
> are): a single number is not a measurement. At a minimum, a measurement is 
> 2 numbers, lower and upper range. And even better is a mean + standard 
> deviation or confidence interval. Why am I pointing this out, you may ask? 
> Well, say I measured the time for something and tell you it took 
> 30 seconds. Then I change some code, time it again, and get 25 seconds. Have 
> I improved things? You would maybe say yes. But what if I tell you the 
> measurement was 30 +/- 10 versus 25 +/- 10 seconds? I haven't really 
> improved anything, now have I? It's just noise. So, if we are serious about 
> test-driven software development, we should also be serious about 
> measurement.
>
> It is also important to record and keep trends of the timings. There will 
> be outliers and there will be weird stuff in the trends from time to time, 
> which needs to be checked, analysed and understood.
>
> As a last point on measurement, I don't know if there is an easy way to get 
> profile information, but a breakdown of how much CPU and I/O each plugin 
> or core service consumes should be a goal. If you can measure that, you can 
> make quick progress on weeding out the culprits.
>
> Kind regards.
>
> Artur
>
>
> --
> *From:* jenkin...@googlegroups.com  [
> jenkin...@googlegroups.com ] on behalf of Michael Neale [
> michae...@gmail.com ]
> *Sent:* 02 October 2015 02:18
> *To:* Jenkins Developers
> *Subject:* Re: performance labs
>
> Oh wow - they may be a perfect test workload. Do you know if boot up times 
> are in the many many minutes for those instances? Some data on the 
> jenkins_home dir sizes? 
> It would be ideal to use open-source workloads (even if it is a point in 
> time) vs something contrived, or a scrubbed version of a private user's data 
> that has been donated; however, it would want to be pretty hefty (not 
> necessarily the 2TB jenkins_homes that I have heard of, or 40 minute boot up, 
> but something up there would be nice). 
>
> I guess the next step is an initial scope of what we want to measure. To 
> keep things focussed I am thinking of boot-up-to-job-load time, and listing 
> a few things. 
>
>
>
>
> On Thursday, October 1, 2015 at 5:55:08 PM UTC+10, Andrew Bayer wrote: 
>>
>> ...and I can most likely provide builds.apache.org's 
>> jobs/builds/load/etc as a use case. 
>>
>> A.
>>
>> On Thu, Oct 1, 2015 at 9:54 AM, Andrew Bayer <andrew...@gmail.com> wrote:
>>
>>> +1 - that'd be fantastic. I'd love to help with that. 
>>>
>>> A.
>>>
>>> On Thu, Oct 1, 2015 at 4:50 AM, Michael Neale <michae...@gmail.com> wrote:
>>>
>>>> Hey all - I have thought it would be a great idea to have some quasi 
>>>> formal "performance lab" setups for Jenkins.  
>>>>

Re: performance labs

2015-10-06 Thread Michael Neale
Yes, that would be quite interesting. A standalone tool could be useful.
There are lots of things to measure, but generating a lot of noise and jobs
would be a great start. When you say "job upload" what were you measuring?
On Fri, 2 Oct 2015 at 9:59 PM, Vojtech Juranek  wrote:

> Hi,
>
> > Is this something people would be interested in?
>
> yes, sounds interesting
>
> > Having either large sample JENKINS_HOME specimens or test code that can
> > generate pathological data would be required, as well as automation
> around
> > running it on a variety of machines (not necessarily cloud, ideally want
> to
> > be testing code not cloud infrastructure).
>
> IMHO it's better to have some code to generate various types of workspaces
> and
> loads - same as in the mentioned presentation, you should check performance
> characteristics for various job types, log sizes, number of plugins used
> etc.
> Using one big workspace can be harder to understand, as it can combine
> multiple
> issues together and you can end up with a tuned Jenkins which works fine with
> this use case, but performs not that well with other use cases.
>
> I did a very simple job generator of freestyle jobs [1] for PerfCake [2] to
> measure responsiveness of job upload in the past. If you are interested, I
> can
> update it to generate various jobs, or it can be done in any other tool you
> prefer (or a standalone application if you like).
>
> Cheers
> Vojta
>
> [1]
> https://github.com/vjuranek/jenkins-perf-tests/blob/master/perfcake/src/main/resources/scenarios/create-freestyle.xml
> [2] https://www.perfcake.org/
>


Re: performance labs

2015-10-02 Thread Vojtech Juranek
Hi,

> Is this something people would be interested in?

yes, sounds interesting
 
> Having either large sample JENKINS_HOME specimens or test code that can
> generate pathological data would be required, as well as automation around
> running it on a variety of machines (not necessarily cloud, ideally want to
> be testing code not cloud infrastructure).

IMHO it's better to have some code to generate various types of workspaces and 
loads - same as in the mentioned presentation, you should check performance 
characteristics for various job types, log sizes, number of plugins used etc. 
Using one big workspace can be harder to understand, as it can combine multiple 
issues together and you can end up with a tuned Jenkins which works fine with 
this use case, but performs not that well with other use cases.

I did a very simple job generator of freestyle jobs [1] for PerfCake [2] to 
measure responsiveness of job upload in the past. If you are interested, I can 
update it to generate various jobs, or it can be done in any other tool you 
prefer (or a standalone application if you like).
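
For a sense of the shape, a standalone generator could be as small as the
sketch below (the minimal config.xml and name scheme are illustrative only,
not the PerfCake scenario from [1]; auth/CSRF handling is omitted):

    # Illustrative generator: N freestyle jobs with configurable log volume.
    import urllib.request

    BASE = "http://localhost:8080"
    TEMPLATE = """<project>
      <builders>
        <hudson.tasks.Shell>
          <command>seq %d | sed 's/^/noise line /'</command>
        </hudson.tasks.Shell>
      </builders>
    </project>"""

    def create(name, log_lines):
        req = urllib.request.Request(BASE + "/createItem?name=" + name,
                                     data=(TEMPLATE % log_lines).encode(),
                                     method="POST")
        req.add_header("Content-Type", "text/xml")
        urllib.request.urlopen(req)

    for i in range(50):
        create("gen-quiet-%d" % i, 10)       # short logs
        create("gen-noisy-%d" % i, 100000)   # gobs of output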

Cheers
Vojta

[1] 
https://github.com/vjuranek/jenkins-perf-tests/blob/master/perfcake/src/main/resources/scenarios/create-freestyle.xml
[2] https://www.perfcake.org/



RE: performance labs

2015-10-02 Thread Artur Szostak
Hi,

You do not necessarily need very large setups to do these performance tests. 
What you need to be doing is performing proper measurements on the right 
things. I would even go so far as to say that large setups might actually make 
things difficult to disentangle.

Since it sounds like the project has nothing in terms of systematic 
performance measurements, I would advise starting smaller. You certainly want 
larger setups to be part of the mix, but I think the first focus should be on 
the smaller setups for a baseline. Also, one should check behaviour when you 
have a large mix of plugins.

Again, the more important point is measurement. To anyone who is setting this 
up who is not a physicist by training (maybe you need reminding if you are): a 
single number is not a measurement. At a minimum, a measurement is 2 numbers, 
lower and upper range. And even better is a mean + standard deviation or 
confidence interval. Why am I pointing this out, you may ask? Well, say I 
measured the time for something and tell you it took 30 seconds. 
Then I change some code, time it again, and get 25 seconds. Have I improved 
things? You would maybe say yes. But what if I tell you the measurement was 30 
+/- 10 versus 25 +/- 10 seconds? I haven't really improved anything, now have 
I? It's just noise. So, if we are serious about test-driven software 
development, we should also be serious about measurement.
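
A concrete version of that comparison (the numbers are invented purely to
illustrate the point):

    # Report mean +/- standard deviation, not a single number.
    import statistics

    before = [28.1, 41.3, 30.5, 22.7, 35.9, 27.4]  # invented boot times (s)
    after  = [24.0, 33.8, 19.2, 28.6, 21.5, 27.1]

    for label, xs in (("before", before), ("after", after)):
        print("%s: %.1f +/- %.1f s"
              % (label, statistics.mean(xs), statistics.stdev(xs)))
    # Prints roughly 31.0 +/- 6.6 s before and 25.7 +/- 5.3 s after: the
    # intervals overlap heavily, so the "improvement" is still just noise.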

It is also important to record and keep trends of the timings. There will be 
outliers and there will be weird stuff in the trends from time to time, which 
needs to be checked, analysed and understood.

As a last point on measurement, I don't know if there is an easy way to get 
profile information, but a breakdown of how much CPU and I/O each plugin or 
core service consumes should be a goal. If you can measure that, you can make 
quick progress on weeding out the culprits.

Kind regards.

Artur



From: jenkinsci-dev@googlegroups.com [jenkinsci-dev@googlegroups.com] on behalf 
of Michael Neale [michael.ne...@gmail.com]
Sent: 02 October 2015 02:18
To: Jenkins Developers
Subject: Re: performance labs

Oh wow - they may be a perfect test workload. Do you know if boot up times are 
in the many many minutes for those instances? Some data on the jenkins_home dir 
sizes?
It would be ideal to use open-source workloads (even if it is a point in time) 
vs something contrived, or a scrubbed version of a private user's data that has 
been donated; however, it would want to be pretty hefty (not necessarily the 2TB 
jenkins_homes that I have heard of, or 40 minute boot up, but something up 
there would be nice).

I guess the next step is an initial scope of what we want to measure. To keep 
things focussed I am thinking of boot-up-to-job-load time, and listing a few 
things.




On Thursday, October 1, 2015 at 5:55:08 PM UTC+10, Andrew Bayer wrote:
...and I can most likely provide builds.apache.org's 
jobs/builds/load/etc as a use case.

A.

On Thu, Oct 1, 2015 at 9:54 AM, Andrew Bayer 
<andrew...@gmail.com> wrote:
+1 - that'd be fantastic. I'd love to help with that.

A.

On Thu, Oct 1, 2015 at 4:50 AM, Michael Neale 
<michae...@gmail.com> wrote:
Hey all - I have thought it would be a great idea to have some quasi formal 
"performance lab" setups for Jenkins.

Recently around Jenkins 2.0 planning threads there have been lots of comments 
around performance challenges. Often things like launch time (talking many 
minutes to an hour for large workspaces - launch times are probably a good 
proxy for a whole lot of issues, but there are other issues too).

At JUC west there was an excellent talk by Akshay Dayal from Google, on scaling 
jenkins. I highly recommend flicking through the 
slides<https://www.cloudbees.com/jenkins/juc-2015/abstracts/us-west/02-01-1600> 
or watching the 
talk<https://www.youtube.com/watch?v=9-DUVroz7yk&index=16&list=PLvBBnHmZuNQKyjKInevHYsRq8J7Q1I6Fq>
 if you have time.

Basically, they had some performance goals and started by setting up 
measurements and test scenarios to validate their progress - both around 
scalability of slaves (an interesting issue) but also on bootup time (time to 
recovery) which is very interesting. It reminded me that to improve something 
like this you kind of need easily repeatable measurements in controlled 
environments, which currently I don't think the Jenkins project has set up? 
(correct me if wrong).

I know Stephen Connolly did some work a few years back on slave scalability 
which was interesting (building out a test suite infrastructure), but I am not 
aware of subsequent efforts.

Is this something people would be interested in?

Having either large sample JENKINS_HOME specimens or test code that can 
generate pathological data would be required, as well as automation around 
running it on a variety of machines (not necessarily cloud, ideally want to be 
testing code not cloud infrastructure).

Re: performance labs

2015-10-01 Thread Andrew Bayer
...and I can most likely provide builds.apache.org's jobs/builds/load/etc
as a use case.

A.

On Thu, Oct 1, 2015 at 9:54 AM, Andrew Bayer  wrote:

> +1 - that'd be fantastic. I'd love to help with that.
>
> A.
>
> On Thu, Oct 1, 2015 at 4:50 AM, Michael Neale 
> wrote:
>
>> Hey all - I have thought it would be a great idea to have some quasi
>> formal "performance lab" setups for Jenkins.
>>
>> Recently around Jenkins 2.0 planning threads there have been lots of
>> comments around performance challenges. Often things like launch time
>> (talking many minutes to an hour for large workspaces - launch times are
>> probably a good proxy for a whole lot of issues, but there are other issues
>> too).
>>
>> At JUC west there was an excellent talk by Akshay Dayal from Google, on
>> scaling jenkins. I highly recommend flicking through the slides
>> 
>> or watching the talk
>> 
>> if you have time.
>>
>> Basically, they had some performance goals and started by setting up
>> measurements and test scenarios to validate their progress - both around
>> scalability of slaves (an interesting issue) but also on bootup time (time
>> to recovery) which is very interesting. It reminded me that to improve
>> something like this you kind of need easily repeatable measurements in
>> controlled environments, which currently I don't think the Jenkins project
>> has set up? (correct me if wrong).
>>
>> I know Stephen Connolly did some work a few years back on slave
>> scalability which was interesting (building out a test suite
>> infrastructure), but I am not aware of subsequent efforts.
>>
>> Is this something people would be interested in?
>>
>> Having either large sample JENKINS_HOME specimens or test code that can
>> generate pathological data would be required, as well as automation around
>> running it on a variety of machines (not necessarily cloud, ideally want to
>> be testing code not cloud infrastructure).
>>
>>


Re: performance labs

2015-10-01 Thread Andrew Bayer
+1 - that'd be fantastic. I'd love to help with that.

A.

On Thu, Oct 1, 2015 at 4:50 AM, Michael Neale wrote:

> Hey all - I have thought it would be a great idea to have some
> quasi-formal "performance lab" setups for Jenkins.
>
> Recently around the Jenkins 2.0 planning threads there have been lots of
> comments about performance challenges - often things like launch time
> (taking many minutes to an hour for large workspaces; launch times are
> probably a good proxy for a whole lot of issues, but there are other issues
> too).
>
> At JUC West there was an excellent talk by Akshay Dayal from Google on
> scaling Jenkins. I highly recommend flicking through the slides or watching
> the talk if you have time.
>
> Basically, they had some performance goals and started by setting up
> measurements and test scenarios to validate their progress - both around
> scalability of slaves (an interesting issue) but also on boot-up time (time
> to recovery), which is very interesting. It reminded me that to improve
> something like this you really need easily repeatable measurements in
> controlled environments, which I don't think the Jenkins project currently
> has set up (correct me if I'm wrong).
>
> I know Stephen Connolly did some work a few years back on slave
> scalability, which was interesting (building out a test suite
> infrastructure), but I am not aware of any subsequent efforts.
>
> Is this something people would be interested in?
>
> Having either large sample JENKINS_HOME specimens or test code that can
> generate pathological data would be required, as well as automation around
> running it on a variety of machines (not necessarily cloud; ideally we want
> to be testing code, not cloud infrastructure).



Re: performance labs

2015-10-01 Thread Michael Neale
Oh wow - they may be a perfect test workload. Do you know if boot-up times 
are in the many, many minutes for those instances? Do you have any data on 
the JENKINS_HOME dir sizes?
It would be ideal to use open-source workloads (even if only a point-in-time 
snapshot) rather than something contrived, or a scrubbed version of data 
that a private user has donated. It would want to be pretty hefty, though - 
not necessarily the 2 TB JENKINS_HOMEs or 40-minute boot-ups I have heard 
of, but something up there would be nice.
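
For the dir-size question, a rough sketch of how one might summarise a 
JENKINS_HOME - assuming the standard layout of jobs/<name>/config.xml with 
build records under jobs/<name>/builds/ (the default path below is just an 
example, and build-dir naming varies across Jenkins versions, so the build 
count is approximate):

    #!/usr/bin/env python3
    # Rough JENKINS_HOME sizing summary - a sketch, assuming the standard
    # jobs/<name>/config.xml layout; build-dir naming varies between
    # Jenkins versions, so the build count is approximate.
    import os
    import sys

    def summarize(jenkins_home):
        jobs_dir = os.path.join(jenkins_home, "jobs")
        job_count = build_count = total_bytes = 0
        for job in os.listdir(jobs_dir):
            if not os.path.isfile(os.path.join(jobs_dir, job, "config.xml")):
                continue  # not a job directory
            job_count += 1
            builds = os.path.join(jobs_dir, job, "builds")
            if os.path.isdir(builds):
                # count entries that look like build numbers
                build_count += sum(1 for b in os.listdir(builds) if b.isdigit())
        for root, _, files in os.walk(jenkins_home):
            for name in files:
                try:
                    total_bytes += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # file vanished mid-walk (e.g. a running master)
        print("jobs: %d, builds: %d, total: %.1f GB"
              % (job_count, build_count, total_bytes / 1e9))

    if __name__ == "__main__":
        summarize(sys.argv[1] if len(sys.argv) > 1 else "/var/lib/jenkins")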

I guess the next step is an initial scope of what we want to measure. To 
keep things focused I am thinking of boot-up to job-load time, and then 
listing a few other things to track.
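
As a first stab at making 'boot-up to job-load time' measurable: start the 
master, then poll the stock JSON API until it answers with the job list. A 
minimal sketch, assuming a master on localhost:8080 with anonymous read 
access (the URL and poll interval are arbitrary choices):

    #!/usr/bin/env python3
    # Rough "boot to job-load" timer - a sketch. Start the Jenkins master,
    # then run this; it polls the stock JSON API until the master answers
    # with its job list (Jenkins answers 503 while it is still loading).
    import json
    import time
    from urllib.request import urlopen

    URL = "http://localhost:8080/api/json?tree=jobs[name]"  # assumed address

    start = time.time()
    while True:  # sketch only: no overall timeout
        try:
            with urlopen(URL, timeout=5) as resp:
                data = json.load(resp)
            print("jobs visible after %.1fs (%d jobs)"
                  % (time.time() - start, len(data.get("jobs", []))))
            break
        except OSError:  # connection refused, or HTTP 503 while booting
            time.sleep(1)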

On Thursday, October 1, 2015 at 5:55:08 PM UTC+10, Andrew Bayer wrote:
>
> ...and I can most likely provide builds.apache.org's jobs/builds/load/etc 
> as a use case.
>
> A.
>
> On Thu, Oct 1, 2015 at 9:54 AM, Andrew Bayer wrote:
>
>> +1 - that'd be fantastic. I'd love to help with that.
>>
>> A.
>>
>> On Thu, Oct 1, 2015 at 4:50 AM, Michael Neale wrote:
>>
>>> Hey all - I have thought it would be a great idea to have some
>>> quasi-formal "performance lab" setups for Jenkins.
>>>
>>> Recently around the Jenkins 2.0 planning threads there have been lots of
>>> comments about performance challenges - often things like launch time
>>> (taking many minutes to an hour for large workspaces; launch times are
>>> probably a good proxy for a whole lot of issues, but there are other
>>> issues too).
>>>
>>> At JUC West there was an excellent talk by Akshay Dayal from Google on
>>> scaling Jenkins. I highly recommend flicking through the slides or
>>> watching the talk if you have time.
>>>
>>> Basically, they had some performance goals and started by setting up
>>> measurements and test scenarios to validate their progress - both around
>>> scalability of slaves (an interesting issue) but also on boot-up time
>>> (time to recovery), which is very interesting. It reminded me that to
>>> improve something like this you really need easily repeatable
>>> measurements in controlled environments, which I don't think the Jenkins
>>> project currently has set up (correct me if I'm wrong).
>>>
>>> I know Stephen Connolly did some work a few years back on slave
>>> scalability, which was interesting (building out a test suite
>>> infrastructure), but I am not aware of any subsequent efforts.
>>>
>>> Is this something people would be interested in? 
>>>
>>> Having either large sample JENKINS_HOME specimens or test code that can
>>> generate pathological data would be required, as well as automation
>>> around running it on a variety of machines (not necessarily cloud;
>>> ideally we want to be testing code, not cloud infrastructure).



performance labs

2015-09-30 Thread Michael Neale
Hey all - I have thought it would be a great idea to have some quasi-formal 
"performance lab" setups for Jenkins.

Recently around the Jenkins 2.0 planning threads there have been lots of 
comments about performance challenges - often things like launch time 
(taking many minutes to an hour for large workspaces; launch times are 
probably a good proxy for a whole lot of issues, but there are other issues 
too).

At JUC West there was an excellent talk by Akshay Dayal from Google on 
scaling Jenkins. I highly recommend flicking through the slides or watching 
the talk if you have time.

Basically, they had some performance goals and started by setting up 
measurements and test scenarios to validate their progress - both around 
scalability of slaves (an interesting issue) but also on boot-up time (time 
to recovery), which is very interesting. It reminded me that to improve 
something like this you really need easily repeatable measurements in 
controlled environments, which I don't think the Jenkins project currently 
has set up (correct me if I'm wrong).

I know Stephen Connolly did some work a few years back on slave scalability, 
which was interesting (building out a test suite infrastructure), but I am 
not aware of any subsequent efforts.

Is this something people would be interested in? 

Having either large sample JENKINS_HOME specimens or test code that can 
generate pathological data would be required, as well as automation around 
running it on a variety of machines (not necessarily cloud; ideally we want 
to be testing code, not cloud infrastructure).
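
To illustrate the 'test code that can generate pathological data' idea, a 
minimal sketch that stamps out a few thousand skeleton freestyle jobs in an 
empty JENKINS_HOME - the config.xml template is a stripped-down placeholder, 
and a realistic generator would also want fake build records and fat logs:

    #!/usr/bin/env python3
    # Sketch of a pathological JENKINS_HOME generator: writes N skeleton
    # freestyle jobs under jobs/<name>/config.xml so the master has
    # something heavy to load at boot. Counts and paths are arbitrary knobs.
    import os
    import sys

    # Minimal freestyle job config - a placeholder, not the full schema.
    CONFIG_XML = """<?xml version='1.0' encoding='UTF-8'?>
    <project>
      <description>generated load-test job %d</description>
      <keepDependencies>false</keepDependencies>
      <canRoam>true</canRoam>
      <disabled>false</disabled>
    </project>
    """

    def generate(jenkins_home, n_jobs=5000):
        for i in range(n_jobs):
            job_dir = os.path.join(jenkins_home, "jobs", "load-test-%05d" % i)
            os.makedirs(os.path.join(job_dir, "builds"), exist_ok=True)
            with open(os.path.join(job_dir, "config.xml"), "w") as f:
                f.write(CONFIG_XML % i)
        print("wrote %d skeleton jobs under %s" % (n_jobs, jenkins_home))

    if __name__ == "__main__":
        generate(sys.argv[1] if len(sys.argv) > 1 else "./jenkins-home-test")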

