Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-14 Thread Sven Anderson
TL;DR: +1 for 1-year release, without reducing face-to-face meetings.

On Wed, Dec 13, 2017 at 6:35 PM Matt Riedemann  wrote:

>
> Same question as above about just doing CD then.


Why not get rid of stable branches and releases altogether, then?

Honestly, I'm a big fan of CD, but CD and OpenStack is nothing but a wet
dream. That's why I don't think the 1-year release proposal is about
cutting travel costs. Compared to the cost of release production upstream
and downstream, travel costs are a joke. I fully support the 1-year cycle,
not because I think it's good to have fewer releases in general (the
opposite is true, I like "release early and often"), but because I think
it's a necessary adaptation to the reality of OpenStack development.
Release production upstream and downstream creates a _huge_ overhead at
the moment, whether we like that fact or not, and cutting this overhead in
half is great! In the end, release production is done in large part by the
same developers that do the upstream development, so it would free a lot
of resources for actual upstream development.

Of course, in a perfect world, upstream OpenStack would be a continuous,
release-free stream of fresh and bug-free software that people can pull
downstream releases from whenever they like. But that's not the reality,
at least not as long as the scope of the product is broader than "Nova on
Devstack". And I honestly don't see a project like OpenStack becoming
"CD'able" in the foreseeable future. To reach CD (which, again, would be
awesome) you have a dependency chain like "better test coverage" ->
"shorter stabilization phase" -> "more frequent releases" -> "CD". By the
time we reach a stabilization phase of zero days, that is, when no stable
branches are required at all, we will have reached true CD. But I don't
see stabilization becoming shorter or easier; rather the opposite, because
OpenStack becomes more and more complex and featureful. So, as long as we
can't achieve that, we have to bite the bullet and adapt the release
cadence to the stabilization and production effort, whether we like it or
not.

BTW, I don't see the 1-year release cycle as connected to the frequency of
face-to-face meetings (PTG, Summit, ...), which I think should _not_ be
reduced.


Cheers,

Sven


Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-13 Thread Sven Anderson
On Sat, Dec 9, 2017 at 12:35 AM Alex Schultz  wrote:

> Please take some time to review the list of blueprints currently
> associated with Rocky[0] to see if your efforts have been moved. If
> you believe you're close to implementing the feature in the next week
> or two, let me know and we can move it back into Queens. If you think
> it will take an extended period of time (more than 2 weeks) to land
> but we need it in Queens, please submit an FFE.
>

As discussed on IRC today, I'd like to try to implement

https://blueprints.launchpad.net/tripleo/+spec/tripleo-realtime

by Queens M3. It has already been punted for many releases, and now
depends on the ironic ansible driver, which just merged and is getting its
finishing touches. Since it's a pure add-on feature that is off by default
and shouldn't have any impact on existing functionality, it's a pretty
safe thing to try on a best-effort basis. If we see that it becomes
unfeasible to land this by M3, I will punt it.

Even if I make good progress next week, it is very unlikely that I will
finish it this year, so I'd also like to submit an FFE for it.

Cheers,

Sven


Re: [openstack-dev] [tripleo] [ci] recheck impact on CI infrastructure

2017-01-02 Thread Sven Anderson
Hi Emilien and all,

On 16.12.2016 01:26, Emilien Macchi wrote:
> On Thu, Dec 15, 2016 at 12:22 PM, Sven Anderson  wrote:
>> Hi all,
>>
>> while I was waiting again for the CI to be fixed and didn't want to
>> torture it with additional rechecks, I wanted to find out how much of
>> our CI infrastructure we waste with rechecks. My assumption was that
>> every recheck is a waste of resources caused by a false negative,
>> because it renders the previous build useless. So I wrote a small
>> script[1] to calculate how many rechecks are made on average per built
>> patch-set. It calculates the number of patch-sets of merged changes
>> that CI was testing (some patch-sets are not tested, because they were
>> updated before CI started testing), the number of rechecks issued on
>> these patch-sets, and a value "CI-factor", which is the factor by which
>> the rechecks increased the CI runs; that is, without rechecks it would
>> be 1, and if every tested patch-set had exactly one recheck it would be 2.
> 
> I see 2 different topics here.
> 
> # One is not related to $topic but still worth mentioning:
> "while I was waiting again for the CI to be fixed"
> 
> This week has been tough, and many of us burnt our time to resolve
> different complex problems in TripleO CI, mostly related to external
> dependencies (qemu upgrade, centos 7.3 upgrade, tripleo-ci infra,
> etc).
> Resolving these problems is very challenging and you'll notice that
> only a few of us actually work on this task, while a lot of people
> continue to push their features "hoping" that it will pass CI
> sometimes and if not, well, we'll do 'recheck'.
> That is a way of working I would say. I personally can't continue to
> code if the project I'm working on has broken CI.
> 
> In a previous experience, I've been working in a team where everyone
> stopped regular work when CI was broken and focus on fixing it.
> I'm not saying everyone should stop their tasks and help, but this
> "wait and see" comment doesn't actually help us to move forward.
> People need to get more involved in CI and be more helpful. I know
> it's difficult, but it's something anyone can learn, like you would
> learn how to write Python code for example.

I think you took my mail the wrong way. I didn't want to say that anyone
is not doing their job right, and I didn't want to complain. I know how
challenging this is; in my previous job I was the person running the CI
(among other things). I just wanted to share the results, because I think
it's interesting what percentage of our CI infrastructure is "wasted" by
rechecks, on the one hand to raise awareness that we shouldn't just
blindly "recheck until verified", and on the other hand to show how
valuable it is to keep CI stable.

Is it really the case that more CI people would help here? I would have
expected that, as long as we don't do more modularized testing, it doesn't
scale. Would more CI people fix the problems more quickly? Or is it more
that the burden could be distributed across more shoulders, so that it
isn't always the same people who have to interrupt their work? The latter
wouldn't improve the situation, but it would spread the burden in a fairer
manner.

With my post I mainly wanted to provide reliable data and emphasize how
important a stable CI (and the work that goes into it) is, and that we
should all restrain ourselves from blindly rechecking.


Happy New Year to everyone!

Sven



[openstack-dev] [tripleo] [ci] recheck impact on CI infrastructure

2016-12-15 Thread Sven Anderson
Hi all,

while I was waiting again for the CI to be fixed and didn't want to
torture it with additional rechecks, I wanted to find out how much of
our CI infrastructure we waste with rechecks. My assumption was that
every recheck is a waste of resources caused by a false negative,
because it renders the previous build useless. So I wrote a small
script[1] to calculate how many rechecks are made on average per built
patch-set. It calculates the number of patch-sets of merged changes
that CI was testing (some patch-sets are not tested, because they were
updated before CI started testing), the number of rechecks issued on
these patch-sets, and a value "CI-factor", which is the factor by which
the rechecks increased the CI runs; that is, without rechecks it would
be 1, and if every tested patch-set had exactly one recheck it would be 2.
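
To make the arithmetic concrete, here is a minimal sketch (this is not the
script from [1], just an illustration): it assumes the per-patch-set
recheck counts have already been extracted, e.g. by counting "recheck"
comments on each tested patch-set of a merged change.

    # Minimal sketch of the CI-factor arithmetic described above.
    from collections import defaultdict

    def ci_factor(tested_patchsets, rechecks):
        """CI-factor = (tested patch-sets + rechecks) / tested patch-sets.

        1.0 means no recheck was needed; 2.0 means on average one extra
        CI run (one recheck) per tested patch-set.
        """
        if tested_patchsets == 0:
            return 0.0
        return (tested_patchsets + rechecks) / tested_patchsets

    def monthly_breakdown(samples):
        """samples: iterable of (month, recheck_count), one entry per
        tested patch-set of a merged change.
        Returns {month: (patchsets, rechecks, CI-factor)}."""
        patches = defaultdict(int)
        rechecks = defaultdict(int)
        for month, recheck_count in samples:
            patches[month] += 1
            rechecks[month] += recheck_count
        return {m: (patches[m], rechecks[m],
                    round(ci_factor(patches[m], rechecks[m]), 2))
                for m in sorted(patches)}

    # Example: three patch-sets in January with 0, 1 and 2 rechecks.
    print(monthly_breakdown([(1, 0), (1, 1), (1, 2)]))  # {1: (3, 3, 2.0)}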

The results were not as bad as my gut feeling suggested; we are below 2
for most of the projects I tested. :-) But still, on THT for instance we
use 71% more resources because of the false negatives. I made monthly
breakdowns, so you can at least see a positive trend.


Here are the results:

Project: tripleo-heat-templates

 month  patches  rechecks  CI-factor
     1      221       102       1.46
     2      282       300       2.06
     3      588       567       1.96
     4      220       253       2.15
     5      333       242       1.73
     6      459       325       1.71
     7      612       390       1.64
     8      694       442       1.64
     9      717       440       1.61
    10      474       316       1.67
    11      358       189       1.53
    12      168        80       1.48
 total     5126      3646       1.71

Project: tripleo-common

 month  patches  rechecks  CI-factor
     1       73        29       1.40
     2       59        48       1.81
     3       92       101       2.10
     4       17        19       2.12
     5       47        27       1.57
     6       83        46       1.55
     7       66        26       1.39
     8      209       102       1.49
     9      261       129       1.49
    10      110        51       1.46
    11      121        47       1.39
    12       40        19       1.48
 total     1178       644       1.55

Project: tripleo-puppet-elements

 month  patches  rechecks  CI-factor
     1       24         9       1.38
     2        9        20       3.22
     3        7        16       3.29
     4        9        24       3.67
     5       14        17       2.21
     6       17        33       2.94
     7       12        16       2.33
     8       15        21       2.40
     9       10        14       2.40
    10       12         5       1.42
    11       34        25       1.74
    12       10        13       2.30
 total      173       213       2.23

Project: puppet-tripleo

 month  patches  rechecks  CI-factor
     1       29        23       1.79
     2       36        68       2.89
     3       40        44       2.10
     4       68        74       2.09
     5      129        43       1.33
     6      265       206       1.78
     7      235       118       1.50
     8      193       130       1.67
     9      147       123       1.84
    10      233       159       1.68
    11      137        86       1.63
    12       20         5       1.25
 total     1532      1079       1.70


[1] https://gist.github.com/ansiwen/e139cbf25bc243d30629e0157fc753ff



Re: [openstack-dev] [TripleO] Your draft logo & a sneak peek

2016-10-29 Thread Sven Anderson


On 27.10.2016 19:54, Ryan Brady wrote:
> 
> How is the current draft logo expressive?  What does it express to you?

To me it looks like a stylized owl with a three-quarter circle as opened
wings, and with lines of three nested O's.


Cheers,

Sven




Re: [openstack-dev] [TripleO] Your draft logo & a sneak peek

2016-10-27 Thread Sven Anderson
Hi all,

On 25.10.2016 14:28, Ryan Brady wrote:
> I feel the logo draft is missing a lot of the detail and fidelity of our
> current logo. 
> The draft logo has lines that are much too thick especially in the face
> area.  It's
> recognizable from a shorter distance than our current logo. 
> 
> Our current logo has more of a cartoon / angry birds type feel to it -
> something
> with personality.  To me, the draft logo is devoid of personality. I
> understand why
> the foundation wants to have more consistency between logos, but I'm hoping
> this isn't the final design approach.

to balance the feedback a bit: I like the new logo. I'm sure it could be
improved, but in general I think it qualifies as a logo, while the old
version, from my perspective, doesn't really. Logos _have_ to be sparse in
detail and still expressive. That's what differentiates them from a normal
drawing.

Cheers,

Sven



[openstack-dev] [TripleO] FFE request for ec2-api integration

2016-08-31 Thread Sven Anderson
Hi,

I'm working on the integration of the puppet-ec2api module. It is
(probably) a very straightforward task. The only current impediment is
that the puppet CI is not yet deploying puppet-ec2api and running tempest
against it. I'm currently working on getting the ec2 credentials, which
are needed to run tempest against ec2api, created within puppet-tempest.
Once that is done, it should be a very quick thing. Here are the changes
that are not yet ready/merged; the change for THT is still missing:

https://review.openstack.org/#/c/357971
https://review.openstack.org/#/c/356442
https://review.openstack.org/#/c/336562

I'd like to formally request an FFE for this.

Thanks,

Sven



Re: [openstack-dev] [TripleO] Austin summit - session recap/summary

2016-05-04 Thread Sven Anderson
Thanks a ton, Steve! I have to admit that, although I was at the summit in
person, I can get a lot more out of your write-up than I did from the
sessions themselves. (Probably because of a mixture of not being a native
speaker and being new to TripleO.)

Cheers,

Sven