Sorry, I accidentally hit the wrong key and the message went out early. I was commenting on the definition of success. I thought the debate in the meeting was very productive - when a CI posts a +1, that is a success, and when a CI posts a -1 (or no vote with a comment), that is also a success, as this reflects that the CI is doing what it is supposed to do.
So, when it comes to stackalytics, it is more critical to show whether a given CI is operational or not - and for how long. Another thing we can debate is how to present the +1/-1 votes by a given CI - unless we have some benchmark, it will be hard to judge success/failure. So, I am of the opinion that, initially, we report only on the operational status and duration of the CIs, and a counter of +1 and -1 votes over a period of time. For example, looking at the Arista CI, it has cast 7,958 votes so far and has been operational for the past 6 months. This information is not available anywhere - hence, presenting this kind of information on a dashboard created by Ilya would be very useful to the community as well as to the vendors (a rough sketch of how such counters could be pulled from Gerrit appears after the quoted thread below). Thoughts?

-Sukhdev

On Mon, Jun 30, 2014 at 12:49 PM, Sukhdev Kapur <[email protected]> wrote:

> Well, Luke, this is a collaborative effort by everybody. Having these CI
> systems in place ensures that one person's code does not break another
> person's code and vice versa. Therefore, having these CI systems
> operational and voting 24x7 is a critical step in achieving this goal.
>
> However, the details as to how and what should be tested are definitely
> debatable, and the team has done an excellent job in converging on that.
>
> Now, as to the issue at hand which Anita is describing: I attended the
> meeting this morning and was very pleased with the debate that took place
> and the definition of success.
>
>
> On Mon, Jun 30, 2014 at 12:27 PM, Luke Gorrie <[email protected]> wrote:
>
>> On 30 June 2014 21:08, Anita Kuno <[email protected]> wrote:
>>
>>> I am disappointed to realize that Ilya (or stackalytics, I don't know
>>> where this is coming from) is unwilling to cease making up definitions
>>> of success for third-party CI systems to allow the OpenStack community
>>> to arrive at its own definition.
>>
>> There is indeed a risk that the new dashboards won't give a meaningful
>> view of whether a 3rd party CI is voting correctly or not.
>>
>> However, there is an elephant in the room and a much more important
>> problem:
>>
>> Measuring how accurately a CI is voting says much more about a driver
>> author's "Gerrit-fu" ability to operate a CI system than it does about
>> whether the code they have contributed to OpenStack actually works, and
>> the latter is what is actually important.
>>
>> To my mind, the whole 3rd party testing discussion should refocus on
>> helping developers maintain good working code and less on waving "you
>> will be kicked out of OpenStack if you don't keep your swarm of nebulous
>> daemons running 24/7".
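For what it's worth, here is a rough sketch of how such +1/-1 counters could be pulled out of Gerrit's REST API with a few lines of Python. The account name "arista-test" is just a placeholder, and a real tally would need to paginate through results rather than count a single page; this is only meant to show that the raw data is already there:

import json
import requests

GERRIT = "https://review.openstack.org"
CI_ACCOUNT = "arista-test"   # hypothetical account name for the CI

def count_votes(value, limit=500):
    # Standard Gerrit label search operator: label:<name>=<value>,<user>
    query = "label:Verified=%+d,%s" % (value, CI_ACCOUNT)
    resp = requests.get(GERRIT + "/changes/",
                        params={"q": query, "n": limit})
    resp.raise_for_status()
    # Gerrit prefixes its JSON output with ")]}'" to defeat XSSI;
    # strip that first line before parsing.
    changes = json.loads(resp.text.split("\n", 1)[1])
    # NOTE: a real tally must paginate (the "_more_changes" field and
    # the "start" parameter); this sketch only counts the first page.
    return len(changes)

print("+1 votes: %d" % count_votes(1))
print("-1 votes: %d" % count_votes(-1))

The operational-duration metric would fall out of the same data: the timestamp of the CI account's oldest and newest votes brackets the window in which it has been active.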
