Re: [openstack-dev] [OpenStack-Infra] [infra] Third Party CI naming and contact (action required)

2014-08-14 Thread Franck Yelles
Hi James,

I have added the Nuage CI system to the list; I also took the liberty
of reordering the list alphabetically.

Franck


On Wed, Aug 13, 2014 at 11:23 AM, James E. Blair cor...@inaugust.com wrote:
 Hi,

 We've updated the registration requirements for third-party CI systems
 here:

   http://ci.openstack.org/third_party.html

 We now have 86 third-party CI systems registered and have undertaken an
 effort to make things more user-friendly for the developers who interact
 with them.  There are two important changes to be aware of:

 1) We now generally name third-party systems in a descriptive manner
 including the company and product they are testing.  We have renamed
 currently-operating CI systems to match these standards to the best of
 our abilities.  Some of them ended up with particularly bad names (like
 "Unknown Function...").  If your system is one of these, please join us
 in #openstack-infra on Freenode to establish a more descriptive name.

 2) We have established a standard wiki page template to supply a
 description of the system, what is tested, and contact information for
 each system.  See https://wiki.openstack.org/wiki/ThirdPartySystems for
 an index of such pages and instructions for creating them.  Each
 third-party CI system will have its own page in the wiki and it must
 include a link to that page in every comment that it leaves in Gerrit.

 If you operate a third-party CI system, please ensure that you register
 a wiki page and update your system to link to it in every new Gerrit
 comment by the end of August.  Beginning in September, we will disable
 systems that have not been updated.

 Thanks,

 Jim


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-15 Thread Franck Yelles
Hi Edgar,

I am running the Nuage CI.
The Nuage CI has run and successfully posted the result for the first
review.

Because of infra issues (the internet connection can be down, etc.), we
post our -1 votes manually. A -1 takes some time away from developers, so
before voting -1 I want to be sure that it's a valid issue. (We talked
about this during the third-party meeting on Mondays.)

I have posted the failure report; you can see that the job ran and
uploaded the results to our server, but we did not vote, for the reason
mentioned above:
http://208.113.169.228/nuage-ci/29_114629_1/ (see the timestamp of the files)

Let me know if you have any questions.

Franck (lyxus)


On Fri, Aug 15, 2014 at 3:35 PM, Edgar Magana edgar.mag...@workday.com
wrote:

 Team,

 I did a quick audit on the Neutron CI. Very sad results. Only a few
 plugins and drivers are running properly and testing all Neutron commits.
 I created a report here:
 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers


 We will discuss the actions to take at the next Neutron IRC meeting, so
 please reach out to me to clarify the status of your CI.
 I used two commits to quickly verify CI reliability:

 https://review.openstack.org/#/c/114393/

 https://review.openstack.org/#/c/40296/


 I would expect all plugins and drivers to pass on the first one and fail
 on the second, but I got many surprises.

 Neutron code quality and reliability are a top priority; if you ignore
 this report, your plugin/driver will be a candidate for removal from the
 Neutron tree.

 Cheers,

 Edgar

 P.S. I hate to be the inquisitor here... but someone has to do the dirty job!


 On 8/14/14, 8:30 AM, Kyle Mestery mest...@mestery.com wrote:

 Folks, I'm not sure all CI accounts are running sufficient tests.
 Per the requirements wiki page here [1], everyone needs to be running
 more than just Tempest API tests, yet API-only runs are what I still see
 from most Neutron third-party CI setups. I'd like to ask everyone who
 operates a third-party CI account for Neutron to please look at the link
 below and make sure you are running the appropriate tests. If you have
 questions, the weekly third-party meeting [2] is a great place to ask
 them.
 
 Thanks,
 Kyle
 
 [1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
 [2] https://wiki.openstack.org/wiki/Meetings/ThirdParty
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] - rebasing patches for CI

2014-08-26 Thread Franck Yelles
I am experiencing the exact same issue. Several reviews have not been
rebased recently, which makes some of my tests fail.
What should the behaviour be from the CI's point of view?
1) Vote -1 and request a rebase
2) Vote 0 and request a rebase
3) Have the CI do the cherry-pick, but as Kevin was stating, that needs
some custom code that I want to avoid

Thanks,
Franck


On Thu, Jul 24, 2014 at 4:15 PM, Kevin Benton blak...@gmail.com wrote:
 Cherry-picking onto the target branch requires an extra step and custom code
 that I wanted to avoid.
 Right now I can just pass the gerrit ref into devstack's local.conf as the
 branch and everything works.
 If there were a way to get that Zuul ref, I could just use that instead
 and no new code would be required.
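
 For illustration, a minimal local.conf along those lines might look like
 this (the Neutron patchset ref is a hypothetical stand-in):

     [[local|localrc]]
     # Point devstack at a Gerrit patchset ref instead of a branch name;
     # refs/changes/93/114393/2 is illustrative only.
     NEUTRON_BRANCH=refs/changes/93/114393/2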

 Is exposing that ref in a known format/location something the infra team
 might consider?

 Thanks


 On Tue, Jul 22, 2014 at 4:16 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-07-21 11:36:43 -0700 (-0700), Kevin Benton wrote:
  I see. So then back to my other question: is it possible to get
  access to the same branch that is being passed to the OpenStack CI
  devstack tests?

  For example, in the console output I can see it uses a ref like
  refs/zuul/master/Z75ac747d605b4eb28d4add7fa5b99890 [1]. Is that ref
  visible somewhere (other than the logs, of course) where it could be
  used in a third-party system?

 Right now, no. It's information passed from Zuul to a Jenkins master
 via Gearman, but as far as I know is currently only discoverable
 within the logs and the job parameters displayed in Jenkins. There
 has been some discussion in the past of Zuul providing some more
 detailed information to third-party systems (perhaps the capability
 to add them as additional Gearman workers) but that has never been
 fully fleshed out.

 For the case of independent pipelines (which is all I would expect a
 third-party CI to have any interest in running for the purpose of
 testing a proposed change) it should be entirely sufficient to
 cherry-pick a patch/series from our Gerrit onto the target branch.
 Only _dependent_ pipelines currently make use of Zuul's capability
 to provide a common ref representing a set of different changes
 across multiple projects, since independent pipelines will only ever
 have an available ZUUL_REF on a single project (the same project for
 which the change is being proposed).
 --
 Jeremy Stanley
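
 A rough sketch of that cherry-pick approach for an independent
 third-party job (the change and patchset numbers below are hypothetical):

     # Fetch a proposed Neutron change from Gerrit and apply it to the
     # target branch; change 114393, patchset 2 is illustrative only.
     git clone https://git.openstack.org/openstack/neutron
     cd neutron
     git checkout master
     git fetch https://review.openstack.org/openstack/neutron \
         refs/changes/93/114393/2
     git cherry-pick FETCH_HEAD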





 --
 Kevin Benton



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tempest] Tempest test validation

2014-01-24 Thread Franck Yelles
Hi,
I sent this message to openstack-qa, but it might be more appropriate for
openstack-dev:

 Hi everyone,

 I need some clarification on the Tempest test cases.
 I am trying to run Tempest on a vanilla devstack environment.

 My localrc file has API_RATE_LIMIT set to false; this is the only
 modification that I have made.
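
 For reference, the corresponding localrc fragment (a minimal sketch of
 the setup described above):

     # localrc -- vanilla devstack with the single modification noted
     API_RATE_LIMIT=False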

 I run ./stack.sh, then run ./run_tempest.sh, and get X errors
 (running the failing test cases manually works).
 Then I unstack, stack again, run ./run_tempest.sh again, and get Y
 errors.

 My VM has two quad-core CPUs and 8 GB of RAM.

 Why do I have this inconsistency? Or am I doing something wrong?

 Thanks,
 Franck

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Neutron][3rd Party Testing] Methodology for 3rd party

2014-02-04 Thread Franck Yelles
Hello,

I was wondering how everyone is doing 3rd party testing at the moment,
process-wise.
It takes me around 10 minutes to produce a +1 or -1.

My flow is the following (a shell sketch follows the list):
(I only use Jenkins for listening to the Gerrit event feed)
1) a job is triggered from Jenkins
2) a VM is booted
3) the devstack repo is cloned
4) the patch is applied
5) stack.sh is run (the longest step)
6) the tests are run
7) the results are posted
8) the VM is destroyed
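
A rough shell sketch of steps 3-7, as run on the freshly booted VM; the
patchset ref and the log-server destination are hypothetical placeholders:

    # 3) clone the devstack repo
    git clone https://git.openstack.org/openstack-dev/devstack
    cd devstack

    # 4) apply the patch by pointing devstack at the patchset ref
    #    (refs/changes/93/114393/2 is illustrative only)
    echo "NEUTRON_BRANCH=refs/changes/93/114393/2" >> localrc

    # 5) deploy -- the longest step
    ./stack.sh

    # 6) run the tests
    cd /opt/stack/tempest && ./run_tempest.sh

    # 7) post the results (file name and destination are hypothetical)
    scp tempest.log ci@logs.example.com:/var/www/results/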

I am looking for ways to speed up the process.
I was thinking of keeping the stacked environment up and following this
flow instead (a sketch follows the list):

1) shut down the affected component (neutron, etc.)
2) apply the patch
3) restart the component
4) run the tests
5) post the results
6) shut down the affected component
7) revert the patch
8) restart the component
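
A rough sketch of this fast path, assuming a long-lived devstack VM; the
patchset ref and the restart_neutron helper are hypothetical stand-ins
(the helper is whatever mechanism the environment uses to stop and start
the service):

    REF=refs/changes/93/114393/2      # hypothetical patchset ref
    cd /opt/stack/neutron
    BASE=$(git rev-parse HEAD)        # remember the clean tip
    git fetch https://review.openstack.org/openstack/neutron "$REF"
    restart_neutron stop              # 1) shut down the component
    git cherry-pick FETCH_HEAD        # 2) apply the patch
    restart_neutron start             # 3) restart on the patched code
    (cd /opt/stack/tempest && ./run_tempest.sh)  # 4) run the tests
    # 5) post the results, then return to a clean state:
    restart_neutron stop              # 6) shut down again
    git reset --hard "$BASE"          # 7) revert the patch
    restart_neutron start             # 8) restart on clean code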

What are your thoughts?
Ideally I would like to achieve a sub-3-minute turnaround.

Thanks,
Franck

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party-ci][neutron] What is Success exactly?

2014-06-30 Thread Franck Yelles
Hi Jay,

A couple of points.

I agree that we need to define what success is.
I believe that the metrics that should be used are Voted +1 and
Skipped.
Except for certain valid cases, I would say that Voted -1 is really
mostly a metric of bad CI health: most -1 votes are due to environment
issues, configuration problems, etc.
In my case, the -1 votes are posted manually, since I want to avoid
giving extra work to the developers.

What are some possible solutions?

On the Jenkins side, I think we could develop a script that parses the
result HTML file; Jenkins would then vote (+1, 0, -1) on behalf of the
3rd party CI (a rough sketch follows the list).
- It would prevent abusive +1 votes
- If the result HTML is empty, it would indicate that CI health is bad
- If all the results are failing, it would also indicate that CI health
is bad
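
A rough sketch of such a voting script, using Gerrit's SSH review command;
the results file name, the pass/fail marker, the account and the change
number are all hypothetical, and this assumes the CI account is allowed to
vote in the Verified category:

    #!/bin/bash
    RESULTS=results.html              # page produced by the test run
    CHANGE=114393,2                   # hypothetical change,patchset
    if [ ! -s "$RESULTS" ]; then
        # Empty results page: bad CI health, so abstain rather than vote.
        VOTE=0; MSG="CI results page empty; not voting"
    elif grep -q "FAILED" "$RESULTS"; then
        VOTE=-1; MSG="Tests failed; see the log server"
    else
        VOTE=1; MSG="Tests passed"
    fi
    ssh -p 29418 third-party-ci@review.openstack.org \
        gerrit review --verified="$VOTE" --message="\"$MSG\"" "$CHANGE"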


Franck


On Mon, Jun 30, 2014 at 1:22 PM, Jay Pipes jaypi...@gmail.com wrote:

 Hi Stackers,

 Some recent ML threads [1] and a hot IRC meeting today [2] brought up some
 legitimate questions around how a newly-proposed Stackalytics report page
 for Neutron External CI systems [3] represented the results of an external
 CI system as successful or not.

 First, I want to say that Ilya and all those involved in the Stackalytics
 program simply want to provide the most accurate information to developers
 in a format that is easily consumed. While there need to be some changes
 in how data is shown (and the wording of things like "Tests Succeeded"), I
 hope that the community knows there isn't any ill intent on the part of
 Mirantis or anyone who works on Stackalytics. OK, so let's keep the
 conversation civil -- we're all working towards the same goals of
 transparency and accuracy. :)

 Alright, now, Anita and Kurt Taylor were asking a very poignant question:

 "But what does 'CI tested' really mean? Just running tests? Or tested to
 pass some level of requirements?"

 In this nascent world of external CI systems, we have a set of issues that
 we need to resolve:

 1) All of the CI systems are different.

 Some run Bash scripts. Some run Jenkins slaves and devstack-gate scripts.
 Others run custom Python code that spawns VMs and publishes logs to some
 public domain.

 As a community, we need to decide whether it is worth putting in the
 effort to create a single, unified, installable and runnable CI system, so
 that we can legitimately say all of the external systems are identical,
 with the exception of the driver code for vendor X being substituted in the
 Neutron codebase.

 If the goal of the external CI systems is to produce reliable, consistent
 results, I feel the answer to the above is "yes", but I'm interested to
 hear what others think. Frankly, in the world of benchmarks, it would be
 unthinkable to say "go ahead and everyone run your own benchmark suite",
 because you would get wildly different results. A similar problem has
 emerged here.

 2) There is no mediation or verification that the external CI system is
 actually testing anything at all

 As a community, we need to decide whether the current system of
 self-policing should continue. If it should, then language on reports like
 [3] should be very clear that any numbers derived from such systems should
 be taken with a grain of salt. Use of the word "Success" should be avoided,
 as it has connotations (in English, at least) that the result has been
 verified, which is simply not the case as long as no verification or
 mediation occurs for any external CI system.

 3) There is no clear indication of what tests are being run, and therefore
 there is no clear indication of what success is

 I think we can all agree that a test has three possible outcomes: pass,
 fail, and skip. The results of a test suite run therefore is nothing more
 than the aggregation of which tests passed, which failed, and which were
 skipped.

 As a community, we must document, for each project, what are expected set
 of tests that must be run for each merged patch into the project's source
 tree. This documentation should be discoverable so that reports like [3]
 can be crystal-clear on what the data shown actually means. The report is
 simply displaying the data it receives from Gerrit. The community needs to
 be proactive in saying "this is what is expected to be tested". This
 alone would allow the report to give information such as "External CI
 system ABC performed the expected tests. X tests passed. Y tests failed.
 Z tests were skipped." Likewise, it would also make it possible for the
 report to give information such as "External CI system DEF did not
 perform the expected tests", which is excellent information in and of
 itself.

 ===

 In thinking about the likely answers to the above questions, I believe it
 would be prudent to change the Stackalytics report in question [3] in the
 following ways:

 a. Change the "Success %" column header to "% Reported +1 Votes"
 b. Change the phrase "Green cell - tests ran successfully, red cell -
 tests failed"

Re: [openstack-dev] [infra] Nominating Anita Kuno for project-config-core

2014-09-28 Thread Franck Yelles
Big +1

Franck

On Sat, Sep 27, 2014 at 1:18 AM, Boris Pavlovic bo...@pavlovic.me wrote:

 +1

 On Sat, Sep 27, 2014 at 9:51 AM, Nikhil Manchanda nik...@manchanda.me
 wrote:

 Big +1 from me.
 Anita has been super helpful, both with reviews and with discussions on
 IRC.


 On Fri, Sep 26, 2014 at 8:34 AM, James E. Blair cor...@inaugust.com
 wrote:

 I'm pleased to nominate Anita Kuno to the project-config core team.

 The project-config repo is a constituent of the Infrastructure Program
 and has a core team structured to be a superset of infra-core with
 additional reviewers who specialize in the area.

 Anita has been reviewing new projects in the config repo for some time
 and I have been treating her approval as required for a while.  She has
 an excellent grasp of the requirements and process for creating new
 projects and is very helpful to the people proposing them (who are often
 creating their first commit to any OpenStack repository).

 She also did most of the work in actually creating the project-config
 repo from the config repo.

 Please respond with support or concerns and if the consensus is in
 favor, we will add her next week.

 Thanks,

 Jim



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev