[openstack-dev] [tripleo] TripleO CI end of sprint status

2018-02-16 Thread Arx Cruz
Hello,

On February 14 we came to the end of the sprint using our new team
structure, and here are the highlights.

Sprint Review:

This sprint was the first one where the team worked in collaboration with
another team, to get TripleO upgrade jobs running on RDO Cloud in the
tripleo-quickstart, tripleo-quickstart-extras, and tripleo-upgrade
projects.

One can see the results of the sprint via https://tinyurl.com/y8h8xmo8


Ruck and Rover

What is Ruck and Rover

One person in our team is designated Ruck and another Rover. The Ruck is
responsible for monitoring the CI, checking for failures, opening bugs, and
participating in meetings, and is your focal point for any CI issues. The
Rover is responsible for working on these bugs and fixing problems, while
the rest of the team stays focused on the sprint. For more information
about our structure, check [1].

List of bugs that Ruck and Rover were working on:

   - https://bugs.launchpad.net/tripleo/+bug/1749335
   - https://bugs.launchpad.net/tripleo/+bug/1749186
   - https://bugs.launchpad.net/tripleo/+bug/1749105
   - https://bugs.launchpad.net/tripleo/+bug/1748971
   - https://bugs.launchpad.net/tripleo/+bug/1748934
   - https://bugs.launchpad.net/tripleo/+bug/1748751
   - https://bugs.launchpad.net/tripleo/+bug/1748315
   - https://bugs.launchpad.net/tripleo/+bug/1748262
   - https://bugs.launchpad.net/tripleo/+bug/1748199
   - https://bugs.launchpad.net/tripleo/+bug/1748180
   - https://bugs.launchpad.net/tripleo/+bug/1747986
   - https://bugs.launchpad.net/tripleo/+bug/1747690
   - https://bugs.launchpad.net/tripleo/+bug/1747623
   - https://bugs.launchpad.net/tripleo/+bug/1747294
   - https://bugs.launchpad.net/tripleo/+bug/1747089
   - https://bugs.launchpad.net/tripleo/+bug/1747055
   - https://bugs.launchpad.net/tripleo/+bug/1747043
   - https://bugs.launchpad.net/tripleo/+bug/1746978
   - https://bugs.launchpad.net/tripleo/+bug/1746857
   - https://bugs.launchpad.net/tripleo/+bug/1746812
   - https://bugs.launchpad.net/tripleo/+bug/1746737
   - https://bugs.launchpad.net/tripleo/+bug/1746734

We also have our new Ruck and Rover for this week:

   - Ruck
      - Arx Cruz - arxcruz|ruck
   - Rover
      - Ronelle Landy - rlandy|rover

If you have any questions and/or suggestions, please contact us

[1]
https://github.com/openstack/tripleo-specs/blob/master/specs/policy/ci-team-structure.rst


[openstack-dev] [tripleo] TripleO CI end of sprint status

2018-02-02 Thread Arx Cruz
Hello,

On January 31 we came to the end of the sprint using our new team
structure, and here are the highlights.

Sprint Review:

In this sprint, the team worked on internal infrastructure.

One can see the results of the sprint via https://tinyurl.com/ycpw42pj


Ruck and Rover

What is Ruck and Rover

One person in our team is designated Ruck and another Rover. The Ruck is
responsible for monitoring the CI, checking for failures, opening bugs, and
participating in meetings, and is your focal point for any CI issues. The
Rover is responsible for working on these bugs and fixing problems, while
the rest of the team stays focused on the sprint. For more information
about our structure, check [1].

List of bugs that Ruck and Rover were working on:


   - https://bugs.launchpad.net/tripleo/+bug/1744151 -
     barbican_tempest_plugin.tests.scenario.test_volume_encryption.VolumeEncryptionTest
     fails on Invalid Volume
   - https://bugs.launchpad.net/tripleo/+bug/1745712 - master:
     etc/pki/tls/certs/undercloud-192.168.24.2.pem]: Failed to generate
     additional resources using 'eval_generate': comparison of Array with
     Array failed
   - https://bugs.launchpad.net/tripleo/+bug/1746023 - rdo phase 2 status
     to dlrn_api fails with file not found
   - https://bugs.launchpad.net/tripleo/+bug/1746026 - Tracker, CI: OVB
     jobs on RDO cloud can't get OVB env because of 504 gateway timeout
   - https://bugs.launchpad.net/tripleo/+bug/1746281 - tripleo jobs running
     in rax have slow package downloads, jobs timing out
   - https://bugs.launchpad.net/tripleo/pike/+bug/1745686 -
     [gnocchi-db-sync]: gnocchi-upgrade --config-file
     /etc/gnocchi/gnocchi.conf --skip-storage --skip-incoming returned 2
   - https://bugs.launchpad.net/tripleo/+bug/1746729 - tracker, rdo sf
     nodepool slaves going off line
   - https://bugs.launchpad.net/tripleo/+bug/1746737 - "msg": "No package
     matching 'jq' found available, installed or updated"
   - https://bugs.launchpad.net/tripleo/+bug/1746734 - Periodic jobs
     failing at tempest config while creating image (with swift backend)


We also have our new Ruck and Rover for this week:


   - Ruck
      - Rafael Folco - rfolco|ruck
   - Rover
      - Sagi Shnaidman - sshnaidm|rover


If you have any questions and/or suggestions, please contact us

[1]
https://github.com/openstack/tripleo-specs/blob/master/specs/policy/ci-team-structure.rst


[openstack-dev] [tripleo] TripleO CI end of sprint status

2018-01-23 Thread Arx Cruz
Hello,

Sorry for the delay...


On January 17 we came to the end of the sprint using our new team
structure, and here are the highlights.

Sprint Review:

In this sprint, the team worked on the first steps to enable metrics in
TripleO jobs. With this in place it will be easier to identify places
where the code is taking more time than usual, and easier for developers
to see where to focus.

One can see the results of the sprint via https://tinyurl.com/yb4z5gd4
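
As a rough illustration of the kind of reporting this enables, below is a
minimal sketch of timing a job step and sending its duration to a Graphite
instance over Graphite's plaintext protocol. The host and metric path are
hypothetical placeholders, not the actual values used by TripleO CI.

import socket
import time

# Hypothetical Graphite endpoint; the real TripleO CI collector may differ.
GRAPHITE_HOST = "graphite.example.com"
GRAPHITE_PORT = 2003  # Graphite's plaintext protocol port


def report_duration(metric_path, seconds):
    # Graphite's plaintext protocol is one line per datapoint:
    # "<metric.path> <value> <unix-timestamp>\n"
    line = "%s %f %d\n" % (metric_path, seconds, int(time.time()))
    with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT)) as sock:
        sock.sendall(line.encode("ascii"))


def run_overcloud_deploy():
    # Stand-in for the job step being measured.
    time.sleep(1)


start = time.time()
run_overcloud_deploy()
report_duration("tripleo.ci.ovb.overcloud_deploy.seconds", time.time() - start)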


Ruck and Rover

What is Ruck and Rover

One person in our team is designated Ruck and another Rover. The Ruck is
responsible for monitoring the CI, checking for failures, opening bugs, and
participating in meetings, and is your focal point for any CI issues. The
Rover is responsible for working on these bugs and fixing problems, while
the rest of the team stays focused on the sprint. For more information
about our structure, check [1].

List of bugs that Ruck and Rover were working on:


   - https://bugs.launchpad.net/tripleo/+bug/1741445 -
     tripleo-ci-centos-7-scenario002-multinode-oooq-container failing with
     timeout in deploy overcloud
   - https://bugs.launchpad.net/tripleo/+bug/1741850 - VolumeEncryptionTest
     is failing in tripleo-ci-centos-7-scenario002-multinode-oooq-container
     with request timeout
   - https://bugs.launchpad.net/tripleo/+bug/1742080 - Job
     tripleo-quickstart-gate-newton-delorean-quick-basic fails with missing
     ci_centos_libvirt.yml
   - https://bugs.launchpad.net/tripleo/+bug/1742435 -
     tripleo-quickstart-gate-master-delorean-quick-basic failing to parse
     jenkins env vars
   - https://bugs.launchpad.net/tripleo/+bug/1742465 -
     periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset002-ocata is
     failing with Resource could not be found on mistral
   - https://bugs.launchpad.net/tripleo/+bug/1742557 - quickstart
     reproducer create script is getting skipped
   - https://bugs.launchpad.net/tripleo/+bug/1742528 - ovb jobs in
     rdo-cloud are not logging the overcloud nodes


We also have our new Ruck and Rover for this week:


   - Ruck
      - John Trowbridge - trown|ruck
   - Rover
      - Wesley Hayutin - weshay|rover


If you have any questions and/or suggestions, please contact us

[1] https://review.openstack.org/#/c/509280/


[openstack-dev] [tripleo] TripleO CI end of sprint status

2018-01-04 Thread Arx Cruz
Hello,

On January 03 we came to the end of the sprint using our new team
structure, and here are the highlights.

Sprint Review:

This was a tech debt sprint, and due to the holidays, with most of the team
out, we didn't set a goal for this sprint, leaving the team free to work on
the tech debt cards as time permitted.

One can see the results of the sprint via https://trello.com/c/fvLpZMF6/

Tripleo CI community meeting


   - Promotion issues due to mistral
      - http://lists.openstack.org/pipermail/openstack-dev/2018-January/125935.html
      - The plan (from Emilien's email):
         - Carry Steve's patch in Mistral distgit:
           https://review.rdoproject.org/r/#/c/11140/ - DONE
         - Remove featureset010 from promotion requirements - DONE
         - Once we have a promotion, we'll be able to land
           https://review.openstack.org/#/c/530783/ - IN PROGRESS
         - Once https://review.openstack.org/#/c/530783/ and the upstream
           patch are landed, revert https://review.rdoproject.org/r/#/c/11140/
           (otherwise RDO will become inconsistent and fail to build on
           master)
         - Re-add featureset010 to the promotion requirements (revert
           https://review.rdoproject.org/r/#/c/11142) so we'll catch the
           issue next time.
         - Landed in current-tripleo because we don't have voting on the
           multinode job, and scenarios 001, 002, and 003 were non-voting
   - Scenario jobs not voting due to timeouts
      - http://lists.openstack.org/pipermail/openstack-dev/2018-January/125935.html
   - Which scenarios / services we care about
      - We need an investigation to determine what we want to test in our
        scenario jobs, and what we don't, in order to free resources and
        focus our work
   - Graphite report status
      - Working on Grafana
      - Initially focused on OVB jobs


Ruck and Rover

What is Ruck and Rover

One person in our team is designated Ruck and another Rover. The Ruck is
responsible for monitoring the CI, checking for failures, opening bugs, and
participating in meetings, and is your focal point for any CI issues. The
Rover is responsible for working on these bugs and fixing problems, while
the rest of the team stays focused on the sprint. For more information
about our structure, check [1].

List of bugs that Ruck and Rover were working on:


   - https://bugs.launchpad.net/tripleo/+bug/1736113 - CI: newton promotion
     fails because no stable/newton branch in aodh
   - https://bugs.launchpad.net/tripleo/+bug/1740940 - Tempest test on
     Ocata failing with Error: No valid Host was found
   - https://bugs.launchpad.net/tripleo/+bug/1740934 - Tracker Bug: Tempest
     fails with packaging error - python-oslo-db-tests
   - https://bugs.launchpad.net/tripleo/+bug/1739661 - Tracker Bug:
     Intermittent failures creating OVB stacks on RDO Cloud since upgrade
     (** would like to close this bug - tenant has been cleaned up and is
     working)
   - https://bugs.launchpad.net/tripleo/+bug/1739639 - ci.centos gates are
     failing with THT default change


We also have our new Ruck and Rover for this week:


   - Ruck
      - Arx Cruz - arxcruz|ruck
   - Rover
      - Gabrielle Cerami - panda|rover


If you have any questions and/or suggestions, please contact us

[1] https://review.openstack.org/#/c/509280/


Re: [openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-15 Thread John Trowbridge
On Fri, Dec 15, 2017 at 1:15 PM, Ben Nemec  wrote:

>
>
> On 12/15/2017 10:26 AM, Emilien Macchi wrote:
>
>> On Fri, Dec 15, 2017 at 5:04 AM, Arx Cruz  wrote:
>> [...]
>>
>>> The goal of this sprint was to enable in quickstart a way to reproduce
>>> upstream jobs in your personal RDO Cloud tenant, making it easy for
>>> developers to debug and reproduce their code.
>>>
>>
>> This phrase confused some non-Red-Hat OpenStack contributors on
>> #openstack-tc:
>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23op
>> enstack-tc.2017-12-15.log.html#t2017-12-15T15:37:59
>>
>> 2 questions came up:
>>
>> 1) Do we need RDO Cloud access to reproduce TripleO CI jobs?
>>
>> I think the answer is no. What you need though is an OpenStack cloud,
>> with the work that is being done here:
>> https://review.openstack.org/#/c/525743
>>
>> I'll let the TripleO CI team to confirm that, no, you don't need RDO
>> Cloud access.
>>
>
> /me makes yet another note to try OVB against a public cloud
>
> At the moment, at least for the OVB jobs, you pretty much do need access
> to either RDO cloud or rh1/2.  It _may_ work against some public clouds,
> but I don't know of anyone trying it yet so I can't really recommend it.
>

Ah right, didn't think about the OVB part. That has nothing to do with the
reproducer script though... It is just not possible to reproduce OVB jobs
against a non-OVB cloud. The multinode jobs will work against any cloud
though.


>
>
>>
>> 2) Can anyone have access to RDO Cloud resources?
>>
>> One of the reasons of creating RDO Cloud was for developers so they
>> can get resources to build OpenStack.
>> RDO community organizes something called "test days", where anyone is
>> welcome to join and test OpenStack on centos7 with RDO packages.
>> See: https://dmsimard.com/2017/11/29/come-try-a-real-openstack-qu
>> eens-deployment/
>> The event is announced on RDO users mailing list:
>> https://lists.rdoproject.org/pipermail/users/2017-December/79.html
>> Other than that, I'm not sure about the process if someone needs
>> full-time access. FWIW, I never saw any rejection in the past. We
>> welcome contributors and we want to help how we can.
>>
>
> I am aware of a few people who have been rejected for RDO cloud access,
> and given the capacity constraints it is currently under I suspect there
> would need to be strong justification for new users.  I'm _not_ an RDO
> cloud admin though, so that's not an official statement of any kind.
>
> Also note that the test day is not happening on RDO cloud, but on a
> separate single node cloud (per https://etherpad.openstack.org
> /p/rdo-queens-m2-cloud).  It would not be particularly well suited to
> reproducing CI and presumably won't be around for long.
>
> So the story's not great right now unless you already have access to cloud
> resources.  The developer hardware requirements problem is not quite solved
> yet. :-/
>
>


Re: [openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-15 Thread Ben Nemec



On 12/15/2017 10:26 AM, Emilien Macchi wrote:

On Fri, Dec 15, 2017 at 5:04 AM, Arx Cruz  wrote:
[...]

The goal of this sprint was to enable in quickstart a way to reproduce
upstream jobs in your personal RDO Cloud tenant, making it easy for
developers to debug and reproduce their code.


This phrase confused some non-Red-Hat OpenStack contributors on #openstack-tc:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-12-15.log.html#t2017-12-15T15:37:59

2 questions came up:

1) Do we need RDO Cloud access to reproduce TripleO CI jobs?

I think the answer is no. What you need though is an OpenStack cloud,
with the work that is being done here:
https://review.openstack.org/#/c/525743

I'll let the TripleO CI team to confirm that, no, you don't need RDO
Cloud access.


/me makes yet another note to try OVB against a public cloud

At the moment, at least for the OVB jobs, you pretty much do need access 
to either RDO cloud or rh1/2.  It _may_ work against some public clouds, 
but I don't know of anyone trying it yet so I can't really recommend it.





2) Can anyone have access to RDO Cloud resources?

One of the reasons of creating RDO Cloud was for developers so they
can get resources to build OpenStack.
RDO community organizes something called "test days", where anyone is
welcome to join and test OpenStack on centos7 with RDO packages.
See: 
https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/
The event is announced on RDO users mailing list:
https://lists.rdoproject.org/pipermail/users/2017-December/79.html
Other than that, I'm not sure about the process if someone needs
full-time access. FWIW, I never saw any rejection in the past. We
welcome contributors and we want to help how we can.


I am aware of a few people who have been rejected for RDO cloud access, 
and given the capacity constraints it is currently under I suspect there 
would need to be strong justification for new users.  I'm _not_ an RDO 
cloud admin though, so that's not an official statement of any kind.


Also note that the test day is not happening on RDO cloud, but on a 
separate single node cloud (per 
https://etherpad.openstack.org/p/rdo-queens-m2-cloud).  It would not be 
particularly well suited to reproducing CI and presumably won't be 
around for long.


So the story's not great right now unless you already have access to 
cloud resources.  The developer hardware requirements problem is not 
quite solved yet. :-/




Re: [openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-15 Thread John Trowbridge
On Fri, Dec 15, 2017 at 11:26 AM, Emilien Macchi  wrote:

> On Fri, Dec 15, 2017 at 5:04 AM, Arx Cruz  wrote:
> [...]
> > The goal of this sprint was to enable in quickstart a way to reproduce
> > upstream jobs in your personal RDO Cloud tenant, making it easy for
> > developers to debug and reproduce their code.
>
> This phrase confused some non-Red-Hat OpenStack contributors on
> #openstack-tc:
> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%
> 23openstack-tc.2017-12-15.log.html#t2017-12-15T15:37:59
>
> 2 questions came up:
>
> 1) Do we need RDO Cloud access to reproduce TripleO CI jobs?
>
> I think the answer is no. What you need though is an OpenStack cloud,
> with the work that is being done here:
> https://review.openstack.org/#/c/525743
>
> I'll let the TripleO CI team to confirm that, no, you don't need RDO
> Cloud access.
>

Correct, the reproducer script work does not require being run specifically
on RDO Cloud. Downloading images will be
a bit slower, since the images are hosted on the same infra as RDO Cloud.
However, the script simply creates the
resources nodepool would create on any OpenStack cloud, then runs the exact
script from CI.
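
For anyone curious what "creates the resources nodepool would create" looks
like in practice, here is a minimal sketch using the openstacksdk library.
The cloud name, image, flavor, and network below are hypothetical
placeholders, and the real reproducer in tripleo-quickstart handles much
more (keypairs, multiple nodes, and the job environment).

import openstack

# Connect using a named cloud from clouds.yaml; "my-cloud" is a placeholder.
conn = openstack.connect(cloud="my-cloud")

# Create the kind of node nodepool would provide for a multinode CI job.
server = conn.create_server(
    name="ci-reproducer-node",
    image="CentOS-7-x86_64-GenericCloud",  # placeholder image name
    flavor="m1.large",                     # placeholder flavor
    network="private",                     # placeholder network
    wait=True,
)

# public_v4 is filled in by the sdk's cloud layer; the next step would be
# to ssh in and run the same toci script the upstream job runs.
print("Node ready at %s" % server.public_v4)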


Re: [openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-15 Thread Emilien Macchi
On Fri, Dec 15, 2017 at 5:04 AM, Arx Cruz  wrote:
[...]
> The goal of this sprint was to enable in quickstart a way to reproduce
> upstream jobs in your personal RDO Cloud tenant, making it easy for
> developers to debug and reproduce their code.

This phrase confused some non-Red-Hat OpenStack contributors on #openstack-tc:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-12-15.log.html#t2017-12-15T15:37:59

2 questions came up:

1) Do we need RDO Cloud access to reproduce TripleO CI jobs?

I think the answer is no. What you need though is an OpenStack cloud,
with the work that is being done here:
https://review.openstack.org/#/c/525743

I'll let the TripleO CI team to confirm that, no, you don't need RDO
Cloud access.


2) Can anyone have access to RDO Cloud resources?

One of the reasons of creating RDO Cloud was for developers so they
can get resources to build OpenStack.
RDO community organizes something called "test days", where anyone is
welcome to join and test OpenStack on centos7 with RDO packages.
See: 
https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/
The event is announced on RDO users mailing list:
https://lists.rdoproject.org/pipermail/users/2017-December/79.html
Other than that, I'm not sure about the process if someone needs
full-time access. FWIW, I never saw any rejection in the past. We
welcome contributors and we want to help how we can.

Any feedback is welcome to improve our transparency and inclusiveness.

[...]

Thanks,
-- 
Emilien Macchi



[openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-15 Thread Arx Cruz
Hello,

On December 13 we came to the end of the sprint using our new team
structure, and here are the highlights.

Sprint Review:

The goal of this sprint was to enable in quickstart a way to reproduce
upstream jobs in your personal RDO Cloud tenant, making it easy for
developers to debug and reproduce their code.

Unfortunately, due to the RDO Cloud upgrade and upstream infra issues, we
weren’t able to finish this sprint successfully, having completed 4 cards,
with 5 waiting to be merged and 3 blocked.

Nevertheless, we have reviews [1] and documentation that can be used to
reproduce the jobs upstream manually [2], and your feedback is more than
welcome to improve it for our future automated script.

One can see the results of the sprint via https://trello.com/c/EWotWxGe/

Tripleo CI community meeting


   - TLS Everywhere job:
      - Harry is working to have a periodic job running TLS
      - OVB (most recent hash) has the ability to deploy an extra node
        alongside the undercloud
         - https://github.com/cybertron/openstack-virtual-baremetal/commit/c288a1ae973f3b32e9e4481f6604204386cbae9c
         - https://github.com/cybertron/openstack-virtual-baremetal/commit/2e4dd517d1736f004725ee29ac3e4764af246bab
      - Patch is live to deploy extra nodes in te-broker
         - https://review.openstack.org/#/c/512899/
      - OVB migration
         - Blocked due to the RDO Cloud upgrade
   - RDO Cloud
      - Upgrade process still in progress
   - Kubernetes
      - Will we see Kubernetes in CI? Yes
      - There are several scenarios for Kubernetes and OpenShift already
      - Kubernetes jobs are in the check queue
      - OpenShift jobs are in experimental
      - For more questions, please ping Flavio (flaper87)
   - Upgrades-related patches:
      - https://review.openstack.org/#/c/504822/ Support release change
        during playbook run
      - https://review.openstack.org/#/c/504939/ Set repo setup release in
        playbook



Ruck and Rover

What is Ruck and Rover

One person in our team is designated Ruck and another Rover. The Ruck is
responsible for monitoring the CI, checking for failures, opening bugs, and
participating in meetings, and is your focal point for any CI issues. The
Rover is responsible for working on these bugs and fixing problems, while
the rest of the team stays focused on the sprint. For more information
about our structure, check [1].

List of bugs that Ruck and Rover were working on:


   - Undercloud fails with "[ERROR] Could not lock
     /var/run/os-refresh-config.lock"
      - Intermittently seen from upstream --> rdophase2. Posted a patch to
        add an elastic-recheck query to help identify future instances.
      - https://bugs.launchpad.net/tripleo/+bug/1669110
      - https://review.openstack.org/#/c/527559
   - Newton: RDO CI gate and phase1 promotion jobs fail with error
     "publicURL endpoint for workflowv2 service not found" because mistral
     is disabled
      - https://bugs.launchpad.net/tripleo/+bug/1737502
   - http_proxy makes quickstart fail
      - https://bugs.launchpad.net/tripleo/+bug/1736499
   - Need better logging for HA jobs
      - https://bugs.launchpad.net/tripleo/+bug/1695237
      - https://review.openstack.org/#/c/527554/
   - Remove mistral tempest tests from
     jenkins-periodic-master-rdo_trunk-virtbasic-1ctlr_1comp_64gb
      - https://bugs.launchpad.net/tripleo/+bug/1736252
   - 1737940 CI: neutron.tests.tempest.scenario.test_qos.QoSTest.test_qos
     fails in pike promotion jobs
   - Fix Released - #1737716 TripleO CI jobs fail: can't connect to
     nodepool hosts
   - #1737688 RDO CI quickstart job logs collection of dlrn logs is broken
     in tripleo-quickstart-extras-gate-master-tripleo-ci-delorean-full-minimal_pacemaker
   - #1737617 master promoter script looping and failing since 12/9 (should
     put a 2 hr timeout on the promoter script - to tech debt for the
     sprint team; see the sketch after this list)
   - #1737568 CI: fs035 OVB job fails to run mysql_init_bundle container
     (still running on RH1 - could fix it there - contact HA people)
   - Fix Released - 1737502 Newton: RDO CI gate and phase1 promotion jobs
     fail with error "publicURL endpoint for workflowv2 service not found"
     because mistral is disabled
   - #1737485 CI: featureset020 (all tempest tests) promotion job fails
     with timeout on mtu test
   - Cron jobs of promotion scripts are commented out; please uncomment
     them when RDO cloud is online and stable
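
On the promoter timeout idea mentioned above (bug #1737617), here is a
minimal sketch of what such a wrapper could look like; the script path is
a placeholder, not the actual promoter location:

import subprocess

# Kill the promoter run if it exceeds 2 hours, so a looping run can't
# block the next scheduled one.
try:
    subprocess.run(["/usr/local/bin/promoter.sh"],
                   timeout=2 * 60 * 60, check=True)
except subprocess.TimeoutExpired:
    print("promoter script exceeded 2 hours and was killed")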


We also have our new Ruck and Rover for this week:


   - Ruck
      - John Trowbridge - trown|ruck
   - Rover
      - Ronelle Landy - rlandy|rover


If you have any questions and/or suggestions, please contact us

[1] https://review.openstack.org/#/c/509280/

[2] https://review.openstack.org/#/c/525743/

[3] https://etherpad.openstack.or

[openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-01 Thread Arx Cruz
Hello,

On November 29 we came to the end of the sprint using our new team
structure [1], and here are the highlights:

Sprint Review:

The goal of this sprint was to reduce the tech debt generated by the other
sprints, as a way to reduce the work of the Ruck and Rover.

We chose the most relevant cards in our tech debt list, and I am glad to
report that we were able to complete most of them. Since these were tech
debt cards, we set a goal of cards we wanted to complete, picking up other
cards as time permitted.

As a result, we have 10 cards completed, 4 cards that are being finished
(just pending review or comment updates), and 4 cards that remain in tech
debt.

One can see the results of the sprint via https://tinyurl.com/y8wwntvc

Tripleo CI community meeting


   - Saneax is working to introduce some update jobs on the stable branch
      - Initially he wanted to introduce them upstream; however, after
        discussing with the team, it's probably best to have them in RDO
        Cloud, since we have a more flexible timeout for the jobs
   - Master check/gate job blockers
      - TestVolumeBootPattern
         - Still work in progress; we have Daniel Alvarez working on
           debugging the jobs
   - OVB Migration https://trello.com/c/wGUUEqty
      - RDO Cloud upgrade from newton to ocata was blocked by
        https://bugs.launchpad.net/tripleo/+bug/1724328 which is now
        resolved thanks to Pradeep
      - Checking with David Machado on the status of the RDO Cloud upgrade
        to Ocata
      - All upstream jobs are currently running on both RH1 and RDO Cloud
         - Sagi put together a nice chart displaying the pass/fail rates
           of OVB jobs in both environments
         - https://trello-attachments.s3.amazonaws.com/57a843f924c8f76569579c8b/5a0b479898ccb207352b5d9f/c299f41da138a92aac3984298530a6d1/rdo-rh1-cloud.pdf
      - Looking for input on when to transition off RH1 and onto RDO Cloud
        in full
   - Promotion Status
      - Master: 21 days since last promotion, 4 known issues
      - Pike: 7 days since last promotion, 3 known issues
      - Ocata: 17 days since last promotion, 1 known issue



Ruck and Rover

List of bugs that Ruck and Rover were working on:


   - https://bugs.launchpad.net/tripleo/+bug/1734928
      - ui_validate_simple is failing on master gate - logs are not
        collected (related to
        tripleo-ci-centos-7-scenario001-multinode-oooq-container failure)
   - https://bugs.launchpad.net/tripleo/+bug/1731988
      - TestNetworkBasicOps.test_mtu_sized_frames timing out on pike
        promotion jobs
   - https://bugs.launchpad.net/tripleo/+bug/1733672
      - tripleo-ci-centos-7-scenario001-multinode-oooq-container is failing
        on deploying the overcloud - master release only (related to
        tripleo-ci-centos-7-scenario001-multinode-oooq-container)
   - https://bugs.launchpad.net/tripleo/+bug/1732477
      - Container deployment failing at overcloud-prep-containers (not
        logged by CI - fixed by CI)
   - https://bugs.launchpad.net/tripleo/+bug/1733983
      - Tempest reports 'missing Worker 1!'
   - https://bugs.launchpad.net/tripleo/+bug/1734752
      - Master containers build are failing with 'No package yum-axelget
        available'
   - https://bugs.launchpad.net/tripleo/+bug/1734709
      - Master promotion featuresets 005-008 are failing overcloud deploy -
        "/usr/share/openstack-tripleo-heat-templates/ci/environments/scenario00x-multinode.yaml"
        file not found
   - https://bugs.launchpad.net/tripleo/+bug/1734134
      - Pike periodic promotion job multinode-1ctlr-featureset016 fails
        with error running docker 'gnocchi_db_sync' - rados.Rados.connect
        PermissionDeniedError: error connecting to the cluster
   - https://bugs.launchpad.net/tripleo/+bug/1733858
      - Upstream containers promotion fails with "unauthorized:
        authentication required" while pulling the images from RDO registry
   - https://bugs.launchpad.net/tripleo/+bug/1733598
      - newton jobs on rdo cloud fail with 'dlrn_hash_tag' is undefined
   - https://bugs.launchpad.net/tripleo/+bug/1733345
      - Master promotion: error creating the default Deployment Plan
        overcloud
   - https://bugs.launchpad.net/tripleo/+bug/1732706
      - tripleo ci / quickstart jobs have duplicate entries in /etc/hosts
   - https://bugs.launchpad.net/tripleo/+bug/173198
      - dstat files are not in the upstream tripleo logs in /var/log/extra
   - https://bugs.launchpad.net/tripleo/+bug/1731346
      - dlrn_hash_tag is undefined failing ovb jobs in pike/master
        promotion
   - https://bugs.launchpad.net/tripleo/+bug/1731456
      - Timed out CI jobs not collecting logs, "FAILED with status: 137"
        (not logged by CI but fixed by CI)
   - https://bugs.launchpad.net/tripleo/+bug/1734348
      - legacy-instack-undercloud-puppet-lint is failing with ERROR Unable
        to find playbook
   - https://bugs.launchpad.net/tripleo/+bu

[openstack-dev] [tripleo] TripleO CI end of sprint status

2017-11-15 Thread Arx Cruz
Hello,

On November 13 we came to the end of the sprint using our new team
structure [1], and here are the highlights:

Sprint Review:

The sprint epic was reproducing upstream CI jobs against a personal RDO
Cloud tenant [2], in order to help our Ruck and Rover reproduce CI issues.

We set several cards, each with one specific task to achieve our objective,
and I am glad to report that we were able to complete it, and now the Ruck
and Rover have an easy tool to reproduce upstream CI issues.

There are some reviews pending to be merged, but we are considering the
work done. You can try it by following the documentation [3]! I'm also
happy to say that in this sprint, we have only one card in tech debt!

One can see the results of the sprint via https://tinyurl.com/ybfds8p3

List of what was done by the Ruck and Rover:

   - https://bugs.launchpad.net/tripleo/+bug/1729586
   - https://bugs.launchpad.net/tripleo/+bug/1729328
   - https://bugs.launchpad.net/tripleo/+bug/1728135
   - https://bugs.launchpad.net/tripleo/+bug/1728070

We also have our new Ruck and Rover for this week:

   - Ruck
      - Attila Darazs - adaras|ruck
   - Rover
      - Ronelle Landy - rlandy|rover


If you have any questions and/or suggestions, please contact us

Kind regards,
Arx Cruz

[1] https://review.openstack.org/#/c/509280/

[2] https://trello.com/c/aPuHTfo4

[3] https://etherpad.openstack.org/p/ruck-rover-reproduce-jobs


[openstack-dev] [tripleo] TripleO CI end of sprint status

2017-10-11 Thread Arx Cruz
Sorry, I forgot to add the [tripleo] tag.


Hello,

On October 10 we came to the end of our first sprint using our new team
structure [1], and here are the highlights:

TripleO CI Infra meeting notes:


   - Zuul v3 related patch:
      - The new Zuul v3 doesn't have the cirros image cached, so we have a
        patch to change the tempest image to the default value, that is,
        downloading the image from the cirros website.
         - https://review.openstack.org/510839
   - Zuul migration
      - There will be an outage in order to fix some issues found during
        the Zuul migration to v3
         - http://lists.openstack.org/pipermail/openstack-dev/2017-October/123337.html
   - Jobs for migration
      - We are planning to start moving some jobs from the rh1 cloud to
        RDO cloud.
   - RDO Software Factory outage
      - There was an outage on RDO cloud on October 9; some jobs were
        stalled for a long time, but now everything is working.

Sprint Review:

The sprint epic was utilizing the DLRN API across TripleO and RDO [2] to
report job status and promotions. We set several tasks in 20 cards, and I
am glad to report that we were able to complete 19 cards! Some of these
cards generated some tech debt, and after a review we got 11 cards in the
tech debt list, plus 3 new bugs opened and XYZ bugs closed by the Ruck and
Rover.

One can see the results of the sprint via https://tinyurl.com/yblqs5z2
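
To give a flavor of what reporting through the DLRN API involves, here is
a minimal sketch of posting a job result; the endpoint URL, credentials,
and exact payload fields are assumptions for illustration - consult the
DLRN API documentation (or the dlrnapi_client package) for the real
interface.

import time

import requests

# Placeholder DLRN API base URL and credentials, not a real deployment.
DLRN_API = "https://trunk.example.com/api-centos-master"

payload = {
    "job_id": "tripleo-ci-centos-7-ovb-ha",    # name of the reporting job
    "commit_hash": "abc123",                   # hash pair identifying the
    "distro_hash": "def456",                   # tested repo (placeholders)
    "url": "https://logs.example.com/job/123", # link to the job logs
    "timestamp": int(time.time()),
    "success": True,
}

# Field names above approximate the report_result call; treat them as an
# assumption and verify against the DLRN API spec before relying on them.
resp = requests.post(DLRN_API + "/api/report_result",
                     json=payload, auth=("ci-user", "secret"))
resp.raise_for_status()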

Below is the list of new bugs related to the work completed in the sprint:


   - https://bugs.launchpad.net/tripleo/+bug/1722552
   - https://bugs.launchpad.net/tripleo/+bug/1722554
   - https://bugs.launchpad.net/tripleo/+bug/1722558


And here is the list of what was done by the Ruck and Rover:

   - https://bugs.launchpad.net/tripleo/+bug/1722640
   - https://bugs.launchpad.net/tripleo/+bug/1722621
   - https://bugs.launchpad.net/tripleo/+bug/1722596
   - https://bugs.launchpad.net/tripleo/+bug/1721790
   - https://bugs.launchpad.net/tripleo/+bug/1721366
   - https://bugs.launchpad.net/tripleo/+bug/1721134
   - https://bugs.launchpad.net/tripleo/+bug/1720556
   - https://bugs.launchpad.net/tripleo/+bug/1719902
   - https://bugs.launchpad.net/tripleo/+bug/1719421



[1] https://review.openstack.org/#/c/509280/

[2] https://trello.com/c/5FnfGByl