Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2018-01-08 Thread Mohammed Naser
Hi Tobias,

I think that's the biggest issue we were dealing with, and the one that
forced us to make the Ubuntu job non-voting.  I'm really not sure why
this is happening, but it only happens on Ubuntu.

Thanks,
Mohammed

On Sun, Jan 7, 2018 at 8:39 AM, Tobias Urdin <tobias.ur...@crystone.com> wrote:
> Hello everyone and a happy new year!
>
> I will follow up this thread with some information about the tempest failure
> that occurs on Ubuntu.
> I saw it happen on my recheck tonight and took some time to check it out
> properly.
>
> * Here is the job: 
> http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/
>
> * The following test is failing but only sometimes: 
> tempest.api.compute.servers.test_create_server.ServersTestManualDisk
> http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/job-output.txt.gz#_2018-01-07_01_56_31_072370
>
> * Checking the nova-api log, the request against the neutron server fails:
> http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/nova/nova-api.txt.gz#_2018-01-07_01_46_47_301
>
> So this is the call that times out: 
> https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/attach_interfaces.py#L61
>
> The timeout occurs at 01:46:47 but the first attempt is made at 01:46:17;
> check the log
> http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/neutron/neutron-server.txt.gz
> and search for "GET
> /v2.0/ports?device_id=285061f8-2e8e-4163-9534-9b02900a8887"
>
> You can see that neutron-server reports all requests as 200 OK, so I think
> neutron-server handles the request properly, but for some reason nova-api
> never gets the reply, hence the timeout.
>
> This is where I get stuck: I can see all the requests coming in, but there is
> no real way of seeing the replies.
> At the same time you can see that nova-api and neutron-server are continuously
> handling requests, so they are working; it's just that the reply that
> neutron-server should send to nova-api never arrives.
>
> Does anybody have any clue as to why? Otherwise I guess the only way forward
> is to run the tests on a local machine until I hit the issue, which does not
> occur regularly.
>
> Maybe loop in the neutron and/or Canonical OpenStack team on this one.
>
> Best regards
> Tobias
>
>
> 
> From: Tobias Urdin <tobias.ur...@crystone.com>
> Sent: Friday, December 22, 2017 2:44 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [puppet] Ubuntu problems + Help needed
>
> Follow-up: I have been testing some integration runs on a temporary machine.
>
> I had to fix the following:
> * Ceph repo key E84AC2C0460F3994 (issue perhaps introduced in [0])
> * Run glance-manage db_sync (not seen in the integration tests)
> * Run neutron-db-manage upgrade heads (not seen in the integration tests)
> * Disable l2gw because of
> https://bugs.launchpad.net/ubuntu/+source/networking-l2gw/+bug/1739779
>   (a temporary fix is proposed as [1] until this is resolved)
>
> [0] https://review.openstack.org/#/c/507925/
> [1] https://review.openstack.org/#/c/529830/
>
> Best regards
>
> On 12/22/2017 10:44 AM, Tobias Urdin wrote:
> Ignore that; it seems it's the networking-l2gw package that fails[0].
> It seems it hasn't been packaged for queens yet[1], or rather that a
> release has not been cut for queens for networking-l2gw[2].
>>
>> Should we try to disable l2gw like done in[3] recently for CentOS?
>>
>> [0]
>> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_23_10_05_564
>> [1]
>> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html
>> [2] https://git.openstack.org/cgit/openstack/networking-l2gw/refs/
>> [3] https://review.openstack.org/#/c/529711/
>>
>>
>> On 12/22/2017 10:19 AM, Tobias Urdin wrote:
>>> Following up on Alex's point[1]: the db sync upgrade for neutron fails here[0].
>>>
>>> [0] http://paste.openstack.org/show/629628/
>>>
>>> On 12/22/2017 04:57 AM, Alex Schultz wrote:
>>>>> Just a note, the queens repo is not currently synced in the infra so
>>>>> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
>>>>> queens to the infra configuration to resolve this:

Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2018-01-07 Thread Tobias Urdin
Hello everyone and a happy new year!

I will follow up this thread with some information about the tempest failure
that occurs on Ubuntu.
I saw it happen on my recheck tonight and took some time to check it out
properly.

* Here is the job: 
http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/

* The following test is failing but only sometimes: 
tempest.api.compute.servers.test_create_server.ServersTestManualDisk
http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/job-output.txt.gz#_2018-01-07_01_56_31_072370

* Checking the nova-api log, the request against the neutron server fails:
http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/nova/nova-api.txt.gz#_2018-01-07_01_46_47_301

So this is the call that times out: 
https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/attach_interfaces.py#L61

The timeout occurs at 01:46:47 but the first attempt is made at 01:46:17; check
the log
http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/neutron/neutron-server.txt.gz
and search for "GET
/v2.0/ports?device_id=285061f8-2e8e-4163-9534-9b02900a8887"

You can see that neutron-server reports all requests as 200 OK, so I think
neutron-server handles the request properly, but for some reason nova-api
never gets the reply, hence the timeout.

This is where I get stuck: I can see all the requests coming in, but there is
no real way of seeing the replies.
At the same time you can see that nova-api and neutron-server are continuously
handling requests, so they are working; it's just that the reply that
neutron-server should send to nova-api never arrives.
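
To make that gap visible, a rough sketch along these lines could correlate the
two logs; it is only an illustration (the file names are placeholders for the
downloaded nova-api and neutron-server logs, the timestamp regex assumes the
default oslo.log prefix, and the search strings may need adjusting to whatever
nova-api actually logs):

# Rough sketch only: correlate nova-api's port lookups for the failing server
# with the matching GET requests logged by neutron-server, to see whether
# neutron's 200 OK simply came back after nova's client-side timeout.
import re
from datetime import datetime

DEVICE_ID = "285061f8-2e8e-4163-9534-9b02900a8887"  # from the failing run
STAMP = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)")


def hits(path, needle):
    """Return timestamps of all log lines in `path` containing `needle`."""
    found = []
    with open(path) as fh:
        for line in fh:
            if needle in line:
                match = STAMP.match(line)
                if match:
                    found.append(datetime.strptime(
                        match.group(1), "%Y-%m-%d %H:%M:%S.%f"))
    return found


nova = hits("nova-api.txt", "device_id=" + DEVICE_ID)
neutron = hits("neutron-server.txt", "GET /v2.0/ports?device_id=" + DEVICE_ID)

print("nova-api attempts:   %s" % [t.time() for t in nova])
print("neutron-server GETs: %s" % [t.time() for t in neutron])
if nova and neutron:
    # If neutron logs its 200 OK well inside nova's timeout window, the reply
    # is probably being lost on the way back rather than neutron being slow.
    print("first attempt -> first neutron hit: %s" % (neutron[0] - nova[0]))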

Does anybody have any clue as to why? Otherwise I guess the only way forward
is to run the tests on a local machine until I hit the issue, which does not
occur regularly.
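
For reproducing it outside of tempest, a hedged sketch like the one below could
issue the very same ports query nova makes and time it, to rule out
neutron-server itself being slow. It assumes admin credentials exported in the
environment the usual openrc way and python-neutronclient installed; the
device_id is just the one from this run and would be replaced with a live
server's UUID:

# Hedged sketch, not a finished tool: run the ports query nova-api performs
# when attaching an interface and time it. Assumes OS_* credentials in the
# environment and python-neutronclient available.
import os
import time

from keystoneauth1 import loading, session
from neutronclient.v2_0 import client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url=os.environ["OS_AUTH_URL"],
    username=os.environ["OS_USERNAME"],
    password=os.environ["OS_PASSWORD"],
    project_name=os.environ["OS_PROJECT_NAME"],
    user_domain_name=os.environ.get("OS_USER_DOMAIN_NAME", "Default"),
    project_domain_name=os.environ.get("OS_PROJECT_DOMAIN_NAME", "Default"),
)
neutron = client.Client(session=session.Session(auth=auth))

device_id = "285061f8-2e8e-4163-9534-9b02900a8887"  # placeholder from this run
start = time.time()
ports = neutron.list_ports(device_id=device_id)
print("listed %d ports in %.2fs" % (len(ports["ports"]), time.time() - start))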

Maybe loop in the neutron and/or Canonical OpenStack team on this one.

Best regards
Tobias



From: Tobias Urdin <tobias.ur...@crystone.com>
Sent: Friday, December 22, 2017 2:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

Follow-up: I have been testing some integration runs on a temporary machine.

I had to fix the following:
* Ceph repo key E84AC2C0460F3994 (issue perhaps introduced in [0])
* Run glance-manage db_sync (not seen in the integration tests)
* Run neutron-db-manage upgrade heads (not seen in the integration tests)
* Disable l2gw because of
https://bugs.launchpad.net/ubuntu/+source/networking-l2gw/+bug/1739779
  (a temporary fix is proposed as [1] until this is resolved)

[0] https://review.openstack.org/#/c/507925/
[1] https://review.openstack.org/#/c/529830/

Best regards

On 12/22/2017 10:44 AM, Tobias Urdin wrote:
> Ignore that; it seems it's the networking-l2gw package that fails[0].
> It seems it hasn't been packaged for queens yet[1], or rather that a
> release has not been cut for queens for networking-l2gw[2].
>
> Should we try to disable l2gw like done in[3] recently for CentOS?
>
> [0]
> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_23_10_05_564
> [1]
> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html
> [2] https://git.openstack.org/cgit/openstack/networking-l2gw/refs/
> [3] https://review.openstack.org/#/c/529711/
>
>
> On 12/22/2017 10:19 AM, Tobias Urdin wrote:
>> Following up on Alex's point[1]: the db sync upgrade for neutron fails here[0].
>>
>> [0] http://paste.openstack.org/show/629628/
>>
>> On 12/22/2017 04:57 AM, Alex Schultz wrote:
>>>> Just a note, the queens repo is not currently synced in the infra so
>>>> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
>>>> queens to the infra configuration to resolve this:
>>>> https://review.openstack.org/529670
>>>>
>>> As a follow up, the mirrors have landed and two of the four scenarios
>>> now pass.  Scenario001 is failing on ceilometer-api which was removed
>>> so I have a patch[0] to remove it. Scenario004 is having issues with
>>> neutron and the db looks to be very unhappy[1].
>>>
>>> Thanks,
>>> -Alex
>>>
>>> [0] https://review.openstack.org/529787
>>> [1] 
>>> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338
>>>

Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-22 Thread Tobias Urdin
Follow-up: I have been testing some integration runs on a temporary machine.

I had to fix the following:
* Ceph repo key E84AC2C0460F3994 (issue perhaps introduced in [0])
* Run glance-manage db_sync (not seen in the integration tests)
* Run neutron-db-manage upgrade heads (not seen in the integration tests)
* Disable l2gw because of
https://bugs.launchpad.net/ubuntu/+source/networking-l2gw/+bug/1739779
  (a temporary fix is proposed as [1] until this is resolved)

[0] https://review.openstack.org/#/c/507925/
[1] https://review.openstack.org/#/c/529830/

Best regards

On 12/22/2017 10:44 AM, Tobias Urdin wrote:
> Ignore that; it seems it's the networking-l2gw package that fails[0].
> It seems it hasn't been packaged for queens yet[1], or rather that a
> release has not been cut for queens for networking-l2gw[2].
>
> Should we try to disable l2gw like done in[3] recently for CentOS?
>
> [0]
> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_23_10_05_564
> [1]
> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html
> [2] https://git.openstack.org/cgit/openstack/networking-l2gw/refs/
> [3] https://review.openstack.org/#/c/529711/
>
>
> On 12/22/2017 10:19 AM, Tobias Urdin wrote:
>> Following up on Alex's point[1]: the db sync upgrade for neutron fails here[0].
>>
>> [0] http://paste.openstack.org/show/629628/
>>
>> On 12/22/2017 04:57 AM, Alex Schultz wrote:
 Just a note, the queens repo is not currently synced in the infra so
 the queens repo patch is failing on Ubuntu jobs. I've proposed adding
 queens to the infra configuration to resolve this:
 https://review.openstack.org/529670

>>> As a follow up, the mirrors have landed and two of the four scenarios
>>> now pass.  Scenario001 is failing on ceilometer-api which was removed
>>> so I have a patch[0] to remove it. Scenario004 is having issues with
>>> neutron and the db looks to be very unhappy[1].
>>>
>>> Thanks,
>>> -Alex
>>>
>>> [0] https://review.openstack.org/529787
>>> [1] 
>>> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338
>>>


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-22 Thread Tobias Urdin
Ignore that; it seems it's the networking-l2gw package that fails[0].
It seems it hasn't been packaged for queens yet[1], or rather that a
release has not been cut for queens for networking-l2gw[2].

Should we try to disable l2gw like done in[3] recently for CentOS?

[0]
http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_23_10_05_564
[1]
http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html
[2] https://git.openstack.org/cgit/openstack/networking-l2gw/refs/
[3] https://review.openstack.org/#/c/529711/


On 12/22/2017 10:19 AM, Tobias Urdin wrote:
> Following up on Alex's point[1]: the db sync upgrade for neutron fails here[0].
>
> [0] http://paste.openstack.org/show/629628/
>
> On 12/22/2017 04:57 AM, Alex Schultz wrote:
>>> Just a note, the queens repo is not currently synced in the infra so
>>> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
>>> queens to the infra configuration to resolve this:
>>> https://review.openstack.org/529670
>>>
>> As a follow up, the mirrors have landed and two of the four scenarios
>> now pass.  Scenario001 is failing on ceilometer-api which was removed
>> so I have a patch[0] to remove it. Scenario004 is having issues with
>> neutron and the db looks to be very unhappy[1].
>>
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/529787
>> [1] 
>> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338
>>


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-22 Thread Jens Harbott
2017-12-22 9:18 GMT+00:00 Tobias Urdin :
> Following up on Alex's point[1]: the db sync upgrade for neutron fails here[0].
>
> [0] http://paste.openstack.org/show/629628/

This seems to be a known issue, see [2]. Also I think that this is a
red herring caused by the database migration being run by the Ubuntu
postinst before there is a proper configuration. Where did you find
that log? You are not trying to run neutron with sqlite for real, are
you?

[2] https://bugs.launchpad.net/neutron/+bug/1697881
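
If that paste really comes from a gate node, a small hedged check like the one
below (assuming the stock /etc/neutron/neutron.conf location and Python 3)
would show which connection string a manual neutron-db-manage run picks up,
i.e. whether it was MySQL or the packaging's sqlite fallback:

# Hedged sketch: print the [database]/connection value from the stock config
# location, to tell whether the failing migration ran against MySQL or against
# the sqlite fallback the Ubuntu postinst uses before the real config exists.
import configparser

cfg = configparser.ConfigParser(strict=False, interpolation=None)
cfg.read("/etc/neutron/neutron.conf")
connection = cfg.get("database", "connection",
                     fallback="<unset: sqlite default would be used>")
print("[database]/connection = %s" % connection)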

> On 12/22/2017 04:57 AM, Alex Schultz wrote:
>>> Just a note, the queens repo is not currently synced in the infra so
>>> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
>>> queens to the infra configuration to resolve this:
>>> https://review.openstack.org/529670
>>>
>> As a follow up, the mirrors have landed and two of the four scenarios
>> now pass.  Scenario001 is failing on ceilometer-api which was removed
>> so I have a patch[0] to remove it. Scenario004 is having issues with
>> neutron and the db looks to be very unhappy[1].

The later errors seem to be coming from some issues with neutron-l2gw,
which IIUC is no longer a stadium project, so maybe you should factor
that out of your default testing scenario.

>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/529787
>> [1] 
>> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338



Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-22 Thread Tobias Urdin
Following up on Alex's point[1]: the db sync upgrade for neutron fails here[0].

[0] http://paste.openstack.org/show/629628/

On 12/22/2017 04:57 AM, Alex Schultz wrote:
>> Just a note, the queens repo is not currently synced in the infra so
>> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
>> queens to the infra configuration to resolve this:
>> https://review.openstack.org/529670
>>
> As a follow up, the mirrors have landed and two of the four scenarios
> now pass.  Scenario001 is failing on ceilometer-api which was removed
> so I have a patch[0] to remove it. Scenario004 is having issues with
> neutron and the db looks to be very unhappy[1].
>
> Thanks,
> -Alex
>
> [0] https://review.openstack.org/529787
> [1] 
> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338
>


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-21 Thread Alex Schultz
> Just a note, the queens repo is not currently synced in the infra so
> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
> queens to the infra configuration to resolve this:
> https://review.openstack.org/529670
>

As a follow-up, the mirrors have landed and two of the four scenarios
now pass.  Scenario001 is failing on ceilometer-api, which was removed,
so I have a patch[0] to remove it. Scenario004 is having issues with
neutron and the db looks to be very unhappy[1].

Thanks,
-Alex

[0] https://review.openstack.org/529787
[1] 
http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338



Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-21 Thread Alex Schultz
On Thu, Dec 21, 2017 at 10:40 AM, Alex Schultz  wrote:
> Currently they are all globally failing in master (we are still using
> pike[0], which is probably the problem) in the tempest run[1] due to:
> AttributeError: 'module' object has no attribute 'requires_ext'
>
> I've submitted a patch[2] to switch UCA to queens. If history is any
> indication, it will probably end up with a bunch of failing tests that
> will need to be looked at. Feel free to follow along/help with the
> switch.
>

Just a note, the queens repo is not currently synced in the infra so
the queens repo patch is failing on Ubuntu jobs. I've proposed adding
queens to the infra configuration to resolve this:
https://review.openstack.org/529670

> Thanks,
> -Alex
>
> [0] 
> https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L6
> [1] 
> http://logs.openstack.org/62/529562/3/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/671f88e/job-output.txt.gz#_2017-12-21_14_54_49_779190
> [2] https://review.openstack.org/#/c/529657/
>
> On Thu, Dec 21, 2017 at 12:58 AM, Tobias Urdin
>  wrote:
>> Thanks for letting us know!
>>
>> I can push for time on this if we can get a list.
>>
>>
>> Best regards
>>
>> Tobias
>>
>>
>> On 12/21/2017 08:04 AM, Andrew Woodward wrote:
>>
>> Some pointers for perusal as to the observed problems would be helpful,
>> Thanks!
>>
>> On Wed, Dec 20, 2017 at 11:09 AM Chuck Short  wrote:
>>>
>>> Hi Mohammed,
>>>
>>> I might be able to help; where can I find this info?
>>>
>>> Thanks
>>> chuck
>>>
>>> On Wed, Dec 20, 2017 at 12:03 PM, Mohammed Naser 
>>> wrote:

 Hi everyone,

 I'll get right to the point.

 At the moment, the Puppet OpenStack modules don't have many
 contributors who can help maintain the Ubuntu support.  We deploy on
 CentOS (so we try to get all the fixes in that we can) and there is a
 lot of activity from the TripleO team as well, which does its
 deployments on CentOS, which means that the CentOS support is very
 reliable and its CI is always looked after.

 However, starting a while back, we started seeing occasional failures
 with Ubuntu deploys, which led us to set the job to non-voting.  At the
 moment, the Puppet integration jobs for Ubuntu are always failing
 because of some Tempest issue.  This means that with every Puppet
 change, we're wasting ~80 minutes of CI run time for a job that will
 always fail.

 We've had a lot of support from the packaging team at RDO (whose
 packages are used in Puppet deployments) and they run our integration
 tests before promoting packages, which helps us find issues together.
 However, we do not have that with Ubuntu, nor has there been anyone
 taking the initiative to investigate those issues.

 I understand that there are users out there who use Ubuntu with Puppet
 OpenStack modules.  We need your help to come and try to clear those
 issues out. We'd be more than happy to assist and point you in the
 right direction to help fix those issues.

 Unfortunately, if we don't have any folks stepping up to resolve
 this, we'll be forced to drop all CI for Ubuntu and make a note to
 users that Ubuntu is not fully tested and hope that as users run into
 issues, they can contribute fixes back (or that someone can work on
 getting Ubuntu gating working again).

 Thanks for reading through this. I am quite sad that we'd have to drop
 support for such a major operating system, but there's only so much we
 can do with a much smaller team.

 Thank you,
 Mohammed


>>
>> --
>> Andrew Woodward
>>


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-21 Thread Alex Schultz
Currently they are all globally failing in master (we are still using
pike[0], which is probably the problem) in the tempest run[1] due to:
AttributeError: 'module' object has no attribute 'requires_ext'
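
As a quick, hedged sanity check for that error (it assumes tempest is installed
where pkg_resources can see it, and that the plugin looks the helper up on
tempest.test, which is where it historically lived), something like this shows
the installed version and whether the attribute is still there:

# Hedged check: which tempest is installed, and does tempest.test still
# expose requires_ext? Assumes tempest is importable on the node.
import importlib
import pkg_resources

print("tempest version: %s"
      % pkg_resources.get_distribution("tempest").version)
test_mod = importlib.import_module("tempest.test")
print("tempest.test has requires_ext: %s" % hasattr(test_mod, "requires_ext"))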

I've submitted a patch[2] to switch UCA to queens. If history is any
indication, it will probably end up with a bunch of failing tests that
will need to be looked at. Feel free to follow along/help with the
switch.

Thanks,
-Alex

[0] 
https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L6
[1] 
http://logs.openstack.org/62/529562/3/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/671f88e/job-output.txt.gz#_2017-12-21_14_54_49_779190
[2] https://review.openstack.org/#/c/529657/

On Thu, Dec 21, 2017 at 12:58 AM, Tobias Urdin
 wrote:
> Thanks for letting us know!
>
> I can push for time on this if we can get a list.
>
>
> Best regards
>
> Tobias
>
>
> On 12/21/2017 08:04 AM, Andrew Woodward wrote:
>
> Some pointers for perusal as to the observed problems would be helpful,
> Thanks!
>
> On Wed, Dec 20, 2017 at 11:09 AM Chuck Short  wrote:
>>
>> Hi Mohammed,
>>
>> I might be able to help; where can I find this info?
>>
>> Thanks
>> chuck
>>
>> On Wed, Dec 20, 2017 at 12:03 PM, Mohammed Naser 
>> wrote:
>>>
>>> Hi everyone,
>>>
>>> I'll get right to the point.
>>>
>>> At the moment, the Puppet OpenStack modules don't have many
>>> contributors who can help maintain the Ubuntu support.  We deploy on
>>> CentOS (so we try to get all the fixes in that we can) and there is a
>>> lot of activity from the TripleO team as well, which does its
>>> deployments on CentOS, which means that the CentOS support is very
>>> reliable and its CI is always looked after.
>>>
>>> However, starting a while back, we started seeing occasional failures
>>> with Ubuntu deploys, which led us to set the job to non-voting.  At the
>>> moment, the Puppet integration jobs for Ubuntu are always failing
>>> because of some Tempest issue.  This means that with every Puppet
>>> change, we're wasting ~80 minutes of CI run time for a job that will
>>> always fail.
>>>
>>> We've had a lot of support from the packaging team at RDO (whose
>>> packages are used in Puppet deployments) and they run our integration
>>> tests before promoting packages, which helps us find issues together.
>>> However, we do not have that with Ubuntu, nor has there been anyone
>>> taking the initiative to investigate those issues.
>>>
>>> I understand that there are users out there who use Ubuntu with Puppet
>>> OpenStack modules.  We need your help to come and try to clear those
>>> issues out. We'd be more than happy to assist and point you in the
>>> right direction to help fix those issues.
>>>
>>> Unfortunately, if we don't have any folks stepping up to resolve
>>> this, we'll be forced to drop all CI for Ubuntu and make a note to
>>> users that Ubuntu is not fully tested and hope that as users run into
>>> issues, they can contribute fixes back (or that someone can work on
>>> getting Ubuntu gating working again).
>>>
>>> Thanks for reading through this. I am quite sad that we'd have to drop
>>> support for such a major operating system, but there's only so much we
>>> can do with a much smaller team.
>>>
>>> Thank you,
>>> Mohammed
>>>
>
> --
> Andrew Woodward


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-20 Thread Andrew Woodward
Some pointers for perusal as to the observed problems would be helpful,
Thanks!

On Wed, Dec 20, 2017 at 11:09 AM Chuck Short  wrote:

> Hi Mohammed,
>
> I might be able to help; where can I find this info?
>
> Thanks
> chuck
>
> On Wed, Dec 20, 2017 at 12:03 PM, Mohammed Naser 
> wrote:
>
>> Hi everyone,
>>
>> I'll get right to the point.
>>
>> At the moment, the Puppet OpenStack modules don't have many
>> contributors who can help maintain the Ubuntu support.  We deploy on
>> CentOS (so we try to get all the fixes in that we can) and there is a
>> lot of activity from the TripleO team as well, which does its
>> deployments on CentOS, which means that the CentOS support is very
>> reliable and its CI is always looked after.
>>
>> However, starting a while back, we started seeing occasional failures
>> with Ubuntu deploys, which led us to set the job to non-voting.  At the
>> moment, the Puppet integration jobs for Ubuntu are always failing
>> because of some Tempest issue.  This means that with every Puppet
>> change, we're wasting ~80 minutes of CI run time for a job that will
>> always fail.
>>
>> We've had a lot of support from the packaging team at RDO (whose
>> packages are used in Puppet deployments) and they run our integration
>> tests before promoting packages, which helps us find issues together.
>> However, we do not have that with Ubuntu, nor has there been anyone
>> taking the initiative to investigate those issues.
>>
>> I understand that there are users out there who use Ubuntu with Puppet
>> OpenStack modules.  We need your help to come and try to clear those
>> issues out. We'd be more than happy to assist and point you in the
>> right direction to help fix those issues.
>>
>> Unfortunately, if we don't have any folks stepping up to resolve
>> this, we'll be forced to drop all CI for Ubuntu and make a note to
>> users that Ubuntu is not fully tested and hope that as users run into
>> issues, they can contribute fixes back (or that someone can work on
>> getting Ubuntu gating working again).
>>
>> Thanks for reading through this. I am quite sad that we'd have to drop
>> support for such a major operating system, but there's only so much we
>> can do with a much smaller team.
>>
>> Thank you,
>> Mohammed
>>
-- 
Andrew Woodward


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-20 Thread Chuck Short
Hi Mohammed,

I might be able to help; where can I find this info?

Thanks
chuck

On Wed, Dec 20, 2017 at 12:03 PM, Mohammed Naser 
wrote:

> Hi everyone,
>
> I'll get right to the point.
>
> At the moment, the Puppet OpenStack modules don't have many
> contributors who can help maintain the Ubuntu support.  We deploy on
> CentOS (so we try to get all the fixes in that we can) and there is a
> lot of activity from the TripleO team as well, which does its
> deployments on CentOS, which means that the CentOS support is very
> reliable and its CI is always looked after.
>
> However, starting a while back, we started seeing occasional failures
> with Ubuntu deploys, which led us to set the job to non-voting.  At the
> moment, the Puppet integration jobs for Ubuntu are always failing
> because of some Tempest issue.  This means that with every Puppet
> change, we're wasting ~80 minutes of CI run time for a job that will
> always fail.
>
> We've had a lot of support from the packaging team at RDO (whose
> packages are used in Puppet deployments) and they run our integration
> tests before promoting packages, which helps us find issues together.
> However, we do not have that with Ubuntu, nor has there been anyone
> taking the initiative to investigate those issues.
>
> I understand that there are users out there who use Ubuntu with Puppet
> OpenStack modules.  We need your help to come and try to clear those
> issues out. We'd be more than happy to assist and point you in the
> right direction to help fix those issues.
>
> Unfortunately, if we don't have any folks stepping up to resolve
> this, we'll be forced to drop all CI for Ubuntu and make a note to
> users that Ubuntu is not fully tested and hope that as users run into
> issues, they can contribute fixes back (or that someone can work on
> getting Ubuntu gating working again).
>
> Thanks for reading through this. I am quite sad that we'd have to drop
> support for such a major operating system, but there's only so much we
> can do with a much smaller team.
>
> Thank you,
> Mohammed
>


[openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-20 Thread Mohammed Naser
Hi everyone,

I'll get right to the point.

At the moment, the Puppet OpenStack modules don't have many
contributors who can help maintain the Ubuntu support.  We deploy on
CentOS (so we try to get all the fixes in that we can) and there is a
lot of activity from the TripleO team as well, which does its
deployments on CentOS, which means that the CentOS support is very
reliable and its CI is always looked after.

However, starting a while back, we started seeing occasional failures
with Ubuntu deploys, which led us to set the job to non-voting.  At the
moment, the Puppet integration jobs for Ubuntu are always failing
because of some Tempest issue.  This means that with every Puppet
change, we're wasting ~80 minutes of CI run time for a job that will
always fail.

We've had a lot of support from the packaging team at RDO (whose
packages are used in Puppet deployments) and they run our integration
tests before promoting packages, which helps us find issues together.
However, we do not have that with Ubuntu, nor has there been anyone
taking the initiative to investigate those issues.

I understand that there are users out there who use Ubuntu with Puppet
OpenStack modules.  We need your help to come and try to clear those
issues out. We'd be more than happy to assist and point you in the
right direction to help fix those issues.

Unfortunately, if we don't have any folks stepping up to resolve
this, we'll be forced to drop all CI for Ubuntu and make a note to
users that Ubuntu is not fully tested and hope that as users run into
issues, they can contribute fixes back (or that someone can work on
getting Ubuntu gating working again).

Thanks for reading through this. I am quite sad that we'd have to drop
support for such a major operating system, but there's only so much we
can do with a much smaller team.

Thank you,
Mohammed
