Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

2017-10-02 Thread Nisha Agarwal
+1

Regards
Nisha

On Mon, Oct 2, 2017 at 11:13 PM, Loo, Ruby  wrote:

> +1, Thx Dmitry for the proposal and Shiv for doing all the work :D
>
>
>
> --ruby
>
>
>
> *From: *Dmitry Tantsur 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Monday, October 2, 2017 at 10:17 AM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *[openstack-dev] [ironic] Proposing Shivanand Tendulker for
> ironic-core
>
>
>
> Hi all!
>
> I would like to propose Shivanand (stendulker) to the core team.
>
> His stats have been consistently high [1]. He has given a lot of
> insightful reviews recently, and his expertise in the iLO driver is also
> very valuable for the team.
>
> As usual, please respond with your comments and objections.
>
> Thanks,
>
> Dmitry
>
>
> [1] http://stackalytics.com/report/contribution/ironic-group/90
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] Blazar is now an official OpenStack project

2017-10-02 Thread Masahito MUROI

Hello everyone,


I'd like to announce some great news: Blazar [1], the Resource Reservation 
Service, was accepted as an official OpenStack project last week.


First of all, thank you so much from the bottom of my heart to everyone 
who has joined Blazar's activities, past and present. I believe all of 
these activities push the project forward.


We have a weekly IRC meeting [2] to discuss features, implementation, and 
code reviews. We're actively improving the Blazar project now, so if you 
are interested in Blazar, we welcome everyone to join the project.



1. https://wiki.openstack.org/wiki/Blazar
2. http://eavesdrop.openstack.org/#Blazar_Team_Meeting

best regards,
Masahito




Re: [openstack-dev] OpenStack-Ansible testing with OpenVSwitch

2017-10-02 Thread Michael Gale
I currently use openstack-ansible for all of my OpenStack testing and
deployment needs, so if there were an AIO that I could use as a reference,
that would be awesome. I personally found that having scenarios available in
the AIO helps drive upgrades and adoption of new features. As an example, I
deployed an AIO with the Magnum add-on to do a Kubernetes demo during an
innovation day.

Michael

On Mon, Oct 2, 2017 at 4:13 AM, Jean-Philippe Evrard <
jean-phili...@evrard.me> wrote:

> Well, there are ppl already running OpenVSwitch on openstack-ansible,
> so I guess it's just a question of a few bug fixes and adding a
> scenario to make sure this is working forever :p
>
> On Fri, Sep 29, 2017 at 8:22 AM, Gyorgy Szombathelyi
>  wrote:
> >
> >>
> >> Hello JP,
> >>
> >> Ok, I will do some more testing against the blog post and then hit
> up the
> >> #openstack-ansible channel.
> >>
> >> I need to finish a presentation on SFC first which is why I am looking
> into
> >> OpenVSwitch.
> >
> > Hi Michael,
> >
> > If your goal is not openstack-ansible, here's an AIO installer for Pike
> with OpenVSwitch:
> > https://github.com/DoclerLabs/openstack
> > (needs vagrant and VirtualBox)
> >
> > Br,
> > György
> >
> >>
> >> Thanks
> >> Michael
> >>
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

“The Man who says he can, and the man who says he can not.. Are both
correct”


Re: [openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

2017-10-02 Thread Renat Akhmerov
On 2 Oct 2017, 21:02 +0700, wrote:

>
> * Zuul Stalls
>
> If you say to yourself "zuul doesn't seem to be doing anything, did I do
> something wrong?", we're having an issue that jeblair and Shrews are
> currently tracking down with intermittent connection issues in the
> backend plumbing.

Hi Monty, does it make sense to recheck patches in this case?

Thanks

Renat Akhmerov
@Nokia



Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-02 Thread Lance Bragstad
+1,000 to all of what Steve said. It's still tough for me to wrap my
head around all the client/library work you shouldered. Your experience,
perspective, and insight will certainly be missed.

Thanks for being an awesome member of this community and best of luck on
the new gig, they're lucky to have you!

See you in Sydney

On 10/02/2017 09:22 PM, Steve Martinelli wrote:
> It was great working with and getting to know you over the years,
> Jamie; you did tremendous work in keystone, particularly
> maintaining the libraries. I'm sure you'll succeed in your new
> position. I'll miss our late-night-east-coast, early-morning-aus
> chats. Keep in touch. 
>
> On Mon, Oct 2, 2017 at 10:13 PM, Jamie Lennox wrote:
> Hi All,
>
> I'm really sad to announce that I'll be leaving the OpenStack
> community (at least for a while). I've accepted a new position
> unrelated to OpenStack that'll begin in a few weeks, and I'm going
> to be mostly on holiday until then.
>
> I want to thank everyone I've had the pleasure of working with
> over the last few years - but particularly the Keystone community.
> I feel we as a team, and I personally, grew a lot over that time; we
> made some amazing achievements, and I couldn't be prouder to have
> worked with all of you. 
>
> Obviously I'll be around at least during the night for some of the
> Sydney summit and will catch up with some of you there, and
> hopefully see some of you at linux.conf.au.
> To everyone else, thank you and I hope we'll meet again.
>
>
> Jamie Lennox, Stacker.
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-02 Thread Steve Martinelli
It was great working with and getting to know you over the years, Jamie; you
did tremendous work in keystone, particularly maintaining the
libraries. I'm sure you'll succeed in your new position. I'll miss our
late-night-east-coast, early-morning-aus chats. Keep in touch.

On Mon, Oct 2, 2017 at 10:13 PM, Jamie Lennox  wrote:

> Hi All,
>
> I'm really sad to announce that I'll be leaving the OpenStack community
> (at least for a while). I've accepted a new position unrelated to OpenStack
> that'll begin in a few weeks, and I'm going to be mostly on holiday until
> then.
>
> I want to thank everyone I've had the pleasure of working with over the
> last few years - but particularly the Keystone community. I feel we as a
> team, and I personally, grew a lot over that time; we made some amazing
> achievements, and I couldn't be prouder to have worked with all of you.
>
> Obviously I'll be around at least during the night for some of the Sydney
> summit and will catch up with some of you there, and hopefully see some of
> you at linux.conf.au. To everyone else, thank you and I hope we'll meet
> again.
>
>
> Jamie Lennox, Stacker.
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-02 Thread Jamie Lennox
Hi All,

I'm really sad to announce that I'll be leaving the OpenStack community (at
least for a while). I've accepted a new position unrelated to OpenStack
that'll begin in a few weeks, and I'm going to be mostly on holiday until
then.

I want to thank everyone I've had the pleasure of working with over the
last few years - but particularly the Keystone community. I feel we as a
team, and I personally, grew a lot over that time; we made some amazing
achievements, and I couldn't be prouder to have worked with all of you.

Obviously I'll be around at least during the night for some of the Sydney
summit and will catch up with some of you there, and hopefully see some of
you at linux.conf.au. To everyone else, thank you and I hope we'll meet
again.


Jamie Lennox, Stacker.


[openstack-dev] [neutron] Bugs triaging sprint

2017-10-02 Thread Miguel Lavalle
Hi Neutrinos,

I am this week's bugs deputy. In looking at the "Undecided" bugs, I realized
that we have a huge number of them dating from several weeks ago. If we
want to catch up soon, I think we are going to need a triaging
sprint. How about next Wednesday, October 11th? What do folks think?

Best regards

Miguel


Re: [openstack-dev] [Openstack-operators] [openstack-operators] [nova] Nova-scheduler filter, for domain level isolation

2017-10-02 Thread Matt Riedemann

On 9/20/2017 5:17 AM, Georgios Kaklamanos wrote:

Hello,

Use case: We have to deploy instances that belong to different domains
to different compute hosts.

Does anyone else have the same use case? If so, how did you implement
it?

[The rest of the mail is a more detailed explanation of the question:
what we have tried and possible solutions that we thought of -- but have
not yet implemented.]


First, thanks for starting this discussion upstream rather than just 
assuming you have to use an out of tree filter.




* Details

In our Openstack Deployment (Mitaka), we have to support 3 different
domains (besides the default). We need a way to separate the compute
hosts in three groups (aggregates), so that VMs that belong to users
of domain A, start in group A, etc. Initially we assume that each
compute host will belong only to one group, but that might change.

We have looked at the nova filter scheduler [1] and at the
AggregateMultiTenancyIsolation filter [2], which does what we
want but works at the project level (as demonstrated here [3]). Given
that one of our domains will have at least 200 projects, we'd
prefer to leave this as a last resort.

Modifying the above filter to make the check based on the domain
isn't possible: the object that the filter receives, which contains the
information, is the RequestSpec object [4], and its fields don't
include a domain_id attribute.

* Possible solutions that we've thought of:

1. Write our own filter: Modify a filter to contain a call to
keystone, where it would send the project_id and get back its
domain. But this feels more like a hack than a proper solution. And
it might require storing admin credentials on the node where the
filters are running (the controller?), which we'd like to avoid.


Isn't the domain_id in the RequestContext somewhere? That's the thing 
that holds the user token so I'd expect it has information about the 
domain that the project is in.


https://github.com/openstack/oslo.context/blob/2.19.0/oslo_context/context.py#L180-L182
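A minimal sketch of that custom-filter idea, pulling the domain from the request context instead of calling Keystone. This is illustrative only: the `filter_domain_id` aggregate metadata key and the `Fake*` stand-in classes are invented for the example; a real filter would subclass nova's `BaseHostFilter` and receive real `HostState`/`RequestSpec` objects.

```python
# Hypothetical out-of-tree filter sketch -- NOT an actual in-tree nova filter.
# Assumes a made-up aggregate metadata key "filter_domain_id" holding a
# comma-separated list of allowed domain ids, and that the request context
# exposes project_domain_id (as oslo.context does).

class AggregateDomainIsolationFilter(object):
    """Pass a host only if all of its aggregates allow the request's domain."""

    def host_passes(self, host_state, spec_obj):
        domain_id = spec_obj.context.project_domain_id
        for aggregate in host_state.aggregates:
            allowed = aggregate.metadata.get('filter_domain_id')
            # Aggregates without the key don't restrict anything.
            if allowed is not None and domain_id not in allowed.split(','):
                return False
        return True


# Minimal stand-ins to exercise the logic outside of nova:
class FakeContext(object):
    project_domain_id = 'domain-a'

class FakeSpec(object):
    context = FakeContext()

class FakeAggregate(object):
    def __init__(self, metadata):
        self.metadata = metadata

class FakeHostState(object):
    def __init__(self, aggregates):
        self.aggregates = aggregates

f = AggregateDomainIsolationFilter()
host_a = FakeHostState([FakeAggregate({'filter_domain_id': 'domain-a'})])
host_b = FakeHostState([FakeAggregate({'filter_domain_id': 'domain-b'})])
print(f.host_passes(host_a, FakeSpec()))  # True
print(f.host_passes(host_b, FakeSpec()))  # False
```

Since hosts whose aggregates lack the key pass unchanged, the filter stays backwards compatible with aggregates you haven't tagged.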



2. Make the separation on another level (project/flavor/image):
Besides the isolation per project, we could also isolate the hosts,
by providing different images / flavors to the different
users. There are available filters for that (image_props_filter
[6]) , (aggregate_instance_extra_specs [7]). But again, due to the
high number of projects, this would not scale well.


Agree that this sounds complicated and therefore will be error-prone.



3. Modify the RequestSpec object: Finally, we could include the
domain_id field in the object, then modify the
AggregateMultiTenancyIsolation filter to work on that. Of course
this would be the most elegant solution. However (besides not
knowing how to do that), we don't know what kind of implications
that will have or how to package / deploy it.


Shouldn't have to do this if the request context has the domain in it. 
However, the request context isn't persisted but the request spec is, so 
if you needed the request spec later for other operations, like 
migrating the instance, then you might want to persist the domain. But 
then you probably get into other issues, like: can the user/project move 
to another domain in Keystone? If so, what do you do about your host 
aggregate policy in Nova, since Nova isn't going to be tracking that 
Keystone change? Maybe there is a policy rule in keystone with which you 
can disable updating a user/project domain once it's set?





Does anyone have the same use case? How would you go about solving it?

It's interesting, since we thought that this would be a common use case,
but as far as I've searched, I only found one request for this
functionality on a mailing list from 2013 [8], which didn't seem to have
progressed.

Thank you for your time,
George


[1]:https://docs.openstack.org/nova/latest/user/filter-scheduler.html
[2]:https://github.com/openstack/nova/blob/master/nova/scheduler/filters/aggregate_multitenancy_isolation.py
[3]:https://www.brad-x.com/2016/01/01/dedicate-compute-hosts-to-projects/
[4]:http://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/request-spec-object-mitaka.html#
[5]:https://github.com/openstack/nova/blob/master/nova/objects/request_spec.py#L46
[6]:https://github.com/openstack/nova/blob/master/nova/scheduler/filters/image_props_filter.py
[7]:https://github.com/openstack/nova/blob/master/nova/scheduler/filters/aggregate_instance_extra_specs.py
[8]:https://lists.launchpad.net/openstack/msg23275.html




___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




--

Thanks,

Matt


Re: [openstack-dev] [infra][devstack][congress] zuul3 transition surfaced error: The following LIBS_FROM_GIT were not installed correct: python-congressclient

2017-10-02 Thread Eric K
Ah got it thanks a lot Sam!

On 10/2/17, 12:39 PM, "Sam Matzek"  wrote:

>This is also one of several errors hitting the Trove gate.  This
>review should workaround the issue for now.
>
>https://review.openstack.org/#/c/508344/3
>
>On Mon, Oct 2, 2017 at 1:44 PM, Eric K  wrote:
>> Since the transition to zuul3, this error began and prevents the
>>devstack
>> setup from finishing on Congress gate jobs. I've been working to
>>diagnose
>> it but so far without success.
>>
>> Any suggestions and tips much appreciated!
>>
>> 
>>http://logs.openstack.org/58/493258/5/check/legacy-congress-dsvm-api-mysq
>>l/
>> cfb57ff/logs/devstacklog.txt.gz#_2017-10-01_23_46_16_449
>>
>> ./stack.sh:1392:check_libs_from_git
>> /opt/stack/new/devstack/inc/python:404:die
>> [ERROR] /opt/stack/new/devstack/inc/python:404 The following
>>LIBS_FROM_GIT
>> were not installed correct: python-congressclient
>> Error on exit
>>
>>
>> Note: previous to the transition to zuul3, it was already a problem on
>> python3 devstack setup at gate but never on python2. For example, see
>> these results run pre-zuul3: https://review.openstack.org/#/c/498996/
>>
>>
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-02 Thread Alex Schultz
Hey Dan,

Thanks for sending out a note about this. I have a few questions inline.

On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince  wrote:
> One of the things the TripleO containers team is planning on tackling
> in Queens is fully containerizing the undercloud. At the PTG we created
> an etherpad [1] that contains a list of features that need to be
> implemented to fully replace instack-undercloud.
>

I know we talked about this at the PTG, and I was skeptical that this
would land in Queens. With the exception of the Containers team
wanting this, I'm not sure there is an actual end user who is looking
for the feature, so I want to make sure we're not just doing more work
because we as developers think it's a good idea. Given that the etherpad
appears to contain a pretty big list of features, are we going to be
able to land all of them by M2?  Would it be beneficial to craft a
basic spec related to this to ensure we are not missing additional
things?

> Benefits of this work:
>
>  -Alignment: aligning the undercloud and overcloud installers gets rid
> of dual maintenance of services.
>

I like reusing existing stuff. +1

>  -Composability: tripleo-heat-templates and our new Ansible
> architecture around it are composable. This means any set of services
> can be used to build up your own undercloud. In other words the
> framework here isn't just useful for "underclouds". It is really the
> ability to deploy Tripleo on a single node with no external
> dependencies. Single node TripleO installer. The containers team has
> already been leveraging existing (experimental) undercloud_deploy
> installer to develop services for Pike.
>

Is this something that is actually being asked for or is this just an
added bonus because it allows developers to reduce what is actually
being deployed for testing?

>  -Development: The containerized undercloud is a great development
> tool. It utilizes the same framework as the full overcloud deployment
> but takes about 20 minutes to deploy.  This means faster iterations,
> less waiting, and more testing.  Having this be a first class citizen
> in the ecosystem will ensure this platform is functioning for
> developers to use all the time.
>

This seems to go with the previous question about re-usability for
people who are not developers.  Has everyone (including non-container
folks) tried this out and attested that it's a better workflow for them?
Are there use cases that are made worse by switching?

>  -CI resources: better use of CI resources. At the PTG we received
> feedback from the OpenStack infrastructure team that our upstream CI
> resource usage is quite high at times (even as high as 50% of the
> total). Because of the shared framework and single node capabilities we
> can re-architect much of our upstream CI matrix around single node.
> We no longer require multinode jobs to be able to test many of the
> services in tripleo-heat-templates... we can just use a single cloud VM
> instead. We'll still want multinode undercloud -> overcloud jobs for
> testing things like HA and baremetal provisioning. But we can cover a
> large set of the services (in particular many of the new scenario jobs
> we added in Pike) with single node CI test runs in much less time.
>

I like this idea but would like to see more details around this.
Since this is a new feature we need to make sure that we are properly
covering the containerized undercloud with CI as well.  I think we
need 3 jobs to properly cover this feature before marking it done. I
added them to the etherpad but I think we need to ensure the following
3 jobs are defined and voting by M2 to consider actually switching
from the current instack-undercloud installation to the containerized
version.

1) undercloud-containers - a containerized install, should be voting by m1
2) undercloud-containers-update - minor updates run on containerized
underclouds, should be voting by m2
3) undercloud-containers-upgrade - major upgrade from
non-containerized to containerized undercloud, should be voting by m2.

If we have these jobs, is there anything we can drop or mark as
covered that is currently being covered by an overcloud job?

>  -Containers: There are no plans to containerize the existing instack-
> undercloud work. By moving our undercloud installer to a tripleo-heat-
> templates and Ansible architecture we can leverage containers.
> Interestingly, the same installer also supports baremetal (package)
> installation at this point. Like the overcloud, however, I think
> making containers our undercloud default would better align the TripleO
> tooling.
>
> We are actively working through a few issues with the deployment
> framework Ansible effort to fully integrate that into the undercloud
> installer. We are also reaching out to other teams like the UI and
> Security folks to coordinate the efforts around those components. If
> there are any questions about the effort or you'd like to be involved
> in the 

[openstack-dev] [ironic] this week's priorities and subteam reports

2017-10-02 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. Repair the CI after migrating to Zuul v3
2. BIOS interface spec: https://review.openstack.org/#/c/496481/
3. Client docs update: https://review.openstack.org/#/c/507927/ and 
https://review.openstack.org/#/c/507898/

After we repair the CI:
4. Rolling upgrades missing bit: https://review.openstack.org/#/c/497666/
4.1. check object versions in dbsync tool: 
https://review.openstack.org/#/c/497703/
5. Switch to none auth for standalone mode: 
https://review.openstack.org/#/c/359061/

Vendor priorities
-
cisco-ucs:
 Patches in the works for the SDK update, but not posted yet; currently 
rebuilding third-party CI infra after a disaster...
idrac:
Dell 3d party CI stability improvement for 13G and 12G servers
https://review.openstack.org/#/c/507942/
ilo:
irmc:
nothing to review this week.
secure boot support for virtual media boot interface is coming soon.
oneview:
   Migrate python-oneviewclient validations to Ironic OneView Drivers - 
https://review.openstack.org/#/c/468428/

Subproject priorities
-
bifrost:
ironic-inspector (or its client):
(dtantsur on behalf of milan): firewall refactoring: 
https://review.openstack.org/#/c/471831/ (milan) +1 for this week to move one 
step closer towards the dnsmasq PXE filter backend
networking-baremetal:
  neutron baremetal agent https://review.openstack.org/#/c/456235/
sushy and the redfish driver:
(dtantsur) implement redfish sessions: https://review.openstack.org/#/c/471942/


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 25 Sep 2017 and 02 Oct 2017)
- Ironic: 263 bugs (-6) + 258 wishlist items (+3). 22 new (-7), 202 in progress 
(+2), 0 critical, 34 high (+4) and 34 incomplete
- Inspector: 13 bugs + 30 wishlist items (+1). 2 new, 13 in progress (+2), 0 
critical, 2 high and 3 incomplete
- Nova bugs with Ironic tag: 15 (-1). 0 new, 0 critical, 2 high
- dtantsur had to update the batch sizes used in the bug dashboard. now it's 
more reliable but much slower :(

CI refactoring and missing test coverage

- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
- next patch to be reviewed, needed for 3rd party CI: 
https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests
- root device hints: TODO
- node take over
- resource classes integration tests: 
https://review.openstack.org/#/c/443628/

Essential Priorities


Ironic client API version negotiation (TheJulia, dtantsur)
--
- status as of 27 Sep 2017:
- not started

External project authentication rework (pas-ha, TheJulia)
-
- gerrit topic: https://review.openstack.org/#/q/topic:bug/1699547
- status as of 27 Sep 2017:
- review needed

Old ironic CLI deprecation (rloo)
-
- rfe: https://bugs.launchpad.net/python-ironicclient/+bug/1700815
- code/doc patch ready for review: Deprecate the ironic CLI: 
https://review.openstack.org/#/c/508218/. Depends on:
- Update README: https://review.openstack.org/#/c/507898/
- Update documentation: https://review.openstack.org/#/c/507927/

Classic drivers deprecation (dtantsur)
--
- spec: 
http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html
- status as of 02 Oct 2017:
- dev documentation for hardware types: TODO
- finish migration guide for all drivers: TODO
- switch documentation to hardware types: TODO

Reference architecture guide (dtantsur, sambetts)
-
- status as of 02 Oct 2017:
- Common bits: https://review.openstack.org/487410 needs review
- list of cases from 
https://etherpad.openstack.org/p/ironic-queens-ptg-open-discussion
- Admin-only provisioner
- small and/or rare: TODO
- large and/or frequent: TODO
- Bare metal cloud for end users
- smaller single-site: TODO
- larger single-site: TODO
- larger multi-site: TODO

High Priorities
===

Neutron event processing (vdrok, vsaienk0, sambetts)

- status as of 27 Sep 2017:
- spec at 

Re: [openstack-dev] [infra][devstack][congress] zuul3 transition surfaced error: The following LIBS_FROM_GIT were not installed correct: python-congressclient

2017-10-02 Thread Sam Matzek
This is also one of several errors hitting the Trove gate.  This
review should workaround the issue for now.

https://review.openstack.org/#/c/508344/3

On Mon, Oct 2, 2017 at 1:44 PM, Eric K  wrote:
> Since the transition to zuul3, this error began and prevents the devstack
> setup from finishing on Congress gate jobs. I've been working to diagnose
> it but so far without success.
>
> Any suggestions and tips much appreciated!
>
> http://logs.openstack.org/58/493258/5/check/legacy-congress-dsvm-api-mysql/
> cfb57ff/logs/devstacklog.txt.gz#_2017-10-01_23_46_16_449
>
> ./stack.sh:1392:check_libs_from_git
> /opt/stack/new/devstack/inc/python:404:die
> [ERROR] /opt/stack/new/devstack/inc/python:404 The following LIBS_FROM_GIT
> were not installed correct: python-congressclient
> Error on exit
>
>
> Note: previous to the transition to zuul3, it was already a problem on
> python3 devstack setup at gate but never on python2. For example, see
> these results run pre-zuul3: https://review.openstack.org/#/c/498996/
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [infra][devstack][congress] zuul3 transition surfaced error: The following LIBS_FROM_GIT were not installed correct: python-congressclient

2017-10-02 Thread Eric K
Since the transition to zuul3, this error began and prevents the devstack
setup from finishing on Congress gate jobs. I've been working to diagnose
it but so far without success.

Any suggestions and tips much appreciated!

http://logs.openstack.org/58/493258/5/check/legacy-congress-dsvm-api-mysql/
cfb57ff/logs/devstacklog.txt.gz#_2017-10-01_23_46_16_449

./stack.sh:1392:check_libs_from_git
/opt/stack/new/devstack/inc/python:404:die
[ERROR] /opt/stack/new/devstack/inc/python:404 The following LIBS_FROM_GIT
were not installed correct: python-congressclient
Error on exit


Note: previous to the transition to zuul3, it was already a problem on
python3 devstack setup at gate but never on python2. For example, see
these results run pre-zuul3: https://review.openstack.org/#/c/498996/
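For reference, devstack's check boils down to verifying that each name in
LIBS_FROM_GIT shows up in the pip output as a git-based install. A rough,
self-contained simulation of that check (the logic here only approximates
what inc/python does, and the sample freeze output is made up):

```shell
# Rough simulation of devstack's LIBS_FROM_GIT sanity check: a library
# counts as "installed from git" when "pip freeze" reports it as an
# editable (-e git+...) install. The freeze output below is made up.
sample_freeze='-e git+https://git.openstack.org/openstack/python-congressclient@abc123#egg=python_congressclient
oslo.config==4.11.0'

check_lib_from_git() {
    # $1: library name to look for in the (simulated) pip freeze output
    if echo "$sample_freeze" | grep -- "$1" | grep -q -- '-e git'; then
        echo "$1: installed from git"
    else
        echo "$1: NOT installed from git"
    fi
}

check_lib_from_git congressclient
check_lib_from_git oslo.config
```

Running the real equivalent (`pip freeze | grep congressclient`) on a failing
node shows whether pip pulled the client from PyPI instead of the git checkout,
which is what trips the check.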





Re: [openstack-dev] [kuryr] vPTG schedule

2017-10-02 Thread Antoni Segura Puimedon
On Mon, Oct 2, 2017 at 12:52 PM, Daniel Mellado
 wrote:
> Hi Hongbin,
>
> It seems we messed up with the etherpad times, please do follow the
> invite schedule for that.
>
> Today's session will be held at 12:00 CET and 13:00 UTC (we have just
> corrected the etherpad).
>
> Sorry for the noise!
>
> Daniel
>
> On 10/02/2017 03:30 AM, Hongbin Lu wrote:
>> Hi Toni,
>>
>> The time of a few proposed sessions look inconsistent with the etherpad.
>> Could you double check?
>>
>> On Thu, Sep 28, 2017 at 5:48 AM, Antoni Segura Puimedon
>> > wrote:
>>
>> Hi fellow Kuryrs!
>>
>> It's that time of the cycle again where we hold our virtual project team
>> gathering[0]. The dates this time are:
>>
>> October 2nd, 3rd and 4th
>>
>> The proposed sessions are:
>>
>> October 2nd 13:00utc: Scale discussion
>> In this session we'll talk about the recent scale testing we
>> have performed
>> in a 112 node cluster. From this starting point. We'll work on
>> identifying
>> and prioritizing several initiatives to improve the performance
>> of the
>> pod-in-VM and the baremetal scenarios.
>>
>> October 2nd 14:00utc: Scenario testing
>> The September 27th's release of zuulv3 opens the gates for
>> better scenario
>> testing, specially regarding multinode scenarios. We'll discuss
>> the tasks
>> and outstanding challenges to achieve good scenario testing
>> coverage and
>> document well how to write these tests in our tempest plugin.
>>
>> October 3rd 13:00utc: Multi networks
>> As the Kubernetes community Network SIG draws near to having a
>> consensus on
>> multi network implementations, we must elaborate a plan on a PoC
>> that takes
>> the upstream Kubernetes consensus and implements it with
>> Kuryr-Kubernetes
>> in a way that we can serve normal overlay and accelerated
>> networking.
>>
>> October 4th 14:00utc: Network Policy
>> Each cycle we aim to narrow the gap between Kubernetes
>> networking entities
>> and our translations. In this cycle, apart from the Loadbalancer
>> service
>> type support, we'll be tackling how we map Network Policy to Neutron
>> networking. This session will first lay out Network Policy and
>> its use and
>> then discuss about one or more mappings.

Due to the general strike tomorrow in Barcelona, the multi networks and
Network Policy discussions will be moved to October 9th:

Multi network October 9th 12utc
Network policy October 9th 13utc

>>
>> October 5th 13:00utc: Kuryr-libnetwork
>>
>> This session is Oct 4th in the etherpad.
>>
>> We'll do the cycle planing for Kuryr-libnetwork. Blueprints and
>> bugs and
>> general discussion.
>>
>> October 6th 14:00utc: Fuxi
>>
>> This session is Oct 4th in the etherpad.
>>
>> In this session we'll discuss everything related to storage,
>> both in the
>> Docker and in the Kubernetes worlds.
>>
>>
>> I'll put the links to the bluejeans sessions in the etherpad[0].
>>
>>
>> [0] https://etherpad.openstack.org/p/kuryr-queens-vPTG
>> 
>>
>> 


Re: [openstack-dev] [tc] [election] NON nomination for TC

2017-10-02 Thread Jay Pipes

On 10/02/2017 01:44 PM, Jeremy Stanley wrote:

On 2017-10-02 12:03:44 -0400 (-0400), Sean Dague wrote:

I'd like to announce that after 4 years serving on the OpenStack
Technical Committee, I will not be running in this fall's
election.

[...]

You've served as a stellar role model, and provided many amazing
insights. Thanks so much for everything! It was a pleasure to serve
alongside you.


Ditto.

All the best,
-jay



Re: [openstack-dev] [tc] [election] NON nomination for TC

2017-10-02 Thread Jeremy Stanley
On 2017-10-02 12:03:44 -0400 (-0400), Sean Dague wrote:
> I'd like to announce that after 4 years serving on the OpenStack
> Technical Committee, I will not be running in this fall's
> election.
[...]

You've served as a stellar role model, and provided many amazing
insights. Thanks so much for everything! It was a pleasure to serve
alongside you.
-- 
Jeremy Stanley




Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

2017-10-02 Thread Loo, Ruby
+1, Thx Dmitry for the proposal and Shiv for doing all the work :D

--ruby

From: Dmitry Tantsur 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, October 2, 2017 at 10:17 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

Hi all!
I would like to propose Shivanand (stendulker) to the core team.

His stats have been consistently high [1]. He has given a lot of insightful 
reviews recently, and his expertise in the iLO driver is also very valuable for 
the team.
As usual, please respond with your comments and objections.
Thanks,
Dmitry

[1] http://stackalytics.com/report/contribution/ironic-group/90


Re: [openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-10-02 Thread Joshua Harlow

Impossible to list it all, lol

Doug Hellmann wrote:

Or https://github.com/jazzband

Now we need a project to list all of the organizations full of
unmaintained software...


On Oct 2, 2017, at 12:12 PM, Joshua Harlow wrote:

Yup, +1 from me also,

Honestly might just be better to put pylockfile under ownership of:

https://github.com/pycontribs

I think the above is made for these kinds of things (vs oslo),

Thoughts?

-Josh

Ben Nemec wrote:

+1. I believe the work we had originally intended to put into pylockfile
ended up in the fasteners library instead.

On 09/29/2017 10:07 PM, ChangBo Guo wrote:

pylockfile was deprecated about two years ago in [1] and it is not
used in any OpenStack Projects now [2] , we would like to retire it
according to steps of retiring a project[3].


[1]c8798cedfbc4d738c99977a07cde2de54687ac6c#diff-88b99bb28683bd5b7e3a204826ead112

[2] http://codesearch.openstack.org/?q=pylockfile=nope==
[3]https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

--
ChangBo Guo(gcb)
Community Director @EasyStack




Re: [openstack-dev] [zun][neutron] About Notifier

2017-10-02 Thread Ihar Hrachyshka
I don't think it is supposed to be pluggable, so no guarantees it
won't break for you if you decide to implement an out-of-tree
notifier. But I think Zun could instead listen for notifications that
are fed into ceilometer. We already issue those notifications, so it
would make sense to retrieve them from there instead of via a custom
interface.

Ihar

On Sun, Oct 1, 2017 at 7:15 PM, Hongbin Lu  wrote:
> Hi Neutron team,
>
>
>
> I saw neutron has a Nova notifier [1] that is able to notify Nova via REST
> API when a certain set of events happen. I think Zun would like to be
> notified in the same way Nova is. For example, we would like to receive a notification
> whenever a port assigned to a container has been associated with a floating
> IP. If I propose a Zun notifier (preferably out-of-tree) for that, will you
> accept the patch? Or does anyone have an alternative suggestion to satisfy our
> use case?
>
>
>
> [1]
> https://github.com/openstack/neutron/blob/master/neutron/notifiers/nova.py
>
>
>
> Best regards,
>
> Hongbin
>
>
>
>
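Ihar's suggestion above — having Zun consume the notifications Neutron already emits — maps naturally onto an oslo.messaging notification endpoint. The sketch below models only the endpoint logic; the class name, the event list, and the payload shape are illustrative assumptions, and in a real deployment the endpoint would be registered with oslo_messaging.get_notification_listener() rather than called directly.

```python
# Hypothetical sketch: react to Neutron's existing notifications
# (e.g. floatingip.update.end) instead of adding an out-of-tree notifier.
# With oslo.messaging, an instance of this class would be passed to
# get_notification_listener(); here we only model the endpoint itself.

class ZunNotificationEndpoint:
    """Collects the Neutron events that Zun cares about."""

    # Event types of interest; names follow Neutron's notification
    # convention, but this exact set is an assumption.
    INTERESTING = {"floatingip.create.end", "floatingip.update.end"}

    def __init__(self):
        self.seen = []

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Called once per INFO-level notification on the message bus.
        if event_type in self.INTERESTING:
            self.seen.append((event_type, payload))


# Feed the endpoint a sample payload, as the listener would.
endpoint = ZunNotificationEndpoint()
endpoint.info(
    ctxt={},
    publisher_id="network.host1",
    event_type="floatingip.update.end",
    payload={"floatingip": {"port_id": "abc",
                            "floating_ip_address": "10.0.0.5"}},
    metadata={},
)
print(len(endpoint.seen))  # → 1
```

This keeps Zun decoupled from Neutron internals: it depends only on the notification contract that Ceilometer already consumes.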


Re: [openstack-dev] [tc] [election] NON nomination for TC

2017-10-02 Thread Amy Marrich
Sean,

Thank you for all your hard work over the last 4 years!

Amy (spotz)

On Mon, Oct 2, 2017 at 11:03 AM, Sean Dague  wrote:

> I'd like to announce that after 4 years serving on the OpenStack
> Technical Committee, I will not be running in this fall's
> election. Over the last 4 years we've navigated some rough seas
> together, including the transition to more inclusion of projects, the
> dive off the hype curve, the emergence of real interoperability
> between clouds, and the beginnings of a new vision of OpenStack
> pairing with more technologies beyond our community.
>
> There remains a ton of good work to be done. But it's also important
> that we have a wide range of leaders to do that. Those opportunities
> only exist if we make space for new leaders to emerge. Rotational
> leadership is part of what makes OpenStack great, and is part of what
> will ensure that this community lasts far beyond any individuals
> within it.
>
> I plan to still be around in the community, and contribute where
> needed. So this is not farewell. However it will be good to see new
> faces among the folks leading the next steps in the community.
>
> I would encourage all members of the community that are interested in
> contributing to the future of OpenStack to step forward and run. It's
> important to realize what the TC is and can be. This remains a
> community driven by consensus, and the TC reflects that. Being a
> member of the TC does raise your community visibility, but it does not
> replace the need to listen, understand, communicate clearly, and
> realize that hard work comes through compromise.
>
> Good luck to all our candidates this fall, and thanks for letting me
> represent you the past 4 years.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [tc] [election] NON nomination for TC

2017-10-02 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-10-02 12:03:44 -0400:
> I'd like to announce that after 4 years serving on the OpenStack
> Technical Committee, I will not be running in this fall's
> election. Over the last 4 years we've navigated some rough seas
> together, including the transition to more inclusion of projects, the
> dive off the hype curve, the emergence of real interoperability
> between clouds, and the beginnings of a new vision of OpenStack
> pairing with more technologies beyond our community.
> 
> There remains a ton of good work to be done. But it's also important
> that we have a wide range of leaders to do that. Those opportunities
> only exist if we make space for new leaders to emerge. Rotational
> leadership is part of what makes OpenStack great, and is part of what
> will ensure that this community lasts far beyond any individuals
> within it.
> 
> I plan to still be around in the community, and contribute where
> needed. So this is not farewell. However it will be good to see new
> faces among the folks leading the next steps in the community.
> 
> I would encourage all members of the community that are interested in
> contributing to the future of OpenStack to step forward and run. It's
> important to realize what the TC is and can be. This remains a
> community driven by consensus, and the TC reflects that. Being a
> member of the TC does raise your community visibility, but it does not
> replace the need to listen, understand, communicate clearly, and
> realize that hard work comes through compromise.
> 
> Good luck to all our candidates this fall, and thanks for letting me
> represent you the past 4 years.
> 
> -Sean
> 

Thanks for everything you've done, Sean. I've learned a lot working
with you on the TC, and I know that OpenStack is better off because
of your contributions.

Doug



Re: [openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-10-02 Thread Doug Hellmann
Or https://github.com/jazzband 

Now we need a project to list all of the organizations full of unmaintained 
software...

> On Oct 2, 2017, at 12:12 PM, Joshua Harlow  wrote:
> 
> Yup, +1 from me also,
> 
> Honestly might just be better to put pylockfile under ownership of:
> 
> https://github.com/pycontribs
> 
> I think the above is made for these kinds of things (vs oslo),
> 
> Thoughts?
> 
> -Josh
> 
> Ben Nemec wrote:
>> +1. I believe the work we had originally intended to put into pylockfile
>> ended up in the fasteners library instead.
>> 
>> On 09/29/2017 10:07 PM, ChangBo Guo wrote:
>>> pylockfile was deprecated about two years ago in [1] and it is not
>>> used in any OpenStack Projects now [2] , we would like to retire it
>>> according to steps of retiring a project[3].
>>> 
>>> 
>>> [1]c8798cedfbc4d738c99977a07cde2de54687ac6c#diff-88b99bb28683bd5b7e3a204826ead112
>>> 
>>> [2] http://codesearch.openstack.org/?q=pylockfile=nope==
>>> [3]https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
>>> 
>>> --
>>> ChangBo Guo(gcb)
>>> Community Director @EasyStack
>>> 
>>> 


[openstack-dev] [kolla] Non-critical, non-gate patches freeze during migration to zuul3

2017-10-02 Thread Michał Jastrzębski
Hello,

As you all know, Zuul v3 is on! An unfortunate side effect is that it
broke our gates. For that reason I submitted a patch removing the legacy
jobs entirely, and we will do a quick migration to zuulv3-compatible, local
jobs. That means between the time this patch merges [1] and we finish the
migration, we will be without effective CI. For that reason I ask that we
not merge any patches that aren't critical bugfixes or gate-related work.

The patches that migrate us to zuul v3 are [2] [3]; please prioritize them.

Regards,
Michal

[1] https://review.openstack.org/#/c/508944/
[2] https://review.openstack.org/#/c/508661/
[3] https://review.openstack.org/#/c/508376/



[openstack-dev] [neutron] Rotating bugs deputy schedule

2017-10-02 Thread Miguel Lavalle
Dear Neutrinos,

Thank you very much to all the team members who volunteered to be bug
deputy for one week. Given your excellent response, each one of us will
only have bug deputy duty once every three and a half months, as can be
seen in the rotation schedule in our weekly meeting agenda:
https://wiki.openstack.org/wiki/Network/Meetings#Bug_deputy.

I will keep the schedule fresh every week, so everyone knows all the time
when his / her next duty week is. I also encourage other team members to
volunteer. Please feel free to add your name at the end of the rotation.

Best regards

Miguel


[openstack-dev] [neutron][classifier] CCF Meeting

2017-10-02 Thread Shaughnessy, David
Hi everyone.
Reminder that the Common Classification Framework meeting is at 14:00 UTC 
tomorrow.
We will be discussing topics raised at the PTG and other updates.
The Agenda can be found here: 
https://wiki.openstack.org/wiki/Neutron/CommonClassificationFramework#All_Meetings.27_Agendas
Regards.
David.


Re: [openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-10-02 Thread Joshua Harlow

Yup, +1 from me also,

Honestly might just be better to put pylockfile under ownership of:

https://github.com/pycontribs

I think the above is made for these kinds of things (vs oslo),

Thoughts?

-Josh

Ben Nemec wrote:

+1. I believe the work we had originally intended to put into pylockfile
ended up in the fasteners library instead.

On 09/29/2017 10:07 PM, ChangBo Guo wrote:

pylockfile was deprecated about two years ago in [1] and it is not
used in any OpenStack Projects now [2] , we would like to retire it
according to steps of retiring a project[3].


[1]c8798cedfbc4d738c99977a07cde2de54687ac6c#diff-88b99bb28683bd5b7e3a204826ead112

[2] http://codesearch.openstack.org/?q=pylockfile=nope==
[3]https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

--
ChangBo Guo(gcb)
Community Director @EasyStack




Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

2017-10-02 Thread Julia Kreger
> I would like to propose Shivanand (stendulker) to the core team.
+1



[openstack-dev] [tc] [election] NON nomination for TC

2017-10-02 Thread Sean Dague
I'd like to announce that after 4 years serving on the OpenStack
Technical Committee, I will not be running in this fall's
election. Over the last 4 years we've navigated some rough seas
together, including the transition to more inclusion of projects, the
dive off the hype curve, the emergence of real interoperability
between clouds, and the beginnings of a new vision of OpenStack
pairing with more technologies beyond our community.

There remains a ton of good work to be done. But it's also important
that we have a wide range of leaders to do that. Those opportunities
only exist if we make space for new leaders to emerge. Rotational
leadership is part of what makes OpenStack great, and is part of what
will ensure that this community lasts far beyond any individuals
within it.

I plan to still be around in the community, and contribute where
needed. So this is not farewell. However it will be good to see new
faces among the folks leading the next steps in the community.

I would encourage all members of the community that are interested in
contributing to the future of OpenStack to step forward and run. It's
important to realize what the TC is and can be. This remains a
community driven by consensus, and the TC reflects that. Being a
member of the TC does raise your community visibility, but it does not
replace the need to listen, understand, communicate clearly, and
realize that hard work comes through compromise.

Good luck to all our candidates this fall, and thanks for letting me
represent you the past 4 years.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-10-02 Thread Ben Nemec
+1.  I believe the work we had originally intended to put into 
pylockfile ended up in the fasteners library instead.


On 09/29/2017 10:07 PM, ChangBo Guo wrote:
pylockfile was deprecated about two years ago in [1] and it is not
used in any OpenStack projects now [2], so we would like to retire it
according to the steps for retiring a project [3].



[1]c8798cedfbc4d738c99977a07cde2de54687ac6c#diff-88b99bb28683bd5b7e3a204826ead112
[2] http://codesearch.openstack.org/?q=pylockfile=nope==
[3]https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
--
ChangBo Guo(gcb)
Community Director @EasyStack




Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Tong Liu
The workaround [1] has not landed yet. I saw it has +1 workflow but has not
been merged.

Thanks,
Tong
[1] https://review.openstack.org/#/c/508344/
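For context, LIBS_FROM_GIT is the devstack local.conf setting these jobs exercise: it tells devstack to install the listed libraries from their git checkouts instead of from PyPI. A minimal fragment (the repo names are illustrative, not taken from the failing jobs):

```ini
[[local|localrc]]
# Install these libraries from git checkouts rather than released
# packages; devstack fails the run if a listed library was not
# actually installed from git.
LIBS_FROM_GIT=oslo.messaging,python-novaclient
```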

On Mon, Oct 2, 2017 at 6:51 AM, Mehdi Abaakouk  wrote:

> Looks like the LIBS_FROM_GIT workarounds have landed, but I still have
> some issue
> on telemetry integration jobs:
>
>  http://logs.openstack.org/32/508132/1/check/legacy-telemetr
> y-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz
>
>
> On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:
>
>> On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:
>>
>>> 2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
>>>
 We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

 http://logs.openstack.org/32/508132/1/check/legacy-telemetry
 -dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-
 setup-workspace-new.txt

>>>
>>> That looks similar to what Ian fixed in [1], seems like your job needs
>>> a corresponding patch.
>>>
>>
>> Thanks, I have proposed the same kind of patch for telemetry [1]
>>
>> [1] https://review.openstack.org/508448
>>
>> --
>> Mehdi Abaakouk
>> mail: sil...@sileht.net
>> irc: sileht
>>
>
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>


Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-10-02 Thread Mooney, Sean K


> -Original Message-
> From: Dan Smith [mailto:d...@danplanet.com]
> Sent: Monday, October 2, 2017 3:53 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] vGPUs support for Nova - Implementation
> 
> >> I also think there is value in exposing vGPU in a generic way,
> irrespective of the underlying implementation (whether it is DEMU,
> mdev, SR-IOV or whatever approach Hyper-V/VMWare use).
> >
> > That is a big ask. To start with, all GPUs are not created equal, and
> > various vGPU functionality as designed by the GPU vendors is not
> > consistent, never mind the quirks added between different hypervisor
> > implementations. So I feel like trying to expose this in a generic
> > manner is, at least asking for problems, and more likely bound for
> > failure.
> 
> I feel the opposite. IMHO, Nova’s role in life is not to expose all the
> quirks of the underlying platform, but rather to provide a useful
> abstraction on top of those things. In spite of them.
[Mooney, Sean K] I have to agree with Dan here.
vGPUs are a great example of where Nova can add value by abstracting
the hypervisor specifics and providing an abstract API for requesting
vGPUs without encoding the semantics of the API provided by the
hypervisor or hardware vendor in what we expose to the tenant.
> 
> > Nova already exposes plenty of hypervisor-specific functionality (or
> > functionality only implemented for one hypervisor), and that's fine.
> 
> And those bits of functionality are some of the most problematic we
> have. Among other reasons, they make it difficult for us to expose
> Thing 2.0, when we’ve encoded Thing 1.0 into our API so rigidly. This
> happens even within one virt driver where Thing 2.0 is significantly
> different than Thing 1.0.
> 
> The vGPU stuff seems well-suited for the generic modeling work that
> we’ve spent the last few years working on, and is a perfect example of
> an area where we can avoid piling on more debt to a not-abstract-enough
> “model” and move forward with the new one. That’s certainly my
> preference, and I think it’s actually less work than the debt-ridden
> way.
> 
> -—Dan
[Mooney, Sean K] I also agree that it's likely less work to start fresh with
the correct generic solution now than to try to adapt the PCI passthrough code
we have today to support vGPUs without breaking the current
SR-IOV and passthrough support. How vGPUs are virtualized is GPU-vendor
specific, so even within a single host you may need to support multiple
methods (SR-IOV/mdev...) in a single virt driver. For example, a cloud/host
with both AMD and NVIDIA GPUs which uses libvirt would have to support
generating the correct XML for both solutions.
> 
> 
> 


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Mooney, Sean K
This also broke the legacy-tempest-dsvm-nova-os-vif gate job
http://logs.openstack.org/98/508498/1/check/legacy-tempest-dsvm-nova-os-vif/8fdf055/logs/devstacklog.txt.gz#_2017-09-29_14_15_41_961

> -Original Message-
> From: Mehdi Abaakouk [mailto:sil...@sileht.net]
> Sent: Monday, October 2, 2017 2:52 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [devstack] zuulv3 gate status;
> LIBS_FROM_GIT failures
> 
> Looks like the LIBS_FROM_GIT workarounds have landed, but I still have
> some issue on telemetry integration jobs:
> 
>   http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-
> integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz
> 
> On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:
> >On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:
> >>2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
> >>>We also have our legacy-telemetry-dsvm-integration-ceilometer
> broken:
> >>>
> >>>http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-
> int
> >>>egration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-
> new.tx
> >>>t
> >>
> >>That looks similar to what Ian fixed in [1], seems like your job
> needs
> >>a corresponding patch.
> >
> >Thanks, I have proposed the same kind of patch for telemetry [1]
> >
> >[1] https://review.openstack.org/508448
> >
> >--
> >Mehdi Abaakouk
> >mail: sil...@sileht.net
> >irc: sileht
> 
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
> 


[openstack-dev] Sydney Community Contributor Awards!

2017-10-02 Thread Kendall Nelson
Hello Everyone :)

With about a month left till the Sydney Summit, I'd like to kick off another
round of Community Contributor Awards!

If you know someone that is awesome at replying to emails 24 hours a day-
nominate them! [1]

If you know someone that always puts the new contributors first and
dedicates time to getting them up to speed- nominate them! [1]

If you know someone that somehow is always able to fix the problem no
matter how ugly- nominate them! [1]

Basically nominate anyone you think deserves an award :) [1] There are a
lot of people out there that could use an extra pat on the back and this is
a good opportunity to get them a little extra recognition.

Please submit your candidates by *October 27th*.

Winners will be announced at the feedback session in Sydney.

-Kendall Nelson (diablo_rojo)

[1] https://openstackfoundation.formstack.com/forms/cca_nominations_syd


Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-10-02 Thread Dan Smith
>> I also think there is value in exposing vGPU in a generic way, irrespective 
>> of the underlying implementation (whether it is DEMU, mdev, SR-IOV or 
>> whatever approach Hyper-V/VMWare use).
> 
> That is a big ask. To start with, all GPUs are not created equal, and
> various vGPU functionality as designed by the GPU vendors is not
> consistent, never mind the quirks added between different hypervisor
> implementations. So I feel like trying to expose this in a generic
> manner is, at least asking for problems, and more likely bound for
> failure.

I feel the opposite. IMHO, Nova’s role in life is not to expose all the quirks 
of the underlying platform, but rather to provide a useful abstraction on top 
of those things. In spite of them.

> Nova already exposes plenty of hypervisor-specific functionality (or
> functionality only implemented for one hypervisor), and that's fine.

And those bits of functionality are some of the most problematic we have. Among 
other reasons, they make it difficult for us to expose Thing 2.0, when we’ve 
encoded Thing 1.0 into our API so rigidly. This happens even within one virt 
driver where Thing 2.0 is significantly different than Thing 1.0.

The vGPU stuff seems well-suited for the generic modeling work that we’ve spent 
the last few years working on, and is a perfect example of an area where we can 
avoid piling on more debt to a not-abstract-enough “model” and move forward 
with the new one. That’s certainly my preference, and I think it’s actually 
less work than the debt-ridden way.

--Dan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-10-02 Thread Sahid Orentino Ferdjaoui
On Fri, Sep 29, 2017 at 04:51:10PM +, Bob Ball wrote:
> Hi Sahid,
> 
> > > a second device emulator along-side QEMU.  There is no mdev 
> > > integration.  I'm concerned about how much mdev-specific functionality 
> > > would have to be faked up in the XenServer-specific driver for vGPU to 
> > > be used in this way.
> >
> > What you are referring to with your DEMU is what QEMU/KVM has with its 
> > vfio-pci. XenServer is
> > reading through MDEV since the vendors provide drivers on *Linux* using the 
> > MDEV framework.
> > MDEV is a kernel layer, used to expose hardware; it's not hypervisor 
> > specific.
> 
> It is possible that the vendor's userspace libraries use mdev,
> however DEMU has no concept of mdev at all.  If the vendor's
> userspace libraries do use mdev then this is entirely abstracted
> from XenServer's integration.  While I don't have access to the
> vendor's source for the userspace libraries or the kernel module, my
> understanding was that the kernel module in XenServer's integration
> is for the userspace libraries to talk to the kernel module and for
> IOCTLS.  My reading of mdev implies that /sys/class/mdev_bus should
> exist for it to be used?  It does not exist in XenServer, which to
> me implies that the vendor's driver for XenServer do not use mdev?

I shared our discussion with Alex Williamson; his response:

> Hi Sahid,
>
> XenServer does not use mdev for vGPU support.  The mdev/vfio
> infrastructure was developed in response to DEMU used on XenServer,
> which we felt was not an upstream acceptable solution.  There has
> been cursory interest in porting vfio to Xen, so it's possible that
> they might use the same mechanism some day, but for now they are
> different solutions, the vfio/mdev solution being the only one
> accepted upstream so far. Thanks,
>
> Alex

My mistake. It seems clear now that XenServer can't take advantage of
the mdev support I added in the /pci module. The support of vGPUs for
Xen will have to wait for the generic device management, I guess.

>
> Bob
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

2017-10-02 Thread Dmitry Tantsur
Hi all!

I would like to propose Shivanand (stendulker) to the core team.

His stats have been consistently high [1]. He has given a lot of insightful
reviews recently, and his expertise in the iLO driver is also very valuable
for the team.

As usual, please respond with your comments and objections.

Thanks,
Dmitry

[1] http://stackalytics.com/report/contribution/ironic-group/90
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

2017-10-02 Thread Vega Cai
Hi Mohammed,

Thanks for your suggestion. I have submitted a patch [1] to try to fix the
job configuration, and used [2] that depends on it to test whether the fix
works.

[1] https://review.openstack.org/#/c/508824/
[2] https://review.openstack.org/#/c/508496/

On Sat, 30 Sep 2017 at 20:31 Mohammed Naser  wrote:

> Hi Vega,
>
> Please check the document. Some jobs were migrated with incorrect nodesets
> and have to be switched to multinode in the job definition in
> openstack-zuul-jobs
>
> Good luck
> Mohammed
>
> Sent from my iPhone
>
> On Sep 30, 2017, at 7:35 AM, Vega Cai  wrote:
>
> Hi,
>
> In Tricircle we use the "multinode" topology to setup a test environment
> with three regions, "CentralRegion" and "RegionOne" in one node, and
> "RegionTwo" in the other node. I notice that the job definition has been
> migrated to
> openstack-zuul-jobs/blob/master/playbooks/legacy/tricircle-dsvm-multiregion/run.yaml,
> but the job fails with the error that "public endpoint for image service in
> RegionTwo region not found", so I guess the node of "RegionTwo" is not
> correctly running. Since the original log folder for the second
> "subnode-2/" is missing in the job report, I also cannot figure out what
> the wrong is with the second node.
>
> Any hints to debug this problem?
>
>
> On Fri, 29 Sep 2017 at 22:59 Monty Taylor  wrote:
>
>> Hey everybody!
>>
>> tl;dr - If you're having issues with your jobs, check the FAQ, this
>> email and followups on this thread for mentions of them. If it's an
>> issue with your job and you can spot it (bad config) just submit a patch
>> with topic 'zuulv3'. If it's bigger/weirder/you don't know - we'd like
>> to ask that you send a follow up email to this thread so that we can
>> ensure we've got them all and so that others can see it too.
>>
>> ** Zuul v3 Migration Status **
>>
>> If you haven't noticed the Zuul v3 migration - awesome, that means it's
>> working perfectly for you.
>>
>> If you have - sorry for the disruption. It turns out we have a REALLY
>> complicated array of job content you've all created. Hopefully the pain
>> of the moment will be offset by the ability for you to all take direct
>> ownership of your awesome content... so bear with us, your patience is
>> appreciated.
>>
>> If you find yourself with some extra time on your hands while you wait
>> on something, you may find it helpful to read:
>>
>>https://docs.openstack.org/infra/manual/zuulv3.html
>>
>> We're adding content to it as issues arise. Unfortunately, one of the
>> issues is that the infra manual publication job stopped working.
>>
>> While the infra manual publication is being fixed, we're collecting FAQ
>> content for it in an etherpad:
>>
>>https://etherpad.openstack.org/p/zuulv3-migration-faq
>>
>> If you have a job issue, check it first to see if we've got an entry for
>> it. Once manual publication is fixed, we'll update the etherpad to point
>> to the FAQ section of the manual.
>>
>> ** Global Issues **
>>
>> There are a number of outstanding issues that are being worked. As of
>> right now, there are a few major/systemic ones that we're looking in to
>> that are worth noting:
>>
>> * Zuul Stalls
>>
>> If you say to yourself "zuul doesn't seem to be doing anything, did I do
>> something wrong?", we're having an issue that jeblair and Shrews are
>> currently tracking down with intermittent connection issues in the
>> backend plumbing.
>>
>> When it happens it's an across the board issue, so fixing it is our
>> number one priority.
>>
>> * Incorrect node type
>>
>> We've got reports of things running on trusty that should be running on
>> xenial. The job definitions look correct, so this is also under
>> investigation.
>>
>> * Multinode jobs having POST FAILURE
>>
>> There is a bug in the log collection trying to collect from all nodes
>> while the old jobs were designed to only collect from the 'primary'.
>> Patches are up to fix this and should be fixed soon.
>>
>> * Branch Exclusions being ignored
>>
>> This has been reported and its cause is currently unknown.
>>
>> Thank you all again for your patience! This is a giant rollout with a
>> bunch of changes in it, so we really do appreciate everyone's
>> understanding as we work through it all.
>>
>> Monty
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> BR
> Zhiyuan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> 

Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-10-02 Thread Blair Bethwaite
On 29 September 2017 at 22:26, Bob Ball  wrote:
> The concepts of PCI and SR-IOV are, of course, generic, but I think out of 
> principle we should avoid a hypervisor-specific integration for vGPU (indeed 
> Citrix has been clear from the beginning that the vGPU integration we are 
> proposing is intentionally hypervisor agnostic)

To be fair, what this proposal is doing is piggy-backing on Nova's
existing PCI functionality to expose Linux/KVM VFIO mdev, it just so
happens mdev was created for vGPU, but it was designed to extend to
other devices/things too.

> I also think there is value in exposing vGPU in a generic way, irrespective 
> of the underlying implementation (whether it is DEMU, mdev, SR-IOV or 
> whatever approach Hyper-V/VMWare use).

That is a big ask. To start with, all GPUs are not created equal, and
various vGPU functionality as designed by the GPU vendors is not
consistent, never mind the quirks added between different hypervisor
implementations. So I feel like trying to expose this in a generic
manner is, at least asking for problems, and more likely bound for
failure.

Nova already exposes plenty of hypervisor-specific functionality (or
functionality only implemented for one hypervisor), and that's fine.
Maybe there should be something in OpenStack that would generically
manage vGPU-graphics and/or vGPU-compute etc., but I'm pretty sure it
would never be allowed into Nova :-).

Anyway, take all that with a grain of salt, because frankly I would
love to see this in sooner rather than later - even if it did have a
big "this might change in non-upgradeable ways" sticker on it.

-- 
Cheers,
~Blairo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Mehdi Abaakouk

Looks like the LIBS_FROM_GIT workarounds have landed, but I still have some 
issue
on telemetry integration jobs:

 
http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz

On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:

On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:

2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :

We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt


That looks similar to what Ian fixed in [1], seems like your job needs
a corresponding patch.


Thanks, I have proposed the same kind of patch for telemetry [1]

[1] https://review.openstack.org/508448

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Queens spec review sprint next week

2017-10-02 Thread Matt Riedemann

On 9/28/2017 6:45 PM, Matt Riedemann wrote:

Let's do a Queens spec review sprint.

What day works for people that review specs?

Monday came up in the team meeting today, but Tuesday could be good too 
since Mondays are generally evil.




Let's do the Queens spec review on Tuesday October 3rd. If you have a 
spec up for review, please try to be in the #openstack-nova channel on 
freenode IRC in case reviewers have questions about your proposal.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable] attn: No approvals for stable/newton right now

2017-10-02 Thread Matt Riedemann

On 9/29/2017 7:31 PM, Dan Smith wrote:

Hi all,

Due to a zuulv3 bug, we're running an old nova-network test job on 
master and, as you would expect, failing hard. As a workaround in the 
meantime, we're[0] going to disable that job entirely so that it runs 
nowhere. This makes it not run on master (good) but also not run on 
stable/newton (not so good).


So, please don't approve anything new for stable/newton until we turn 
this job back on. That will happen when this patch lands:


   https://review.openstack.org/#/c/508638

Thanks!

--Dan

[0]: Note that this is all magic and dedication from the infra people, 
all I did was stand around and applaud. I'm including myself in the "we" 
here because I like to feel included by standing next to smart people, 
not because I did any work.


If it makes you feel any better, we can't merge anything on 
stable/newton right now anyway. There were some things that needed 
fixing in devstack-gate (which are now merged) but also some things that 
need fixing in devstack, which are not yet merged on master:


https://review.openstack.org/#/c/508366/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-02 Thread Doug Hellmann
Excerpts from Jesse Pretorius's message of 2017-10-02 08:38:06 +:
> On 9/29/17, 6:26 PM, "Jeremy Stanley"  wrote:
> 
> On 2017-09-29 18:39:18 +0200 (+0200), Thomas Bechtold wrote:
> > There is /etc [1]
> [...]
> 
> >Not really, no, because the system-context data_files path has to be
> >relative to /usr or /usr/local unless we want to have modules going
> >into /lib and entrypoints in /bin now.
> 
> Right – that’s exactly why I think it would be better to stick with a relative 
> path, but make the implementation consistent.
> 
> So, given a relative path being used – which is better: etc or share?
> 
> To me, etc seems more intuitive given that these are configuration files. 
> Using etc benefits those building and consuming wheels by being an intuitive 
> placement (putting the files into the etc directory of the venv). Each 
> packaging system has their own conventions so I do not think we’re going to 
> be able to come to a common consensus that pleases everyone, so I’d like to 
> rather focus on attaining a consistent path across services so that packagers 
> can adapt their scripts appropriately to cater for their individual quirks, 
> while everyone using wheels gets the benefit of the files being a part of the 
> package.
> 

etc implies they should be edited, though, and we're trying to move away
from that at least for the paste.ini files in most projects. So we may
need to decide on a case-by-case basis, unless we declare all of these
files as "sample" files that should be copied into the right place
before being edited.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-02 Thread Luigi Toscano
On Monday, 2 October 2017 13:28:17 CEST Thomas Goirand wrote:
> On 09/28/2017 04:50 PM, Jesse Pretorius wrote:
> > There’s some history around this discussion [1], but times have changed
> > and the purpose of the patches I’m submitting is slightly different [2]
> > as far as I can see – it’s a little more focused and less intrusive.
> > 
> >  
> > 
> > The projects which deploy OpenStack from source or using python wheels
> > currently have to either carry templates for api-paste, policy and
> > rootwrap files or need to source them from git during deployment. This
> > results in some rather complex mechanisms which could be radically
> > simplified by simply ensuring that all the same files are included in
> > the built wheel. Distribution packagers typically also have mechanisms
> > in place to fetch the files from the source repo when building the
> > packages – including the files through pbr’s data_files for packagers
> > may or may not be beneficial, depending on how the packagers do their
> > build processes.
> > 
> >  
> > 
> > In neutron [3], glance [4], designate [5] and sahara [6] the use of the
> > data_files option in the files section of setup.cfg is established and
> > has been that way for some time. However, there have been issues in the
> > past implementing something similar – for example in keystone there has
> > been a bit of a yoyo situation where a patch was submitted, then reverted.
> > 
> >  
> > 
> > I’ve been proposing patches [7] to try to make the implementation across
> > projects consistent and projects have, for the most part, been happy to
> > go ahead and merge them. However concern has been raised that we may end
> > up going through another yo-yo experience and therefore I’ve been asked
> > to raise this on the ML.
> > 
> >  
> > 
> > Do any packagers or deployment projects have issues with this
> > implementation? If there are any issues, what’re your suggestions to
> > resolve them?
> 
> I still have the issue that adding stuff in etc, at packaging time, pushes
> it into /usr/etc, which is obviously wrong. We tried to push for a PBR
> patch, but failed, as Robert Collins wrote it had to be fixed in
> setuptools. Which is why all patches have been reverted so far.
> 
> While I may agree with Robert, I think we had to choose for a pragmatic
> (even temporary) solution, and I don't understand why it's been years
> that this stays unfixed, especially when we have an easy solution. [1]
> 
> So, until that is fixed, please do not propose this type of patches.

Why not? Even if it does not fix the issue for proper installations,
- it does not prevent people from copying the files somewhere else (it 
happened in sahara for as long as I can remember; we have been using data_files);
- it fixes the deployment when the package is installed in a virtualenv;
- it introduces consistency: the day data_files starts to do the right thing, 
everything will work; if it's not possible to fix it with data_files, it's 
easy to spot which files should be fixed because all handled by data_files.

So definitely go for it.

Ciao
-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-02 Thread Dan Prince
One of the things the TripleO containers team is planning on tackling
in Queens is fully containerizing the undercloud. At the PTG we created
an etherpad [1] that contains a list of features that need to be
implemented to fully replace instack-undercloud.

Benefits of this work:

 -Alignment: aligning the undercloud and overcloud installers gets rid
of dual maintenance of services.

 -Composability: tripleo-heat-templates and our new Ansible
architecture around it are composable. This means any set of services
can be used to build up your own undercloud. In other words the
framework here isn't just useful for "underclouds". It is really the
ability to deploy Tripleo on a single node with no external
dependencies. Single node TripleO installer. The containers team has
already been leveraging existing (experimental) undercloud_deploy
installer to develop services for Pike.

 -Development: The containerized undercloud is a great development
tool. It utilizes the same framework as the full overcloud deployment
but takes about 20 minutes to deploy.  This means faster iterations,
less waiting, and more testing.  Having this be a first class citizen
in the ecosystem will ensure this platform is functioning for
developers to use all the time.

 -CI resources: better use of CI resources. At the PTG we received
feedback from the OpenStack infrastructure team that our upstream CI
resource usage is quite high at times (even as high as 50% of the
total). Because of the shared framework and single node capabilities we
can re-architecture much of our upstream CI matrix around single node.
We no longer require multinode jobs to be able to test many of the
services in tripleo-heat-templates... we can just use a single cloud VM
instead. We'll still want multinode undercloud -> overcloud jobs for
testing things like HA and baremetal provisioning. But we can cover a
large set of the services (in particular many of the new scenario jobs
we added in Pike) with single node CI test runs in much less time.

 -Containers: There are no plans to containerize the existing instack-
undercloud work. By moving our undercloud installer to a tripleo-heat-
templates and Ansible architecture we can leverage containers.
Interestingly, the same installer also supports baremetal (package)
installation at this point. As with the overcloud, however, I think
making containers our undercloud default would better align the TripleO
tooling.

We are actively working through a few issues with the deployment
framework Ansible effort to fully integrate that into the undercloud
installer. We are also reaching out to other teams like the UI and
Security folks to coordinate the efforts around those components. If
there are any questions about the effort or you'd like to be involved
in the implementation let us know. Stay tuned for more specific updates
as we organize to get as much of this in M1 and M2 as possible.

On behalf of the containers team,

Dan

[1] https://etherpad.openstack.org/p/tripleo-queens-undercloud-containers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Report on TC activity for the May-Oct 2017 membership

2017-10-02 Thread Thierry Carrez
Hi everyone,

The election process to renew 6 of our Technical Committee members was
started[1], with the self-nomination period running this week.

Information on what the Technical Committee is and what it does is
generally available on the governance website[2]. However you may wonder
what the current Technical Committee membership achieved (to this day)
since its election in May. If so, read on!

During that period the TC made a number of decisions and passed a number
of resolutions to shape the future of OpenStack.

One of them is the publication of a top "help wanted" list to more
clearly communicate where our community is struggling and where extra
contributions can really make a difference. That list is now published
with entries for "doc owners", "infra sysadmins", and "Glance
contributors" (and more coming up).

Another area is adapting to changes in our ecosystem: we added etcd as
an OpenStack base service to encourage all projects to take advantage of
etcd for coordination. We also passed guidelines for managing releases
of binary artifacts, which can be applied to Go executables or Docker
images.

We evolved our project team list, removing Fuel and more recently adding
Blazar and Cyborg.

Other policies and clarifications adopted by the current membership
included a clarification about the current state of PostgreSQL in
OpenStack, a description of what upstream supports means, the addition
of a "assert:supports-api-interoperability" tag, and selecting community
goals for the Queens release cycle.

But perhaps the work the most relevant for the long term was the
publication of the TC 2019 vision, painting a picture of a desirable
future for the Technical Committee and by extension for the OpenStack
community.

One of the areas defined in this vision was to achieve better community
diversity and inclusivity. To achieve geographical diversity (and in
particular better tap into the potential contributors in China) we need
to be less reliant on regular synchronous team meetings on IRC. The TC
decided to lead by example there and drop its traditional reliance on
weekly meetings to make progress. Most of the work is now done
asynchronously, and we transitioned our IRC public presence to "office
hours" in various timezones in a dedicated channel (#openstack-tc). We
also passed resolutions to stop *requiring* IRC meetings for project
teams (and allow them to host meetings in any logged IRC channel).
Community diversity is also about engaging people coming from an
OpenStack operator background to be more directly involved with upstream
development. TC and UC members collaborated to create a new form of
workgroups (called SIGs) that should help in eliminating artificial
barriers to contribution and organize everyone around common topics of
interest.

Another area in the vision is how we engage with adjacent communities.
Members of the TC are actively engaged in reaching out and sharing
experiences, in particular with the Kubernetes and the Ansible communities.

But there are areas in the vision where the TC needs to make progress in
the near future, in particular the definition of "constellations", and
growing the next generation of OpenStack leaders. If you're interested
in helping, please throw your name in the hat ! The Technical Committee
is just a bunch of humans interested in the welfare of OpenStack as a
whole. The activity of the TC also doesn't stop with elected members:
you can help, draft, propose, and discuss changes without being formally
elected. Join us on #openstack-tc !

[1]
http://lists.openstack.org/pipermail/openstack-dev/2017-October/122942.html
[2] https://governance.openstack.org/tc/

-- 
Thierry Carrez (ttx)
Chair, OpenStack Technical Committee

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]May I run iscsiadm --op show & update 100 times?

2017-10-02 Thread Gorka Eguileor
On 02/10, Rikimaru Honjo wrote:
> Hello,
>
> I'd like to discuss about the following bug of os-brick.
>
> * os-brick's iscsi initiator unexpectedly reverts node.startup from 
> "automatic" to "manual".
>   https://bugs.launchpad.net/os-brick/+bug/1670237
>
> The important point of this bug is:
>
> When os-brick initializes iscsi connections:
> 1. os-brick will run "iscsiadm -m discovery" command if we use iscsi 
> multipath.

This only happens with a small number of cinder drivers, since most
drivers try to avoid the discovery path due to the number of
disadvantages it presents for a reliable deployment.  The most notorious
issue is that if the path from the attaching node to the discovery
portal is down, you cannot attach the volume no matter how many of the
other paths are up.



> 2. os-brick will update node.startup values to "automatic" if we use iscsi.
> 3. "iscsiadm -m discovery" command will recreate iscsi node repositories.[1]
>    As a result, node.startup values of already attached volumes will revert
>    to the default (=manual).
>
> Gorka Eguileor and I discussed how do I fix this bug[2].
> Our idea is this:
>
> 1. Confirm node.startup values of all the iscsi targets before running 
> discovery.
> 2. Re-update node.startup values of all the iscsi targets after running 
> discovery.
>
> But I am afraid that this operation will take a long time.
> To research this, I ran the show & update of node.startup values 100 times.
> As a result, it took about 4 seconds.
> When I ran it 200 times, it took about 8 seconds.
> I think this is a little long.
>
> If we use multipath and attach 25 volumes, 100 targets will be created.
> I think that updating 100 times is a possible use case.
>
> How do you think about it?
> Can I implement the above idea?
>

The approach I proposed on the review is valid; the flaw is in the
specific implementation: you are doing 100 requests where 4 would
suffice.

You don't need to do a request for each target-portal tuple, you only
need to do 1 request per portal, which reduces the number of calls to
iscsiadm from 100 to 4 in the case you mention.

You can check all targets for an IP with:
  iscsiadm -m node -p IP

This means that the performance hit from having 100 or 200 targets
should be negligible.
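The suggestion can be sketched as follows (a hypothetical illustration,
not the actual os-brick code; `run_iscsiadm` and `batch_node_queries`
are made-up names standing in for shelling out to iscsiadm): group the
target-portal tuples by portal and issue a single
`iscsiadm -m node -p <portal>` per portal.

```python
from collections import defaultdict

def batch_node_queries(target_portal_tuples, run_iscsiadm):
    """Query each portal once instead of once per (portal, target) tuple.

    run_iscsiadm is injected so the sketch stays testable; real code
    would execute the iscsiadm binary directly.
    """
    by_portal = defaultdict(list)
    for portal, iqn in target_portal_tuples:
        by_portal[portal].append(iqn)
    # One "iscsiadm -m node -p <portal>" invocation per portal covers
    # every target behind that portal.
    return {portal: run_iscsiadm(['-m', 'node', '-p', portal])
            for portal in by_portal}

# 100 (portal, target) tuples spread over 4 portals -> only 4 calls.
calls = []
def fake_run(args):
    calls.append(args)
    return ''

tuples = [('10.0.0.%d' % (i % 4), 'iqn.2017-10.example:%d' % i)
          for i in range(100)]
batch_node_queries(tuples, fake_run)
print(len(calls))  # -> 4
```

This is why the performance hit should stay roughly constant in the
number of portals rather than growing with the number of targets.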

Cheers,
Gorka.



> [1]This is correct behavior of iscsiadm.
>https://github.com/open-iscsi/open-iscsi/issues/58#issuecomment-325528315
> [2]https://bugs.launchpad.net/os-brick/+bug/1670237
> --
> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
> Rikimaru Honjo
> E-mail:honjo.rikim...@po.ntt-tx.co.jp
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification update week 40

2017-10-02 Thread Balazs Gibizer

Hi,

Here is the status update / focus settings mail for w40.

Bugs

[Medium] https://bugs.launchpad.net/nova/+bug/1699115 api.fault
notification is never emitted
We still have to figure out what is the expected behavior here based on:
http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html
We requested information of possible users of api.fault notification 
[1] and it seems that rackspace does not use api.fault notification 
[2][3].


[1] 
http://lists.openstack.org/pipermail/openstack-operators/2017-September/014267.html
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-09-26.log.html#t2017-09-26T19:57:03
[3] 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-09-27.log.html#t2017-09-27T08:30:57


Patch is proposed to remove the legacy api.fault notification dead code 
and configuration:
* https://review.openstack.org/#/c/505164/ Remove dead code of 
api.fault notification sending


[Medium] https://bugs.launchpad.net/nova/+bug/1718485 
instance.live.migration.force.complete is not a versioned notification 
and not whitelisted
Solution merged to master: https://review.openstack.org/#/c/506104/ and 
now it needs to be backported to stable branches.


[Undecided] https://bugs.launchpad.net/nova/+bug/1718226 bdm is 
wastefully loaded for versioned instance notifications
This is a bug to follow up of the closed bp 
https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight
The fix removes lot of unnecessary BDM loading from the notification 
code path: https://review.openstack.org/#/q/topic:bug/1718226


[High] https://bugs.launchpad.net/nova/+bug/1706563
TestRPC.test_cleanup_notifier_null fails with timeout
[High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error:
Cannot recover from stack overflow. - in py35 unit test job
The first bug is just a duplicate of the second. It seems the TestRPC
test suite has a way to end up in infinite recursion.
Last week two competing patches were proposed that might help us get
at least logs from the failing test cases:

* https://review.openstack.org/#/c/507253/
* https://review.openstack.org/#/c/507239/

[Medium] https://bugs.launchpad.net/nova/+bug/1719915 
test_live_migrate_delete race fail when checking allocations: 
MismatchError: 2 != 1
It turned out that the original failure reported in the bug report was 
only seen in two patches that actually change the behavior of live 
migration. However, during the investigation we found that there is a 
real race that affects the test_live_migrate_delete test case, and a 
patch is proposed and already in the gate.

https://review.openstack.org/#/c/507911/


Versioned notification transformation
-
Let's try to merge the same 3 transformation patches as last week. 
All three only need a second core to look at them:
* https://review.openstack.org/#/c/396210 Transform aggregate.add_host 
notification
* https://review.openstack.org/#/c/396211 Transform 
aggregate.remove_host notification
* https://review.openstack.org/#/c/503089 Add instance.interface_attach 
notification


Versioned notification burndown chart
=
Last week I realized that the current notification burndown chart [1] 
needs to be moved to another hosting solution as OpenShift2 is retired. 
Chris made a generous offer to host it so the chart is now moved to 
[2]. Thank you Chris!


[1] https://vntburndown-gibi.rhcloud.com/index.html
[2] http://burndown.peermore.com/nova-notification/


Small improvements
--
* https://review.openstack.org/#/q/topic:refactor-notification-samples
Factor out duplicated notification sample data
This is a start of a longer patch series to deduplicate notification
sample data.
Takashi pointed out in the review that the current proposal actually 
changes the content of the sample appearing in our documentation. The 
reason is that some fields of the common sample fragment are overridden 
only during the functional test run and not during doc generation. 
On the last subteam meeting we agreed with Matt to try to make the 
override in a more clever way that applies to both the functional test 
and the doc generation. See more in meeting log: 
http://eavesdrop.openstack.org/meetings/nova_notification/2017/nova_notification.2017-09-26-17.00.log.html#l-63
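As a miniature illustration of the direction discussed (not Nova's actual test machinery): if both the functional tests and the doc generator render samples through the same override path, the published sample cannot drift from what the tests assert. All names below are invented for the sketch.

```python
# Illustrative only: a tiny version of "one common sample fragment,
# per-notification overrides applied the same way everywhere".
import copy

COMMON_PAYLOAD = {
    "uuid": "178b0921-8f85-4257-88b6-2e743b5a975c",
    "host": "compute",
    "state": "active",
}


def render_sample(overrides):
    """Merge per-notification overrides into the shared fragment."""
    sample = copy.deepcopy(COMMON_PAYLOAD)
    sample.update(overrides)
    return sample


# The same call is used by the doc generator and by the functional test,
# so the rendered samples are guaranteed to be identical.
doc_sample = render_sample({"state": "paused"})
test_sample = render_sample({"state": "paused"})
assert doc_sample == test_sample
```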



Weekly meeting
--
Next subteam meeting will be held on 3rd of October, Tuesday 17:00 UTC 
on openstack-meeting-4.

https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171003T17


Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-02 Thread Thomas Goirand
On 09/28/2017 04:50 PM, Jesse Pretorius wrote:
> There’s some history around this discussion [1], but times have changed
> and the purpose of the patches I’m submitting is slightly different [2]
> as far as I can see – it’s a little more focused and less intrusive.
> 
>  
> 
> The projects which deploy OpenStack from source or using python wheels
> currently have to either carry templates for api-paste, policy and
> rootwrap files or need to source them from git during deployment. This
> results in some rather complex mechanisms which could be radically
> simplified by simply ensuring that all the same files are included in
> the built wheel. Distribution packagers typically also have mechanisms
> in place to fetch the files from the source repo when building the
> packages – including the files through pbr’s data_files for packagers
> may or may not be beneficial, depending on how the packagers do their
> build processes.
> 
>  
> 
> In neutron [3], glance [4], designate [5] and sahara [6] the use of the
> data_files option in the files section of setup.cfg is established and
> has been that way for some time. However, there have been issues in the
> past implementing something similar – for example in keystone there has
> been a bit of a yoyo situation where a patch was submitted, then reverted.
> 
>  
> 
> I’ve been proposing patches [7] to try to make the implementation across
> projects consistent and projects have, for the most part, been happy to
> go ahead and merge them. However concern has been raised that we may end
> up going through another yo-yo experience and therefore I’ve been asked
> to raise this on the ML.
> 
>  
> 
> Do any packagers or deployment projects have issues with this
> implementation? If there are any issues, what’re your suggestions to
> resolve them?

I still have the issue that adding stuff in etc, at packaging time, pushes
them into /usr/etc, which is obviously wrong. We tried to push for a PBR
patch, but failed, as Robert Collins wrote it had to be fixed in
setuptools. Which is why all patches have been reverted so far.

While I may agree with Robert, I think we had to choose a pragmatic
(even temporary) solution, and I don't understand why this has stayed
unfixed for years, especially when we have an easy solution. [1]

So, until that is fixed, please do not propose this type of patch.

Cheers,

Thomas Goirand (zigo)

[1] https://review.openstack.org/#/c/274077/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][oslo.policy][glance] Bug: Glance doesn't send correctly authorization request to Oslo policy

2017-10-02 Thread Brian Rosmaita
Thanks Doug.

Ruan, please put an item on the Glance meeting agenda.  The meeting is
14:00 UTC on Thursday [0].

thanks,
brian

[0] http://eavesdrop.openstack.org/#Glance_Team_Meeting
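For anyone following the thread: the http_check feature being tested lets a policy rule delegate enforcement to an external server. With a policy entry of roughly the following shape (the endpoint is made up for illustration), oslo.policy POSTs the JSON-encoded target and credentials to the URL and allows the action only if the server replies "True" — which is why the request body Glance prepares matters here.

```json
{
    "modify_image": "http://127.0.0.1:8000/authorize"
}
```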

On Fri, Sep 29, 2017 at 11:49 AM, Doug Hellmann  wrote:
> The Glance team has weekly meetings just like the Oslo team. You’ll find the
> details about the time and agenda on eavesdrop.openstack.org. I think it
> would make sense to add an item to the agenda for their next meeting to
> discuss this issue, and ask for someone to help guide you in fixing it. If
> the Oslo team needs to get involved after there is someone from Glance
> helping, then we can find the right person.
>
> Brian Rosmaita (rosmaita on IRC) is the Glance team PTL. I’ve copied him on
> this email to make sure he notices this thread.
>
> Doug
>
> On Sep 29, 2017, at 11:24 AM, ruan...@orange.com wrote:
>
> Not yet, we are not familiar with the Glance team.
> Ruan
>
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: vendredi 29 septembre 2017 16:26
> To: openstack-dev
> Subject: Re: [openstack-dev] [Oslo][oslo.policy][glance] Bug: Glance doesn't
> send correctly authorization request to Oslo policy
>
> Excerpts from ruan.he's message of 2017-09-29 12:56:12 +:
>
> Hi folks,
> We are testing the http_check function in Oslo policy, and we figure out a
> bug: https://bugs.launchpad.net/glance/+bug/1720354.
> We believe that this is due to the Glance part since it doesn't well prepare
> the authorization request (body) to Oslo policy.
> Can we put this topic for the next Oslo meeting?
> Thanks,
> Ruan HE
>
>
> Do you have someone from the Glance team helping already?
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _
>
> Ce message et ses pieces jointes peuvent contenir des informations
> confidentielles ou privilegiees et ne doivent donc
> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu
> ce message par erreur, veuillez le signaler
> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages
> electroniques etant susceptibles d'alteration,
> Orange decline toute responsabilite si ce message a ete altere, deforme ou
> falsifie. Merci.
>
> This message and its attachments may contain confidential or privileged
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and
> delete this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been
> modified, changed or falsified.
> Thank you.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Dell Ironic CI Migration

2017-10-02 Thread Dmitry Tantsur

Hi!

Thanks for letting us know. Next time please update the whiteboard with such 
information. I've updated it for you now, but I don't know the estimated end of 
downtime, could you please fill it in?


On 09/29/2017 08:34 PM, rajini.kart...@dell.com wrote:

Hi all

Dell Ironic 3rd party CI is gearing up for hardware migration next week. We 
expect a week of downtime as it is a larger change for us.


Will send out another mail when it is done.

Thank you

Rajini



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] vPTG schedule

2017-10-02 Thread Daniel Mellado
Hi Hongbin,

It seems we messed up with the etherpad times, please do follow the
invite schedule for that.

Today's session will be held at 12:00 CET and 13:00 UTC (we have just
corrected the etherpad).

Sorry for the noise!

Daniel

On 10/02/2017 03:30 AM, Hongbin Lu wrote:
> Hi Toni,
> 
> The times of a few proposed sessions look inconsistent with the etherpad.
> Could you double check?
> 
> On Thu, Sep 28, 2017 at 5:48 AM, Antoni Segura Puimedon
> > wrote:
> 
> Hi fellow Kuryrs!
> 
> It's that time of the cycle again where we hold our virtual project team
> gathering[0]. The dates this time are:
> 
> October 2nd, 3rd and 4th
> 
> The proposed sessions are:
> 
> October 2nd 13:00utc: Scale discussion
>     In this session we'll talk about the recent scale testing we
> have performed
>     in a 112 node cluster. From this starting point, we'll work on
> identifying
>     and prioritizing several initiatives to improve the performance
> of the
>     pod-in-VM and the baremetal scenarios.
> 
> October 2nd 14:00utc: Scenario testing
>     The September 27th's release of zuulv3 opens the gates for
> better scenario
>     testing, especially regarding multinode scenarios. We'll discuss
> the tasks
>     and outstanding challenges to achieve good scenario testing
> coverage and
>     document well how to write these tests in our tempest plugin.
> 
> October 3rd 13:00utc: Multi networks
>     As the Kubernetes community Network SIG draws near to having a
> consensus on
>     multi network implementations, we must elaborate a plan on a PoC
> that takes
>     the upstream Kubernetes consensus and implements it with
> Kuryr-Kubernetes
>     in a way that we can serve normal overlay and accelerated
> networking.
> 
> October 4th 14:00utc: Network Policy
>     Each cycle we aim to narrow the gap between Kubernetes
> networking entities
>     and our translations. In this cycle, apart from the Loadbalancer
> service
>     type support, we'll be tackling how we map Network Policy to Neutron
>     networking. This session will first lay out Network Policy and
> its use and
>     then discuss about one or more mappings.
> 
> October 5th 13:00utc: Kuryr-libnetwork
> 
> This session is Oct 4th in the etherpad. 
> 
>     We'll do the cycle planning for Kuryr-libnetwork. Blueprints and
> bugs and
>     general discussion.
> 
> October 6th 14:00utc: Fuxi
> 
> This session is Oct 4th in the etherpad. 
> 
>     In this session we'll discuss everything related to storage,
> both in the
>     Docker and in the Kubernetes worlds.
> 
> 
> I'll put the links to the bluejeans sessions in the etherpad[0].
> 
> 
> [0] https://etherpad.openstack.org/p/kuryr-queens-vPTG
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] vPTG schedule

2017-10-02 Thread Antoni Segura Puimedon
You are right Hongbin. Sorry about that, somehow I counted with CET
instead of CEST.

I just corrected the Etherpad https://etherpad.openstack.org/p/kuryr-queens-vPTG

On Mon, Oct 2, 2017 at 3:30 AM, Hongbin Lu  wrote:
> Hi Toni,
>
> The times of a few proposed sessions look inconsistent with the etherpad.
> Could you double check?
>
> On Thu, Sep 28, 2017 at 5:48 AM, Antoni Segura Puimedon 
> wrote:
>>
>> Hi fellow Kuryrs!
>>
>> It's that time of the cycle again where we hold our virtual project team
>> gathering[0]. The dates this time are:
>>
>> October 2nd, 3rd and 4th
>>
>> The proposed sessions are:
>>
>> October 2nd 13:00utc: Scale discussion
>> In this session we'll talk about the recent scale testing we have
>> performed
>> in a 112 node cluster. From this starting point, we'll work on
>> identifying
>> and prioritizing several initiatives to improve the performance of the
>> pod-in-VM and the baremetal scenarios.
>>
>> October 2nd 14:00utc: Scenario testing
>> The September 27th's release of zuulv3 opens the gates for better
>> scenario
>> testing, especially regarding multinode scenarios. We'll discuss the
>> tasks
>> and outstanding challenges to achieve good scenario testing coverage
>> and
>> document well how to write these tests in our tempest plugin.
>>
>> October 3rd 13:00utc: Multi networks
>> As the Kubernetes community Network SIG draws near to having a
>> consensus on
>> multi network implementations, we must elaborate a plan on a PoC that
>> takes
>> the upstream Kubernetes consensus and implements it with
>> Kuryr-Kubernetes
>> in a way that we can serve normal overlay and accelerated networking.
>>
>> October 4th 14:00utc: Network Policy
>> Each cycle we aim to narrow the gap between Kubernetes networking
>> entities
>> and our translations. In this cycle, apart from the Loadbalancer
>> service
>> type support, we'll be tackling how we map Network Policy to Neutron
>> networking. This session will first lay out Network Policy and its use
>> and
>> then discuss about one or more mappings.
>>
>> October 5th 13:00utc: Kuryr-libnetwork
>
> This session is Oct 4th in the etherpad.
>>
>> We'll do the cycle planning for Kuryr-libnetwork. Blueprints and bugs
>> and
>> general discussion.
>>
>> October 6th 14:00utc: Fuxi
>
> This session is Oct 4th in the etherpad.
>>
>> In this session we'll discuss everything related to storage, both in
>> the
>> Docker and in the Kubernetes worlds.
>>
>>
>> I'll put the links to the bluejeans sessions in the etherpad[0].
>>
>>
>> [0] https://etherpad.openstack.org/p/kuryr-queens-vPTG
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [self-healing] When shall have self-healing meeting?

2017-10-02 Thread Lajos Katona

Hi,

Is there any news or rumors about this self-healing meeting and SIG?
Any information is appreciated.

Regards
Lajos

On 2017-09-22 08:08, Lajos Katona wrote:

Hi,

In Denver there was a session on self-healing and how to give 
direction to the ambitions around the topic, and some agreement that 
a bi-weekly meeting should be organized.
Is there anybody who knows some details about that? Doodle, irc 
channel etc?


Thanks in advance for the answer.
Regards
Lajos

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-Ansible testing with OpenVSwitch

2017-10-02 Thread Jean-Philippe Evrard
Well, there are people already running OpenVSwitch on openstack-ansible,
so I guess it's just a question of a few bug fixes and adding a
scenario to make sure this is working forever :p

On Fri, Sep 29, 2017 at 8:22 AM, Gyorgy Szombathelyi
 wrote:
>
>>
>> Hello JP,
>>
>> Ok, I will do some more testing against the blog post and then hit up the
>> #openstack-ansible channel.
>>
>> I need to finish a presentation on SFC first which is why I am looking into
>> OpenVSwitch.
>
> Hi Michael,
>
> If your goal is not openstack-ansible, here's an AIO installer for Pike with 
> OpenVSwitch:
> https://github.com/DoclerLabs/openstack
> (needs vagrant and VirtualBox)
>
> Br,
> György
>
>>
>> Thanks
>> Michael
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][ptl] Improving the process for release marketing

2017-10-02 Thread Jean-Philippe Evrard
On Fri, Sep 29, 2017 at 11:16 PM, Mike Perez  wrote:
> On 14:33 Sep 26, Anne Bertucio wrote:
>> Release marketing is a critical part of sharing what’s new in each release,
>> and we want to rework how the marketing community and projects work together
>> to make the release communications happen.
>>
>> Having multiple, repetitive demands to summarize "top features" during
>> release time can be pestering and having to recollect the information each
>> time isn't an effective use of time. Being asked to make polished,
>> "press-friendly" messages out of release notes can feel too far outside of
>> the PTL's focus areas or skills. At the same time, for technical content
>> marketers, attempting to find the key features from release notes, ML posts,
>> specs, Roadmap, etc., means interesting features are sometimes overlooked.
>> Marketing teams don't have the latest on what features landed and with what
>> caveats.
>>
>> To address this gap, the Release team and Foundation marketing team propose
>> collecting information as part of the release tagging process. Similar to the
>> existing (unused) "highlights" field for an individual tag, we will collect
>> some text in the deliverable file to provide highlights for the series (about
>> 3 items). That text will then be used to build a landing page on
>> release.openstack.org that shows the "key features" flagged by PTLs that
>> marketing teams should be looking at during release communication times. The
>> page will link to the release notes, so marketers can start there to gather
>> additional information, eliminating repetitive asks of PTLs. The "pre
>> selection" of features means marketers can spend more time diving into
>> release note details and less sifting through them.
>>
>> To supplement the written information, the marketing community is also going
>> to work together to consolidate follow up questions and deliver them in
>> "press corps" style (i.e. a single phone call to be asked questions from
>> multiple parties vs. multiple phone calls from individuals).
>>
>> We will provide more details about the implementation for the highlights page
>> when that is ready, but want to gather feedback about both aspects of the
>> plan early.
>
> As someone who participates in building out that page, I welcome this to
> better represent highlights from the community itself.
>
> --
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

It's a good thing. Thanks for the feature!
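Concretely, the "highlights in the deliverable file" idea described in the proposal could end up looking something like the sketch below. The field name, layout, and all content are illustrative; the release team had not settled the schema at the time of writing.

```yaml
# deliverables/queens/example-project.yaml -- illustrative sketch only
launchpad: example-project
release-model: cycle-with-milestones
cycle-highlights:
  - Short, press-friendly summary of a key feature
  - Second highlight, with any caveats worth noting
  - Third highlight; further detail lives in the release notes
```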

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-10-02 Thread Sahid Orentino Ferdjaoui
On Fri, Sep 29, 2017 at 11:16:43AM -0400, Jay Pipes wrote:
> Hi Sahid, comments inline. :)
> 
> On 09/29/2017 04:53 AM, Sahid Orentino Ferdjaoui wrote:
> > On Thu, Sep 28, 2017 at 05:06:16PM -0400, Jay Pipes wrote:
> > > On 09/28/2017 11:37 AM, Sahid Orentino Ferdjaoui wrote:
> > > > Please consider the support of MDEV for the /pci framework which
> > > > provides support for vGPUs [0].
> > > > 
> > > > Accordingly to the discussion [1]
> > > > 
> > > > With this first implementation which could be used as a skeleton for
> > > > implementing PCI Devices in Resource Tracker
> > > 
> > > I'm not entirely sure what you're referring to above as "implementing PCI
> > > devices in Resource Tracker". Could you elaborate? The resource tracker
> > > already embeds a PciManager object that manages PCI devices, as you know.
> > > Perhaps you meant "implement PCI devices as Resource Providers"?
> > 
> > A PciManager? I know that we have a field PCI_DEVICE :) - I guess a
> > virt driver can return inventory with total of PCI devices. Talking
> > about manager, not sure.
> 
> I'm referring to this:
> 
> https://github.com/openstack/nova/blob/master/nova/pci/manager.py#L33
>
> [SNIP]
> 
> It is that piece that Eric and myself have been talking about standardizing
> into a "generic device management" interface that would have an
> update_inventory() method that accepts a ProviderTree object [1]

Jay, all of that looks perfectly sane to me, even if it's not clear what
you want to make so generic. That part of the code is for the virt layers,
and you can't treat GPU or NET devices as just generic pieces; they have
characteristics which are requirements for the virt layers.

In that method 'update_inventory(provider_tree)' which you are going
to introduce for /pci/PciManager, a first step would be to convert the
objects to an understandable dict for the whole logic, right, or do
you have another plan?

In any case, from my POV I don't see any blocker; both works can
co-exist without any pain. And adding features in the current /pci
module is not going to add heavy work, but is going to give us a
clear view of what is needed.
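For the sake of discussion, the interface being debated — a device manager exposing update_inventory(provider_tree) that mirrors discovered devices into the tree as child resource providers with inventory and traits — might be sketched like this. All names and structures are toy illustrations, not Nova's actual classes.

```python
class ProviderTree:
    """Toy stand-in for nova.compute.provider_tree.ProviderTree."""

    def __init__(self, root):
        self.root = root
        self.providers = {root: {"parent": None, "inventory": {}, "traits": set()}}

    def add_child(self, parent, name):
        self.providers[name] = {"parent": parent, "inventory": {}, "traits": set()}

    def update_inventory(self, name, inventory):
        self.providers[name]["inventory"] = inventory

    def add_traits(self, name, *traits):
        self.providers[name]["traits"].update(traits)


class GenericDeviceManager:
    """What a 'generic device management' interface could look like:
    the virt driver reports discovered devices, and update_inventory()
    mirrors them into the tree as child resource providers."""

    def __init__(self, devices):
        # Each device: a name, a resource class, a total, optional traits.
        self.devices = devices

    def update_inventory(self, provider_tree):
        root = provider_tree.root
        for dev in self.devices:
            provider_tree.add_child(root, dev["name"])
            provider_tree.update_inventory(
                dev["name"], {dev["rc"]: {"total": dev["total"]}})
            provider_tree.add_traits(dev["name"], *dev.get("traits", ()))


tree = ProviderTree("compute-node-1")
mgr = GenericDeviceManager([
    {"name": "gpu0", "rc": "VGPU", "total": 4,
     "traits": ["CUSTOM_NVIDIA_GRID"]},
])
mgr.update_inventory(tree)
print(tree.providers["gpu0"]["inventory"])  # {'VGPU': {'total': 4}}
```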

> [1]
> https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py
> 
> and would add resource providers corresponding to devices that are made
> available to guests for use.
> 
> > You still have to define "traits", basically for physical network
> > devices, the users want to select device according physical network,
> > to select device according the placement on host (NUMA), to select the
> > device according the bandwidth capability... For GPU it's same
> > story. *And I do not have mentioned devices which support virtual
> > functions.*
> 
> Yes, the generic device manager would be responsible for associating traits
> to the resource providers it adds to the ProviderTree provided to it in the
> update_inventory() call.
> 
> > So that is what you plan to do for this release :) - Reasonably I
> > don't think we are close to having something ready for production.
> 
> I don't disagree with you that this is a huge amount of refactoring to
> undertake over the next couple releases. :)

Yes and that is the point. We are going to block the work on /pci
module during a period when we can see large interest in such 
support.

> > Jay, I have a question: why don't you start by exposing NUMA?
> 
> I believe you're asking here why we don't start by modeling NUMA nodes as
> child resource providers of the compute node? Instead of starting by
> modeling PCI devices as child providers of the compute node? If that's not
> what you're asking, please do clarify...
> 
> We're starting with modeling PCI devices as child providers of the compute
> node because they are easier to deal with as a whole than NUMA nodes and we
> have the potential of being able to remove the PciPassthroughFilter from the
> scheduler in Queens.
> 
> I don't see us being able to remove the NUMATopologyFilter from the
> scheduler in Queens because of the complexity involved in how coupled the
> NUMA topology resource handling is to CPU pinning, huge page support, and IO
> emulation thread pinning.
> 
> Hope that answers that question; again, lemme know if that's not the
> question you were asking! :)

Yes, it was the question and you answered it perfectly, thanks. I will
try to be more clear in the future :)

As you have noticed, the support of NUMA will be quite difficult and it
is not in the TODO right now, which leads me to think that we are going
to block development on the pci module and, on top of that, in the end
provide less support (no NUMA awareness). Is that reasonable?

> > > For the record, I have zero confidence in any existing "functional" tests
> > > for NUMA, SR-IOV, CPU pinning, huge pages, and the like. Unfortunately, 
> > > due
> > > to the fact that these features often require hardware that either the
> > > upstream community CI lacks or that depends on libraries, drivers and 
> > > kernel
> > > versions that really aren't 

[openstack-dev] [rdo][tripleo][kolla] Routine patch maintenance on trunk.rdoproject.org, Tue Oct 3rd

2017-10-02 Thread Javier Pena
Hi,

We need to do some routine patching on trunk.rdoproject.org on Oct 3rd, at 8:00 
UTC. There will be a brief downtime for a reboot, during which jobs using 
packages from RDO Trunk can fail. Sorry for the inconvenience.

If you need additional information, please do not hesitate to contact us.

Regards,
Javier

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-02 Thread Jesse Pretorius
On 9/29/17, 6:26 PM, "Jeremy Stanley"  wrote:

On 2017-09-29 18:39:18 +0200 (+0200), Thomas Bechtold wrote:
> There is /etc [1]
[...]

>Not really, no, because the system-context data_files path has to be
>relative to /usr or /usr/local unless we want to have modules going
>into /lib and entrypoints in /bin now.

Right – that’s exactly why I think it would be better to stick with a relative 
path, but make the implementation consistent.

So, given a relative path being used – which is better: etc or share?

To me, etc seems more intuitive given that these are configuration files. Using 
etc benefits those building and consuming wheels by being an intuitive 
placement (putting the files into the etc directory of the venv). Each 
packaging system has its own conventions, so I do not think we’re going to be 
able to come to a common consensus that pleases everyone. I’d rather 
focus on attaining a consistent path across services so that packagers can 
adapt their scripts appropriately to cater for their individual quirks, while 
everyone using wheels gets the benefit of the files being a part of the package.
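For concreteness, the pattern being proposed across projects is a pbr data_files stanza along these lines. The project name and file list are stand-ins for illustration; the actual lists differ per project.

```ini
# setup.cfg -- illustrative fragment
[files]
data_files =
    etc/nova =
        etc/nova/api-paste.ini
        etc/nova/rootwrap.conf
    etc/nova/rootwrap.d = etc/nova/rootwrap.d/*
```

With a relative destination path, a wheel installed into a virtualenv lands these files under the venv's etc directory, which is the consumption model the deployment projects are after.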



Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [UX] Request about [Alarm Definitions] window

2017-10-02 Thread Yamaguchi, Shinsa
Hi, I'm a newbie to Monasca.
I would be very grateful if you would consider my request.

・Background
I tried to create a new alarm on the [Alarm Definitions] window.
But it failed and displayed the message "No metric available" at 
[Expression].
I realized that I need to create metrics before creating alarms,
however there is no information about creating metrics.

・Request
I would be very grateful if you added a pointer to documentation about creating 
metrics in the help text displayed when the cursor is over the question mark.

・Environment
OpenStack Monasca(Pike)


regards,

Shinsa Yamaguchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] pypi publishing

2017-10-02 Thread Gary Kotton


On 10/1/17, 9:40 PM, "Paul Belanger"  wrote:

On Sun, Oct 01, 2017 at 05:02:00AM -0700, Clark Boylan wrote:
> On Sat, Sep 30, 2017, at 10:28 PM, Gary Kotton wrote:
> > Hi,
> > Any idea why latest packages are not being published to pypi.
> > Examples are:
> > vmware-nsxlib 10.0.2 (latest stable/ocata)
> > vmware-nsxlib 11.0.1 (latest stable/pike)
> > vmware-nsxlib 11.1.0 (latest queens)
> > Did we miss a configuration that we needed to do in the infra projects?
> > Thanks
> > Gary
> 
> Looks like these are all new tags pushed within the last day. Looking at
> logs for 11.1.1 we see the tarball artifact creation failed [0] due to
> what is likely a bug in the new zuulv3 jobs.
> 
> [0]
> 
http://logs.openstack.org/e5/e5a2189276396201ad88a6c47360c90447c91589/release/publish-openstack-python-tarball/2bdd521/ara/result/6ec8ae45-7266-40a9-8fd5-3fb4abcde677/
> 
> We'll need to get the jobs debugged.
> 
This is broken because of: https://review.openstack.org/508563/

[Gary] Do we need to revert this? Once the issue is fixed how do we get the 
packages published?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev